Runway Gen-3 Alpha is now available to everyone.

Figma pulls AI tool after criticism that it ripped off Apple’s design

In today’s email:

  • 😍 ElevenLabs partners with estates of iconic stars to bring their voices to the Reader App

  • 🧑🏻‍⚕️ Study reveals why AI models that analyze medical images can be biased

  • 🤑 AI coding startup Magic seeks $1.5-billion valuation in new funding round, sources say

  • 🧰 9 new AI-powered tools and resources. Make sure to check the online version for the full list of tools.

Top News

Runway AI has launched its Gen-3 Alpha video model, now generally available, showcasing significant advances in fidelity, consistency, and motion over its predecessor, Gen-2. Gen-3 Alpha represents a new frontier in high-fidelity, controllable video generation, trained on new infrastructure built for large-scale multimodal training. Users have been sharing impressive videos generated with Gen-3 Alpha, highlighting its ability to create striking visuals from text prompts, such as a monster rising from the Thames and a time-lapse pencil drawing.

Users on social media platform X have been experimenting with Gen-3 Alpha, creating diverse and creative videos. Martin Haerlin used the model to generate a visual carousel of flowers, while Bilawal Sidhu showcased its potential for sci-fi movie production with impressive particle simulations and realistic motion graphics. Another user, vkuoo, demonstrated the model's ability to control camera speeds using text commands, creating a dynamic and visually captivating video.

Gen-3 Alpha's versatility extends to hyper-realistic visuals and lip-syncing, as demonstrated by users like Chrissie and Christopher Fryant. The model has drawn positive comparisons to OpenAI's Sora, with some users noting that it even exceeds Sora in certain respects. Available through a subscription on the RunwayML platform, Gen-3 Alpha is a testament to the growing potential of AI in visual communication and creativity.

ElevenLabs has partnered with the estates of legendary stars Judy Garland, James Dean, Burt Reynolds, and Sir Laurence Olivier to bring their iconic voices to its Reader App. This collaboration allows users to listen to any digital text, such as articles, PDFs, ePubs, newsletters, and e-books, narrated by these beloved actors. The voices are exclusive to the app for individual streaming, offering an emotionally rich, context-aware experience. Liza Minnelli, daughter of Judy Garland, expressed excitement about her mother's voice being accessible to millions, believing it will attract new fans and delight existing ones.

The app, launched last week, transforms digital text into engaging voiceovers using AI technology. It aims to honor the legacies of these celebrated actors by allowing fans to experience their voices in new and meaningful ways. Users can enjoy classics like L. Frank Baum’s The Wonderful Wizard of Oz voiced by Judy Garland or Sherlock Holmes narrated by Sir Laurence Olivier. Dustin Blank, Head of Partnerships at ElevenLabs, highlighted the significance of this addition to their growing list of narrators, emphasizing the company's commitment to making content accessible in any language and voice.

Tina Xavie, Chief Marketing Officer of CMG Worldwide, praised the thoughtful approach of ElevenLabs in working with these iconic estates. She expressed enthusiasm for the new opportunities this partnership provides for their clients and anticipates more exciting developments in the future. This milestone reflects ElevenLabs' ongoing mission to enhance accessibility and enjoyment of digital content through innovative technology and collaboration with the estates of legendary figures.

Apple, in partnership with the Swiss Federal Institute of Technology Lausanne (EPFL), has launched a public demo of its 4M (Massively Multimodal Masked Modeling) AI model on the Hugging Face Spaces platform. The demo lets users interact with the model to create images from text, perform object detection, and manipulate 3D scenes with natural language inputs. This move signals a shift in Apple’s traditionally secretive approach to R&D, aiming to foster developer interest and build an ecosystem around its advanced AI capabilities.

The timing of this release is notable, aligning with Apple’s recent market performance and its ongoing advancements in AI. Since May 1st, Apple’s shares have surged by 24%, adding over $600 billion in market value, positioning it as a leading "AI stock." The 4M model's unified architecture for diverse modalities suggests potential for more coherent and versatile AI applications across Apple’s ecosystem, such as enhanced Siri capabilities and automated video content creation in Final Cut Pro.

However, the release also raises questions about data practices and AI ethics. Apple, known for its strong stance on user privacy, must navigate the data-intensive nature of advanced AI models carefully to maintain user trust. This public demo, along with Apple's AI strategy unveiled at WWDC, highlights the company’s dual approach: practical AI for consumers and cutting-edge research with 4M, signaling its commitment to leading the AI revolution while preserving user privacy and seamless experiences.

Figma has pulled its new AI tool, Make Designs, after criticism that it generated designs too similar to Apple's iOS weather app. Figma CEO Dylan Field took responsibility for the oversight and explained that the AI models were not trained by Figma but were based on off-the-shelf models and a bespoke design system. Figma CTO Kris Rasmussen clarified that the issue likely stemmed from these third-party models, including OpenAI's GPT-4o and Amazon’s Titan Image Generator G1, which may have been trained on Apple’s designs without Figma's knowledge.

Andy Allen, CEO of Not Boring Software, highlighted the problem by showcasing how Make Designs produced near-replicas of Apple’s weather app. In response, Field emphasized that the AI was not trained on Figma's or any other specific app designs and blamed the low variability in the design system. Rasmussen mentioned that Figma will enhance its bespoke design system and take additional precautions before re-enabling the tool to ensure it meets quality standards and aligns with Figma's values.

Figma plans to improve its processes and may eventually train its own models to better integrate with its platform. Meanwhile, users can decide if they want their content used for future AI training, with a deadline to opt in or out set for August 15th. Other AI features in Figma remain in beta, and the company aims to re-enable Make Designs soon after addressing the current issues.

Other stuff

All your ChatGPT images in one place 🎉

You can now search for images, see their prompts, and download all images in one place.

Tools & Links
Editor's Pick ✨

Captions - The next generation of storytelling

Omni Parse - Ingest, parse, and optimize any data format ➡️ from documents to multimedia ➡️ for enhanced compatibility with GenAI frameworks

Ariglad - Auto-create & update knowledge base articles

Pretzel AI - The modern replacement for Jupyter Notebooks

Wanderboat AI - Your everyday AI companion for travel and outing ideas

Rapport Self-Service - Animate ChatGPT and other AIs

Released - Instantly generate release notes from Jira tickets

cre[ai]tion lite - A digital designer's muse powered by generative AI

Unclassified 🌀 

How did you like today’s newsletter?


Help share Superpower

⚡️ Be the Highlight of Someone's Day - Think a friend would enjoy this? Go ahead and forward it. They'll thank you for it!

Hope you enjoyed today's newsletter.

Follow me on Twitter and Linkedin for more AI news and resources.

Did you know you can add Superpower Daily to your RSS feed? https://rss.beehiiv.com/feeds/GcFiF2T4I5.xml

⚡️ Join over 200,000 people using the Superpower ChatGPT extension on Chrome and Firefox.
