Runway unveiled Gen-3 Alpha
In today’s email:
🧠 Genius. A Home Assistant user hooked up GPT-4 Vision with their security cameras …
🔥 GPT-4o can also return multiple images as part of a longer text response.
📚 AI took their jobs. Now they get paid to make it sound human
🧰 7 new AI-powered tools and resources. Make sure to check the online version for the full list of tools.



Runway has introduced Gen-3 Alpha, its latest AI model for generating video clips from text descriptions and still images, offering major improvements in speed, fidelity, and control over its previous models. The new model excels at creating expressive human characters with a range of actions and emotions, and supports precise key-framing and imaginative transitions. It still has limitations, such as a maximum clip length of 10 seconds and trouble with complex interactions, but generation is fast: a 5-second clip takes about 45 seconds to produce and a 10-second clip about 90 seconds.
Runway has partnered with leading entertainment and media organizations to create custom versions of Gen-3 for more stylistically controlled and consistent characters. Despite the advancements, controlling generative models to align with a creator's artistic intentions remains a challenge. The company has implemented safeguards like moderation systems and C2PA authentication to ensure content authenticity and block inappropriate or copyrighted material. Runway is committed to addressing copyright issues by consulting with artists and exploring data partnerships.
The generative AI video tool landscape is becoming increasingly competitive, with startups like Luma and giants like Adobe and OpenAI entering the fray. Runway's significant investments and partnerships with the creative industry position it as a key player. However, the rise of AI-generated content poses a threat to traditional filmmaking, potentially disrupting jobs in the entertainment industry. A study by the Animation Guild estimates that by 2026, over 100,000 U.S. entertainment jobs could be impacted by generative AI, highlighting the need for strong labor protections to mitigate the impact on creative work.
Unlock the future of marketing with HubSpot's AI Prompt Library! This free ebook, crafted by HubSpot's top marketing minds, offers curated AI prompts designed to revolutionize your marketing strategy.
Elevate your content, drive higher conversions, and stand out in a competitive landscape with precise, impactful prompts. Download now and start transforming your marketing efforts today!

TikTok has unveiled new generative AI avatars for creators and stock actors, aiming to enhance branded content and ads on its platform. The "Custom Avatars" feature allows creators to replicate their likeness for multilingual and global brand collaborations. Additionally, "Stock Avatars" use licensed actors from diverse backgrounds to add a human touch to business content. TikTok ensures that creators have control over their likeness, including setting rates and licensing terms.
The platform also introduced an "AI Dubbing" tool, which can translate content into ten languages, including English, Japanese, Korean, and Spanish. This tool detects the original language of a video, then transcribes, translates, and dubs it into the desired language, helping creators and brands reach a global audience. These features are part of "TikTok Symphony," a suite of generative AI-powered ad solutions launched in May, which assists marketers in scriptwriting, video production, and asset enhancement.
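For readers curious what that detect-transcribe-translate-dub flow looks like in practice, here is a minimal sketch of the first steps using the open-source openai-whisper package. TikTok has not published how AI Dubbing is built, so the model choice, the file name, and the stubbed dubbing step below are illustrative assumptions only.

```python
# Illustrative sketch of a detect -> transcribe -> translate -> dub pipeline.
# TikTok's AI Dubbing internals are not public; openai-whisper and the file
# name "creator_clip.mp3" are stand-in assumptions for demonstration.
import whisper

model = whisper.load_model("base")               # small general-purpose speech model
result = model.transcribe("creator_clip.mp3")    # auto-detects the spoken language
print("Detected language:", result["language"])
print("Transcript:", result["text"])

# Whisper can also translate the speech directly into English text.
english = model.transcribe("creator_clip.mp3", task="translate")
print("English translation:", english["text"])

# A real dubbing step would pass the translated text to a TTS voice and
# re-time the synthesized audio against the original video (not shown here).
```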
Despite potential regulatory challenges in the U.S., TikTok continues to expand its advertising capabilities. The company noted that 61% of its users have made purchases directly on the app or after viewing an ad. These new AI tools aim to further strengthen TikTok's ads business, offering innovative ways for brands and creators to connect with audiences worldwide.

Over the past year, Google has shifted its AI efforts from pure research toward commercial products, merging its two AI labs, Google Brain and DeepMind, into a single unit, Google DeepMind. The move aims to improve Google's track record on commercial AI products while maintaining its strength in foundational research. However, the transition has been challenging, with some researchers frustrated by imposed roadmaps and less room for experimentation. The unit's primary focus is Gemini, Google's flagship AI model, which has faced several issues, including generating historically inaccurate images.
DeepMind, led by CEO Demis Hassabis, has historically been research-focused, but now faces the challenge of balancing commercialization with scientific exploration. The merger has created a sense of fatigue among some employees, as the pressure to deliver commercial products intensifies. Despite this, there have been notable successes, such as the release of a new version of AlphaFold, a tool for predicting protein structures. However, there is internal debate about how much time and resources should be dedicated to such projects versus the primary goal of advancing Gemini.
The combination of the two labs, once operating separately with distinct cultures, aims to leverage Google's extensive talent pool in AI. However, the integration process has highlighted tensions between pure research and commercial ambitions. Google's leadership believes that commercial product development can enhance research by providing valuable user feedback. The merger has also led to concerns about resource allocation, particularly for teams focused on pure research. Despite these challenges, Google is investing heavily in retaining top talent and fostering an environment where foundational science and commercialization can coexist productively.
Other stuff
Genius. A Home Assistant user hooked up GPT-4 Vision with their security cameras and can now do things like find items in their home (a rough sketch of this kind of call follows this list).
Will We Run Out of Data? Limits of LLM Scaling Based on Human-Generated Data
GPT-4o can also return multiple images as part of a longer text response.
Fruit farmers can predict crops using AI tool
Social Security uses secret AI to track sick leave and hunt for fraud
What policymakers need to know about AI (and what goes wrong if they don’t)
Tim Cook is ‘not 100 percent’ sure Apple can stop AI hallucinations
AI took their jobs. Now they get paid to make it sound human
How A.I. Is Revolutionizing Drug Development
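On the Home Assistant item above: the underlying pattern is simply grabbing a snapshot from a camera and sending it to a vision-capable chat model along with a question. Here is a rough sketch using the OpenAI Python SDK; the snapshot path, model choice, and question are assumptions, and the Home Assistant automation glue from the original setup is omitted.

```python
# Rough sketch of the "ask GPT-4 Vision about a camera snapshot" idea.
# The snapshot file and prompt are illustrative assumptions; the actual
# Home Assistant automation wiring is not shown.
import base64
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

with open("living_room_snapshot.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Can you see my keys anywhere in this room?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```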
All your ChatGPT images in one place 🎉
You can now search for images, see their prompts, and download all images in one place.


ElevenLabs' Text to Sound Effects API is now live. Try it here
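If you'd rather try it from code than the web UI, a minimal call looks roughly like the sketch below. It assumes the documented https://api.elevenlabs.io/v1/sound-generation endpoint and a text field in the JSON body; check the current ElevenLabs API reference for exact parameter names.

```python
# Rough sketch of calling the ElevenLabs sound-effects endpoint over plain HTTP.
# Endpoint path and field names are based on the public docs and should be
# verified against the current API reference.
import os
import requests

resp = requests.post(
    "https://api.elevenlabs.io/v1/sound-generation",
    headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
    json={"text": "Heavy rain on a tin roof with distant thunder"},
    timeout=60,
)
resp.raise_for_status()

with open("rain.mp3", "wb") as f:
    f.write(resp.content)  # the API returns audio bytes
```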

CodeGPT - download open-source models from Ollama with a single click and install them as code assistants in VSCode.

Roast your friends with AI

TokenCost - Easy token price estimates for 400+ LLMs

Olvy 3.0 - Speed up customer feedback analysis 10x with AI

Wunjo - AI-powered face swap, lip sync, object removal, and content editing
Aware.ai Pregnancy App - Your smart pregnancy community: find moms who get you


How did you like today’s newsletter?
Help share Superpower
⚡️ Be the Highlight of Someone's Day - Think a friend would enjoy this? Go ahead and forward it. They'll thank you for it!
Hope you enjoyed today's newsletter
Did you know you can add Superpower Daily to your RSS feed? https://rss.beehiiv.com/feeds/GcFiF2T4I5.xml
⚡️ Join over 200,000 people using the Superpower ChatGPT extension on Chrome and Firefox.