Kate Middleton's Photo Editing Controversy

Grok will be open-sourced this week

In today’s email:

  • 🇺🇸 U.S. Must Move ‘Decisively’ to Avert ‘Extinction-Level’ Threat From AI, Government-Commissioned Report Says

  • 🤯 Jensen Huang says even free AI chips from his competitors can't beat Nvidia's GPUs

  • 🔥 Character Reference is finally available on Midjourney

  • 🧰 8 new AI-powered tools and resources. Make sure to check the online version for the full list of tools.

Top News

The controversy surrounding a photo of Kate Middleton and her children, which some speculated was AI-generated, prompted the Princess of Wales to clarify that she had experimented with editing and to apologize for any confusion caused. Observers pointed out oddities in the photo, such as blurred hands and a missing wedding ring, fueling further speculation. The incident coincided with Middleton's unusual absence from the public eye following her hospitalization for abdominal surgery, which had already sparked a range of unfounded rumors about her whereabouts.

The editing anomalies in the photo, like a phantom sleeve and content-aware fill mishaps, stirred debates about the tools used and the authenticity of the image. High-profile figures, including Piers Morgan, questioned why the Royal Family didn't release the original photo to end the speculation. The situation highlights the growing challenge of distinguishing real images from edited or AI-generated ones in an era where advanced image manipulation is increasingly accessible.

The incident underscores the broader implications of AI in image generation and public trust. As AI-powered tools become more sophisticated and widespread, distinguishing between real and altered images becomes more difficult, raising concerns about the impact on public perception and discourse. Middleton's admission of photo editing brought attention to the importance of transparency in an age where image manipulation can easily sway public opinion or spark international debate.

A recent report commissioned by the U.S. government and conducted by Gladstone AI highlights the urgent national security risks posed by the development of advanced artificial intelligence (AI) and artificial general intelligence (AGI). The report, titled "An Action Plan to Increase the Safety and Security of Advanced AI," suggests comprehensive policy actions to mitigate these risks, including limiting the computing power used to train AI models and requiring government approval for deploying new models above certain thresholds. It emphasizes the potential of AI to destabilize global security and the necessity of immediate government intervention to prevent possible extinction-level threats to humanity.

The report outlines specific recommendations, such as making it illegal to train AI models beyond a set level of computing power, potentially outlawing the publication of AI model weights, and enhancing control over the manufacture and export of AI chips. These measures aim to slow down the AI development race and ensure that safety considerations are prioritized over rapid advancement. The document also suggests that the U.S. could invest in "alignment" research to develop safer AI technologies and proposes the establishment of a new federal AI agency to oversee these regulations.

Despite the comprehensive measures proposed, the report acknowledges the challenges in implementing such policies, including potential resistance from the AI industry and the limitations of U.S. jurisdiction on global AI development. The necessity of these actions is underscored by the rapid advancements in AI and the public's growing concern over the technology's potential risks. The report serves as a call to action for the U.S. government to take decisive steps to ensure the safe and secure development of AI, highlighting the balance between harnessing AI's benefits and preventing its potential dangers.

Elon Musk's AI company xAI plans to open source its Grok chatbot, challenging ChatGPT. The move comes amid Musk's lawsuit against OpenAI, accusing the company of straying from its open-source principles. Grok, launched last year, offers features like real-time information access and viewpoints free from political correctness, available through X's subscription service.

Musk's decision to make Grok's code public aligns with his history of supporting open-source initiatives, exemplified by Tesla's open-sourced patents and X's algorithms. His lawsuit criticizes OpenAI's partnership with Microsoft, claiming it contradicts the company's original mission to benefit humanity by becoming a closed-source entity focused on profit.

The legal battle between Musk and OpenAI has sparked a broader discussion in the tech community about the value of open-source AI, with opinions divided among industry leaders. While some view Musk's actions as a distraction, others defend the importance of open-source research in advancing technology and benefiting society.

Covariant recently unveiled its Robotics Foundation Model 1 (RFM-1), an AI platform described by co-founder and CEO Peter Chen as akin to a large language model for robot language. Developed from extensive data via Covariant’s Brain AI, RFM-1 aims to revolutionize robotics across various sectors, enhancing robots' adaptability and decision-making to function beyond traditional single-task programming. While currently focused on industrial arms, the platform seeks to support a broader range of robotic hardware, moving closer to general-purpose systems.

RFM-1's approach diverges from conventional robotics by enabling machines to interpret and respond to dynamic real-world scenarios. By integrating generative AI, the system allows robots to reason and plan actions based on complex inputs, akin to human cognitive processes. Users interact with RFM-1 through simple text or voice commands, with the AI generating and evaluating potential actions before execution, promising a more intuitive and versatile interface for robotic control.

Covariant, led by AI experts Peter Chen and Pieter Abbeel, envisions RFM-1 as a pivotal advancement in robotics, potentially compatible with most existing Covariant-enabled hardware. The platform's introduction marks a significant step toward creating more autonomous, intelligent, and adaptable robots, aiming to transform industries and everyday applications through enhanced AI-driven capabilities.

Other stuff

Superpower ChatGPT now supports voice 🎉

Text-to-Speech and Speech-to-Text: easily have a conversation with ChatGPT on your computer.

Superpower ChatGPT Extension on Chrome


Superpower ChatGPT Extension on Firefox


Tools & Links
Editor's Pick ✨

Bland web - An AI that sounds human and can do anything

DataCurve - Premium curated coding data for applications and LLMs

sync. - An API for real-time lipsync

Picurious AI - Snap, solve & discover pictures

Muse Pro - Ultimate AI-enhanced sketching and painting app

Soundry AI - Infinite sample packs and text-to-sound for musicians

DianaHR - An AI powered HR person for SMBs

Stock Market GPT for Investment Research - AI-powered stocks, balance sheets, analyst report comparison

Unclassified 🌀 

How did you like today’s newsletter?

Help share Superpower

⚡️ Be the Highlight of Someone's Day - Think a friend would enjoy this? Go ahead and forward it. They'll thank you for it!

Hope you enjoyed today's newsletter

Follow me on Twitter and LinkedIn for more AI news and resources.

Did you know you can add Superpower Daily to your RSS feed? https://rss.beehiiv.com/feeds/GcFiF2T4I5.xml

⚡️ Join over 200,000 people using the Superpower ChatGPT extension on Chrome and Firefox.