
Meta tried to buy Ilya Sutskever’s $32 billion AI startup, but is now planning to hire its CEO

OpenAI warns models with higher bioweapons risk are imminent

In today’s email:

  • 🧠 Andrej Karpathy: Software Is Changing (Again)

  • 🏔️ They Trusted ChatGPT To Plan Their Hike — And Ended Up Calling for Rescue

  • 🥷 The OpenAI Mafia: Why "Ex-OpenAI" is the New Golden Resume Line

  • 🧰 10 new AI-powered tools and resources. Make sure to check the online version for the full list of tools.

Top News

Key Takeaway: Meta’s bid to buy Ilya Sutskever’s Safe Superintelligence was rebuffed, prompting Mark Zuckerberg to recruit its CEO, Daniel Gross, along with Nat Friedman, and take a stake in their NFDG venture fund to supercharge Meta’s AI efforts.

More Insights:

  • Meta pursued a $32 billion acquisition of Safe Superintelligence earlier this year, but Sutskever declined the offer.

  • After talks stalled, Zuckerberg shifted gears and poached Daniel Gross, Safe Superintelligence’s CEO and co-founder, as well as former GitHub chief Nat Friedman.

  • Both hires will work under Scale AI’s Alexandr Wang within Meta’s AI division, while Meta acquires an equity stake in Gross and Friedman’s NFDG venture fund.

  • This move follows Meta’s recent $14.3 billion investment in Scale AI and underscores the company’s aggressive compensation and hiring tactics in the global AI talent war.

Why it matters: In today’s AI arms race, access to world-class engineers and founders can be more valuable than outright acquisitions. Meta’s pivot from a blockbuster buyout to strategic poaching and venture stakes shows how far tech giants will go to secure the human capital they see as key to achieving true superintelligence.

60% of companies expect to be transformed by AI within two years. But many organizations aren’t even close to putting AI at the forefront of their business strategies.

Box's new white paper, Becoming an AI-First Company, helps break down the ever-evolving landscape so businesses can move at the speed of AI. Download the new white paper now for Box's key recommendations.

Andrej Karpathy: Software Is Changing (Again)

Key Takeaway: Software is entering a third paradigm—Software 3.0—where large language models become the new “code,” written and orchestrated in natural language, demanding fresh tools, interfaces, and infrastructure to manage their power and quirks.

More Insights:

  • Three Waves of Software: Software 1.0 is hand-written code, Software 2.0 is neural-network weights tuned by data, and Software 3.0 is LLM-driven programs authored in English (a minimal sketch follows this list).

  • LLMs as Fallible Spirits: They possess superhuman memory and knowledge but suffer hallucinations, lack persistent long-term memory, and exhibit erratic “jagged” intelligence.

  • Partial-Autonomy Apps: Real-world AI integration relies on specialized GUIs, human-in-the-loop verification, and an “autonomy slider” to balance speed and safety.

  • Agent-Friendly Infrastructure: To empower AI agents, we need LLM-optimized docs (Markdown, llms.txt), URL-based ingestion tools, and protocols that speak directly to models (see the example file below).
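
To make the three paradigms concrete, here is a minimal, illustrative sketch in Python. The task, the prompt wording, and the `call_llm` helper are invented for illustration, not taken from the talk; `call_llm` stands in for any chat-completion API.

```python
# Software 1.0: the rule is written by hand, directly in code.
def is_positive_v1(review: str) -> bool:
    return any(w in review.lower() for w in ("great", "love", "excellent"))

# Software 2.0: the "rule" lives in trained weights; the code just runs the net.
# `sentiment_net` stands in for any trained classifier returning a score in [0, 1].
def is_positive_v2(review: str, sentiment_net) -> bool:
    return sentiment_net(review) > 0.5

# Software 3.0: the "program" is an English prompt interpreted by an LLM.
def is_positive_v3(review: str, call_llm) -> bool:
    prompt = f"Answer YES or NO: is the following review positive?\n\n{review}"
    return call_llm(prompt).strip().upper().startswith("YES")
```

Same behavior, three very different places where the logic lives: source code, learned weights, and a natural-language prompt.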
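
On the infrastructure point, an llms.txt file is simply a Markdown index served at a site’s root so models can ingest the docs without scraping HTML. The project and links below are a made-up example of the general shape, not a real file:

```
# ExampleDocs

> Concise, LLM-readable index of this project's documentation.

## Docs

- [Quickstart](https://example.com/quickstart.md): install and first run
- [API reference](https://example.com/api.md): endpoints and parameters

## Optional

- [Changelog](https://example.com/changelog.md)
```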

Why it matters: This revolution democratizes programming—anyone who can speak can now build software—forcing us to rethink development, audit, and deployment for an AI-powered future.


OpenAI warns models with higher bioweapons risk are imminent

Key Takeaway: OpenAI cautions that its next-generation reasoning models could empower amateurs to create biological weapons, prompting the company to intensify safety testing and roll out new mitigations.

More Insights:

  • Successors to OpenAI’s o3 reasoning model are expected to cross a “high risk” threshold for bioweapon development under the company’s preparedness framework.

  • To combat “novice uplift,” OpenAI is expanding model testing and embedding fresh precautions aimed at preventing misuse by non-experts.

  • The firm emphasizes that only near-perfect automated detection paired with swift human enforcement can stop harmful outputs from slipping through.

  • Other industry players, like Anthropic with Claude 4, are also activating stronger safeguards against AI-driven biological and nuclear threats.

Why it matters: As AI shifts from specialist labs into broader hands, the line between medical breakthroughs and biothreats vanishes—raising the stakes for global safety systems and regulatory readiness.

Other stuff

All your ChatGPT images in one place 🎉

You can now search for images, see their prompts, and download all images in one place.

Tools & Links
Editor's Pick ✨

2dto3D - Turn photos into 3D models that actually work

Second Brain - AI visual board and knowledge base

Liveblocks - Ready-made AI copilots and collaboration for your product

Martin - AI personal assistant like JARVIS

FetchAI - The fastest way to talk to any AI on Mac

ComputerX - Your smart agent that handles your computer work

Z3D - Turn your ideas into 3D reality with just a prompt

Meta - Building the next evolution of digital connection.

Entelligence.ai - Automate Your Code Reviews & Ship Faster

Accordio - Agentic contracts. Smarter business

Unclassified 🌀 

How did you like today’s newsletter?


Help share Superpower

⚡️ Be the Highlight of Someone's Day - Think a friend would enjoy this? Go ahead and forward it. They'll thank you for it!

Hope you enjoyed today's newsletter!

Follow me on Twitter and LinkedIn for more AI news and resources.

Did you know you can add Superpower Daily to your RSS feed? https://rss.beehiiv.com/feeds/GcFiF2T4I5.xml

⚡️ Join over 300,000 people using the Superpower ChatGPT extension on Chrome and Firefox.
