Debunking the AI food delivery hoax that fooled Reddit

Artificial intelligence begins prescribing medications in Utah

In today’s email:

  • 🔉 OpenAI bets big on audio as Silicon Valley declares war on screens

  • 👗 Zara turns to AI-edited models amid shop closures

  • ☠️ Elon Musk’s Grok AI generates images of ‘minors in minimal clothing’

  • 🧰 9 new AI-powered tools and resources. Make sure to check the online version for the full list of tools.

Top News

Key Takeaway: A viral Reddit “whistleblower” story about a food delivery app’s alleged fraud unraveled when the source tried to back it up with AI-generated “proof,” showing how LLMs now enable fast, convincing fabrications that waste reporters’ time and mislead the public.

More Insights:

  • A brand-new Reddit account posted detailed claims of platform manipulation, including a “desperation score” used to suppress high-paying orders for reliable drivers.

  • The post exploded: front-page traction, massive upvotes, awards, and viral spread on X.

  • When pressed for verification, the “whistleblower” sent a seemingly legitimate employee badge—later flagged as AI-generated by Gemini.

  • He also provided an 18-page “confidential” technical report packed with formulas and diagrams that initially looked credible but contained telltale nonsense and mismatched content.

  • Once asked for real identity verification (name/LinkedIn), the source abruptly bailed and deleted his Signal—classic exit behavior when the con collapses.

Why it matters: AI doesn’t just help people create content—it helps them manufacture credibility at scale, turning “evidence” into something cheap and disposable. The new threat isn’t that a single lie goes viral; it’s that verification becomes so time-consuming that truth can’t compete, and outrage becomes the easiest thing to “prove.”

Get the investor view on AI in customer experience

Customer experience is undergoing a seismic shift, and Gladly is leading the charge with The Gladly Brief.

It’s a monthly breakdown of market insights, brand data, and investor-level analysis on how AI and CX are converging.

Learn why short-term cost plays are eroding lifetime value, and how Gladly’s approach is creating compounding returns for brands and investors alike.

Join the readership of founders, analysts, and operators tracking the next phase of CX innovation.

Key Takeaway: Yann LeCun says Meta’s new 29-year-old AI leader, Alexandr Wang, lacks research experience, and he warns that Meta’s AI strategy shift could trigger a major employee exodus.

More Insights:

  • LeCun (who left Meta in November) publicly criticizes Meta’s AI direction and leadership choices.

  • He calls Wang “young” and “inexperienced,” arguing Wang doesn’t yet understand what attracts top researchers.

  • LeCun claims Zuckerberg “sidelined” much of Meta’s GenAI org after accusations that Llama 4 benchmark results were “gamed.”

  • He predicts more departures: “A lot of people have left… a lot… will leave.”

  • LeCun argues LLMs are a “dead end” for superintelligence and says his new lab will focus on “world models” using video/physical data beyond language.

Why it matters: Meta isn’t just competing on models—it’s competing on trust, culture, and scientific credibility; once researchers believe leadership is optimizing for optics over discovery, the real advantage (people who can invent the next paradigm) walks out the door.

Key Takeaway: Nvidia unveiled Alpamayo, an open-source suite of reasoning-focused AI models, tools, and data designed to help autonomous vehicles handle rare, complex driving situations more safely—and explain their decisions.

More Insights:

  • Alpamayo 1 is a 10B-parameter “chain-of-thought” vision-language-action model aimed at human-like driving reasoning.

  • It’s built to tackle edge cases (like traffic light outages) even without prior specific experience.

  • The core model code is available on Hugging Face, with options to fine-tune into smaller, faster variants.

  • Nvidia is releasing a dataset with 1,700+ hours of driving data across diverse conditions, including rare scenarios.

  • Nvidia also launched AlpaSim, an open-source simulation framework (on GitHub) for scalable validation in realistic driving environments.

Why it matters: If autonomy is going to earn real trust, it can’t just “react correctly” in normal conditions—it has to reason in the weird, unsafe, unpredictable moments humans handle instinctively, and it has to justify those choices in ways regulators, developers, and the public can audit.

Other stuff

Take ChatGPT to the next level 🎉

Add folders and subfolders, a prompt manager, a prompt optimizer, an image gallery, side-by-side voice mode, PDF export, reference chats, chat notes, and many more features.

Tools & Links
Editor's Pick ✨

Brief My Meeting - AI meeting briefs delivered to your inbox. Open source.

Giselle - Build and run AI workflows. Open source.

NoteGPT - Your summary AI note-taker for meetings, videos & everything.

Ray - An AI trainer that plans and adapts your workouts in real time.

Instruct - The most capable way to automate your work.

PostSyncer - AI content maker for social media publishing.

Invoce.ai - Chat with AI to create invoices in seconds.

Everyessay - AI essays, trained on winning human briefs.

Foundire - First-round interviews on autopilot.

Help share Superpower

⚡️ Be the Highlight of Someone's Day - Think a friend would enjoy this? Go ahead and forward it. They'll thank you for it!

Hope you enjoyed today's newsletter!

Follow me on Twitter and LinkedIn for more AI news and resources.

Did you know you can add Superpower Daily to your RSS feed? https://rss.beehiiv.com/feeds/GcFiF2T4I5.xml

⚡️ Join over 300,000 people using the Superpower ChatGPT extension on Chrome and Firefox.
