Ex-Google engineer charged with stealing AI tech
AI passes 100 IQ for the first time
In today’s email:
🎥 Meta is building a giant AI model to power its ‘entire video ecosystem,’ exec says
📣 Thanks to AI, the coder is no longer king: All hail the QA engineer
👀 AI Prompt Engineering Is Dead. Long live AI prompt engineering
🧰 7 new AI-powered tools and resources. Make sure to check the online version for the full list of tools.
A Google engineer named Linwei Ding, also known as Leon Ding, has been indicted by a federal grand jury for allegedly stealing trade secrets related to Google's AI chip software and hardware, specifically around the company's tensor processing unit (TPU) chips. These chips are crucial for Google's AI workloads and the stolen files include software designs for TPU chips, as well as hardware and software specifications for GPUs in Google's data centers. Ding is accused of transferring over 500 confidential files to a personal cloud account while covertly working for China-based AI companies.
Ding reportedly copied data from Google source files into the Apple Notes app on his Google-issued MacBook, then converted the notes to PDFs to evade Google's detection systems. Shortly after he began stealing files, he received an offer to become CTO of Rongshu, a China-based machine learning company, and later founded his own machine learning startup, Zhisuan, in China, all while still employed at Google. He resigned in December 2023, shortly before a planned trip to Beijing and amid Google's inquiries into his activities.
The U.S. Department of Justice has charged Ding with four counts of theft of trade secrets, each carrying a potential penalty of up to ten years in prison and a $250,000 fine. The case comes amid a broader AI technology arms race and U.S. efforts to keep China from acquiring AI-related chips, reflecting growing international tension over AI technology and intellectual property.
Meta is significantly investing in AI to enhance Facebook's video recommendation engine across all its platforms, as stated by Tom Alison, head of Facebook. This new AI model is expected to power not just the TikTok-like Reels but also traditional longer videos. Meta's shift to using Nvidia's GPUs for AI training represents a move to enhance efficiency and performance across its services, with the AI model showing promising results in increasing Reels watch time on Facebook.
Alison highlighted that Meta is progressing towards integrating this AI model across various products, aiming to provide more engaging and relevant content recommendations. This integration could lead to improved responsiveness, allowing users to see more content similar to what they enjoy on different parts of the platform, like Reels and the main Feed.
Beyond video recommendations, Meta plans to use its AI technology to develop digital assistants and integrate sophisticated chatting tools within its platforms. This includes enhancing user interactions in Groups and the core Feed, potentially allowing users to receive detailed information about specific topics or content, like posts about Taylor Swift, directly through a Meta AI-powered digital assistant.
A Microsoft engineer named Shane Jones has raised safety concerns about Microsoft's AI image generator, Copilot Designer, to the Federal Trade Commission. Jones reported that the tool could generate disturbing images, including violent and sexualized content, as well as inappropriate representations of Disney characters and sensitive political content. Despite his repeated warnings, Microsoft has not taken the tool down, leading Jones to share his concerns publicly.
In response to the issues raised, Microsoft emphasized its commitment to addressing employee concerns in line with company policies. The company highlighted its user feedback tools and internal reporting channels designed to investigate and address such issues. Microsoft has engaged with its product leadership and the Office of Responsible AI to review the reports.
The situation with Copilot Designer has sparked broader discussions on AI ethics, particularly after incidents involving explicit images of public figures and inaccurate historical representations by other AI image generators. This has prompted tech companies, including Microsoft and Google, to reassess and strengthen their AI safety protocols to prevent the generation of harmful content.
Maxim Lott translated Norway Mensa's 35-question matrix-style IQ test into verbal descriptions to test whether AIs can solve the puzzles without relying on visual perception. With the visual element removed, ChatGPT-4 scored an IQ of 85, a clear improvement in its measured logical reasoning. Lott also tested successive versions of Anthropic's Claude, which improved its score with each release, suggesting that as models grow more capable, their estimated IQs climb accordingly.
Claude-3, the latest version tested, scored above the human average of 100, outperforming Microsoft Bing and Google's AI on the same verbalized test. A speculative extrapolation of this trend suggests a future model such as Claude-6 could surpass typical human intelligence within a few years, underscoring how rapidly AI problem-solving and logical reasoning are advancing.
The findings suggest that while AIs still struggle with tasks requiring visual interpretation, they demonstrate genuine problem-solving ability when the same information is presented verbally. That result offers a glimpse of where AI development is heading and underscores the need for ongoing monitoring and evaluation of AI's intellectual growth and its potential impact on society.
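The verbalization step is the crux of the experiment: each visual matrix puzzle has to be rendered as text a language model can read. As a rough, hypothetical sketch of what such a conversion might look like (the grid format and wording here are assumptions, not Lott's actual prompts):

```python
# Hypothetical sketch: turn a Raven's-style 3x3 matrix puzzle into a
# verbal prompt for a text-only model. The puzzle content and phrasing
# are illustrative assumptions, not taken from the Norway Mensa test.

def verbalize_matrix(grid):
    """Describe a 3x3 puzzle grid row by row; None marks the missing cell."""
    lines = ["You are given a 3x3 grid of shapes. One cell is missing."]
    for r, row in enumerate(grid, start=1):
        cells = ", ".join(cell if cell is not None else "?" for cell in row)
        lines.append(f"Row {r}: {cells}.")
    lines.append("Which shape belongs in the missing cell?")
    return "\n".join(lines)

# Example puzzle: each row repeats a shape an increasing number of times.
puzzle = [
    ["one circle", "two circles", "three circles"],
    ["one square", "two squares", "three squares"],
    ["one triangle", "two triangles", None],
]
print(verbalize_matrix(puzzle))
```

The resulting prompt would then be sent to the model under test, and scoring its answers against the test's norms yields the IQ estimate.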
Other stuff
This agency is tasked with keeping AI safe. Its offices are crumbling.
Thanks to AI, the coder is no longer king: All hail the QA engineer
The job applicants shut out by AI: ‘The interviewer sounded like Siri’
Introducing Zapier Central: Work Hand in Hand with AI Bots
AI Prompt Engineering Is Dead. Long live AI prompt engineering
Y Combinator CEO Garry Tan Discussed OpenAI Board Role (paywall)
Are you a p-zombie? ChatGPT 4: Sure. Similar.
Claude 3's retrieval ability over long content is so good that if you provide structured data, it essentially acts like a fine tune! 🤯
Superpower ChatGPT now supports voice 🎉
Text-to-Speech and Speech-to-Text. Easily have a conversation with ChatGPT on your computer.
Circleback - Unbelievably good meeting notes
Copy AI - Run & scale your go-to-market — with AI
Sattellitor - Generate high-quality articles that auto-publish to WordPress
AI Assist by Dopt - Build remarkably relevant in-product assistance
UXPin Merge AI - UI builder for busy devs & designers
Reporfy - Insightful performance reports at the speed of AI
Voicepanel - A new way to get user feedback, using AI
Unclassified 🌀
WFH Team - Work from anywhere in the world
How did you like today’s newsletter?
Help share Superpower
⚡️ Be the Highlight of Someone's Day - Think a friend would enjoy this? Go ahead and forward it. They'll thank you for it!
Hope you enjoyed today's newsletter
Did you know you can add Superpower Daily to your RSS feed? https://rss.beehiiv.com/feeds/GcFiF2T4I5.xml
⚡️ Join over 200,000 people using the Superpower ChatGPT extension on Chrome and Firefox.