- Superpower Daily
X has opted all users into training its "Grok" AI Model
Artificial intelligence breakthroughs create new ‘brain’ for advanced robots
In today’s email:
🤑 Hackers race to win millions in contest to thwart cyberattacks with AI
🤥 AI start-up Anthropic accused of ‘egregious’ data scraping
🐁 “Copyright traps” could tell writers if an AI has scraped their work
🧰 11 new AI-powered tools and resources. Make sure to check the online version for the full list of tools.
Elon Musk’s social media platform X is facing scrutiny from data regulators in the UK and Ireland after it emerged that users had been opted in by default to having their posts used to train Grok, an AI chatbot developed by Musk’s xAI business. Consent is presumed via a pre-ticked box in the app’s settings, a practice that appears to breach UK and EU GDPR rules. Users can only turn off this setting on the web version of X, raising concerns about transparency and consent.
The UK’s Information Commissioner’s Office (ICO) and Ireland’s Data Protection Commission (DPC) have both expressed concerns and have contacted X regarding the issue. The ICO emphasized that platforms must be transparent and provide users with easy ways to opt out of data usage for AI training. The DPC, surprised by the default setting, noted it has been in discussions with X about data collection and AI models, with further engagements expected soon.
The controversy highlights the broader debate around the use of large language models like Grok and ChatGPT, which rely on vast amounts of internet data to function. This method has drawn criticism from various quarters, including news publishers, authors, and regulators, who argue it breaches copyright laws and lacks proper user consent. Recently, Meta decided not to release an advanced AI model in the EU, citing regulatory unpredictability as a key reason.
Create apps that are intuitive, customizable, and impactful—in moments, with Airtable Cobuilder.
Imagine a team of developers working on your behalf. Just tell Cobuilder what you want to build, and for whom, and the AI will do it for you. The possibilities are endless—use Cobuilder to create your next product roadmap, content calendar, or OKR tracker. No coding required. Just ideas.
You imagine it. AI builds it. The magic is in making it yours.
Over the past three years, Péter Fankhauser’s Zurich-based robotics start-up ANYbotics has seen its industrial robots evolve from simple stair-climbing to performing complex parkour-style tricks. These advancements are driven by new AI models, allowing the robots to adapt and learn from their environments. Major tech companies like Google, OpenAI, and Tesla are in a race to develop AI systems that could revolutionize industries, particularly in healthcare and manufacturing, by improving robots' autonomy and adaptability through enhanced computer vision and spatial reasoning capabilities.
Generative AI technology has enabled robots to better understand and interact with their surroundings and humans, reducing the need for extensive programming and lowering engineering costs. This shift is fostering the development of humanoid robots, with significant investments from major AI companies and investors. For instance, Google DeepMind and OpenAI are working on training robots to navigate environments safely and effectively, with OpenAI investing heavily in startups like 1X Robotics to develop domestic bots.
Despite the hype and significant investments, the robotics sector faces challenges such as high costs and technological limitations. Nonetheless, the market is growing rapidly, particularly in early-stage companies. Startups like Mytra and RobCo are raising substantial funds to innovate in robotic hardware and automation. The public's increasing acceptance of AI tools is also positively influencing attitudes toward robots, encouraging their use in public-facing roles and everyday environments.
In a Pentagon-backed competition known as the AI Cyber Challenge (AIxCC), hackers from Arizona State University, the University of California at Santa Barbara, and Purdue University are developing an AI program to scan open-source code for security flaws. The goal is to create a "cyber reasoning system" that can autonomously identify and fix vulnerabilities in millions of lines of code. Sponsored by DARPA, this two-year contest aims to address the critical security risks posed by flaws in widely used open-source software, which is integral to infrastructure and commercial systems.
Open-source software, while ubiquitous and essential, often lacks the rigorous testing and maintenance of proprietary software, leading to severe cybersecurity breaches. Notable incidents like the 2017 Equifax breach and the Log4j vulnerability have highlighted the risks of poorly maintained open-source code. DARPA's competition seeks to leverage AI to enhance the security of this code, providing tools that can identify and patch vulnerabilities before malicious actors exploit them. The contest emphasizes the need for innovative solutions to secure the vast and interconnected digital landscape.
Teams like Shellphish, comprising hackers and computer scientists, are at the forefront of this effort. Utilizing advanced tools and AI-driven approaches, they aim to create programs that can quickly identify and fix low-hanging security flaws, potentially solving issues that would take humans months to address. As part of the contest, all finalists must release their programs as open-source, ensuring that the advancements in security can be broadly applied, ultimately enhancing the safety and reliability of software systems worldwide.
Other stuff
AI start-up Anthropic accused of ‘egregious’ data scraping 🔥
Scientists are trying to unravel the mystery behind modern AI
Apple Intelligence to Miss Initial Launch of Upcoming iOS 18 Overhaul
Sam Altman wants a US-led freedom coalition to fight authoritarian AI
One of America’s Hottest Entertainment Apps Is Chinese-Owned 🔥
How an AI bot war destroyed the online job market
“Copyright traps” could tell writers if an AI has scraped their work 🔥
China Is Closing the A.I. Gap With the United States
iFixit CEO takes shots at Anthropic for 'hitting our servers a million times in 24 hours' 🔥
All your ChatGPT images in one place 🎉
You can now search your ChatGPT images, view their prompts, and download them all in one place.
Friend - An open-source AI necklace
Granola - The AI notepad for people in back-to-back meetings
PixVerse V2 - Create breathtaking videos with PixVerse AI
XspaceGPT - Find the value of Twitter (X) spaces
Udio 1.5 - AI music generator
Last24.ai - World news in the last 24 hours, summarized in one mindmap
Photolapse AI - Create a face timelapse from your existing photo library
Brev.ai - Create quality music from text instantly, free & online
Unclassified 🌀
The first Deep Social Platform that merges groundbreaking neuroscience with cutting-edge AI to transform your social marketing. Try for free.
How did you like today’s newsletter?
Help share Superpower
⚡️ Be the Highlight of Someone's Day - Think a friend would enjoy this? Go ahead and forward it. They'll thank you for it!
Hope you enjoyed today's newsletter.
Did you know you can add Superpower Daily to your RSS feed? https://rss.beehiiv.com/feeds/GcFiF2T4I5.xml
⚡️ Join over 200,000 people using the Superpower ChatGPT extension on Chrome and Firefox.