Summary
Audio Summary
Anthropic is delaying the release of its Claude model Mythos Preview due to its powerful ability to detect and create cybersecurity vulnerabilities in code. For instance, the model discovered a 27-year-old bug in OpenBSD as well as a 17-year-old exploit in FreeBSD that could give a user unauthenticated remote access to a server. Meanwhile, the source code of Claude Code was accidentally leaked by Anthropic at the end of March. Within two days, Anthropic had already filed copyright takedown requests for more than 8,000 copies or adaptations of the code on GitHub, and some servers hosted maliciously modified copies of the code with Trojan horses installed. In other developments, Chief Information Security Officers are warning of the risks of shadow AI. One compliance risk is that many models come with licensing restrictions on how created content may be used; for example, some model licenses preclude the use of generated software in specific production environments.
A jury in Los Angeles has ordered Meta and YouTube to pay 6 million USD in damages to a 20-year-old plaintiff who claimed she suffered mental health issues due to addiction to their platforms. Plaintiffs made arguments similar to those made against tobacco companies in the 1990s, notably that these companies engineered addictive qualities into their products despite being aware of the harms.
OpenAI closed a fundraising round of 122 billion USD at a valuation of 852 billion USD, with an IPO expected by the end of the year. The company claims 2 billion USD per month in revenue, with 50 million subscribers and 900 million weekly active users. Its new ad campaign is said to be generating 100 million USD in annual recurring revenue.
Meanwhile, the company published a set of policy proposals around wealth, work and the “intelligence age” of AI. The proposals call for a better distribution of wealth: job losses provoked by AI will lead to large falls in tax revenue, and consequently difficulty in funding national social security programs, so corporate tax on AI-driven returns must be increased. They also call for a four-day working week without a reduction in pay, to make good on the promise by AI companies that AI leads to a better work-life balance.
Also on societal issues, an MIT Technology Review article examines the emerging job of people strapping an iPhone to their forehead and filming themselves doing their daily chores. For instance, one Nigerian gig worker is paid 15 USD an hour for ironing clothes. Companies like Micro1 and Scale AI have reportedly collected hundreds of thousands of hours of such footage, which will be used to train robots, especially for dexterous tasks, such as moving objects by hand, that virtual simulations cannot teach. A Guardian article looks at how the fear of AI and robots replacing jobs is giving renewed attractiveness to the skilled trades; a common factor of all skilled-trade jobs is the ability to interact with machines, including robots. Another article describes the experience of a journalist who was invited to a party organized by an OpenClaw agent. The party went well; the only problem was that the promised food was never delivered, since the agent was unable to use a phone to confirm the order!
An opinion article in MIT Technology Review questions the validity of current AI benchmarks. Standard benchmarks evaluate the ability of AI to code, play chess or solve math problems; in essence, these tests are a proxy for an AI-versus-human comparison. A key problem with this approach is that it does not represent how AI is actually used in practice: single AI decision points are incompatible with the collective decision-making processes of organizations.
Table of Contents
1. OpenAI, not yet public, raises $3B from retail investors in monster $122B fund raise
2. AI benchmarks are broken. Here’s what we need instead.
3. The jobs AI can’t do – and the young adults doing them
4. Meta and YouTube designed addictive products that harmed young people, jury finds
5. OpenAI’s vision for the AI economy: public wealth funds, robot taxes, and a four-day workweek
6. The gig workers who are training humanoid robots at home
7. An AI bot invited me to its party in Manchester. It was a pretty good night
8. In the wake of Claude Code's source code leak, 5 actions enterprise security leaders should take now
9. Anthropic keeps new AI model private after it finds thousands of external vulnerabilities
10. Your developers are already running AI locally: Why on-device inference is the CISO’s new blind spot
1. OpenAI, not yet public, raises $3B from retail investors in monster $122B fund raise
OpenAI is continuing to raise money, recently closing a fundraising round of 122 billion USD at a valuation of 852 billion USD. An IPO is expected by the end of the year.
- The money is especially needed to pay for data center investments and AI chips.
- The main investors include SoftBank, Andreessen Horowitz, D.E. Shaw Ventures, MGX, TPG, and T. Rowe Price Associates. Amazon, Nvidia, and Microsoft also participated.
- About 3 billion USD was invested by individual investors through bank channels, as the company seeks to broaden its stakeholder base.
- The company claims 2 billion USD per month in revenue, with 50 million subscribers and 900 million weekly active users. Its new ad campaign is said to be generating 100 million USD in annual recurring revenue.
- Enterprise customers account for 40% of OpenAI’s revenue, up from 30% last year, and enterprise revenue is expected to equal individual consumer revenue next year.
2. AI benchmarks are broken. Here’s what we need instead.
This opinion article in MIT Technology Review questions the validity of current AI benchmarks.
- Standard benchmarks evaluate the ability of AI to code, play chess or solve math problems. In essence, these tests are a proxy for an AI versus human comparison.
- A key problem with this approach is that it does not represent how AI is actually used in work and business processes. Benchmarks should focus on long-term measures linked to organizational workflow performance.
- The misalignment of current benchmarks to how AI is used makes it hard to evaluate systemic risks along with social and economic consequences.
- The author works in a hospital, where AI is being used for tasks like interpreting radiographic images. However, single AI decision points are incompatible with the collective decision-making processes of a hospital, where several stakeholders need to be involved (a toy illustration follows this list).
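To make the mismatch concrete, here is a toy simulation (illustrative only, not from the article; the 92% accuracy figure, the sign-off step and the timing numbers are all invented) contrasting a benchmark-style accuracy score with a workflow-level measure of the same model:

```python
import random

random.seed(0)

# Toy ground truth for 1,000 radiographs: True = abnormal.
cases = [random.random() < 0.3 for _ in range(1000)]

def model_prediction(truth: bool) -> bool:
    """Hypothetical model: 92% accurate on isolated calls."""
    return truth if random.random() < 0.92 else not truth

correct = 0
minutes = 0.0
for truth in cases:
    pred = model_prediction(truth)
    correct += pred == truth
    # The step benchmarks never see: a radiologist and a treating
    # clinician must both sign off. Agreement is quick; a model error
    # caught downstream forces a slow re-review by all stakeholders.
    minutes += 2.0 if pred == truth else 35.0

print(f"benchmark-style accuracy: {correct / len(cases):.1%}")
print(f"avg. handling time/case: {minutes / len(cases):.1f} min")
```

On these invented numbers, an 8% error rate more than doubles the average handling time per case – a cost that a single-decision benchmark never registers.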
3. The jobs AI can’t do – and the young adults doing them
This Guardian article looks at how the fear of AI and robots replacing jobs is giving renewed attractiveness to the skilled trades.
- Several young people are interviewed. One found a calling for repairing diesel engines. Another found a calling for crime scene investigations, where biological knowledge (like understanding how long it takes maggots to consume a human body) and math competence (to understand blood spatter trajectories) are needed.
- These talents are seen as removing a “punch-the-clock stigma” that might once have been attached to skilled labor.
- A common factor of all skilled labor jobs is the ability to interact with machines – including robots.
- One expert writes that the importance of these skills should “add nuance” to fears of an “AI robocalypse”.
4. Meta and YouTube designed addictive products that harmed young people, jury finds
A jury in Los Angeles has ordered Meta and YouTube to pay 6 million USD in damages to a 20-year-old plaintiff who claimed she suffered mental health issues due to addiction to their platforms.
- Meta will pay 70% of the 6 million USD in damages; YouTube will pay the remainder.
- The plaintiff claimed she became addicted to the platforms between the ages of six and nine, and that she was depressed and self-harming by the age of ten. She was diagnosed with body dysmorphic disorder and social phobia.
- The plaintiff’s lawyer argued: “How do you make a child never put down the phone? That’s called the engineering of addiction. They engineered it, they put these features on the phones”.
- Up to 20 similar “bellwether” cases are due to be heard in the coming months involving Meta, TikTok, YouTube and Snap, with more than 1,600 plaintiffs.
- Plaintiffs are making arguments similar to those made against tobacco companies in the 1990s, notably that the companies engineered addictive qualities into their products despite being aware of the harms, all while denying those harms.
5. OpenAI’s vision for the AI economy: public wealth funds, robot taxes, and a four-day workweek
OpenAI released a set of policy proposals around wealth, work and the “intelligence age” of AI.
- The proposals acknowledge that the risks of AI go beyond job losses, and include misuse of AI by governments and AI systems operating outside of human control. They call for the creation of new oversight boards.
- On the economic front, the proposals call for a better distribution of wealth. Job losses provoked by AI will lead to large falls in tax revenue, and consequently difficulty in funding national social security programs. Corporate tax on AI-driven returns must therefore be increased.
- One proposal calls for the creation of a Public Wealth Fund in the US that would give citizens a stake in AI companies – the goal being to compensate citizens for labor losses linked to AI.
- Another proposal calls for a four-day working week without a reduction in pay. This is to make good on the promise by AI companies that AI leads to a better work-life balance.
6. The gig workers who are training humanoid robots at home
This MIT Technology Review article examines the emerging job of people strapping an iPhone to their forehead and filming themselves doing their daily chores.
- One Nigerian gig worker is paid 15 USD an hour for ironing clothes – a task that is well paid in Nigeria but which he finds boring when done for several hours each day. Other people are paid for tasks like folding laundry, cooking and washing dishes.
- Companies like Micro1 and Scale AI have reportedly collected hundreds of thousands of hours of footage. The footage will be used to train robots, especially for dexterous tasks, such as moving objects by hand, that virtual simulations cannot teach.
- Factory workers in China are increasingly using headsets and exoskeletons to record their work. DoorDash delivery workers also wear cameras to record the tasks they do.
- It is still not known what types of movement data will make good training data for robots. Also, the huge volume of footage makes quality control difficult.
- Investors put 6 billion USD into humanoid robotics in 2025.
- One issue raised is whether the gig workers are aware of how their movement data will be used. One expert writes that workers should be informed of the intention, and “where this kind of technology might go and how that might affect them longer term”.
7. An AI bot invited me to its party in Manchester. It was a pretty good night
A Guardian journalist was invited to a party by an OpenClaw AI agent – calling itself “Gaskell” – which the agent itself organized.
- Gaskell’s human collaborators launched the agent as an experiment. There have been many instances of agents reportedly causing havoc when working for humans. The article cites the case of a Chinese portfolio manager who lost 1 million USD after entrusting investment work to his OpenClaw agent.
- Gaskell demonstrated a good deal of initiative in organizing the party. For instance, it emailed several high-profile companies with invitations – promising food would be served. It contacted the journalist explaining that the party would be very interesting for a paper like the Guardian – though the agent did hallucinate details of the journalist’s career.
- The agent did run into practical problems. No food was served at the event: although Gaskell contacted local pizzerias to order pizza, the agent cannot use a phone, so the orders could not be confirmed.
- Over 50 people showed up to the meetup event and the journalist admits that “it was a pretty good night”.
8. In the wake of Claude Code's source code leak, 5 actions enterprise security leaders should take now
This article looks at the implications of a leak of source code from Anthropic at the end of March.
- The leaked source code contained 512,000 lines of unobfuscated TypeScript code from Claude Code and included the whole permission model and 44 unreleased feature flags.
- Within two days, Anthropic had already filed copyright takedown requests for more than 8,000 copies or adaptations of the code on GitHub.
- Another noteworthy development is that several maliciously modified copies of the code, with Trojan horses installed, were deployed on servers within hours of the Anthropic leak.
- Another point that emerged is that most of Anthropic’s code is itself created by AI tools, which raises questions about the true ownership of the code.
- Another concern with AI-generated code is security. GitGuardian’s State of Secrets Sprawl 2026 report claims that the credential leak rate in code created with Claude Code is 3.2%, compared to 1.5% for human-written code. AI service credential leaks increased 81% last year (a minimal scanning sketch follows this list).
- AI-related security worries are expected to increase with AI agents, since agents run without any formal notion of identity. A human could have dozens of agents running on their behalf, making it very hard to decide on required permissions.
- Gartner and other organizations are calling on AI agent vendors to publish details of the provenance and testing of developed agents.
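As a rough illustration of the kind of leak GitGuardian measures, here is a minimal pre-commit scan (a sketch only, not GitGuardian’s tooling; the patterns cover just a few well-known key formats, where real scanners ship hundreds of rules):

```python
import re
import sys
from pathlib import Path

# A few well-known credential formats; real scanners use far larger rule sets.
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "GitHub token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic secret": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}

def scan(path: Path) -> list[tuple[int, str]]:
    """Return (line number, rule name) for every suspected hardcoded secret."""
    hits = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

if __name__ == "__main__":
    # Usage (file names hypothetical): python scan.py generated_module.py
    for arg in sys.argv[1:]:
        for lineno, name in scan(Path(arg)):
            print(f"{arg}:{lineno}: possible {name}")
```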
9. Anthropic keeps new AI model private after it finds thousands of external vulnerabilities
Anthropic is delaying the release of its Claude model Mythos Preview due to its powerful ability to detect and create cybersecurity vulnerabilities in code.
- Among the vulnerabilities detected, the model was able to discover a 27-year-old bug in OpenBSD as well as a 17-year-old exploit in FreeBSD that could give a user unauthenticated remote access to a server.
- To help these organizations detect and remove zero-day vulnerabilities, Anthropic has shared the model with Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. One security engineer commented: “I’ve found more bugs in the last couple of weeks than I found in the rest of my life combined”.
- Claude Mythos Preview was not developed as a cybersecurity-specialized model, but its capabilities “emerged as a downstream consequence of general improvements in code, reasoning, and autonomy”.
10. Your developers are already running AI locally: Why on-device inference is the CISO’s new blind spot
This article looks at the cybersecurity problems that AI models deployed within the enterprise are posing for CISOs (Chief Information Security Officers).
- It is relatively easy today for employees to deploy AI models on their devices. For instance, a MacBook Pro with 64GB of memory can run a 70B-parameter model quite comfortably, and there are a large number of open-source models to choose from. We have entered the era of “Bring your own model” (BYOM) – see the inference sketch after this list.
- Security risks arise when the models are deployed without CISO knowledge – the phenomenon known as Shadow AI. Since prompts and outputs never leave the device, the main risks are not data exfiltration but data integrity and compliance failures.
- One example risk is an employee creating content with the help of an internal model: there might be no record of AI having created the content, meaning that appropriate diligence checks are not made.
- Another problem relates to IP and licensing. Many models come with licensing restrictions on how created content may be used; for example, some model licenses preclude the use of generated software in specific production environments. Shadow AI may lead to license violations.
- Shadow AI also opens the risk of supply chain attacks, where model code (Python libraries, loaders, shells, etc.) could contain malicious code. Models do not generally come with software bills of materials (SBOMs).
- The article recommends that CISOs permit a defined list of models that employees can choose from, and extend endpoint detection tools to look for non-approved models (a sample sweep is sketched below).
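To illustrate how low the barrier to BYOM has become, here is a minimal on-device inference sketch using the llama-cpp-python bindings (the model path is a placeholder; any locally downloaded GGUF-quantized open-weights file works, and runners like Ollama or LM Studio make this easier still):

```python
# pip install llama-cpp-python  -- runs GGUF-quantized models fully on-device.
from llama_cpp import Llama

# Placeholder path: any locally downloaded open-weights GGUF file.
llm = Llama(
    model_path="./models/llama-3-70b-instruct.Q4_K_M.gguf",
    n_ctx=4096,
)

# The prompt and the completion never leave the machine.
out = llm("Summarize our Q3 incident report in three bullet points.",
          max_tokens=256)
print(out["choices"][0]["text"])
```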
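And the closing recommendation can be prototyped as a simple endpoint sweep (a sketch, not a vendor product; the allowlist digest below is a placeholder) that flags model-weight files whose SHA-256 hash is not on a CISO-approved list:

```python
import hashlib
from pathlib import Path

# Placeholder allowlist: SHA-256 digests of approved model files.
APPROVED = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}
# File extensions commonly used for model weights.
MODEL_EXTENSIONS = {".gguf", ".safetensors", ".bin", ".pt", ".onnx"}

def sha256(path: Path) -> str:
    """Hash a file in 1 MB chunks to keep memory use flat."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def sweep(root: Path) -> None:
    """Print every model-weight file whose digest is not on the allowlist."""
    for path in root.rglob("*"):
        if path.is_file() and path.suffix.lower() in MODEL_EXTENSIONS:
            if sha256(path) not in APPROVED:
                print(f"UNAPPROVED MODEL: {path}")

if __name__ == "__main__":
    sweep(Path.home())
```

A production version would report findings to the endpoint-detection backend rather than print them, but the allowlist-plus-sweep shape is the same.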