Summary
Audio Summary
The OpenClaw open-source agentic AI project is now being sponsored by OpenAI, and its creator is joining OpenAI to “work on bringing agents to everyone”. The platform has seen rapid growth since its release last November. It combines the agent features of tool access, memory persistence, easy integration with messaging apps, and sandboxed code execution. Meanwhile, the financial sector anticipates that AI development in 2026 will focus on integrating agents that actively run business processes; the major pain points in the sector are currently data silos, legacy systems and compliance approvals. A VentureBeat article looks at an ETL-style data pipeline for AI applications where, instead of hard-coding transformations on incomplete data, the pipeline identifies inconsistencies, infers missing structure and generates classifications with AI support. Elsewhere, an InfoWorld article uses the term “AI hype hangover” for the frustration felt by organizations unable to achieve a return on investment from AI.
Several high-profile employees have recently left AI firms in protest over companies sacrificing AI safety for profit. Zoë Hitzig left OpenAI over its decision to introduce advertisements on the platform, writing that “advertising built on [user content] creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent”. At Anthropic, the head of the Safeguards Research team, Mrinank Sharma, left the company and published a post on X entitled “The World is in Peril”, in which he questions the pace and focus of AI work at Anthropic. Meanwhile, a mental health expert has criticized Google AI Overviews for giving incorrect information: among the falsehoods AI Overviews produced were claims that starvation is healthy and that mental health problems are caused by chemical imbalances. Elsewhere, Meta CEO Mark Zuckerberg appeared as a defendant in a trial in which social media platforms are accused of not doing enough to protect children and people with mental disabilities. Psychologists do not currently classify social media addiction as an official diagnosis; nevertheless, several researchers have documented the harmful effects of social media on young people.
Reuters is reporting that OpenAI is continuing to prepare the groundwork for an IPO that it hopes will be valued at 1 trillion USD. OpenAI's revenue in 2025 was 13 billion USD, and it spent 8 billion USD during the year on operating costs. The company has reportedly told investors that the cost of running its AI models – the inference cost – increased by a factor of 4 in 2025.
An MIT Technology Review article discusses whether generative AI is really contributing to increased cyber-insecurity. On the one hand, generative AI is seen as a “productivity tool” for cybercriminals to write viruses and phishing mails. However, cyber-criminals face the same limits with generative AI as legitimate users: in one AI-based cyberattack, the chatbot often hallucinated credentials, so the criminals spent a lot of time manually validating recovered credentials. Another article in the same publication looks at efforts by Microsoft to quantify the current state of technical research into detecting AI-generated content. Current techniques cannot tell a user whether an image is fake, only whether it has been digitally altered. One open question is whether social media platforms really have an economic incentive to label AI-generated content: an audit by Indicator found that only 30% of its test posts on Instagram, LinkedIn, Pinterest, TikTok, and YouTube were correctly labeled as AI-generated.
Table of Contents
1. OpenAI's acquisition of OpenClaw signals the beginning of the end of the ChatGPT era
2. AI is already making online crimes easier. It could get much worse.
3. Zuckerberg grilled in landmark social media trial over teen mental health
4. The cure for the AI hype hangover
5. Microsoft has a new plan to prove what’s real and what’s AI online
6. The 'last-mile' data problem is stalling enterprise agentic AI — 'golden pipelines' aim to fix it
7. ‘Very dangerous’: a Mind mental health expert on Google’s AI Overviews
8. Why are AI leaders fleeing?
9. How financial institutions are embedding AI decision-making
10. OpenAI expects compute spend of around $600 billion through 2030, source says
1. OpenAI's acquisition of OpenClaw signals the beginning of the end of the ChatGPT era
The OpenClaw open-source agentic AI project is now being sponsored by OpenAI, and its creator is joining OpenAI to “work on bringing agents to everyone”.
- OpenClaw was first released in November 2025 under the name “Clawdbot”. The platform combined tool access, memory persistence, easy integration with messaging apps (Telegram, Discord, WhatsApp), and sandboxed code execution in a single agent (a minimal sketch of this kind of agent loop follows the list below).
- The platform encountered a “hockey stick” rate of adoption, despite fears from cybersecurity administrators that allowing AI agents such power in a system constitutes a major security risk.
- The challenge for OpenAI is to develop a “safe” version of the platform. The acquisition can help the company reboot efforts around agentic AI after relatively muted success for the Atlas agentic AI browser and Agents SDK platform.
- For the CEO of LangChain, OpenClaw's success rests on three factors that will shape future agent platforms: 1) natural language must be the primary interface to the platform, 2) agent memory will be the critical enabler for users to “build something without realizing they're building something”, and 3) automatic code generation supplies the agency.
- The acquisition could be a milestone, signifying a shift from focusing on “what AI can say” to “what AI can do”.
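To illustrate how those pieces fit together, here is a minimal, hypothetical sketch of an agent loop with persistent memory, named tool access and a crude sandbox for code execution. It is not OpenClaw's actual implementation; the memory file name, the `plan_with_llm` stub and the `run:` message convention are assumptions made purely for illustration.

```python
import json
import subprocess
import sys
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")  # hypothetical location for persisted memory

def load_memory() -> list:
    """Restore the conversation and tool history so the agent survives restarts."""
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def save_memory(history: list) -> None:
    MEMORY_FILE.write_text(json.dumps(history, indent=2))

def run_sandboxed(code: str, timeout: int = 5) -> str:
    """Rough stand-in for sandboxed execution: a separate interpreter process
    with a timeout. A real sandbox would also restrict filesystem, network
    access and privileges."""
    result = subprocess.run([sys.executable, "-c", code],
                            capture_output=True, text=True, timeout=timeout)
    return result.stdout or result.stderr

TOOLS = {"run_code": run_sandboxed}  # tool access: tool name -> callable

def plan_with_llm(message: str, history: list) -> dict:
    """Stub for the model call that decides what to do next. Here, any message
    starting with 'run:' is treated as code to execute; everything else is echoed."""
    if message.startswith("run:"):
        return {"type": "tool", "tool": "run_code", "input": message[len("run:"):]}
    return {"type": "reply", "reply": f"(echo) {message}"}

def agent_step(user_message: str, history: list) -> str:
    """One turn of the loop: plan, optionally call a tool, then persist memory."""
    action = plan_with_llm(user_message, history)
    if action["type"] == "tool":
        output = TOOLS[action["tool"]](action["input"])
    else:
        output = action["reply"]
    history.append({"user": user_message, "agent": output})
    save_memory(history)
    return output

if __name__ == "__main__":
    history = load_memory()
    print(agent_step("run:print(2 + 2)", history))  # prints "4"
```

Persisting the history after every turn is what lets a user pick up a half-built automation days later, which is the “build something without realizing they're building something” effect described above.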
2. AI is already making online crimes easier. It could get much worse.
This article discusses whether generative AI is really contributing to increased cyber-insecurity.
- On the one hand, generative AI is seen as a “productivity tool” for cybercriminals. A Microsoft report said the company blocked 4 billion USD worth of scams in the year leading up to April 2025, many of which were “likely aided by AI”. Language models now generate over half of all spam mails, and 14% of spear-phishing emails.
- On the other hand, the advent of deepfake technology is clearly a departure from traditional cyber-criminality. A famous example is a 2024 case in which a cyber-criminal extorted 25 million USD from a British company by joining an online meeting deepfaked as the company’s CFO and ordering a bank transfer.
- The discussion on the importance of generative AI took an interesting turn with PromptLock – a ransomware that exploits a language model to generate customized code in real time in order to avoid detection. PromptLock turned out to have been written by researchers at New York University who wanted to highlight the possibilities of language models for criminals.
- The best data about criminal use of generative AI comes from the AI companies themselves. A Google report from November 2025 stated that criminals were using Gemini to dynamically alter malware behavior to avoid detection. An Anthropic report stated that a Chinese state-sponsored criminal network had used Claude Code to help automate 90% of a large-scale cyberattack.
- However, cyber-criminals face the same limits with generative AI as legitimate users. In the case of the Chinese cyberattack, the chatbot often hallucinated credentials, so the cyber-criminals spent a lot of time manually validating recovered credentials.
- Cybercriminals are using language models to generate malware. Major chatbot providers like Anthropic, OpenAI and Google have guardrails in their services to prevent malware generation – even if these are not always effective. For instance, Google Gemini was jailbroken when a criminal told the chatbot he was researching malware for a “capture-the-flag” exercise. The article highlights that open-source AI models pose a more serious concern, because cyber-criminals can remove all guardrails.
3. Zuckerberg grilled in landmark social media trial over teen mental health
Meta CEO Mark Zuckerberg appeared as a defendant in a trial where social media platforms are accused of not doing enough to protect children and people with mental disabilities.
- The current trial centers on a 20-year-old woman whose compulsive use of YouTube and Instagram worsened her depression. It is seen as a “bellwether” case, designed to gauge jury reaction to the issue.
- TikTok and Snap settled before the trial, but other similar cases are pending.
- Zuckerberg defended Meta against the claim that the platform was not doing enough to identify underage users. A lawyer for the plaintiffs nevertheless asked: “You expect a nine-year-old to read all of the fine print?”.
- A 2025 review by the non-profit organization Fairplay found that even though social media platforms promised to add safety features for young children, only 20% of those features were fully functional and 64% could be easily worked around.
- Psychologists do not currently classify social media addiction as an official diagnosis. Nevertheless, several researchers have documented the harmful effects of social media on young people.
4. The cure for the AI hype hangover
This InfoWorld article uses the term “AI hype hangover” to designate the frustration felt by many organizations about being unable to achieve return on investment from AI.
- The article attributes the problem to AI hype, which has incited many organizations to try AI, leading to a large number of AI pilots.
- The fundamental fallacy is that AI can provide a generalized business solution, whereas successful AI projects address very specific business problems.
- The only unifying factor across all pilots is the significant cost of extracting and preparing data, and then the cost of servers, security, compliance, and hiring AI talent.
- The three items of advice put forward by the article for organizations wishing to adopt AI are: 1) identify high-value business problems in the organization; 2) invest in data infrastructure for improved data quality; and 3) define KPIs for governance and for measuring return on investment.
5. Microsoft has a new plan to prove what’s real and what’s AI online
This article reports on efforts by Microsoft to quantify the current state of technical research into detecting AI-generated content. The work comes as even governments, such as the US and Russia, are using deepfakes for political purposes – embarrassing ICE protestors in the US case, and discouraging Ukrainians from joining their armed forces in Russia’s case.
- California’s AI Transparency Act will take effect next August in an effort to ensure platforms enable users to distinguish AI-generated content. This law prompted Microsoft to conduct its research.
- California’s Act may be challenged by President Trump whose executive order of last year seeks to curtail state-based AI regulations that are “burdensome” to the Big Tech industry.
- Current detection techniques cannot tell a user whether an image is fake, only whether it has been digitally altered. This limitation can itself be exploited: a bad actor can take a genuine, politically charged photo, have an unimportant part of it modified, and then point to detection tools flagging the image as AI-manipulated in order to cast doubt on the whole photo.
- One question is whether social media platforms really have an economic incentive to label AI-generated content on their platforms. An audit by Indicator found that only 30% of its test posts on Instagram, LinkedIn, Pinterest, TikTok, and YouTube were correctly labeled as AI-generated.
- Further, lawmakers are concerned that the companies developing techniques for detecting AI-generated content are the same companies to which those techniques need to be applied.
6. The 'last-mile' data problem is stalling enterprise agentic AI — 'golden pipelines' aim to fix it
This article reports on a tool that uses a technique labeled the “golden pipeline” to make the data pipeline for AI applications more efficient.
- Existing ETL (extract-transform-load) tools funnel structured data to reporting applications. AI applications, in contrast, often need to deal with raw, incomplete and evolving operational data. The company developing the golden pipeline approach calls this difference “inference integrity” versus “reporting integrity”.
- After extracting the data from different source types (files, databases, APIs), the tool cleans data using a combination of deterministic preprocessing and AI-assisted normalization: “Instead of hard-coding every transformation, the system identifies inconsistencies, infers missing structure and generates classifications based on model context”. A sketch of this two-pass pattern follows the list below.
- The approach is particularly aimed at organizations without a mature data warehouse or lake infrastructure.
- Empromptu, the company behind the tool, claims the product is HIPAA compliant and also SOC 2 certified.
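The article does not show the vendor's internal implementation, so the following is only a generic sketch of the deterministic-plus-AI-assisted split the quote describes: a cheap rule-based pass flags inconsistencies, and only the flagged records are handed to a model for normalization and classification. The function names, the date-format rule and the `classify` hook are assumptions made for illustration.

```python
import re
from dataclasses import dataclass

@dataclass
class Record:
    raw: dict
    issues: list

def deterministic_clean(row: dict) -> Record:
    """Rule-based pass: trim whitespace, normalize obvious formats,
    and flag whatever the rules cannot resolve."""
    cleaned, issues = {}, []
    for key, value in row.items():
        if isinstance(value, str):
            value = value.strip()
        if key == "date" and isinstance(value, str):
            # Assumes DD/MM/YYYY input; rewrite as ISO 8601 when it matches.
            m = re.fullmatch(r"(\d{2})/(\d{2})/(\d{4})", value)
            if m:
                value = f"{m.group(3)}-{m.group(2)}-{m.group(1)}"
        if value in ("", None, "N/A"):
            issues.append(f"missing or unusable value for '{key}'")
        cleaned[key] = value
    return Record(raw=cleaned, issues=issues)

def ai_assisted_normalize(record: Record, classify) -> dict:
    """Second pass: only flagged records are sent to the model. `classify` is a
    placeholder for an LLM call returning inferred fields (e.g. a category)."""
    inferred = classify(record.raw, record.issues) if record.issues else {}
    return {**record.raw, **inferred}

# Example run with a stub standing in for the model call.
rows = [{"date": "03/11/2025", "category": "N/A", "description": "refund, card payment"}]
stub = lambda row, issues: {"category": "payments"}
print([ai_assisted_normalize(deterministic_clean(r), stub) for r in rows])
```

Running the deterministic rules first means the model is only invoked for records the rules cannot resolve, which is the efficiency argument behind this kind of pipeline rather than prompting an LLM over the entire dataset.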
7. ‘Very dangerous’: a Mind mental health expert on Google’s AI Overviews
A mental health expert has criticized Google AI Overviews for giving incorrect information in relation to mental health.
- Among the untruths that AI Overviews came up with were claims that starvation is healthy and that mental health problems are caused by chemical imbalances.
- For the expert, AI Overviews is “flattening information about highly sensitive and nuanced areas into neat answers”.
- Google Search worked quite well for the expert since links to reputable sources of information on the subject would rise to the top of the search results list. However, AI Overviews “replaced that richness with a clinical-sounding summary that gives an illusion of definitiveness”.
8. Why are AI leaders fleeing?
This ComputerWorld article highlights some of the high-profile employees who have recently left AI firms, in protest over companies sacrificing AI safety for profit.
- Zoë Hitzig left OpenAI in protest of the company’s decision to introduce advertisements on the platform. She explained her resignation in a New York Times piece entitled “OpenAI Is Making the Mistakes Facebook Made. I Quit.”.
- In the article, she wrote “People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”
- OpenAI also recently disbanded its “mission alignment” team, which worked on ways to make AI safer.
- At Anthropic, the head of the Safeguards Research team, Mrinank Sharma, is leaving the company and has published a post on X entitled “The World is in Peril”, in which he questions the pace and focus of AI work at the company.
9. How financial institutions are embedding AI decision-making
This article looks at how the financial sector anticipates use of AI in the coming year. The main objective seems to be the integration of agents that actively run business processes. Currently, the major pain points are data silos, legacy systems and compliance approvals.
- An agent-based enterprise architecture would detect signals about customer behavior, automate and communicate decisions, route important events to human operators, and continuously learn from what is happening in the business environment (a minimal sketch of such signal routing follows this list).
- Compliance needs to be involved in all stages of the business process, e.g., to validate prompts.
- One risk the sector must manage is over-personalization. For instance, a customer in financial distress should not see recommendations for certain loan types; over-personalization of this kind is seen as leading to an erosion of trust.
- A final challenge mentioned for the sector is how to address the replacement of web search by large language models (LLMs). Content published by organizations on their websites must be interpretable to LLM agents crawling for training data, and be crafted so that chatbots cite this content in response to user queries. This is the challenge of Generative Engine Optimization (GEO).
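The article describes the agent architecture only at a high level, so the sketch below is a hypothetical illustration of the routing idea in the first bullet: signals about customer behavior are scored, low-risk events are handled automatically, and high-risk ones are escalated to a human operator. The field names, the 0.8 threshold and the stub handlers are assumptions, not details from the article.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Signal:
    customer_id: str
    kind: str     # e.g. "missed_payment", "address_change"
    score: float  # model-estimated risk/importance in [0, 1]

def route(signal: Signal,
          auto_handle: Callable[[Signal], None],
          escalate: Callable[[Signal], None],
          threshold: float = 0.8) -> str:
    """Route a customer signal: below the threshold it is handled automatically,
    above it a human operator takes over. In a real deployment, compliance
    checks would wrap both paths."""
    if signal.score >= threshold:
        escalate(signal)
        return "escalated"
    auto_handle(signal)
    return "automated"

# Stubs standing in for downstream systems.
auto = lambda s: print(f"auto-resolved {s.kind} for {s.customer_id}")
human = lambda s: print(f"queued {s.kind} from {s.customer_id} for review")

for sig in (Signal("c-001", "missed_payment", 0.91),
            Signal("c-002", "address_change", 0.12)):
    route(sig, auto, human)
```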
10. OpenAI expects compute spend of around $600 billion through 2030, source says
Reuters is reporting that OpenAI is continuing to prepare the groundwork for an IPO that it hopes will be valued at 1 trillion USD.
- OpenAI's revenue in 2025 was 13 billion USD, ahead of the projected 10 billion USD. The company spent 8 billion USD during the year on operating costs.
- The company still expects 280 billion USD in total revenue by 2030, and expects to spend 600 billion USD on computing infrastructure by then. Total costs, including energy, could be as high as 1.4 trillion USD.
- The company has reportedly told investors that the cost of running its AI models – the inference cost – increased by a factor of 4 in 2025.