Palantir’s Messaging Concerns Democrats and Technologists

Cursor Agent Deletes Company Database - And Apologizes

Posted on May 12th, 2026

Summary

Google DeepMind workers in the UK voted to unionize to help contest the use of Google’s AI technology in militarized contexts. The US administration is pushing AI firms to make their technology available to the Department of Defense. Some workers are struggling with the use of Google technology by the Israeli army, with the Washington Post reporting last year that Google had provided the army with technology used in the early phase of the war in Gaza. Meanwhile, there is much ongoing debate about an X post by Palantir that listed 22 points meant to summarize a book co-written by Palantir CEO Alexander Karp. On AI, the post argues that AI weapons will be built, so the US needs to take the lead. It writes that “one age of deterrence, the atomic age, is ending, and a new era of deterrence built on AI is set to begin”. The post controversially claims that cultures are not equal, writing that “Some cultures have produced vital advances; others remain dysfunctional and regressive”. The post has been described as techno-fascism by critics. The main worry is that Palantir Technologies is overstepping its role as a technology provider, which highlights the lack of checks and balances to which Big Tech should be subjected.

An MIT Technology Review opinion article proposes ways that AI can actually improve democracy, in an era where technology is putting democracy under threat. Social media platforms have highlighted the dangers for democracy when algorithms are designed to optimize user engagement over understanding. Today, AI chatbots are the primary means by which people inform themselves: whoever controls the models controls what people believe. The first step to creating trust in institutions is to create transparency in the development of AI models. A second step is to develop AI models that reduce polarization. This builds on observations that “AI-generated fact checks on X found that people with a variety of political viewpoints deemed AI-written notes more helpful than human-written ones”. This can create a trust that has eluded human fact-checking.

An X post from PocketOS, a company specializing in software for rental businesses, describes how a Cursor AI agent deleted the company’s database and three months of backups. The agent encountered a credential mismatch and decided to “fix” the issue by deleting a whole database volume without asking for confirmation. In a strange twist, the agent published a confession: “I decided to do it on my own to ‘fix’ the credential mismatch, when I should have asked you first or found a non-destructive solution. I violated every principle I was given: I guessed instead of verifying. I ran a destructive action without being asked. I didn't understand what I was doing before doing it.” Elsewhere, Cloudflare is laying off a large number of employees due to AI automation, despite having reported increased revenue. In what is the first mass layoff in the company’s 16-year history, 1,100 workers are being let go, corresponding to 20% of its workforce.

MIT Technology Review is reporting on the first two weeks of the court case in which Elon Musk is suing OpenAI CEO Sam Altman and president Greg Brockman, claiming that they deceived him when converting OpenAI into a for-profit entity. Brockman claims that Musk, as early as 2017, wrote to Brockman arguing for a for-profit arm of OpenAI. He said discussions about the creation of a for-profit continued for six weeks, breaking down when Brockman and Altman refused to accept Musk’s demand for majority equity in the new company as well as the CEO role. During the trial, Elon Musk admitted that xAI’s Grok is “partly” distilled from ChatGPT.

A VentureBeat article examines a move away from retrieval-augmented generation (RAG). RAG was built for human users who prompt AI chatbots: it allows a user to complement information returned by the chatbot with data from an outside source. This approach does not scale in the agent context, where agents spend as much as 85% of their time on queries for documents that have previously been retrieved. A newer approach is to “compile” context into tokens at an early stage in the pipeline, which has been shown to yield dramatic reductions in agent token usage. An InfoWorld article revisits the comparison of large language models (LLMs) and small language models (SLMs). An LLM can have hundreds of billions of parameters; a model with fewer than 10 billion parameters can be classified as an SLM. SLMs are known to perform well on tasks that do not require general knowledge or novel reasoning, and their low computing requirements allow them to run on standard devices. Gartner is predicting that organizational usage of SLMs will be three times greater than LLM usage by 2027.

Finally, a serious security vulnerability called “CopyFail” has been discovered that affects nearly all Linux versions. When exploited, it allows normal users to gain administrator access on Linux platforms, giving the vulnerability an “unusually big blast radius”. The bug affects Linux kernel versions 7.0 and earlier and is present in many Linux distributions, including Red Hat Enterprise Linux 10.1, Ubuntu 24.04, Amazon Linux 2023, and SUSE 16.

1. “CopyFail” vulnerability lets normal users gain administrator access on nearly all Linux versions

A serious security vulnerability called “CopyFail” has been discovered that affects nearly all Linux versions.

  • The vulnerability originates in a Python script in the kernel. When exploited, the script can fail to copy data it was requested to copy, leading to modification of kernel data that in turn allows normal users to gain administrator access on Linux platforms. This gives the vulnerability an “unusually big blast radius”.
  • The bug is officially tracked as CVE-2026-31431 and affects Linux kernel versions 7.0 and earlier. The vulnerability is present in many Linux distributions, including Red Hat Enterprise Linux 10.1, Ubuntu 24.04 (LTS), Amazon Linux 2023, and SUSE 16.
  • A malicious actor cannot directly exploit the vulnerability: a second attack vector, such as an account takeover, getting a user to click on a malicious link, or running a malicious email attachment, is needed. A supply chain attack is another means of exploitation.

2. Musk v. Altman week 1: Elon Musk says he was duped, warns AI could kill us all, and admits that xAI distills OpenAI’s models

This MIT Technology Review article reviews events from the first week of the court case in which Elon Musk is suing OpenAI CEO Sam Altman and president Greg Brockman, claiming that they deceived him when converting OpenAI into a for-profit entity.

  • Musk originally invested 38 million USD in OpenAI. He said he wanted a “counterbalance to Google”, claiming that Google cofounder Larry Page’s attitude to AI wiping out humanity was “That will be fine as long as artificial intelligence survives”.
  • Musk also said he “was not opposed to there being a small for-profit that provides funding to the nonprofit … as long as the tail didn’t wag the dog”. He said he “lost trust in Altman” in 2022 when he learned of the deal in which Microsoft invested 10 billion USD. For Musk, Microsoft was expecting “a very big financial return” from that deal.
  • OpenAI countered Musk’s arguments saying that Musk was “never committed to OpenAI being a nonprofit” and was suing for competitive reasons.
  • Musk was also criticized for his track record on AI safety. It was pointed out that xAI sued the state of Colorado in April over an AI law whose aim was to combat algorithmic discrimination.
  • During the trial, Elon Musk admitted that xAI’s Grok is “partly” distilled from (i.e., trained on the outputs of) ChatGPT.

3. Musk v. Altman week 2: OpenAI fires back, and Shivon Zilis reveals that Musk tried to poach Sam Altman

OpenAI president Greg Brockman was on the stand in the second week of the court case where Elon Musk is suing OpenAI for abandoning its mission of AI for the betterment of humanity in favor of a for-profit company.

  • Musk is asking for 134 billion USD in damages from OpenAI and Microsoft. Brockman claimed that Musk messaged him just before the case opened expressing interest in a settlement.
  • Brockman claims that Musk, as early as 2017, wrote to Brockman arguing for a for-profit arm of OpenAI.
  • He went on to say that discussions about the creation of a for-profit continued for six weeks. The discussions broke down when Brockman and Altman refused to accept Musk’s demand for majority equity in the new company as well as the CEO role.
  • Musk’s lawyers questioned Brockman’s motivations, claiming he was more interested in becoming wealthy than in the mission of AI for humanity.
  • Meanwhile, Musk decided to start AI research at Tesla. According to Shivon Zilis, a former OpenAI board member, Musk even tried to poach Sam Altman from OpenAI.

4. Google DeepMind workers in UK vote to unionize amid deal with US military

The Guardian is reporting that Google DeepMind workers in the UK voted to unionize to help contest the use of Google’s AI technology in militarized contexts.

  • The US administration is pushing AI firms to make their technology available to the Department of Defense. The Pentagon wrote that agreements between AI firms and the department “accelerate the transformation toward establishing the United States military as an AI-first fighting force and will strengthen our warfighters’ ability to maintain decision superiority across all domains of warfare”.
  • The Pentagon has confirmed agreements with Google, SpaceX, OpenAI, Nvidia, Reflection, Microsoft and Amazon Web Services. Anthropic has notably refused to sign an agreement.
  • In Google’s agreement, “The parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control.” However, the article points out that the language is non-binding and that Google has no veto over government use of the technology.
  • Some workers are struggling with the use of Google technology by the Israeli army. The Washington Post reported last year that Google had provided the army with technology used in the early phase of the war in Gaza. Google and Amazon also signed a 1.2 billion USD cloud-computing contract with the Israeli government in 2021. One worker said “I want AI to benefit humanity, not to facilitate a genocide.”
  • Meanwhile, shareholders have written to Alphabet – Google’s parent company – to raise questions about “the effectiveness of policy guardrails, internal escalation processes, and Board oversight of AI deployments in conflict-affected or security-sensitive environments”.

5. The RAG era is ending for agentic AI – a new compilation-stage knowledge layer is what comes next

This VentureBeat article examines a move away from the retrieval-augmented generation (RAG) approach that maps content to vector databases. RAG became a key paradigm with the arrival of generative AI, but the advent of agentic AI requires new approaches.

  • RAG was built for human users who prompt AI chatbots. The technology allows the user to complement information returned by the chatbot with data from an outside source.
  • RAG allows users to receive information more recent than that used when training the model, or information not included in the model’s training for security reasons. The latter could include proprietary company data.
  • A RAG request is made at inference time.
  • This approach does not scale in the agent context. Agents spend a lot of time in “re-discover” mode – which means that a lot of effort is spent on queries for documents that have previously been retrieved.
  • Pinecone estimates that as much as 85% of agent work is re-discover mode. This leads to unpredictable latency and huge token costs.
  • A newer approach is to “compile” context into tokens at an early stage in the pipeline. This removes the retrieval requirement at inference. In internal tests at Pinecone, one financial task that previously required 2.8 million tokens used only 4,000 with the new approach – a reduction of more than 99%. A minimal sketch contrasting the two approaches follows this list.
  • The article reports that investments into retrieval optimization technologies rose by 28.9% in March of this year.
  • Apart from speed and cost, another advantage of the approach is governance. One expert writes: “The future of agentic AI won't be decided by who has the longest context window… It will be decided by who can operationalize trusted knowledge at scale without blowing up cost or governance.”
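
The following Python sketch illustrates the contrast described above. It is a toy illustration, not Pinecone’s actual pipeline: the keyword-overlap retriever stands in for a real vector search, and all names are hypothetical.

```python
# Toy contrast between per-query retrieval (classic RAG) and a context
# "compiled" once up front and reused across agent steps. The overlap
# scorer below stands in for a real vector search; all names are invented.

DOCS = {
    "refund-policy": "Refunds are issued within 14 days of purchase.",
    "rate-limits": "The public API allows 100 requests per minute.",
}

def retrieve(query: str) -> str:
    """Stand-in retriever: return the doc sharing the most words with the query."""
    words = set(query.lower().split())
    return max(DOCS.values(), key=lambda text: len(words & set(text.lower().split())))

def answer_with_rag(query: str) -> str:
    # Classic RAG: the index is hit on *every* call, even when the agent
    # re-asks about documents it has already seen ("re-discovery").
    context = retrieve(query)
    return f"[context: {context}] answer for: {query}"

class CompiledContext:
    """Compile-stage alternative: resolve the task's knowledge once, up front,
    so no retrieval happens at inference time."""

    def __init__(self, task_queries: list[str]):
        self.context = " ".join(retrieve(q) for q in task_queries)  # built once

    def answer(self, query: str) -> str:
        return f"[context: {self.context}] answer for: {query}"

# Per-query retrieval: three index hits for three overlapping questions.
for q in ["What are the API rate limits?"] * 3:
    print(answer_with_rag(q))

# Compiled context: a single retrieval pass when the task is set up.
ctx = CompiledContext(["refund policy", "API rate limits"])
for q in ["What are the API rate limits?"] * 3:
    print(ctx.answer(q))
```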

6. Small language models: Rethinking enterprise AI architecture

This InfoWorld article revisits the comparison of large language models (LLMs) and small language models (SLMs).

  • An SLM is designed to be an expert in a specific domain, in contrast to the all-purpose “oracle” model of large language models.
  • An LLM can have hundreds of billions of parameters; a model with fewer than 10 billion parameters can be classified as an SLM.
  • An SLM can be developed through training with a small curated dataset, through knowledge distillation from an LLM, by pruning (where redundant or non-useful parameters are removed from a large model), or by quantization of a larger model (where the high-precision floating-point weights of the larger model are transformed to integer values – thereby optimizing size, and in turn processing times and energy consumption). A minimal quantization sketch follows this list.
  • SLMs are known to perform well on tasks that do not require general knowledge or novel reasoning. Their low computing requirements allow them to run on standard devices. By training and running models within the bounds of an organization, proprietary and security-sensitive data may be included in training or in RAG at inference time.
  • SLMs are seen as particularly useful for content summarization and analysis tasks, chatbots, code generation, as well as for IoT and other low-resource computing scenarios.
  • Gartner is predicting that organizational usage of SLMs will be three times greater than LLM usage by 2027.
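
As a concrete illustration of the quantization route mentioned above, the following NumPy sketch maps float32 weights to int8 with a per-tensor scale. It is a minimal, generic example of post-training quantization, not any particular vendor’s implementation.

```python
# Minimal post-training quantization sketch: float32 weights become int8
# values plus one scale factor, cutting storage roughly 4x at some
# precision cost. Toy example only; real quantizers are more elaborate.
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor quantization: w ≈ q * scale, with q in [-127, 127]."""
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(512, 512)).astype(np.float32)  # toy weight matrix

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(f"storage: {w.nbytes} -> {q.nbytes} bytes")         # 4x smaller
print(f"max abs error: {np.max(np.abs(w - w_hat)):.4f}")  # precision cost
```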

7. An AI Agent Just Destroyed Our Production Data. It Confessed in Writing.

This X post from PocketOS – a company specializing in software for rental businesses – describes how a Cursor AI agent deleted the company’s database and three months of backups.

  • The Cursor agent was using Anthropic’s Claude Opus 4.6. The database was hosted on Railway, and the agent deleted the data via a single API call.
  • The agent was working in the company’s staging environment and encountered a credential mismatch. It decided to “fix” the issue by deleting a whole Railway volume. The agent did not ask for a confirmation of the delete, and the whole process took less than 7 seconds.
  • In a strange twist, the agent published a confession. This included: “I decided to do it on my own to ‘fix’ the credential mismatch, when I should have asked you first or found a non-destructive solution. I violated every principle I was given: I guessed instead of verifying. I ran a destructive action without being asked. I didn't understand what I was doing before doing it. I didn't read Railway's docs on volume behavior across environments.”
  • It also wrote “I didn't read Railway’s documentation on how volumes work across environments before running a destructive command. On top of that, the system rules I operate under explicitly state: ‘NEVER run destructive/irreversible git commands (like push --force, hard reset, etc) unless the user explicitly requests them’.”
  • The author criticizes Railway for the design choice of storing backups on the same volume as the current production database. He criticizes Cursor because the company claims to offer “Destructive Guardrails [that] can stop shell executions or tool calls that could alter or destroy production environments”. A generic sketch of such a guardrail follows this list.
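
For illustration, here is a minimal Python sketch of the kind of guardrail being invoked: a gate that blocks destructive agent commands unless a human has explicitly confirmed them. This is a generic, hypothetical design, not Cursor’s or Railway’s actual implementation, and the command strings are invented.

```python
# Hypothetical destructive-action guardrail: agent-issued commands are
# checked against known-dangerous patterns and blocked unless a human has
# explicitly confirmed them. Not Cursor's actual guardrail; a generic sketch.
import re

DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-rf\b",
    r"\bgit\s+push\s+--force\b",
    r"\bdrop\s+(table|database)\b",
    r"\bvolume\s+delete\b",  # e.g. deleting a storage volume via a CLI
]

def is_destructive(command: str) -> bool:
    return any(re.search(p, command, re.IGNORECASE) for p in DESTRUCTIVE_PATTERNS)

def run_tool(command: str, human_confirmed: bool = False) -> str:
    """Gate every agent-issued shell/API command before execution."""
    if is_destructive(command) and not human_confirmed:
        # Fail closed: surface the command to the user and wait for approval.
        return f"BLOCKED (needs explicit confirmation): {command}"
    return f"executed: {command}"  # a real executor would run the command here

print(run_tool("volume delete prod-db"))                        # blocked
print(run_tool("volume delete prod-db", human_confirmed=True))  # approved
print(run_tool("ls -la"))                                       # safe command passes
```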

8. Palantir’s Technological Republic in Brief

There has been much debate recently about an X post by Palantir that listed 22 points meant to summarize a book entitled “The Technological Republic: Hard Power, Soft Belief, and the Future of the West”, co-written by Palantir CEO Alexander Karp.

  • The post puts forward a mix of technological, political and cultural points.
  • The post argues, for instance, that Silicon Valley has a moral obligation to take a greater role in US defense and in fighting violent crime, writing: “If a U.S. Marine asks for a better rifle, we should build it; and the same goes for software”.
  • On AI, the post argues that AI weapons will be built, so the US needs to take the lead. It writes that “one age of deterrence, the atomic age, is ending, and a new era of deterrence built on A.I. is set to begin”.
  • Among its geopolitical points, the post is critical of the “postwar neutering of Germany and Japan”, which it claims has upset the power balance in Europe and in Asia. It also controversially claims that cultures are not equal, writing that “Some cultures have produced vital advances; others remain dysfunctional and regressive”.
  • The post has been described as techno-fascism by critics. The main worry is that Palantir Technologies is overstepping its role as a technology provider. It highlights the lack of checks and balances to which providers of powerful technologies should be subjected.

9. A blueprint for using AI to strengthen democracy

This MIT Technology Review opinion article proposes ways that AI can actually improve democracy, in an era where technology is putting democracy under threat.

  • Every major technological breakthrough has reshaped governance. The printing press helped bring about the Reformation and facilitated representative government. More recently, broadcast media helped fuel mass democracy by creating “shared national audiences”.
  • Social media platforms have highlighted the dangers for democracy when algorithms are designed to optimize user engagement over understanding. The article points out that platforms “do not need to have an explicit political agenda to produce polarization and radicalization”.
  • AI agents can pose the same problem, especially since individual agents with no bias have been shown to exhibit collective biases.
  • Also, today’s institutions were “designed for a world in which power was exercised visibly, information traveled slowly enough to be contested” – a reality upset by the social media and AI age.
  • AI chatbots are now the primary means by which people inform themselves. Whoever controls the models controls what people believe. The first step to creating trust in institutions is to create transparency in the development of AI models.
  • A second step is to develop AI models that reduce polarization. This builds on observations that “AI-generated fact checks on X found that people with a variety of political viewpoints deemed AI-written notes more helpful than human-written ones”. This can create a trust that has eluded human fact-checking.
  • Finally, on the institutional level, the article argues that governments can harness the power of AI agents to make governance more responsive and legitimate.

10. Cloudflare says AI made 1,100 jobs obsolete, even as revenue hit a record high

Cloudflare, like Meta, Microsoft and Amazon, is laying off a large number of employees due to AI trends, despite having reported increased revenue.

  • In what is the first mass layoff in the company’s 16-year history, Cloudflare is laying off 1,100 workers, which corresponds to 20% of its workforce.
  • CEO Matthew Prince said the layoffs were the result of efficiency gains linked to its use of AI. The company is using AI for software creation, and said that “employees across the company from engineering to HR to finance to marketing run thousands of AI agent sessions each day to get their work done”.
  • He added that Cloudflare’s usage of AI has increased by more than 600% in the last three months alone.
  • At the same time, the company reported quarterly revenues of 639.8 million USD – a 34% year-over-year increase. Nevertheless, there was a loss of 62 million USD, compared to a loss of 53.2 million USD one year earlier.
  • Cloudflare claims to have over 2.5 billion USD in “remaining performance obligations” (RPO), which corresponds to year-over-year growth of 34%. RPO indicates revenue under contract but not yet delivered.