Risk of AI-Inequality When AI Bubble Bursts

Google, Meta, LinkedIn, TikTok Again Under EU Scrutiny

Posted on December 13th, 2025

Summary


Walt Disney has announced a 1 billion USD equity investment in OpenAI in which the AI company will be allowed to use images of Walt Disney characters on its Sora 2 video generation platform. At the same time, Disney has sent a cease-and-desist letter to Google claiming copyright infringement on a massive scale on the Gemini AI platform.

The MIT Technology Review reports on a conversation between tech journalists on the evolution of AI over the next five years. The expectation of an industrial-revolution-sized change within five years is rejected: even if technological change is coming fast, humans need far longer to accept and adjust to new technologies. The journalists expect the AI bubble to burst soon, which would lead to a spike in AI operating costs. This could lead to a world of AI haves and have-nots working with an Internet full of AI slop. OpenAI seems too big to fail due to a chain of interdependencies within Silicon Valley.

On societal issues, research in Nature reports that, in recent elections, an AI chatbot can be more effective at shifting voter intentions than political advertisements. The results contradict a long-standing belief in political science that partisan voters do not listen to facts and figures that contradict their beliefs. The danger is that the “facts” presented by the chatbots may be inaccurate or intentionally misleading. Meanwhile in the US, a telecom company has trained an AI model on seven years of phone and video calls, text messages, and emails made by inmates of federal prisons in Texas. The company claims the AI model improves detection of crimes, but advocacy groups argue that it violates prisoners’ rights.

On the regulatory front, the EU has launched an investigation into Google over its AI Overviews feature, which publishes content from websites without the explicit consent of website owners and without any compensation for those websites. Meta is also the subject of a probe over its use of AI in the WhatsApp messaging platform. The Italian antitrust commission argued that Meta’s move to integrate AI “may limit production, market access or technical developments in the AI Chatbot services market”. LinkedIn and TikTok are being investigated by the Irish media commission over the use of “deceptive interface designs” that make it difficult for users to report illegal content, thereby violating a provision of the recent Digital Services Act. Meanwhile, the EU has introduced a whistleblower tool related to the AI Act, and has proposed some legal simplifications in the context of its Digital Omnibus initiative. The Digital Omnibus is an initiative to create a single digital rulebook that simplifies and harmonizes a range of digital legislation enacted over recent years, including the Digital Services Act, the GDPR and the AI Act.

According to a Limitless podcast interview with Google’s VP of Product for Search, Google’s strategy for AI is driven by a belief that deep personalization is the key to AI success. Google has been integrating Gemini into Workspace apps like Gmail, Calendar and Drive. Google’s primary advantage over other AI tech companies is its access to training data. Finally, the US Department of Commerce has approved the export of high-grade Nvidia H200 chips to China. The effect of the approval on Nvidia’s sales remains to be seen, as China has banned its domestic companies from buying Nvidia chips, leaving the market to the Chinese companies Alibaba and Huawei.

1. The EU AI Act Newsletter #91: Whistleblower Tool Launch

The European Commission has introduced a whistleblower tool related to its 2024 AI Act, and has proposed some legal simplifications in the context of its Digital Omnibus initiative.

  • The whistleblower channel is a confidential and secure service for reporting breaches of the AI Act to offices in Brussels. It is concerned with uses of AI that endanger fundamental citizen rights, health and safety, or public trust.
  • The EU’s Digital Omnibus is an initiative to create a single digital rulebook that simplifies and harmonizes a range of digital legislation enacted over recent years, including the Digital Services Act, the GDPR and the AI Act.
  • Modifications proposed under the Digital Omnibus include extending by six months the grace period before AI systems developed prior to 2024 must comply with the Act, and delaying application of the law until all implementation standards have been published.
  • One key proposed amendment would extend simplifications designed for SMEs to mid-sized companies, an estimated annual saving of 225 million EUR on technical documentation.
  • European politicians and companies appear divided over delays to the AI Act’s implementation. Some favor delay in order to encourage innovation, while others fear capitulating to US pressure and compromising Europe’s regulatory leadership.

2. One of Google’s biggest AI advantages is what it already knows about you

This TechCrunch article analyzes an interview with Robby Stein, VP of Product for Google Search, on the Limitless podcast.

  • Google’s strategy for AI is driven by a belief that deep personalization is the key to AI success. Google has been integrating Gemini into Workspace apps like Gmail, Calendar and Drive.
  • One application of personalized AI is product recommendations, which Google says is one of the top types of query made to its Gemini AI system.
  • Google’s advantage over its AI competitors is its access to data – including emails, documents, photos, location history, and browsing behavior.
  • Gemini Deep Research has begun processing personal data from Google’s application suite.

3. An AI model trained on prison phone calls now looks for planned crimes in those calls

In the US, a telecom company has trained an AI model on seven years of phone and video calls, text messages, and emails made by inmates of federal prisons in Texas.

  • The company, Securus Technologies, claims that the trained AI is capable of detecting repeat offenses as well as in-prison racketeering and contraband crimes. It argues the AI allows authorities “to detect and understand when crimes are being thought about or contemplated, so that you’re catching it much earlier in the cycle.”
  • The prisoner rights advocacy group Worth Rises has criticized the practice, saying that though prisoners are aware that their conversations are recorded, this consent is “coercive” since “there’s literally no other way you can communicate with your family.”
  • The group added that “since inmates in the vast majority of states pay for these calls, ... not only are you not compensating them for the use of their data, but you’re actually charging them while collecting their data.”
  • The Federal Communications Commission issued a reform in 2024 that forbade telecom companies from passing the costs of recording and surveilling calls on to inmates. However, this reform was rolled back this year by the new FCC chair, who wrote that AI surveillance systems “lead to broader adoption of beneficial public safety tools that include advanced AI and machine learning.”

4. Media watchdog launches TikTok and LinkedIn investigations over illegal content reporting concerns

The Irish media commission, whose responsibilities include monitoring compliance with the EU’s Digital Services Act, is leading an investigation into both LinkedIn and TikTok over the use of “deceptive interface designs”.

  • Under the Digital Services Act, social media platforms and related websites must allow users to report content that they suspect is illegal.
  • The head of the Irish commission wrote that “There is reason to suspect that their illegal content reporting mechanisms are not easy to access or user-friendly, do not allow people to report child sexual abuse material anonymously, as required by the Digital Services Act, and that the design of their interfaces may deter people from reporting content as illegal.”
  • A key concern of the commission is deceptive interface designs, also known as “dark patterns”, where users believe they are reporting content as illegal but are actually simply reporting that some content or user has breached the platform’s terms and conditions.
  • Breaches of the Digital Services Act can lead to fines of up to 6% of a company’s annual turnover.

5. EU to launch antitrust probe into Meta over use of AI in WhatsApp

The European Commission is threatening an antitrust investigation into Meta over its use of AI features in the WhatsApp messaging platform.

  • The Italian antitrust commission already launched an investigation into Meta’s move to integrate AI because it “may limit production, market access or technical developments in the AI Chatbot services market”.
  • The announcement in Brussels follows a probe into Alphabet (Google’s parent company) under the Digital Markets Act over the way it ranks news outlets in its search results.
  • The EU is under pressure from the US over regulations that Washington sees as constraining US tech companies on the continent.
  • Big Tech is also under pressure in the US over monopolistic practices. Meta recently won an antitrust case brought by the US Federal Trade Commission that sought to make the company reverse its acquisition of WhatsApp and photo app Instagram.

6. Department of Commerce approves Nvidia H200 chip exports to China

The US Department of Commerce has approved the export of high-grade Nvidia H200 chips to China.

  • Under the decision, only chips that are more than 18 months old can be exported, and the US government will take 25% of the sales.
  • The effect of the approval on Nvidia’s sales remains to be seen. In September, China banned its domestic companies from buying Nvidia chips, leaving the market to the Chinese companies Alibaba and Huawei.
  • The situation also leaves political uncertainty in the US, as Senators recently introduced the Secure and Feasible Exports (SAFE) Chips Act, which would require the Department of Commerce to deny any export license for advanced AI chips to China for 30 months. It is not yet clear when a vote will take place on this legislation.
  • President Trump appears to have become increasingly favorable to chip exports to China over the past six months, insisting in trade agreements that the US government receive at least 15% of sales revenue.

7. EU launches antitrust probe into Google’s AI search tools

The EU has launched another investigation into Google, this time over its AI Overviews feature, which publishes content from websites without the explicit consent of website owners and without any compensation for those websites.

  • Another concern is whether website owners can refuse inclusion in Google’s AI Overviews without being disadvantaged in Google Search.
  • For instance, the company prohibits uploads of videos if the owner does not allow Google to use the video’s data.
  • Google is also under investigation for not permitting other AI companies to use YouTube videos to train their models.
  • There are many lawsuits underway, including one brought against the AI search tool Perplexity by news companies such as The New York Times, The Chicago Tribune, News Corp, and the New York Post. The difference in this case, according to the article, is that these companies are looking for leverage to strike a compensation deal with the AI companies for their content.

8. The State of AI: A vision of the world in 2030

This MIT Technology Review article reports on a conversation between the journal’s Senior AI editor Will Douglas Heaven and the Financial Times’ tech correspondent Tim Bradshaw on the evolution of AI over the next five years.

  • One of the journalists cites two recent visions of an AI future. The AI Futures Project, led by a former OpenAI researcher, sees AI’s impact in the next decade as being greater than the impact of the industrial revolution. This view is supported by a recent Microsoft report, which states that “In less than three years, more than 1.2 billion people have used AI tools, a rate of adoption faster than the internet, the personal computer, or even the smartphone.”
  • The alternative view is put forward by researchers on the Normal Technology team at Princeton University, who argue that even if technological change is coming fast, humans need far longer to accept and adjust to new technologies.
  • With the performance improvements of large AI models appearing to plateau since ChatGPT emerged, the differentiator may be applications of AI, as evidenced by the new agent browsers and the range of frameworks for agentic applications that appeared this year.
  • For the Financial Times correspondent, the AI bubble will burst soon. Many AI developers will go out of business overnight and “learn the hard way that you can’t sell services that cost 1 USD for 50 cents without a firehose of VC funding”.
  • Having spent 500 billion USD, investors are increasingly pushing OpenAI for returns. This makes the company’s original promise of developing an AI to “advance digital intelligence in the way that is most likely to benefit humanity as a whole” untenable. That said, OpenAI seems too big to fail due to a chain of interdependencies within Silicon Valley.
  • A burst of the AI bubble would force the price of AI to increase, leading to a world of AI haves and have-nots who must work with an Internet full of AI slop. For instance, though AI robots seem affordable to build, the cost of training and operating these robots will turn them into luxury items.
  • The current AI boom could be giving Silicon Valley companies tunnel vision, in that they are developing models without incentive to look at leaner designs or different chipsets. The journalists argue that this increases the likelihood that the next wave of AI innovation will come from outside of the US.

9. AI chatbots can sway voters better than political advertisements

This article reports on research that suggests that, in recent elections, an AI chatbot can be more effective at shifting voter intentions than political advertisements.

  • Researchers writing in Nature measured the impact of chatbots in the 2024 US election, the 2025 Canadian federal election, and the 2025 Polish presidential election. Several AI models were used for the chatbots, including OpenAI and DeepSeek models.
  • In the US, the research found that 3.9% of Donald Trump supporters who chatted with an AI chatbot arguing in favor of Kamala Harris (the Democratic candidate) moved towards Harris, while 2.3% of Harris supporters moved towards Trump after talking with a Trump-supporting chatbot.
  • The impact of chatbots seemed larger in Canada and Poland, where voter shifts of up to 10% were observed.
  • The results contradict a long-standing belief in political science that partisan voters do not listen to facts and figures that contradict their beliefs.
  • The danger is that the “facts” presented by the chatbots may be inaccurate. In the study, it was observed that chatbots advocating right-leaning candidates made more false statements than those of left-leaning candidates. The chatbots are generally trained on political manifestos, which often contain misleading, biased or factually incorrect statements.

10. Disney to invest $1bn in OpenAI, allowing characters in Sora video tool

Walt Disney has announced a 1 billion USD equity investment in OpenAI in which the AI company will be allowed to use images of Walt Disney characters on its Sora 2 video generation platform.