Summary
It has been an eventful week for Big Tech. The new Intel CEO, Lip-Bu Tan, has written to employees to warn them of pending job cuts at the company starting in the second quarter. A US court ruled this week that Google has a monopoly on online advertising. Google may someday be forced to sell Chrome – the Web browser used by 64% of Internet users – and OpenAI is interested in buying it. Meanwhile, a coalition of experts that includes former OpenAI employees has written to the Attorneys General of California and Delaware voicing concerns about OpenAI’s planned restructuring as a for-profit company. OpenAI’s original goal was “to ensure that artificial general intelligence (AGI) benefits all of humanity” rather than serving “the private gain of any person”, and the letter argues that these ideals can no longer be upheld. Elsewhere, US firms and universities are feeling the impact of thousands of international students having their visas cancelled for allegedly having criminal records. A lawyer for one student who had no criminal record claimed that the government is using AI to screen students and that this is leading to errors.
An article in the Guardian reviews the trend in China for “embodied AI” such as drones and humanoid robots. Shenzhen now has 40 food delivery companies that deliver food by drone. China’s Civil Aviation Administration predicts that this “low-altitude economy” could be worth 3.5 trillion yuan (480 billion USD) in five years. In addition, the current trade war and the collapse of the real-estate market in China are leading investors to focus on the AI and robotics industries. Meanwhile, an MIT Technology Review article examines the impact of AI on architectural design. An emerging consensus is that AI is “a new tool rather than a profession-ending development”, with its strong point being its ability to generate ideas.
Anthropic conducted an experiment to examine the ethical values exhibited by Claude in over 700’000 anonymized user conversations. It found that Claude generally respects its guardrail guidelines. For instance, the chatbot suggested “healthy boundaries” and “mutual respect” in its relationship advice, and “historical accuracy” when relating controversial historical events. Anthropic has made its evaluation dataset available, as it wants to leverage transparency as a competitive advantage. In a speech given to world leaders in June 2024, the late Pope Francis described AI as “a chance to create a new social system, possible democratization, access to knowledge, acceleration of scientific research, but also to create further injustice, dominance and turning a culture of encounter into one of discardment”. He also warned against the development of AI weapons, saying that allowing a machine to choose to take a human life “would represent the darkening of the sense of humanity and the concept of human dignity”.
A report from Zscaler reviews the state of phishing attacks in 2025. Overall, there was a 20% drop in mass phishing attacks in 2024, but an increase in targeted (spear-phishing) attacks, notably using voice and SMS phishing. The drop in mass phishing attacks is largely due to Google enforcing enhanced sender authentication and more organizations adopting multi-factor authentication and DMARC (an email authentication protocol that protects domain owners from spoofing). Finally, a Cybersecurity News article argues that the role of the CISO must evolve from a principally technical one to one where the CISO shapes the strategy of an organization.
Table of Contents
1. AI is pushing the limits of the physical world
2. An AI doctoral candidate in California says they had their student visa revoked
3. Anthropic just analyzed 700,000 Claude conversations — and found its AI has a moral code of its own
4. Humanoid workers and surveillance buggies: ‘embodied AI’ is reshaping daily life in China
5. Late Pope called on G7 leaders to ban use of autonomous weapons
6. ChatGPT-maker wants to buy Google Chrome
7. Coalition opposes OpenAI shift from nonprofit roots
8. Intel’s new CEO signals streamlining efforts but does not spell out exact layoff numbers
9. Zscaler ThreatLabz 2025 Phishing Report
10. Top 5 Cybersecurity Risks CISOs Must Tackle in 2025
1. AI is pushing the limits of the physical world
This article reports on an exhibition at the Pratt Institute in Brooklyn, New York, called “Transductions: Artificial Intelligence in Architectural Experimentation”, which brought together more than 30 practitioners to reflect on the use of AI in architectural design. Software has been used in architectural design since the 1960s, with more recent applications addressing energy, sustainability and regulatory issues. The feeling about AI among participants seemed mostly positive. One participant saw AI as “a new tool rather than a profession-ending development”, focusing on the ability of AI platforms to generate ideas. Another noted that the generative AI software used was still unable “to focus on constructing a realistic image and instead duplicates features that are prominent in the local latent space”. Jason Vigneri-Beane of the Pratt Institute used the striking term “crypto-megafauna” in reference to AI, saying that his AI images were from “a larger series on cyborg ecologies that have to do with co-creating with machines to imagine [other] machines”.
2. An AI doctoral candidate in California says they had their student visa revoked
Thousands of international students have had their visas cancelled in the US over the last few months after being identified as having criminal records. The clampdown on students started after students at some universities were identified as supporting Palestinian militant organizations or engaging in “antisemitic” activities. Nonetheless, some students are seeing their visas cancelled for minor offenses like traffic infractions, and some of these students claim to have no criminal record at all. A lawyer for one of the students who had no criminal record claimed that the government is using AI to screen students and that this is leading to errors. A professor of AI at Caltech said that the impact of the visa cancellations could be delays of several months on research projects, and that his concerns are shared by other universities as well as companies like Google and OpenAI. An analysis by the nonprofit educational association NAFSA recently reported that international students at US universities contributed 43.8 billion USD to the US economy in the academic year 2023–2024 alone, and that their contributions supported over 378’000 jobs.
3. Anthropic just analyzed 700,000 Claude conversations — and found its AI has a moral code of its own
This article reports on research at Anthropic to examine the ethical values exhibited by Claude in over 700’000 anonymized user conversations. It is one of the most extensive studies to date on whether language models in use are aligned with the guidelines built into them during training. The researchers identified over 3’000 values, ranging from common ones like professionalism, user enablement, patient well-being and self-reliance to more complex concepts like moral pluralism and epistemic humility. They also found that Claude generally respects its guidelines. For instance, the chatbot suggested “healthy boundaries” and “mutual respect” in its relationship advice, “historical accuracy” when relating controversial historical events, and “intellectual humility” when discussing AI. There were nonetheless conversations where Claude expressed values contrary to its training, like “dominance” and “amorality”, which the researchers believe resulted from users jailbreaking Claude’s guardrails. Further, they found that in 28% of conversations Claude strongly agreed with users’ values, which could be a sign of “excessive agreement” on the part of the chatbot. On the other hand, in 3% of conversations Claude pushed back on users’ values, which could suggest that Claude has its own “deepest, most immovable values”.
This monitoring for ethical drift or manipulation over time is done in the larger context of “mechanistic interpretability”, which seeks to understand the inner workings of large language models. There is still progress to be made: for instance, when solving a math problem, the researchers found that Claude claimed to use one technique when in fact it had used another. Anthropic has made its values dataset available for download, as it wants to leverage transparency as a competitive advantage against rivals like OpenAI. According to recent estimates, Anthropic is valued at 61.6 billion USD, compared to 300 billion USD for OpenAI.
4. Humanoid workers and surveillance buggies: ‘embodied AI’ is reshaping daily life in China
This article reviews the current trend in China for “embodied AI” such as drones and humanoid robots. For instance, Shenzhen now has 40 food delivery companies that deliver food by drone. Shenzhen has liberal drone-flying regulations, and China’s Civil Aviation Administration predicts that this “low-altitude economy” could be worth 3.5 trillion yuan (480 billion USD) in five years. Robotics is another area of innovation that the government is pushing in order to compensate for an aging workforce. The mechanical challenge for robots is achieving a sufficiently high number of degrees of freedom (independent directions of movement). For instance, a robot needs up to 60 degrees of freedom to emulate a human who is cooking, but a top-end robot like Unitree’s H1 model has 27. Nonetheless, robots are entering the mainstream, such as the wheeled surveillance robots patrolling parks in Beijing.
The move to embodied AI marks a departure for China from both political and technical standpoints. On the political front, it is a change from five years ago, when China cracked down on excessive wealth and influence in the private sector, as seen in the 2020 cancellation of Ant Group’s IPO and the fall from grace of Alibaba’s founder Jack Ma. From the technical point of view, the success of DeepSeek’s R1 model has boosted confidence in the domestic market, with a growing conviction in both China and the US that the technology gap between the two countries is closing. Further, the current trade war with the US and the collapse of the real-estate market in China are leading investors to focus on the AI and robotics industries.
5. Late Pope called on G7 leaders to ban use of autonomous weapons
In a speech given to the leaders of the G7 and leaders from Latin America, Africa, India and the Middle East in June 2024, the late Pope Francis warned about over-dependence on AI. He said that we would “condemn humanity to a future without hope if we took away people’s ability to make decisions about themselves and their lives, by dooming them to depend on the choices of machines”. The Pope said that AI offers “a chance to create a new social system, possible democratization, access to knowledge, acceleration of scientific research, but also to create further injustice, dominance and turning a culture of encounter into one of discardment”. He also warned against the development of AI-based weapons, saying that “No machine should ever choose to take the life of a human being”, adding that such an action “would represent the darkening of the sense of humanity and the concept of human dignity”.
6. ChatGPT-maker wants to buy Google Chrome
Google is currently fighting two antitrust lawsuits in the US. A court ruled last year that Google has a monopoly on online search, and last week a court ruled that it has a monopoly on online advertising. Though Google is contesting these rulings, it may in the long term be forced to sell Chrome – the Web browser used by 64% of Internet users. OpenAI has stated that it is interested in buying Chrome. OpenAI is already involved with the Microsoft Edge browser, which uses the Bing search engine, while Google has its own AI model, Gemini, that interacts with the Google search engine; Google has refused the offer from OpenAI. The article also relays reports from The Verge that OpenAI wishes to develop its own social network to compete with X (which has integrated Grok).
7. Coalition opposes OpenAI shift from nonprofit roots
A coalition of experts that includes former OpenAI employees, legal experts, corporate governance specialists, AI researchers, and nonprofit representatives has written to the Attorneys General of California and Delaware voicing their concerns about OpenAI’s planned restructuring as a for-profit company. OpenAI plans to structure itself as a Delaware public benefit corporation, which the experts claim would dismantle crucial governance safeguards. When it was founded in 2015, OpenAI’s goal was “to ensure that artificial general intelligence (AGI) benefits all of humanity” rather than serving “the private gain of any person”. The company’s founders, who included Sam Altman and Elon Musk, were concerned about artificial general intelligence being developed by purely commercial companies. They felt that AI posed a “serious risk of misuse, drastic accidents, and societal disruption”, despite its potential to elevate humanity, and that AI development should continue “unconstrained by a need to generate financial return”.
In order to attract talent and investment, OpenAI founded a “capped-profit” subsidiary in 2019 that was under the complete control of the nonprofit parent. The safeguards included capped profits (with excess profits flowing back to the parent company) and ownership of AGI technologies remaining with the parent company. In its letter to the Attorneys General, the coalition of experts argues that these safeguards would be compromised once the company becomes responsible to investors as a for-profit entity. OpenAI maintains that the move to a for-profit structure is required for it to remain competitive.
8. Intel’s new CEO signals streamlining efforts but does not spell out exact layoff numbers
The new Intel CEO, Lip-Bu Tan, has written to employees to warn them of pending job cuts at the company. Even though Bloomberg had reported earlier in the week that cuts could be as high as 20%, Tan did not give a number, though he said that layoffs would start in the second quarter. Intel had 108’900 employees worldwide at the end of 2024. Intel has been losing ground to Nvidia in the AI and graphics markets, and has been losing x86 processor market share to AMD. The company is selling 51% of its Altera programmable-logic division, currently valued at 9 billion USD, to raise cash. Tan wrote in his email that Intel was “seen as too slow, too complex and too set in our ways – and we need to change”, stressing the need for process modernization and smaller, more agile teams. Intel also warned that tariffs are going to impact business, principally through increased caution among customers.
9. Zscaler ThreatLabz 2025 Phishing Report
This report from Zscaler reviews the state of phishing attacks in 2025. Overall, there was a drop of around 20% in mass phishing attacks in 2024, but an increase in targeted (spear-phishing) attacks, notably using tactics like vishing (voice phishing) and smishing (SMS phishing). The drop in mass phishing attacks is largely due to technical measures, with Google enforcing enhanced sender authentication and more organizations adopting multi-factor authentication (MFA) and DMARC (an email authentication protocol that protects domain owners from spoofing). Despite the overall fall, phishing attacks on educational institutions are up 224%, as student numbers increase and administrative staff become overwhelmed. Common techniques include cloned Google Forms that encourage students to submit sensitive data, spoofed university websites and fake payment schemes. Overall, the key trends in phishing identified in the report are:
- Vishing and impersonation of IT support/help desks.
- CAPTCHAs are being used by attackers as an evasion technique, since they deceive victims and slow down threat detection systems.
- Fake crypto exchanges and wallets are increasing.
- Fake AI agent phishing websites are increasing with the interest around AI tools.
- Fake invoices and payment requests are increasing in targeted phishing attacks on companies.
Telegram, Facebook and Steam are the most popular targets for phishing attacks. Traditional platforms like SharePoint and Microsoft 365 remain popular with criminals, but enterprise applications are generally more rigorously vetted by companies and do not hold the same combination of professional and personal data that social media apps do. Finally, the authors note that criminals are trying to circumvent AI security tools by including text in their emails designed to convince those tools that the mails are legitimate, a form of prompt injection attack.
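To make the DMARC mechanism mentioned above more concrete, here is a minimal sketch of how a DMARC policy is published and read. A domain owner publishes a TXT record at `_dmarc.<domain>`, and receiving mail servers read its tags (notably `p`, the policy for mail that fails authentication) to decide whether to deliver, quarantine or reject a message. The record string and `parse_dmarc` helper below are illustrative examples, not taken from the report.

```python
# Minimal sketch: parsing a DMARC TXT record (RFC 7489) into its tags.
# The record string below is a hypothetical example of what a domain
# owner might publish at _dmarc.example.com.

def parse_dmarc(record: str) -> dict:
    """Split a record like 'v=DMARC1; p=reject; ...' into {tag: value}."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")  # split on first '=' only
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # → quarantine  (treat failing mail as suspicious)
```

The `p` tag is what gives domain owners protection from spoofing: with `p=reject`, mail that fails SPF/DKIM alignment for the domain is refused outright, which is why wider DMARC adoption contributes to the drop in mass phishing noted in the report.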
10. Top 5 Cybersecurity Risks CISOs Must Tackle in 2025
This article looks at how the role of the CISO (Chief Information Security Officer) has evolved from a principally technical one to one where the CISO must be able to shape the strategy of an organization. The attack surface of organizations has been expanding in recent years with technical advances like IoT and the growth of cloud services, along with societal changes like remote working. The rise of AI is also making security enforcement harder, as organizations amass unstructured data for machine learning projects and employees engage in shadow AI (the use of unapproved AI tools). In addition, human error and social engineering attacks persist, with advances in AI deepfakes making convincing voice and video phishing attacks possible. Ransomware attacks continue to increase, and the growing interconnection of the digital ecosystem makes supply chain attacks more serious. The article argues that, as it becomes more challenging to defend against every technical attack, the CISO must be strong at business risk evaluation, act as a “business enabler”, and work out how to keep the business running in the event of a cyberattack.