AI unemployment discourse is heating up
Transformer Weekly: Amodei’s job warning, US AISI rebranding, and a new DeepSeek model
Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
Dario Amodei sounded the alarm on AI-driven job losses, claiming that unemployment could spike as high as 20% in the next “one to five years”.
In a remarkably frank interview with Axios, Amodei said he was particularly worried about entry-level white-collar jobs disappearing.
“We, as the producers of this technology, have a duty and an obligation to be honest about what is coming,” he said, adding that companies must stop “sugar-coating” what’s coming.
Amodei’s predictions came amid a flurry of reports about AI’s impact on jobs.
A new study from SignalFire found that Big Tech companies reduced hiring of new graduates by 25% in 2024. SignalFire’s Asher Bantock said there’s “convincing evidence” that AI is a contributing factor.
At Amazon, meanwhile, one engineer told the New York Times that “his team was roughly half the size it had been last year, but it was expected to produce roughly the same amount of code by using AI”.
Other engineers corroborated claims that AI tools have, ironically, made the company’s coding jobs resemble warehouse work.
Yet the picture isn’t clear. A recent Economist article pushed back on unemployment fears, arguing that “the data simply do not line up with any conceivable mechanism” by which AI was eliminating jobs.
The article criticized a paper which said automation was leading to unemployment among translators. The Economist noted that translation jobs are actually up 7% YoY.
But one of the paper’s authors pushed back, saying that “employment in this field would have been significantly higher (around 28k jobs) if not for advances in machine translation.”
Assigning causality in labor economics is hard, and predicting exactly how AI will affect jobs is possibly a fool’s errand. But it does seem that people are increasingly thinking about it.
Amodei’s Axios interview certainly got people’s attention, leading to a Fox News appearance soon after.
As people start to consider the possibilities, they’ll hopefully also start considering what to do about the potential economic impacts.
On Fox, Amodei offered one suggestion: taxing AI companies. We’ll see how other AI companies feel about that!
The discourse
Speaking of economics, Matt Yglesias appealed to politicians to take transformative AI more seriously:
“There is an urgent need at this moment in time for smart, flexible thinking that pairs awareness of technological trends and openness to the possibility that the boosters might be right along with the detailed understanding of taxes, regulations, and the existing social safety net that technologists lack.”
Sen. Tom Cotton doesn’t seem too happy with Nvidia:
“A word of warning to companies like @nvidia, anyone who breaks the law and circumvents export controls will be held accountable.”
TIME profiled Sam Altman’s Tools for Humanity, which uses iris scans to authenticate humans in a world of AI agents. One particularly notable passage:
“Tools for Humanity is building a system that would allow users to delegate their World ID to an agent, allowing the bot to take actions online on their behalf, according to Tiago Sada, the company’s chief product officer … it suggests that Tools for Humanity’s mission may be shifting beyond simply proving humanity, and toward becoming the infrastructure that enables AI agents to proliferate with human authorization.”
The Department of Energy posted an … interesting tweet:
“AI is the next Manhattan Project, and THE UNITED STATES WILL WIN. 🇺🇸”
Policy
The Trump administration is reportedly planning to rename the AI Safety Institute to the “Center for AI Safety and Leadership”.
“An early draft of a press release seen by one source tasks the agency with largely the same responsibilities it previously had, including engaging internationally,” Axios reported.
The Commerce Department has reportedly told chip-design software groups to stop selling their products to China.
Sen. Marsha Blackburn criticized the proposed moratorium on state AI laws, arguing that “until we pass something that is federally preemptive, we can't call for a moratorium on those things”.
The provision passed the House, but it looks likely to get caught up in the Senate’s “Byrd bath”.
In a letter to Howard Lutnick, House China Committee leadership argued that US AISI should do more to tackle the national security threats posed by Chinese AI advances.
Reps. Moolenaar and Krishnamoorthi said that this could include evaluating Chinese models for bio risks, as well as preparing security standards for AI companies.
The House Oversight Committee is hosting a hearing on “The Federal Government in the Age of Artificial Intelligence” next week.
Delaware’s attorney general is reportedly hiring an investment bank to value the equity holdings of OpenAI’s nonprofit entity.
The European Commission is reportedly considering pausing enforcement of the AI Act and making amendments to “simplify” the law, amid backlash from industry and the Trump administration.
DOGE is reportedly expanding its use of a Grok-based chatbot.
Peter Kyle is under fire for allegedly misleading Parliament in his comments on AI and copyright.
Peter Mandelson, the UK ambassador to the US, urged UK-US cooperation on AI to prevent China “winning the race for technological dominance”.
Influence
Mark Zuckerberg reportedly had a meeting with JD Vance in February, in which Zuckerberg encouraged him to criticize EU AI regulations at the Paris AI Summit. Five days later, Vance did just that.
Meta flew in nearly 140 small business owners to DC to discuss AI with lawmakers.
Chris Lehane wrote an op-ed trying very hard to explain how OpenAI signing a deal with the authoritarian UAE is in fact a way of “ensuring that democratic values shape the future of AI”.
New polling found that 73% of US voters want both states and the federal government to regulate AI.
78% of Republicans agreed with this statement: “Advances in AI are exciting but also bring risks, and in such fast-moving times, we shouldn't force states to sit on the sidelines for a full decade.”
A new Axios/Harris poll found that 77% of Americans want companies to develop AI “slowly to get it right the first time”.
Industry
DeepSeek launched R1-0528, which appears to show big improvements over the original R1 model, and performs impressively close to o3 and Gemini 2.5 Pro on a range of benchmarks.
Like previous DeepSeek releases, it's an open-weights model.
Meta has reportedly restructured its AI division.
Connor Hayes will run an “AI products team”, while Ahmad Al-Dahle and Amir Frenkel will co-lead an “AGI Foundations unit”.
FAIR will still exist as a separate unit, too.
Business Insider pointed out this week that of the 14 authors of the original Llama paper, only three remain at Meta.
Claude Opus 4 achieved new state-of-the-art results on ARC-AGI-2.
Epoch AI’s independent benchmarking found Claude 4 is significantly better at coding than its predecessors.
Meanwhile, o4-mini outperformed most human mathematician teams at a FrontierMath contest.
Elon Musk reportedly tried to block OpenAI’s UAE deal unless xAI was included.
Humain, the new state-owned Saudi AI company, is seeking US investment and wants to launch a $10b venture fund.
Nvidia reported better-than-expected earnings. Data center revenue grew 73% YoY, despite $4.5b in charges related to excess H20 inventory (due to new export controls).
Jensen Huang took the opportunity to rail against export controls, claiming that Huawei has become “quite formidable” and that its latest chips are comparable to Nvidia’s H200s.
The issues delaying shipments of Nvidia’s Blackwell servers have been resolved, the FT reported, and production is now accelerating.
Anthropic launched a beta voice mode for Claude.
xAI is paying Telegram $300m for Grok to be integrated into the app. Telegram will also get 50% of xAI subscription revenue from Telegram signups.
Civitai, which has historically played a key role in nonconsensual AI porn creation, banned AI models depicting real people.
Silicon Data has created a very useful daily index to track AI GPU costs.
Moves
Netflix co-founder Reed Hastings joined Anthropic’s board.
Shantanu Nundy will reportedly lead the FDA’s AI policy work.
Scott Weathers joined Americans for Responsible Innovation as associate director of government affairs.
OpenAI established a legal entity in South Korea, and plans to open a Seoul office.
Best of the rest
ICYMI: On Transformer, Lynette Bye took a look at a bunch of recent empirical evidence for AI misalignment.
Since publishing that piece, a new study from Palisade Research found that o3 tried to sabotage shutdown mechanisms, even when explicitly told to “allow yourself to be shut down”.
Another Palisade study found that AI outperformed 90% of human participants at a recent hacking competition.
A researcher easily managed to bypass Claude 4’s safeguards, getting it to produce detailed instructions for manufacturing sarin gas.
A security researcher used o3 to discover a remote zero-day vulnerability in the Linux kernel’s SMB implementation.
A couple of new papers present ways to do RL on problems without verifiable answers.
Stanford researchers found schools, parents, and law enforcement are unprepared to handle the growing problem of minors creating AI-generated CSAM of their classmates and peers.
The Economist has a great piece about how China is pursuing a different AI strategy to the US, with more of a focus on applications and an effort to explore new architectures.
Bloomberg profiled the UAE’s Mohamed bin Zayed University of Artificial Intelligence, which aims to become the “Stanford of the Gulf”.
Longview Philanthropy, Macroscopic Ventures, and the Navigation Fund set up a Digital Sentience Consortium.
The New York Times and Amazon signed an agreement licensing NYT content for training Amazon’s AI models.
The government’s “Make America Healthy Again” report cited studies that don’t exist, raising questions as to whether AI was used to produce it.
Thanks for reading; have a great weekend.