Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
The DoJ and FTC finally divvied up jurisdiction over AI antitrust: the DoJ is looking into Nvidia, while the FTC’s taking on Microsoft and OpenAI.
To start with, the FTC’s looking at whether the Microsoft-Inflection deal was designed to avoid antitrust scrutiny, reports the WSJ.
The DoJ’s Jonathan Kanter said he’s examining “monopoly choke points and the competitive landscape” in AI.
On Transformer: Leopold Aschenbrenner said OpenAI fired him, in part, for raising security concerns to the board.
In an interview on the Dwarkesh Podcast, Aschenbrenner said he was also ousted for sharing a document that OpenAI alleged contained sensitive information, a charge he denies.
Aschenbrenner said that, before he was fired, a lawyer asked him “about my views on AI progress, on AGI, the appropriate level of security for AGI, whether the government should be involved in AGI, whether I and the superalignment team were loyal to the company, and what I was up to during the OpenAI board events”.
Aschenbrenner dominated the discourse this week, and not only because of the above.
He also published a giant essay series arguing that AGI could well be here by 2027, and that superintelligence may follow not much later, framing things as a race between the US and China.
And he announced his new AGI-focused investment firm, backed by Patrick Collison, John Collison, Nat Friedman, and Daniel Gross.
One particular graph from Aschenbrenner was mocked a bunch on Twitter; elsewhere Max Read suggested his focus on China was a cynical attempt to curry favour with VCs. Others really didn’t like the race and securitisation framing.
In a timely paper, Kerry McInerney said the China race dynamic “draws on previous racialised configurations of anti-Asian sentiment”.
I liked Kelsey Piper’s analysis: that both Aschenbrenner and his critics are “wildly overconfident”.
A group of current and former OpenAI and DeepMind employees warned of the risks from AI and called for better whistleblower protections.
One of the group, ex-OpenAI employee Daniel Kokotajlo, said he’s “lost hope that [OpenAI] would act responsibly”. William Saunders, another, said safety concerns he raised there were “not adequately addressed”.
Kokotajlo said OpenAI let Microsoft use GPT-4 in India without the approval of a safety board. Microsoft initially denied this to the New York Times, but after publication it admitted Kokotajlo was telling the truth.
Bafflingly, Fast Company says the whistleblowers’ concern for AI safety could “wind up helping” OpenAI.
SB 1047 drama finally broke through to the mainstream media.
Bloomberg and the FT covered it this week, both adopting the “fight over open source” frame.
Scott Wiener told Semafor that the prospect of a Trump EO rollback motivated the bill:
“We know Donald Trump and his team [couldn’t] care less about protecting the public [from AI], so we think it’s important for California to take steps to promote innovation and for safe and responsible deployment of extremely large models.”
The bill was amended this week, as previewed in my piece last week.
In The Information, Lauren Wagner said the bill would “stifle innovation”.
Gary Marcus said people like Andrew Ng “don’t know the slightest thing about how the actual world works”.
Though not focused on SB 1047, Forbes had a big piece echoing the same “tech people are fighting” angle. Vinod Khosla and Reid Hoffman are framed as leading the “regulate” side, while Marc Andreessen, Yann LeCun and Bill Gurley shout “regulatory capture”.
The discourse
António Guterres said we need AI rules, now:
“We cannot sleepwalk into a dystopian AI future.”
Mark Cuban said Silicon Valley’s increasing love for Trump is self-interested:
“The more influence they have over AI regulation the better the opportunity to advantage themselves.”
Omar Sultan Al Olama said he wants an AI “marriage” between the UAE and US:
“When you look at the frontier technology, at the most cutting edge, that needs to be in coordination with the US players and there needs to be reassurances that are given to the US.”
Ellen Huet’s new podcast series on Sam Altman doesn’t paint a rosy picture:
“In accruing immense influence before the age of 40, he has alienated many of his former allies. What was once seen as charm is now viewed by many as duplicity—a tendency to tell people whatever will get him what he wants.”
Paris Marx pointed out how terrible Kara Swisher’s OpenAI reporting has been:
“Swisher’s narrative and outright advocacy for Altman skillfully downplayed other factors whose details started to become clearer after his swift return as CEO … it’s become very apparent that Swisher was echoing the Altman line and defending his interests.”
Former White House advisor Susan Rice is worried about China:
“They are going to want to take advantage of what we have … Whether it’s through purchasing and modifying our best open source models, or stealing our best secrets. We really do need to look at this whole spectrum of how do we stay ahead, and I worry that on the security side, we are lagging.”
New ASML CEO Christophe Fouquet said ASML will just do what it’s told:
“Our role is not to make politics; it’s not to decide what is right, what is wrong.”
Helen Toner doesn’t like the near-term vs. long-term fight in AI:
“There are many throughlines between issues we’re seeing here and now, and issues we might need to anticipate.”
Sneha Revanur of Encode Justice said young people want a say on AI:
“For humanity’s sake, we must all take back the mantle and write the story of our collective future for ourselves. And we must do it now.”
A survey this week found that 40% of Harvard students think AI extinction risk should be a global priority.
Policy
Gina Raimondo said the US AI Safety Institute (AISI) will soon test all new advanced models before deployment, and that frontier companies have agreed to give AISI access. She also said she’s worried about AI-enabled bioterrorism.
The FCC and FEC are fighting over who should regulate the use of AI in elections. The DNC, meanwhile, is struggling to get campaign committees to agree on how to use AI in their campaigns.
The Treasury Department asked for public comments on AI use in financial services.
Labour might not put AI legislation in their first King’s Speech, Politico reports.
The White House launched a push to increase power line capacity, with energy demand expected to soar because of AI.
Mistral was seemingly in talks with Microsoft while lobbying against the EU AI Act, news which will undoubtedly infuriate European policymakers.
Influence
Anthropic joined TechNet, which now counts all the frontier labs as members.
Y Combinator’s lobbying arm got a very uncritical writeup in Politico. Garry Tan and co are all in on the “little tech” framing.
Tan recently met with Sens. Schumer, Warren, Vance, “White House officials and several other lawmakers”.
Donald Trump and Jeff Zients will attend the Business Roundtable’s plenary meeting in DC next week, where a heavy tech CEO presence is expected. Biden was invited but will be at the G7 summit instead.
David Sacks held his Trump fundraiser on Thursday. Jacob Helberg and JD Vance were there; Peter Thiel, Marc Andreessen and Keith Rabois were not.
IBM met “hundreds of lawmakers” this week, Politico reported.
Andreessen Horowitz’s Martin Casado admitted that statements a16z made about AI safety to the White House, Senate, and House of Lords were inaccurate.
NOYB asked European privacy authorities to stop Meta from using people’s personal data to train its AI models.
Tech group Chamber of Progress launched a “Generate and Create” campaign to show “how AI lowers barriers for producing art” and “defend the longstanding legal principle of fair use”.
GovAI provided comments on NIST’s Draft Profile on Generative AI.
Lobbying shop J.A. Green & Co., which represents Palantir and other companies, is launching a $100m VC fund with Anzu Partners.
The WSJ’s got a giant investigation into Joshua Wright, who, it alleges, was having affairs with his students and seemingly working with Google to push a certain antitrust narrative to his former FTC colleagues.
Lots of fancy people were at the Future of Privacy Forum’s AI-focused event this week.
Industry
Apple will release a suite of “Apple Intelligence” AI features next week. Bloomberg’s got the lowdown.
The WSJ has an interesting piece on how Apple fell so far behind in AI.
An awful lot of chip news this week:
Nvidia surprise-unveiled Rubin, its next generation of chips, which will ship in 2026. Jensen Huang said the company’s now on an annual release cadence.
AMD announced its Instinct MI325X GPU, which it says outperforms the H200 on bandwidth, and the Ryzen AI 300 AI laptop chip.
Stratechery’s got an in-depth interview with Lisa Su, too.
Intel said its Gaudi 3 accelerators would cost $125k for an eight-chip kit, about half the price of H100s. It also announced its sixth-gen Xeon server processors and Lunar Lake, its new AI laptop chip.
ASML opened a test lab for its new High NA EUV machines. TSMC’s getting one later this year; Intel’s already got one.
ASML also became the second-biggest stock in Europe.
The Information reports that ByteDance has been renting H100s through Oracle’s cloud platform.
Chinese companies MetaX and Enflame are designing less powerful chips so they can keep using TSMC despite export controls, Reuters reported.
SMIC had previously been allocating its advanced capacity entirely to Huawei, though it’ll reportedly start supplying other Chinese AI chip companies as well.
SemiAnalysis says OpenAI’s poaching top Google TPU talent for its chip team.
TSMC said it’s held talks about moving its plants off Taiwan, but that doing so is impossible.
NXP and Vanguard are building a $7.8b wafer plant in Singapore.
VCs are encouraging their portfolio companies to ditch Chinese investors, the FT reports.
Elon Musk reportedly told Nvidia to send Tesla’s AI chips to X and xAI. He may also have exaggerated the scale of his orders.
xAI’s planning on building a supercomputer in Memphis.
Google’s scaling back AI Overview prevalence, after last week’s drama.
The NYT reports on how the overviews are hurting publishers.
Mistral launched a bunch of fine-tuning services.
It seems Llama 3-V was plagiarised from MiniCPM, a Chinese model.
Stability released an audio generation model. Its weights are open, but it can’t be used commercially — and there’s a better paid version available.
Emad Mostaque announced his new company, Schelling AI.
Hugging Face detected “unauthorised access” to its platform.
Cohere has reportedly raised $450m at a $5b valuation, from Nvidia, Salesforce, Cisco and others.
Humane is reportedly trying to sell itself to HP for $1b.
The FTX estate sold its remaining 15m Anthropic shares for around $450m. G Squared bought the bulk of the shares.
Shutterstock made $104m from AI licensing last year.
Microsoft said it’s investing $3.2b on cloud infrastructure in Sweden.
Cisco announced a $1b AI investment fund. It launched some new AI “cluster solutions” too.
Bloomberg reported on how data centre companies are trying to snatch up crypto mining facilities so they can repurpose the capacity for AI work.
Core Scientific, a miner, rejected a $1b takeover offer from CoreWeave this week.
Helion, the Altman-backed nuclear fusion company, is reportedly in deal talks with OpenAI.
Moves
Carroll Wainwright became the latest person in a growing exodus to leave OpenAI over safety concerns.
Google’s chief privacy officer Keith Enright is leaving the company, along with head of competition law Matthew Bye, Forbes reports.
Gillian Hadfield’s joining the new Johns Hopkins School of Government and Policy.
Josh New, formerly a tech policy executive at IBM, is SeedAI’s new director of policy.
Brian Waldrip is the Center for AI Policy’s new government relations director.
Luca Bertuzzi is now senior AI correspondent at MLex.
Tom Simonite is joining the Washington Post as tech companies editor.
Graham Fraser is joining the BBC Tech team as a senior reporter.
Ryan Heath, formerly of Axios, is now head of comms at Robin AI.
CC Wei is now officially chairman of TSMC; Nikkei reports that he’s looking for a CEO successor too.
Microsoft is reportedly cutting “as many as 1,500” jobs in its Azure teams.
Paul Triolo’s left CSIS; he still works at Dentons.
Best of the rest
The WSJ estimates Sam Altman’s various investments to be worth $2.8b. He’s got a credit line with JP Morgan that he uses to invest in startups.
OpenAI released some new interpretability research, focused on sparse autoencoders. The paper came from the now-defunct Superalignment team, and profusely thanks Ilya Sutskever and Jan Leike in the acknowledgements.
Security researchers found that Microsoft’s new Recall tool is really easy to hack. Microsoft’s now making it opt-in.
ChatGPT, Claude and Perplexity all went down for a few hours this week. It’s not clear why.
Forbes profiled White Stork (aka Project Eagle), Eric Schmidt’s new military drone company, which has hired talent from Apple, SpaceX and Google.
Epoch AI’s new research suggests we’ll have billion-dollar models by 2027. Time also profiled Jaime Sevilla and Epoch’s great work.
Janet Yellen’s pretty worried about AI’s impacts on markets.
Gemini and Copilot refuse to say who won the 2020 election.
Anthropic explained how it’s testing for election-related risks.
Deepfaked misinfo seems not to have been a big deal in India’s election.
California election officials are trying to “prebunk” AI generated misinformation.
The BBC found that fake AI-generated videos are targeting young voters with misinformation on TikTok.
TechCrunch reports on the SPVs being set up to buy small chunks of Anthropic and xAI.
The Hill says AI is supercharging bullying in schools, particularly when it comes to deepfake nudes.
Cara, an anti-AI social media platform, is taking off.
Rest of World has an interesting piece on how Chilean activists are opposing new data centres being built in the country, where there’s a significant drought.
Bloomberg’s Parmy Olson reported on the “dead-end jobs” of AI data workers.
Unbabel says its new translation model outperforms GPT-4.
The LA Review of Books interviewed Alison Gopnik and Melanie Mitchell about AI.
In a widely shared piece, Anthropic’s Avital Balwit said these next few years might be the “last few years that I work”.
Workday wrote an ASEAN-focused op-ed on how great AI governance is.
Roman Yampolskiy went on Lex Fridman.
Edward Snowden said AI regulation might stifle the technology’s potential.
Zoom’s CEO wants AI “digital twins” to speak on your behalf in meetings.
Wired reports on a bizarre AI beauty pageant.
Coming up
June 10: Apple hosts its WWDC keynote.
June 10-12: London Tech Week.
June 12-13: AI Summit London.
June 13-15: G7 leaders meet in Italy. Politico says that Pope Francis is coming to talk about AI, while Giorgia Meloni will focus on AI’s “effects on the labour market”, helping African countries develop AI ecosystems, and more AI R&D funding. There’ll probably be more work on the Hiroshima Process, too.
Thanks for reading; see you next week.