Does Zuck believe in superintelligence?
Transformer Weekly: H20 fights, OpenAI fundraising, and Gemini 2.5 Deep Think
Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
This week Mark Zuckerberg laid out his vision for “Personal Superintelligence,” a manifesto of sorts for Meta's new AI efforts.
There's just one problem: reading it, it seems like Zuckerberg doesn't really believe in — or understand — what artificial superintelligence actually is.
As a reminder: the commonly accepted definition of artificial superintelligence is software that exceeds human cognitive capacities in every domain.
One can quibble about whether such technology is possible. But most believe that if it is, it would dramatically accelerate economic and technological growth. We would have "a country of geniuses in a data center," be able to automate all labor, and significantly speed up scientific research and development. The world would soon look like something out of science fiction.
But here's Zuckerberg's vision:
"I am extremely optimistic that superintelligence will help humanity accelerate our pace of progress. But perhaps even more important is that superintelligence has the potential to begin a new era of personal empowerment where people will have greater agency to improve the world in the directions they choose.
As profound as the abundance produced by AI may one day be, an even more meaningful impact on our lives will likely come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be …
Personal devices like glasses that understand our context because they can see what we see, hear what we hear, and interact with us throughout the day will become our primary computing devices.”
When you talk to other AI executives about what superintelligence could achieve, they say things like "cure cancer," "build Dyson spheres," or "colonize Mars" — because, by definition, a limitless supply of smarter-than-human intelligence could do these things.
Zuckerberg, meanwhile, offers us smart glasses.
It's a pitch devoid of both vision and understanding. Elsewhere in the essay, he says that "Meta's vision is to bring personal superintelligence to everyone," noting that "this is distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work."
On multiple levels, this doesn't make sense. For a start, we can only get the "abundance" that Zuckerberg claims superintelligence will bring if we automate work — the benefits won't just magically appear.
More importantly, it doesn't matter what Meta or Zuckerberg are aiming for here. As former OpenAI researcher Steven Adler notes, "if you 'bring personal superintelligence to everyone' (including business-owners), they will personally choose to automate others' work, if they can."
Some have argued that Zuckerberg does understand the possibilities of superintelligence, and is just downplaying it for marketing reasons and to appease investors (a strategy that seems to be working).
Max Kesin, who works on machine learning at Meta, pushed back on my dismissal, telling me that Zuckerberg is "more interesting/thoughtful internally."
But I'm not so sure. Even on podcasts where he's discussing AGI with other people who "get it," Zuckerberg consistently fails to articulate a vision that suggests he understands the technology's implications.
And as documented in books such as Cade Metz's Genius Makers, he has a long track record of being dismissive of AI progress — missing out on acquiring DeepMind in part because Shane Legg and Demis Hassabis felt he "didn’t share their ethical concerns over the rise of artificial intelligence, in either the near term or the far," and trying to argue to Elon Musk in 2014 that "all this talk about the dangers of superintelligence didn’t make much sense" because "a neural network was still a long way from superintelligence."
Meta's many AI struggles, in fact, stem precisely from this lack of vision — the company has been consistently late to the party because its leadership has failed to see the future coming.
Others seem to sense this. This week, WIRED reported that Meta has made offers to more than a dozen people at Thinking Machines Lab, including one worth more than $1 billion. Not one has accepted. Zuckerberg has also seemingly struggled to hire a chief scientist for Meta’s Superintelligence team, instead promoting someone he hired back in June.
There are doubtless many reasons for not wanting to work at Meta, but I suspect a major one is that researchers can sense the lack of vision and ambition.
Given that “when Zuckerberg tries to recruit people he reportedly talks about how a self-improving AI would become really good at improving Reels recommendations,” you can hardly blame them for their skepticism.
Still, there is at least one glimmer of foresight in Zuckerberg’s manifesto.
“Superintelligence will raise novel safety concerns. We'll need to be rigorous about mitigating these risks and careful about what we choose to open source,” he writes. That, at least, is true.
The discourse
Chinese Premier Li Qiang said we need international cooperation on AI:
“We should strengthen coordination to form a global AI governance framework that has broad consensus as soon as possible.”
WIRED has a good piece on all the AI safety talk at the World AI Conference in Shanghai this week.
Former White House NSC official Julian Gewirtz warned that US-China AI competition could precipitate conflict, even though China does not publicly appear to be pursuing AGI:
“Whether Xi doubts AGI, feels constrained, or is concealing his intentions, there is a profound risk on both sides of ‘technological surprise’ — when a rival gains an unexpected technological capability, it can escalate the risk of conflict.”
Dario Amodei really doesn’t like being called a doomer:
“I get really angry when someone's like, ‘This guy's a doomer. He wants to slow things down’ … my father died because of cures that could have happened a few years [earlier]. I understand the benefit of this technology.”
Jakub Pachoki said OpenAI’s superalignment mission isn’t dead — just different:
“Two years ago the risks that we were imagining were mostly theoretical risks … The world today looks very different, and I think a lot of alignment problems are now very practically motivated.”
Policy
Top Democrats, including Sens. Chuck Schumer, Mark Warner, and Elizabeth Warren, urged the Commerce Department to maintain export controls on H20s.
A separate group of natsec experts — including two former Trump White House officials — made the same argument, as did The Economist.
Nvidia reportedly ordered 300,000 H20s from TSMC last week.
China's cyberspace regulator, meanwhile, summoned Nvidia to explain alleged "backdoor" security risks in its chips. Nvidia denies its chips contain any backdoors.
Relatedly, some news on the replacement for the diffusion rule:
Semafor reported that the Trump admin is debating whether to replace the rule at all, or just abandon it.
The Bureau of Industry and Security has reportedly been told to “avoid tough moves on China” as Trump seeks a trade deal with Beijing.
But Michael Kratsios suggested that he does want KYC requirements and monitoring of large-scale AI training.
He also criticized the 10^26 FLOPs threshold, claiming it was “a number pulled out of thin air by some think tank that got sort of absorbed into the Biden ecosystem.” (For a rough sense of what that threshold actually implies, see the sketch at the end of this section.)
Sens. Warren and Rounds urged the administration to “maintain provisions that incentivize companies to keep a majority of their computing infrastructure used to train frontier models in the United States,” as well as imposing “robust security guardrails on overseas data centers.”
NIST released an outline “proposing a direction and structure” for the AI testing, evaluation, verification, and validation “zero draft.” It’s accepting comments on it through September 12.
Google and Microsoft agreed to sign the EU's AI code of practice. xAI said it will sign the safety and security chapter, but not the rest.
The European Commission confirmed today that the code of practice “is an adequate voluntary tool … to demonstrate compliance with the AI Act”.
And it acknowledged that the compute threshold is under review.
The European Commission pledged to purchase €40bn worth of US AI chips to power EU “AI gigafactories.”
Italy's antitrust authority launched an investigation into Meta for allegedly abusing its dominant position by integrating Meta AI into WhatsApp without user consent.
At an ASEAN-China forum this week, Malaysia’s digital minister talked up the ASEAN AI Safety Network it’s establishing, and reportedly “proposed the establishment of certification frameworks for AI systems.”
On Transformer: The UK government announced a £15mn international effort to research AI alignment and control.
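For context on the 10^26 FLOPs threshold Kratsios criticized, here is a rough back-of-the-envelope sketch (my own, using the standard C ≈ 6ND approximation for training compute and public third-party estimates, not figures from Kratsios or BIS). A one-trillion-parameter model would hit the threshold after roughly 17 trillion training tokens:

$$
C \approx 6ND \;\Rightarrow\; D \approx \frac{10^{26}\ \text{FLOPs}}{6 \times (10^{12}\ \text{params})} \approx 1.7 \times 10^{13}\ \text{tokens}
$$

That puts the threshold around five times the roughly 2 × 10^25 FLOPs commonly estimated for GPT-4's training run: whatever its provenance, it sits a modest margin above today's frontier.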
Influence
CSIS published a report titled “Why Tocqueville Would Embrace AI Benchmarking,” arguing that “transparent, participatory benchmarking is essential to preserving human agency in an increasingly algorithmic world.”
Industry
Microsoft and OpenAI are reportedly close to renegotiating their agreement. Per Bloomberg, the new terms would give Microsoft access to OpenAI’s technology even after AGI.
OpenAI is reportedly keen to ensure its nonprofit has a “significant stake” in the company, and also to “guarantee that Microsoft adheres to strict safety standards when deploying OpenAI’s technology.”
OpenAI has reportedly raised $8.3bn at a $300bn valuation, led by a $2.8bn check from Dragoneer Investment Group.
It’s reportedly reached $13bn in annualized revenue, and expects to hit $20bn by the end of 2025.
Rumor has it that OpenAI is launching a new open-source model imminently (perhaps today).
Anthropic is reportedly set to raise $5bn at a $170bn valuation — almost triple the $61.5bn valuation it raised at in March.
Its annual recurring revenue reportedly hit $5bn last month.
Google DeepMind launched Gemini 2.5 Deep Think, a version of the model that won IMO gold last month.
The model card is here; it seems to be extremely capable.
Google says it “has enough technical knowledge in certain CBRN scenarios and stages to be considered at early alert threshold,” and has applied mitigations accordingly.
Microsoft will reportedly spend about $120bn on capex — presumably AI infrastructure — in the next year. Azure revenue was up 39% last quarter, it reported this week.
Amazon shares fell on disappointing AWS revenue growth. Andy Jassy said the company will spend over $100bn on AI this fiscal year.
Tim Cook said Apple will “significantly grow” its AI investments, and that he’s open to buying a company to do so.
A judge chastised both Elon Musk and Sam Altman for "gamesmanship" in the lawsuit over OpenAI’s nonprofit status — but partially granted Elon Musk's motion to strike some of OpenAI's defenses.
Arm said it’s exploring designing its own chips.
Meta has reportedly explored acquiring video-gen companies Pika and Higgsfield.
Harmonic, the company co-founded by Robinhood CEO Vlad Tenev, launched its AI model, which achieved gold-medal performance at the International Mathematical Olympiad.
Z.ai said its new open-sourced GLM-4.5 model can run on just eight H20 chips, making it cheaper to run than DeepSeek R1.
Vast Data is reportedly in talks to raise funding from Nvidia and CapitalG at a $30bn valuation.
Surge AI is reportedly in talks to raise $1bn at a $25bn valuation.
Cohere is reportedly projecting $200mn in annualized revenue by the end of the year. It’s reportedly seeking to raise at a $6.3bn valuation.
Groq, an AI chipmaker, slashed its 2025 revenue projections from $2bn to $500mn. It’s reportedly nearing a $600mn funding round at a $6bn valuation.
The Artificial Intelligence Underwriting Company emerged from stealth with a $15mn seed round, led by Nat Friedman. The founders think insurance could be a way to tackle AI risks.
Moves
Al Verney is Google DeepMind’s new global head of communications.
Bowen Zhang left Apple for Meta Superintelligence Labs — Apple’s fourth loss in a month.
Megi Llubani joined Americans for Responsible Innovation as policy research manager.
Anton Korinek is launching the Economics of Transformative AI Initiative at the University of Virginia. They’re hiring.
Best of the rest
METR found that Grok 4's time horizon for multi-step software engineering tasks was about 1hr 50min, slightly longer than o3 “but not by a statistically significant margin.” (A minimal sketch of how such time-horizon figures are derived appears at the end of this section.)
Ryan Greenblatt wonders if we should update against seeing “relatively fast AI progress” in the next year or so.
The WSJ has a piece on the fight over who should pay for the massive power infrastructure needed for AI data centers: tech companies, or everyone?
The WSJ also has a piece on China’s efforts to build a domestic AI ecosystem.
The “Model-Chip Ecosystem Innovation Alliance” and the “Shanghai General Chamber of Commerce AI Committee,” two new industry alliances, were launched at the World AI Conference.
Shanghai AI Laboratory and Concordia AI released a Frontier AI Risk Management Framework.
Amazon’s AI training deal with the New York Times is reportedly worth $20-25mn annually.
The Center for Future Generations published a paper that aims to offer “a nuanced path for open-weight advanced AI.”
AI video companies Runway and Luma AI are targeting robotics companies as new revenue sources, expecting that to be a bigger market than Hollywood.
Runway announced an AI film festival in partnership with IMAX this week, prompting a lot of social media backlash.
Strike 3 Holdings sued Meta for allegedly pirating its pornographic content to train AI models.
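For readers wondering where time-horizon figures like METR's come from, here is a minimal sketch of the general approach: fit a logistic curve of model success against the log of how long each task takes a human, then solve for the task length at which predicted success is 50%. This is my own illustration with invented data, not METR's code, and their actual methodology is more involved.

```python
# Minimal sketch of a "50% time horizon" calculation: fit a logistic
# curve of task success vs. log task length, then find the length at
# which predicted success probability crosses 0.5.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented data: human completion time per task (minutes) and whether
# the model succeeded (1) or failed (0) on that task.
task_minutes = np.array([1, 2, 4, 8, 15, 30, 60, 120, 240, 480])
succeeded = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0])

# Model success probability as a logistic function of log task length.
X = np.log(task_minutes).reshape(-1, 1)
clf = LogisticRegression().fit(X, succeeded)

# P(success) = 0.5 where the logit is zero: intercept + coef * log(t) = 0.
horizon = np.exp(-clf.intercept_[0] / clf.coef_[0][0])
print(f"50% time horizon: ~{horizon:.0f} minutes")
```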
Thanks for reading; have a great weekend.