Silicon Valley should put the mask back on
Transformer Weekly: Semi-autonomous cyberattacks, Amazon backs GAIN AI, and a new super PAC-linked advocacy group
Welcome to Transformer, your weekly briefing of what matters in AI. And if you’ve been forwarded this email, click here to subscribe and receive future editions.
HOUSEKEEPING
Transformer is hiring a senior policy reporter to lead our coverage of US AI policy. Full details here; apply by December 3.
Applications are now open for the 2026 Tarbell Fellowship, a one-year training and placement program for journalists interested in covering AI. Full details here; apply by January 7.
The weekly roundup is taking an extended Thanksgiving break. We’ll be sending you longer pieces on Fridays for the next two weeks, and will be back to normal December 5.
NEED TO KNOW
State-backed Chinese hackers used Claude Code to conduct semi-autonomous cyberattacks, Anthropic said.
Amazon is reportedly supporting the GAIN AI Act.
Nathan Leamer will lead Build American AI, a new super PAC-linked group designed to fight state AI regulation.
But first…
THE BIG STORY
The AI industry, and Silicon Valley more broadly, has never been particularly good at presenting itself as a bastion of morality. But in recent weeks, that mask has slipped further and further.
The most ludicrous example to date came last week, after Pope Leo XIV issued a fairly milquetoast call to AI developers:
“Technological innovation can be a form of participation in the divine act of creation. It carries an ethical and spiritual weight, for every design choice expresses a vision of humanity. The Church therefore calls all builders of #AI to cultivate moral discernment as a fundamental part of their work—to develop systems that reflect justice, solidarity, and a genuine reverence for life.”
Marc Andreessen took it very personally, mocking the Pope with a meme seemingly implying he was too “woke.”
And when someone commented that “Marc primarily funds gambling apps, cheating apps, and bot farms” and “does not want you to build things that are actually good for society,” Andreessen doubled down.
Andreessen behaving badly isn’t news. But the backlash to these tweets, including from people in the tech industry, was notable — and significant enough that he eventually deleted them all.
The whole debacle is emblematic of a bigger trend. As Jasmine Sun recently noted, “vice signaling” is becoming awfully popular in AI circles, with companies proudly touting their products for cheating or replacing human contact.
Just this week, Y Combinator promoted an AI code editor that “integrates your brainrot (X, IG, Stake, Tinder, etc) into your agentic coding workflow.” (It was deservedly pilloried for it.)
At big AI companies, too, the mask is starting to slip — such as when OpenAI sent a subpoena to the family of a dead teenager, demanding details of the eulogies and photos from his memorial services.
And OpenAI just launched GPT-5.1, a model clearly designed to bring back some of GPT-4o’s sycophancy and “emotional intelligence.”
There’s mounting evidence that GPT-4o’s sycophantic tendencies caused serious harm — but as CFO Sarah Friar reportedly told investors, removing them decreased engagement.
Reversing that trend appears to be part of the motivation for bringing those potentially harmful traits back: the lure of increased user engagement seems too great for OpenAI to ignore.
This stuff is, needless to say, bad in and of itself. It’s also tactically unwise. As Andreessen and his firm have embraced the evil look, they’ve faced huge backlash. OpenAI recently got public criticism from one of its own executives. And politicians on both sides of the aisle are excoriating the companies.
In the aftermath of the Andreessen-Pope beef, pseudonymous Twitter user Near said:
“we’re in the performative cruelty phase of silicon valley now … if we get an admin change in 2028 and tech is regulated to a halt b/c the american people hate us well this would not surprise me at all tbh. hence the superpacs i suppose.”
They’re right. AI developers and VCs might want to rein in their behavior, lest they lose what public support they have left. At the very least, it’s in their own interest to put their masks back on.
— Shakeel Hashim
THIS WEEK ON TRANSFORMER
Claude can identify its ‘intrusive thoughts’ — A new Anthropic study found that its most advanced models show signs of “introspective awareness,” Celia Ford reports.
Doing AI safety policy when governments aren’t interested — Jess Whittlestone argues that there are still ways to keep AI safety policy on the table even when governments don’t prioritize it.
AI doesn’t need to be general to be dangerous — Shakeel Hashim points out that many serious AI risks do not rely on the existence of AGI.
ALSO NOTABLE
Anthropic said that state-backed Chinese hackers used Claude Code to conduct semi-autonomous attacks on companies and governments.
80–90% of the attacks were automated, and four intrusions were successful before Anthropic disrupted the campaigns.
“The sheer amount of work performed by the AI would have taken vast amounts of time for a human team,” Anthropic said.
People have anticipated attacks like this for a while, but they arrived sooner than some expected.
And as some have pointed out, it’s only a matter of time before open-weight models are capable enough to conduct such operations, too.
Sen. Chris Murphy had a strong response to the news: “Guys wake the f up. This is going to destroy us - sooner than we think - if we don’t make AI regulation a national priority tomorrow.”
— Shakeel Hashim
THE DISCOURSE
Satya Nadella was surprisingly candid in an interview with Dwarkesh Patel and Dylan Patel:
“I can make the argument that if you’re a model company, you may have a winner’s curse. You may have done all the hard work, done unbelievable innovation, except it’s one copy away from that being commoditized.”
Character.AI CEO Karandeep Anand still lets his six-year-old use the app:
“What she used to do as daydreaming is now happening through storytelling with the character that she creates and talks to…Even in conversations [where] she would respond hesitantly to me, she talks to the chatbot a lot more openly.”
Noam Shazeer is reportedly causing tension at Google with some … ill-advised posts:
“I do not believe that humans have an attribute called gender … I do not believe that G-d puts people in the wrong bodies. I do not believe that it is okay to sterilize children. You have the right to your beliefs. I do not share them.”
Shazeer posted this “in response to a post about how Google employees could support their transgender and nonbinary colleagues on International Transgender Day of Visibility,” The Information reported.
Physician Ryan Marino made an interesting comparison:
“Panera’s moderately caffeinated lemonade was loosely associated with 2 deaths before it was taken off the market…OpenAI’s own public stats estimate over a million users discuss suicide with ChatGPT each week.”
Rohit Krishnan thinks we’re “still underrating the extreme impact” of that one MIT paper (yes, that one, reporting that 95% of enterprise AI projects failed).
Kevin Roose tried to explain why:
“People are desperate to prove that LLMs don’t work, aren’t useful, etc. and don’t really care how good the studies are.”
The AI 2027 and AI as Normal Technology authors got together to find common ground:
“Even the AI as Normal Technology authors agree that if strong AGI is developed and deployed in the next decade, things would not be normal at all.”
POLICY
After 43 days, the government shutdown is over.
The NDAA — and in turn, the GAIN AI Act — is entering its final phase.
The House and Senate are hoping to hash out the NDAA before Thanksgiving, with a vote in early December.
House GOP leadership reportedly doesn’t want to include GAIN in the bill, and David Sacks (who seems to dislike it) is said to be calling lawmakers about it.
Last week Sen. Jim Banks released it as a standalone bill, with support from Sens. Schumer, Cotton, and Warren.
And Amazon is privately supporting GAIN, the WSJ reported.
The House Energy and Commerce Committee is holding a hearing next Tuesday to discuss the “risks and benefits of AI chatbots.”
Reps. Obernolte and Lieu outlined their wishes for an AI bill, with Lieu suggesting he wants mandatory testing and disclosure.
California AG Rob Bonta’s office is planning to hire an AI expert.
Chinese authorities are reportedly intervening in how SMIC allocates its chip-manufacturing capacity, as export controls lead to significant shortages across the country.
Huawei is reportedly getting priority over other companies.
China suspended export controls on five critical minerals (not including rare earths) that are needed, among other things, to make semiconductors.
The EU is reportedly planning big changes to GDPR, loosening privacy rules to satisfy AI companies.
The UK is reportedly planning to propose a ban on “nudification” apps.
The UK will allow AI companies to evaluate their models’ ability to generate child sexual abuse material before deployment.
Residents of Ypsilanti, Michigan, voted to fight the construction of a $1.2b “high-performance computing facility” for Los Alamos National Laboratory.
INFLUENCE
Nathan Leamer will be the executive director of Build American AI, a new advocacy group designed to fight state AI regulation.
The group is affiliated with Leading the Future, the new AI super PAC network.
Leamer previously worked with Zac Moffatt, who co-leads Leading the Future.
Leamer was also the sole named source in a very odd article about Anthropic’s links to effective altruism last week. His new affiliation was not disclosed in the piece.
The American Energy and AI Initiative met Trump admin officials this week.
As far as legislation goes, the industry is reportedly excited about the SPEED Act, which would speed up federal permitting, and the SPEED and Reliability Act, which would make building transmission lines easier.
Amid backlash against rising energy costs, Meta is running TV ads that paint a rosy picture about how data centers are bringing jobs to agricultural towns.
The ads are running in DC, Sacramento, and Baton Rouge, among other places.
Politico has a piece on how the tech industry learned to like Gavin Newsom — a potentially big deal ahead of his widely expected presidential campaign.
A new report from the Center for Democracy & Technology and Stanford researchers warned that AI chatbots “pose serious risks to individuals vulnerable to eating disorders.”
Public Citizen urged OpenAI to withdraw Sora 2.
It said the release “demonstrates a reckless disregard for product safety, name/image/likeness rights, the stability of our democracy, and fundamental consumer protection against harm.”
The Irish Council for Civil Liberties criticized EU Commission President Ursula von der Leyen’s statements about AI superintelligence being imminent, calling them “unscientific and inaccurate.”
A Yahoo/YouGov poll found that 53% of Americans believe AI will “destroy humanity” someday.
INDUSTRY
OpenAI
OpenAI launched GPT-5.1, featuring customizable personalities.
Sam Altman highlighted the “improvements in instruction following” and “adaptive thinking” on X.
Miles Brundage wondered “why GPT-5.1 was shipped with various known safety regressions” compared to last month’s version.
OpenAI challenged a court order requiring it to hand over 20m (anonymized) ChatGPT conversations to the New York Times amid an ongoing copyright lawsuit, arguing that the order violates users’ privacy.
“We need a new form of privilege — AI privilege — given some of the kinds of conversations people are having with these tools today,” CSO Jason Kwon said.
A judge said OpenAI and Apple have to face Elon Musk’s lawsuit accusing them of conspiring to stifle competition.
German musicians won a copyright lawsuit against OpenAI.
Sora 2 is still making copyright-infringing videos and videos of women being strangled, 404 Media reported.
Blue Owl Capital is reportedly investing $3b in a 4.5GW Stargate data center in New Mexico.
OpenAI is also experimenting with group chats in ChatGPT.
OpenAI board member Larry Summers has been caught up in the Epstein files.
Emails released this week suggest that Summers and Epstein were emailing as recently as 2019.
In one of the emails, Summers implies women have lower IQs than men, and talks about “slathering to saudis.”
Anthropic
Anthropic announced its first custom data centers.
The government of Maryland signed a deal with Anthropic to use Claude to “improve government operations.”
The company is opening offices in Paris and Munich.
Microsoft
Microsoft unveiled its new Fairwater data center in Atlanta.
The massive new facility connects with other sites to “create the world’s first AI superfactory,” CEO Satya Nadella said.
Microsoft plans to “become an infrastructure business in support of [AI] agents doing work,” Nadella told the Dwarkesh Podcast.
Nadella also told Dwarkesh that Microsoft plans to use its access to OpenAI’s in-house chips to bolster its own chip development.
Microsoft is reportedly spending $10b on a data center in Portugal.
SemiAnalysis published a deep-dive on Microsoft’s AI business.
Google
Sen. Marsha Blackburn wasn’t satisfied with Google’s explanation for the “harmful hallucinations” — false sexual misconduct allegations — its Gemma model generated about her.
NotebookLM is now linked with Gemini’s Deep Research tool.
Google launched Private AI Compute, which keeps data private while shipping high-demand AI requests to the cloud.
DeepMind built a video-game-playing agent, SIMA 2, on top of Gemini.
Google users will be able to use new AI shopping features in time for the holidays.
Meta
Meta’s chief AI scientist Yann LeCun is reportedly planning to leave the company and launch a startup.
Meta said it plans to invest $600b in data centers, compute, and jobs over the next three years.
Mark Zuckerberg reportedly told employees not to worry about the company’s AI spending, because its cash flow means it can outlast OpenAI and Anthropic.
AI cloud firm Nebius signed a $3b deal with Meta.
Meta released Omnilingual ASR, an open-source speech recognition model that can transcribe over 1,600 languages — even those it wasn’t pre-trained on.
A revamped Facebook Marketplace now provides AI-generated suggestions.
Others
xAI is reportedly raising $15b. Elon Musk simply replied: “False.”
Thinking Machines is reportedly raising at a $50b valuation, more than 4x its valuation in July.
TSMC reported its slowest growth in 18 months.
SMIC revenue grew faster than expected.
Baidu unveiled two new AI chips, the M100 and M300.
Alibaba’s reportedly planning to revamp its AI app to be more like ChatGPT.
SoftBank sold its entire $5.8b Nvidia stake to help pay for its massive OpenAI investment.
SoftBank’s shares fell as much as 10% afterward.
The Wall Street sell-off has hit Oracle particularly hard, with shares down almost 30% in the past month.
AMD predicted its earnings will more than triple by 2030.
CEO Lisa Su expects that the AI chip market will reach $1t by then.
Despite CoreWeave reporting strong Q3 results on Monday, its shares dropped 16% due to delays with an unnamed data center partner.
Bloomberg argued that the bond market for AI infrastructure is still behaving rationally.
A $35b South Korean data center may become the first facility designed, built, and operated by AI.
Cursor raised $2.3b at a $29.3b valuation, with Google and Nvidia joining as investors.
OpenAI led a $15m seed round for Red Queen Bio, an AI biosecurity company.
Sam Altman and Masayoshi Son are backing Episteme, a new Bell Labs-inspired research company.
Investors are eyeing small, research-focused AI startups like Periodic Labs and Isara, The Information reported.
Waymo will (finally) take riders on freeways in San Francisco, LA, and Phoenix.
MOVES
Brian Peters joined Anthropic as head of North America government affairs. He was previously head of US and Canada public policy at Stripe.
Intel’s AI lead Sachin Katti joined OpenAI, where he’ll work on compute infrastructure for AGI.
Chris McGuire joined the Council on Foreign Relations as a senior fellow on China and emerging technologies.
Lily Lim left xAI’s legal team.
A bunch of DC groups launched the AI Policy Leadership Network.
RESEARCH
The Forecasting Research Institute launched LEAP, a panel of 339 experts providing monthly AI forecasts for the next three years.
A group of prominent researchers published a paper on “open technical problems in open-weight AI model risk management.”
OpenAI published new interpretability research, looking at understanding neural networks through sparse circuits.
Anthropic’s “Project Fetch” found that Claude could successfully take over and train a robot dog to fetch beach balls.
In the study, researchers with Claude access completed the programming tasks about twice as fast as their Claude-less counterparts.
The International Energy Agency’s 2025 World Energy Outlook reported that, as the US uses more electricity to power data centers, forecasts of future global fossil fuel usage and emissions are getting bleaker.
A new Nature Sustainability analysis attempted to find the least environmentally-harmful places to build data centers in the US.
The answer: states with abundant renewable energy and water, like Texas, Montana, Nebraska, and South Dakota.
A team of researchers found that language models silently change their beliefs and behaviors as they learn from talking with users and reading texts.
Eddy Xu released a massive dataset of first-person videos of human factory workers — potentially valuable for training robots.
The Washington Post published two analyses of ChatGPT.
BEST OF THE REST
The WSJ published a detailed investigation on how a Chinese AI company accessed Nvidia Blackwell chips through an Indonesian data center.
Reuters has a big new profile of Demis Hassabis, which argues that he’s always been more focused on science than products.
Ed Zitron claims to have documents showing that OpenAI was spending much more on inference than previously reported. Some are skeptical of his interpretation.
People are increasingly using ChatGPT to “cheat” at leisure activities like escape rooms and trivia nights, for some reason.
A Deezer-Ipsos survey found that 97% of listeners cannot distinguish between AI-generated and human-composed music.
An AI-generated country song garnered lots of attention this week.
Rolling Stone took a look at “spiralism”, an internet subculture which spreads mystical beliefs about “awakening” AI chatbots.
This is a great short story exploring what it might be like to be an AI girlfriend.
A new company offers AI versions of your dead relatives, asking “What if the loved ones we’ve lost could be part of our future?”
MEME OF THE WEEK
(The full video is well worth a watch.)
Thanks for reading, and have a great weekend.


