Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
OpenAI was hacked last year, the New York Times reported.
The hacker — who seemed to be a private individual — gained access to the company’s internal discussion platform.
OpenAI told its employees and board, but not the FBI or any other law enforcement agency. This appears to be the issue that led Leopold Aschenbrenner to raise security concerns with the board, which he says is part of what got him fired.
Notably, OpenAI spokesperson Liz Bourgeois categorically denied Aschenbrenner’s account to the NYT.
In another security screw-up, the ChatGPT Mac app was found to be storing conversations in plain text. (It’s since been updated to fix that.)
On Transformer: Open-source pioneer Lawrence Lessig is very worried about freely available AI model weights.
In a recent interview with Transformer, Lessig said that “open-weight [AI models] create a unique kind of risk”.
“You basically have a bomb that you're making available for free, and you don’t have any way to defuse it necessarily,” he said.
Lessig argued that “we ought to be anxious about how, in fact, [AI] could be deployed or used, especially when we don’t really understand how it could be misused”. He noted that though current models are unlikely to pose a significant risk, future models might.
He also dismissed comparisons to previous technologies, where access to program code is considered to have improved security and fostered innovation. “It’s just an obviously fallacious argument,” he said. “We didn’t do that with nuclear weapons: we didn’t say ‘the way to protect the world from nuclear annihilation is to give every country nuclear bombs.’”
“It’s not inconsistent to recognise at some point, the risks here need to be handled in a different kind of way ... The fact that we believe in GNU Linux doesn’t mean that we have to believe in every single risk being open to the world to exploit,” he added.
On Transformer: The Labour Party’s won a massive majority in the UK. Here’s what that means for AI.
Peter Kyle is the new Secretary of State for Science, Innovation and Technology, putting him in charge of DSIT, the government department looking after AI policy.
Labour’s manifesto commits the party to introducing “binding regulation” on the companies developing the most powerful AI models, banning the creation of deepfakes, and making it easier to build data centres.
Kyle has also said that Labour will put the AI Safety Institute on “a statutory footing” and “legislate to require the frontier AI labs to release their safety data”.
The regulation is likely to look like a formalised version of the voluntary commitments developers made at the Seoul summit, committing companies to produce and stick to responsible scaling policies, and to conduct dangerous-capability evaluations (presumably in partnership with AISI). But that’s aspirational: it’s a pretty safe bet that companies will lobby like hell to water any regulation down.
The timing of all this is unclear. Politico previously reported that an AI bill is unlikely to make it into Labour’s first King’s Speech (which will set the legislative priorities for the first year of their government). But Politico has also reported that DSIT civil servants have already started drafting legislation for Labour.
Labour’s getting support on AI policy from a wide range of external figures, too. Labour Together’s Kirsty Innes is reported to be “effectively writing the party’s AI policy”, while Faculty AI had a staff member seconded into Kyle’s office to work on the issue.
The discourse
Mary Meeker thinks AI is a very big deal:
“The advance of AI should drive the fastest and biggest transformations, disruptions, and platform shifts in technology ever seen.”
Sequoia’s David Cahn thinks the AI bubble is “reaching a tipping point”:
“We need to make sure not to believe in the delusion that has now spread from Silicon Valley to the rest of the country, and indeed the world. That delusion says that we’re all going to get rich quick, because AGI is coming tomorrow.”
Endeavor CEO Ari Emanuel does not like Sam Altman:
“He’s a con man … I don’t know why I would trust him.”
In The Atlantic, Jonathan Zittrain says “we need to control AI agents now”:
“We need to stay in the driver’s seat rather than be escorted by an invisible chauffeur acting on its own inscrutable and evolving motivations.”
Policy
France is reportedly set to charge Nvidia with antitrust violations.
The European Commission is preparing an antitrust investigation into Microsoft’s OpenAI investment, though it will not proceed with a merger review.
Commission Vice-President Margrethe Vestager also lambasted Apple for not launching its AI features in the EU.
Brazil told Meta to stop using Brazilians’ data to train its AI models.
The EU AI Office is hiring consultants to manage the drafting process for the general-purpose AI (GPAI) code of practice.
France reportedly told EU countries that next year’s AI Action Summit will “shift the attention from existential risks to using AI for public goods while pushing governance and regulatory convergence”.
Politico has an overview of how the Chevron ruling will affect AI policy — something which Justice Kagan explicitly flagged as a problem with the ruling.
The WSJ has a great report on how Chinese companies are smuggling Nvidia chips to get around export controls.
The UN adopted a Chinese-sponsored resolution urging wealthy countries to close the AI gap.
Influence
Andreessen Horowitz launched a Stop SB 1047 site, encouraging Californians to contact their representatives about the bill. Y Combinator also put out a letter from startup founders opposing the bill.
In a scathing response, Sen. Scott Wiener said that “various statements by YC and a16z about SB 1047 are inaccurate, including some highly inflammatory distortions”, calling statements made by YC in particular “categorically false — and, frankly, irresponsible”.
The bill passed the Assembly Judiciary Committee this week; it will next go to the Appropriations Committee.
Semafor has a nice overview of how the AI world is preparing for a Trump presidency.
Industry
Google’s emissions grew 13% year-on-year in 2023, driven by the energy used for its AI data centres. It said there’s “significant uncertainty” about whether it’ll hit its 2030 net-zero target.
The Guardian has a really good and measured look at AI’s environmental impact.
Google released Gemma 2, its latest open-weights model. It ran dangerous-capability evaluations beforehand.
Tech companies are reportedly talking to nuclear power plant owners to get electricity for their data centres. AWS is reportedly close to a deal with Constellation Energy, America’s largest nuclear plant owner.
Samsung’s Q2 operating profit was up 1,452% year-on-year, seemingly driven by soaring memory chip prices.
SK Hynix said it would invest $75 billion over the next four years, with 80% of that going to high-bandwidth memory (HBM) chips.
Huawei and Wuhan Xinxin are reportedly partnering to develop HBM chips (though they denied the reports). Jiangsu Changjiang Electronics Tech and Tongfu Microelectronics will also reportedly help to provide CoWoS packaging.
Nvidia will make $12b from selling AI chips to China this year, according to SemiAnalysis.
Meta released a code completion model that uses its multi-token prediction approach.
Salesforce released a new “micro model”, which it says outperforms GPT-3.5 and Claude 3 Sonnet on function calling.
Google reportedly backed Character.AI with a convertible note.
Northern Data is reportedly considering listing its AI cloud and data centre businesses at a valuation of $10-16b. Two former executives at the company allege they were sacked when they raised concerns that its CEO and COO were “knowingly committing tax evasion”.
Lambda Labs, which rents out AI servers, is reportedly raising $800m. It was valued at $1.5b in February.
Runway is reportedly in talks to raise $450m at a $4b valuation.
Harvey, the legal AI company, is reportedly raising $100m at a $1.5b valuation, down from the $600m at a $2b valuation it was supposedly aiming for.
Magic, which is building a GitHub Copilot competitor, is reportedly raising $200m at a $1.5b valuation.
Sentient, which is “building an open platform for AGI development”, raised $85m in a round co-led by Founders Fund.
ElevenLabs signed deals with the estates of Judy Garland, James Dean, and Laurence Olivier to use their voices in its app.
Moves
Apple’s Phil Schiller will reportedly get an observer role on OpenAI’s board.
Amazon hired most of Adept’s team, in what The Verge dubbed a “reverse acquihire”.
Richard Susskind was appointed special envoy for justice and AI to the Commonwealth’s secretary-general.
Zhou Bowen, former chief scientist of IBM’s Watson Group, is the new head of the Shanghai AI Laboratory.
My fellow Tarbell journalist-in-residence Nathaniel Popper is The Information’s new bureau chief for AI and enterprise software.
Tom Simonite is the Washington Post’s new tech companies editor.
Emilia David is now senior AI reporter at VentureBeat.
Best of the rest
Researchers found that a computer virus can use ChatGPT to evade detection and spread itself.
Researchers developed a method of “covert malicious finetuning”, which they used to jailbreak GPT-4 and get it to act on harmful instructions 99% of the time while evading detection.
Last week Microsoft reported on “Skeleton Key”, another jailbreaking technique.
Quora’s Poe chatbot gives users access to HTML files of paywalled news articles, Wired found.
Anthropic will fund researchers to develop new AI benchmarks.
The Information has a long profile of Alexandr Wang and Scale AI, which argues that he’s “become a polarising figure” who has “overpromised investors and customers”.
The UN’s IP agency said China has filed more AI patent applications than anyone else: 38,200 in the past decade, compared to 6,300 from the US.
Bloomberg reports that Chinese AI companies are moving to Singapore.
Far-right parties in France are using AI-generated content to whip up anti-immigrant hysteria.
British politicians were targeted with deepfake pornography during the election.
Instead of “Made with AI”, Meta apps will now tag AI-produced or -edited images as “AI info”.
The Economist notes that AI’s impact on the economy is really not showing up in the data.
BCG claims it earns a fifth of its revenue from “work related to artificial intelligence”. (I expect that is because it has a very generous definition of “related to”.) Consultants do seem to be profiting from the boom, though.
Bill Gates thinks scaling will work for two more generations of AI systems, but that after that we’ll need to develop AI systems with metacognition.
The New York Times has a piece on how Ukraine’s using AI for military purposes.
Thanks for reading; have a great weekend.