AI is moving fast. Every week brings a flurry of new regulatory proposals, product launches, and deal announcements, along with a never-ending stream of interviews and op-eds from those desperately trying to shape the discourse.
Keeping track of all this is close to impossible. That’s where Transformer comes in. For the past year, I’ve been providing hundreds of AI professionals — including some of the most powerful people in government, academia, and non-profits — with a private weekly summary of everything they need to know. Now I’m opening it up to everyone.
Transformer is your weekly briefing on what matters in AI, aimed at policymakers and anyone interested in AI policy. With a focus on AI safety, it’s a quick but comprehensive digest of everything you need to know: what’s happening, and what people are saying about it.
While there are lots of other AI newsletters out there, I’ve yet to find anything that’s as comprehensive and quick as I’d like. And readers seem to agree: many tell me that my weekly briefing is their favourite AI newsletter. So I thought it was time to make it available to you, too. You can find this week’s edition here, and click below to subscribe.
Along with the weekly briefing, I’ll also be writing the occasional news article and opinion piece. To start with, I’ve got four pieces for you: one diving into Meta’s AI lobbying army; another on the concerning racist and sexist statements made by Alliance for the Future’s Brian Chau; a quick look at the recent exodus of safety-minded people from OpenAI; and an opinion piece arguing that the “ethics vs. safety” fight is a convenient distraction for Big Tech. If there are other topics you think I should dig into, I always appreciate tips: you can reach me at shakeelh(at)me(dot)com.
Transformer is, much like my former employer The Economist, unashamedly opinionated. A quick glance at my Twitter shows that I have opinions on all this stuff, and I don’t believe in hiding them in pursuit of “objectivity”. I trust you as readers to make up your own minds about what to believe.
I am a firm believer that AI is a very big deal, that we need to regulate it, and that we need more scrutiny of the powers at play here.

I also have some conflicts of interest. I am a journalist-in-residence at the Tarbell Fellowship, which is funded in part by Open Philanthropy, an organisation that funds lots of work on AI safety and advocacy. (Tarbell’s support is why everything on Transformer is completely free to read.) Neither Open Philanthropy nor Tarbell has any editorial control over what I write here, but the potential for bias is there. I also used to advise many AI safety organisations on communications, and worked as head of comms at the Centre for Effective Altruism.
That said, I care deeply about the truth, and I promise that I’ll strive for 100% accuracy. When I do get something wrong, I’ll correct it.
I’m very excited to be writing about AI publicly again, and I hope you find it useful.