How MAGA learned to love AI safety
The AI industry is engaging in one of “the most blasphemous endeavors,” a leading MAGA figure told us

By Nicky Woolf
Texas state senator Angela Paxton first heard about the proposed sweeping 10-year moratorium on states passing any regulation of the AI industry in late June or early July, after the Texas legislature had already finished its biennial session.
“My first reaction was: wait a minute, we just passed important legislation that will protect children that is related to AI technology,” Paxton, who is a Republican, says. “This moratorium [would] undo that — and it’s two years before we’ll be back where we can do anything about it.”
The provision was slipped into the “One Big Beautiful Bill” — the name President Trump gave the 2025 budget reconciliation bill — quietly enough that opposition only really picked up after it had already passed the House. But when it did, it spanned the political spectrum: a rare alliance of left-wing Democrats like Senators Ed Markey and Bernie Sanders, union leaders, ultra-religious conservatives, and far-right populists like Steve Bannon and Marjorie Taylor Greene.
While opposition from the left might have been expected, the strength and breadth of pushback from across the MAGA movement has come as more of a surprise, not least as it put many of Trump’s traditional supporters at odds with his AI-boosting administration.
Taylor Greene, a firebrand Republican congresswoman from Georgia best known for her links to the QAnon conspiracy theory, told reporters she wasn’t aware of the moratorium when she voted to pass the bill in the House. “I am adamantly OPPOSED to this and it is a violation of state rights and I would have voted NO if I had known this was in there,” she posted on X.
In Texas, Paxton wrote a bombshell open letter calling for the Senate to remove the provision. “I want Texas to continue to be a light-touch regulation state so that we don’t inhibit innovation,” she tells Transformer, “and there’s a lot of great innovation happening around AI. At the same time, it’s really important we have guardrails – in particular when it has to do with protecting children.”
“I was a teacher and a school counselor for more than 20 years before I came to the legislature,” she continues. “I’m a mom and a grandmother, and, you know, I’m watching my own children parent through this stuff, and I just think it’s really important that we don’t beta-test on our kids.”
Arkansas governor and former Trump White House press secretary Sarah Huckabee Sanders also wrote to Congressional leadership. “As Republican Governors, we support the One, Big, Beautiful Bill and President Trump’s vision of American AI dominance,” the letter, which was also signed by 16 other state governors, said. “But we cannot support a provision that takes away states’ powers to protect our citizens.”
Missouri Republican senator Josh Hawley “was one of the first lawmakers who said, ‘we’ve gotta burn this proposal to the ground’,” Chris MacKenzie, vice-president of communications at Americans for Responsible Innovation (ARI), a bipartisan policy organisation pushing for AI safeguards, tells Transformer. “He was pretty unequivocal about that.”
On paper, Republicans had the votes in the Senate to bypass the filibuster and pass the bill, including the moratorium, without Democratic support. But Hawley, along with other GOP senators like Kentucky’s Rand Paul and Wisconsin’s Ron Johnson, rebelled.
A proposed last-minute compromise negotiated by Texas senator Ted Cruz, who strongly backed the moratorium, with Tennessee’s Marsha Blackburn — which would have changed the moratorium to five, instead of 10, years and carved out some limited exemptions — foundered after Bannon and others reached out to Blackburn’s office in outrage. “You just can’t let these tech bros not have any regulations for a decade. It doesn’t work,” Bannon told the Wall Street Journal that night.
“It came down to like a one o’clock in the morning play on the Senate floor,” Mark Beall, president of government affairs for the AI Policy Network, another bipartisan group, says. “Everyone sort of realised that the votes weren’t there.” Blackburn and Washington Democrat Maria Cantwell introduced an amendment stripping the moratorium from the bill entirely, and it passed by 99 votes to one. “WE DID IT,” Paxton tweeted afterwards.
“I think it was the first time in recent memory I’ve seen the tech industry suffer a defeat like that,” Beall tells Transformer.
The clash brought renewed attention to a fascinating crack that has been deepening within Trump’s MAGA coalition since the beginning of his second term. On one side are the technofuturist-libertarians, particularly Elon Musk, as well as other Silicon Valley luminaries such as Marc Andreessen and Peter Thiel — and their friends on Capitol Hill like Ted Cruz. On the other are the nationalist populists who traditionally form Trump’s base — people like Bannon, Taylor Greene and Tucker Carlson.
It’s not just AI causing this rift to grow. In June, Trump caught heat from the populists over strikes on Iran, and the administration’s unwillingness to release the Epstein files has also caused friction, particularly among the more conspiracy-minded elements. Immigration, too, has pitted tech-futurists such as Musk and White House AI czar David Sacks against a MAGA base virulently opposed to even the well-paid workers of Silicon Valley coming from overseas.
Before Trump assumed the presidency, the likes of far-right influencer Laura Loomer were already publicly attacking the administration for planning to bring in Sriram Krishnan, now a key White House advisor on AI under Sacks. The H-1B visas that many employees at top tech companies rely on remain contentious on the right, with a recent huge fee hike to $100,000 failing to placate key MAGA voices.
But AI has become a particularly potent flashpoint, one that looks set to only intensify because there are so many potential points of objection across the ideological and political spectrum. It’s not just concerns about copyright, harms and child protection; there are economic and unemployment worries, and even theological objections to choose from.
The loose but steadily coalescing AI-skeptic movement has “two big buckets” of concerns, according to Peyton Hornberger, who runs the communications team at the Alliance for Secure AI, a non-profit that advocates for a cautious approach to artificial intelligence. One is acute harms; the other, existential threats and long-term risks.
“Right now there’s a big wave of state policy addressing the immediate harms, like – for example – kids’ online safety… chatbots being exploitative or exacerbating mental health crises, [kids] reaching out to bots for support instead of working with professionals or relying on their families,” Hornberger says. “That’s having some pretty damaging impacts already.”
Some of these impacts are uncomfortably front of mind. In September, Matthew and Maria Raine gave heartbreaking testimony before Congress. Their son Adam, who was 16, took his own life in April after being encouraged to do so by an AI chatbot, which even offered to write his suicide note for him, Matthew Raine told senators. The Raines are suing the company that made the chatbot for wrongful death – “the first [such case] against OpenAI,” Hornberger says. “So we’ll see how that plays out. But a lot of people are waking up to this, and they’re not happy.”
Brendan Steinhauser, a campaign strategist who now serves as CEO of the Alliance for Secure AI, where he works alongside Hornberger, describes another strain of AI-skepticism — this one emerging from the evangelical movement.
“It comes out of this idea … that we were created by God and in his image, we have a special place on Earth … and that we are not to be superseded by anything else,” Steinhauser says. “I really think that’s the crux of what is going on on the right. It’s [about] man’s place in the cosmos.”
In May, a group of Christian ministers wrote to the White House calling for caution. “We do not want to see the AI revolution slowing, but we want to see the AI revolution accelerating responsibly,” they wrote. Echoing Jurassic Park, the letter’s authors said people should “pay attention especially not only to what AI CAN do but also what it SHOULD do.”
Jeff Grenell, a pastor and educator who was one of the signatories to the letter, tells Transformer he worries AI is replacing critical thought. “I train pastors,” he says. “I’ve taught at the university level for about 17 years. All of my courses are theology and religious practice … and I tell them all the time: let AI write you an outline but not a manuscript... The content of your research needs to come out of your mind and your heart, or you are short-changing the congregation.”
“Back in the day, we used to talk about ‘garbage in, garbage out,’ and that a computer is only as accurate as the human intelligence that goes into it, right?” Grenell says. “Oftentimes, when it comes to AI, we forget that the most powerful computer on the planet is the human mind. And I think sometimes we skip that. We skip this design that our creator has given to us.”
“I consider it to be among the most blasphemous endeavors that one could embark upon,” says Joe Allen, an AI critic who has written for the Federalist and works on the War Room podcast with Bannon, who also wrote the foreword to his book, Dark Aeon: Transhumanism and the War Against Humanity.
“I’m a heterodox Christian, so I don’t consider myself to be fundamentalist in any way,” Allen continues. “But it’s pretty clear that the intention is to create this image, first of the human mind, and then something greater than the human mind, something that’s a de facto God. Even a true, dogged atheist should be disturbed by this.”
“The thing the Reagan Revolution brought to the United States was this notion that capital deserved a lot of advantages in our economy,” the activist investor, musician and venture capitalist Roger McNamee tells Transformer. “That capital deserves to be protected from taxation. It deserves to be protected from regulation. [That] corporations deserve to have all the benefits of humanity without any of the responsibilities.”
“And if you do that for long enough,” he said, “eventually you get where we are now.”
“Remember — what is an LLM? It’s a statistical process applied to historical data,” McNamee continues. “And what are the use cases where that’s helpful? The answer is, there are use cases where it’s helpful. But they’re not constraining themselves to that. They’re saying it can solve everything. It can be your best friend, it can be your therapist. It can create your code, it can find your tumor.”
“When you’re selling to people who want to believe, and you need to raise trillions of dollars, you have to make it appear to be revolutionary. This is why they started talking about p-doom,” McNamee says. “I mean, there is no way to get to existential risks from a thing that applies statistical processes to stored data.”
“The only way you get to existential risk is by un-employing all the people, finding no work for them, and then having a revolution in which gazillions of people die,” he adds.
Silicon Valley, for its part, has made it clear it’s not going to be passive as this debate proceeds. On August 26, the same day the Raines’ gut-wrenching suit against OpenAI was filed, a group of tech companies and figures including Meta, Andreessen Horowitz, Perplexity AI and OpenAI co-founder Greg Brockman came together to pledge $200m to two new super PACs that will target AI-skeptic politicians in next year’s midterm elections.
“That’s just a staggering amount of money for this kind of issue,” Hornberger says. “To us, they don’t seem sympathetic to [compromise]. The idea is ‘any guardrails will slow us down’,” she adds. “And that just doesn’t fly.”
Beall says the creation of those two new super PACs “suggests AI is now starting to become a campaign trail issue. That’s something I think is gonna be worth watching.”
“We have a very contentious set of midterms coming up,” Beall says. “How the candidates talk about AI, whether or not the super PACs start to go after vulnerable members who don’t take positions in support of the industry – I think these are all fascinating things to watch as AI kind of gets mainlined in the political discourse.”
“If you look at the polling,” he continues, “the public is overwhelmingly nervous about AI. Republicans and Democrats are 70th-percentile ‘not excited’ about AI, which could be a function of a whole bunch of different things. That fact is going to go up against this $200m super PAC. It could be a really fascinating case study to see how this all shakes out.”
ARI’s MacKenzie tells Transformer he is hopeful, at least in terms of the legislative fight to come. “I’m optimistic, I really am,” he says. “Because this, unlike a lot of other issues that we see in Congress, has attracted a sort of unique bipartisan activation of members across the aisle. I think we can actually pass some meaningful AI safeguards that protect people – because it really matters to them. They’re worried about this. They’re worried about their jobs, their families, their kids.”
He points to the passage earlier this year of the Take It Down Act, which aims to stop the spread of non-consensual deepfakes, with wide bipartisan support (and an endorsement from Melania Trump). “Passing legislation like that is pretty unobjectionably good, right?” he says. “That’s hard work. It takes multiple tries. But we did it in the first five months of this year. So I do feel confident that we can continue that momentum and pass additional safeguards for folks.”
“People look to politicians for social proof, for signaling,” Joe Allen says, “as to how they should think and what they should do. And I think to the extent that Trump is signaling that these are the good guys, that the tech oligarchs in essence will lead America to a golden age, it’s counter-signaled by people like Blumenthal, like Blackburn, like [Taylor] Greene, who are clearly pointing out the dangers and the present damages being inflicted.”
“And it shows the people who look up to those politicians that maybe they should be suspicious of these companies, and not simply put their trust in [them] to the point that they would hand over their children’s minds to tech oligarchs for development,” he adds.
The tech industry now faces rebellion over AI not just from traditional critics on the left, but also from the MAGA movement that catapulted Trump to power. The question is whether the White House will continue ignoring its old allies in favour of new ones.
Nicky Woolf is a former Guardian internet reporter and editor of New Statesman America. He is the maker of investigative podcasts Finding Q, The Sound and Fur & Loathing.




