<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Transformer]]></title><description><![CDATA[Covering the power and politics of transformative AI.]]></description><link>https://www.transformernews.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!JQeB!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png</url><title>Transformer</title><link>https://www.transformernews.ai</link></image><generator>Substack</generator><lastBuildDate>Tue, 28 Apr 2026 19:28:36 GMT</lastBuildDate><atom:link href="https://www.transformernews.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Transformer]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[transformernews@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[transformernews@substack.com]]></itunes:email><itunes:name><![CDATA[Transformer]]></itunes:name></itunes:owner><itunes:author><![CDATA[Transformer]]></itunes:author><googleplay:owner><![CDATA[transformernews@substack.com]]></googleplay:owner><googleplay:email><![CDATA[transformernews@substack.com]]></googleplay:email><googleplay:author><![CDATA[Transformer]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The AI safety movement needs normies]]></title><description><![CDATA[A broader base may be the only way for the AI safety field to get what it wants]]></description><link>https://www.transformernews.ai/p/the-ai-safety-movement-needs-normies</link><guid 
isPermaLink="false">https://www.transformernews.ai/p/the-ai-safety-movement-needs-normies</guid><dc:creator><![CDATA[Celia Ford]]></dc:creator><pubDate>Mon, 27 Apr 2026 15:01:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!xqWl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8754002c-7c5a-4992-8861-7e31c637555f_1920x1279.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xqWl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8754002c-7c5a-4992-8861-7e31c637555f_1920x1279.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xqWl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8754002c-7c5a-4992-8861-7e31c637555f_1920x1279.png 424w, https://substackcdn.com/image/fetch/$s_!xqWl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8754002c-7c5a-4992-8861-7e31c637555f_1920x1279.png 848w, https://substackcdn.com/image/fetch/$s_!xqWl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8754002c-7c5a-4992-8861-7e31c637555f_1920x1279.png 1272w, https://substackcdn.com/image/fetch/$s_!xqWl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8754002c-7c5a-4992-8861-7e31c637555f_1920x1279.png 1456w" sizes="100vw"><img
src="https://substackcdn.com/image/fetch/$s_!xqWl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8754002c-7c5a-4992-8861-7e31c637555f_1920x1279.png" width="1456" height="970" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8754002c-7c5a-4992-8861-7e31c637555f_1920x1279.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:970,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3695440,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/195608595?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8754002c-7c5a-4992-8861-7e31c637555f_1920x1279.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xqWl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8754002c-7c5a-4992-8861-7e31c637555f_1920x1279.png 424w, https://substackcdn.com/image/fetch/$s_!xqWl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8754002c-7c5a-4992-8861-7e31c637555f_1920x1279.png 848w, https://substackcdn.com/image/fetch/$s_!xqWl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8754002c-7c5a-4992-8861-7e31c637555f_1920x1279.png 1272w, https://substackcdn.com/image/fetch/$s_!xqWl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8754002c-7c5a-4992-8861-7e31c637555f_1920x1279.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption"><em>Credit: Rose Willis &amp; <a href="http://www.kathrynconrad.com">Kathryn Conrad</a> /<a href="https://betterimagesofai.org/images?artist=RoseWillis&amp;title=ARisingTideLiftsAllBots">Better Images of AI</a>/<a href="https://creativecommons.org/licenses/by/4.0">Creative Commons 4.0</a></em></figcaption></figure></div><p>If you look past the cryptic AI billboards lining Highway 101, San Francisco is still a city shaped by civil disobedience.
For decades, young people, queer people, and weirdos of all stripes flocked west and settled here, building co-living spaces and resisting the powers that be. There&#8217;s plenty of anti-establishment angst to go around.</p><p>On a partly sunny Saturday this March, protesters <a href="https://abc7news.com/post/sf-protesters-call-ai-pause-anthropic-openai-xai-white-house-pushes-national-framework-trump-seeks-liability-limits/18752242/">gathered</a> outside Anthropic&#8217;s office, a few blocks southwest of Market Street, the city&#8217;s historic artery of dissent, to rally against the AI race. AI slop coverage of the protest <a href="https://aidomainnews.blogspot.com/2026/03/ai-panic-hits-silicon-valley-protesters.html?utm_source=namepros.com">illustrated</a> a dense crowd framing a pink-haired woman <a href="https://aidomainnews.blogspot.com/2026/03/ai-panic-hits-silicon-valley-protesters.html?utm_source=namepros.com">screaming</a> into a megaphone in front of a &#8220;SAVE OUR JOBS&#8221; banner. San Francisco stuff.</p><p>In reality, Stop the AI Race <a href="https://stoptherace.ai/">pulled</a> between a few dozen and a couple hundred people &#8212; mostly men, very earnest, and nearly all white &#8212; <a href="https://www.reddit.com/r/singularity/comments/1s1omtg/hundreds_of_protesters_marched_in_sf_calling_for/">carrying</a> more esoteric signs: &#8220;IT&#8217;S SMART ENOUGH,&#8221; one said. &#8220;MAY YOUR GPUs CHIP AND SHATTER,&#8221; said another.
&#8220;PAUSE IS DEMANDED if you aren&#8217;t CONSISTENTLY CANDID.&#8221; Despite the city&#8217;s appetite for nonviolent protests and growing <a href="https://www.bloodinthemachine.com/p/why-the-ai-backlash-has-turned-violent?utm_source=substack&amp;publication_id=1744395&amp;post_id=193728165&amp;utm_medium=email&amp;utm_content=share&amp;utm_campaign=email-share&amp;triggerShare=true&amp;isFreemail=false&amp;r=1pg6hh&amp;triedRedirect=true">antagonism</a> toward AI companies, the entire crowd could have comfortably fit in a couple of BART cars.</p><p>Then, on April 10, a Molotov cocktail was <a href="https://sfstandard.com/2026/04/10/sam-altman-russian-hill-molotov-cocktail/">thrown</a> at Sam Altman&#8217;s San Francisco mansion sometime between 3am and 4am. Twenty-year-old Daniel Alejandro Moreno-Gama was arrested for the attack while outside OpenAI&#8217;s headquarters, where he was allegedly trying to break in. In his backpack, officers reportedly found a manifesto listing the names and home addresses of other AI executives.
Earlier this year, he <a href="https://morenogama.substack.com/?utm_campaign=profile_chips">wrote</a> Substack posts about death, destiny, and existential risks, or &#8220;x-risk,&#8221; posed by artificial intelligence.</p><p>(Within 48 hours, two others were arrested for <a href="https://sfstandard.com/2026/04/12/sam-altman-s-home-targeted-second-attack/">shooting</a> at Altman&#8217;s house before being released pending investigation &#8212; they reportedly had no connection to Moreno-Gama.)</p><p>While AI-informed types <a href="https://x.com/sriramk/status/2043494156123701622">were</a> <a href="https://x.com/deanwball/status/2042782724440612952">quick</a> to <a href="https://x.com/_NathanCalvin/status/2042663669163565459">condemn</a> the violence, many outside the Silicon Valley bubble seemed thrilled. &#8220;One does have to admire the skills of someone who can pour a good cocktail in this weather,&#8221; someone <a href="https://www.reddit.com/r/sanfrancisco/comments/1sjsrzr/comment/ofue53l/?utm_source=share&amp;utm_medium=web3x&amp;utm_name=web3xcss&amp;utm_term=1&amp;utm_content=share_button">posted</a> to Reddit. Instagram users &#8212; many with full legal names publicly displayed in their profiles &#8212; <a href="https://x.com/paularambles/status/2043469888019480671?s=20">reacted</a> similarly. &#8220;Where can we support their bail fund? &#10024;&#8221; one said. &#8220;New love language just dropped &#128525;,&#8221; replied another.</p><p>The vibe mismatch between the AI crowd and outsiders was unsettling, but similar splits over violence against corporate targets have happened before. In December 2024, #FreeLuigi went <a href="https://www.newsweek.com/luigi-mangione-social-media-reaction-support-freeluigi-1998027">viral</a> as users &#8212; mostly young people &#8212; painted Luigi Mangione as a folk hero after he killed UnitedHealthcare CEO Brian Thompson. 
The handsome suspect <a href="https://luigithemusical.info/">became</a> the protagonist of a buzzy musical and dozens of steamy <a href="https://www.wattpad.com/stories/luigimangione">fanfics</a>. Fans even <a href="https://luigimangionestore.com/?srsltid=AfmBOooQS6_n328-P0HRbBZ2ihDtdlDU4CyAWovUDMVx2lHx9ibzk9oe">bought</a> merch.</p><p>Silicon Valley loves to say we can &#8220;just do things,&#8221; but when it comes to meaningfully changing the arc of AI development, most of us can&#8217;t do anything at all. Committing violence against tech CEOs and their families is not forgivable, but it&#8217;s certainly agentic. To a radicalized young man who believes that if anyone builds superintelligent AI, everyone <a href="https://ifanyonebuildsit.com/">dies</a>, burning down a CEO&#8217;s house and company headquarters to save the human race might seem like the lesser of two evils. After all, when a trolley approaches a fork in the tracks, utilitarian logic says you should pull the lever that sends it over a single victim if doing so would save the many who would otherwise die as a consequence of your inaction.</p><p>Over the last couple of years, Moreno-Gama reportedly posted 34 messages to anti-AI activist group PauseAI&#8217;s public Discord under the <em>Dune-</em>inspired handle &#8220;Butlerian Jihadist,&#8221; referencing the book&#8217;s fictional crusade against thinking machines.
PauseAI, whose US branch founder Holly Elmore gave a speech at last month&#8217;s Stop the AI Race protest, <a href="https://pauseai.info/statement-sam-altman-attack-2026">stated</a> that it &#8220;unequivocally condemns this attack and all forms of violence, intimidation and harassment,&#8221; and that &#8220;violence against anyone is antithetical to everything we stand for.&#8221;</p><p>This attack and its aftermath &#8212; celebrated by anti-AI normies, rebuked by AI safety insiders, motivated by the same &#8220;doomer&#8221; canon that <a href="https://x.com/sama/status/1621621724507938816?s=20">inspired</a> Altman to found OpenAI in the first place &#8212; may have been a foreseeable consequence of a technical, existential-risk-focused community sounding the alarm before building a broadly-appealing coalition to channel people&#8217;s anxiety and anger into political action.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wkIs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80a8df81-56df-4f3c-b535-7a4f81d0dad1_6192x4128.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wkIs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80a8df81-56df-4f3c-b535-7a4f81d0dad1_6192x4128.jpeg 424w, https://substackcdn.com/image/fetch/$s_!wkIs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80a8df81-56df-4f3c-b535-7a4f81d0dad1_6192x4128.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!wkIs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80a8df81-56df-4f3c-b535-7a4f81d0dad1_6192x4128.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!wkIs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80a8df81-56df-4f3c-b535-7a4f81d0dad1_6192x4128.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wkIs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80a8df81-56df-4f3c-b535-7a4f81d0dad1_6192x4128.jpeg" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/80a8df81-56df-4f3c-b535-7a4f81d0dad1_6192x4128.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2883198,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/195608595?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80a8df81-56df-4f3c-b535-7a4f81d0dad1_6192x4128.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!wkIs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80a8df81-56df-4f3c-b535-7a4f81d0dad1_6192x4128.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!wkIs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80a8df81-56df-4f3c-b535-7a4f81d0dad1_6192x4128.jpeg 848w, https://substackcdn.com/image/fetch/$s_!wkIs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80a8df81-56df-4f3c-b535-7a4f81d0dad1_6192x4128.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!wkIs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80a8df81-56df-4f3c-b535-7a4f81d0dad1_6192x4128.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><em>Stop the AI Race protest in San Francisco in March. Credit: Stop the AI Race / Rachel Shu</em></figcaption></figure></div><p>At least for now, protests against future superintelligence, focused on hedged demands such as &#8220;CEOs should commit to pause AI development if everyone else does too,&#8221; can only rally so many people. But even those who don&#8217;t spend time on LessWrong love Gen Z dudes who attack CEOs. This narrative transcends AI safety, <a href="https://www.bloodinthemachine.com/p/why-the-ai-backlash-has-turned-violent">argued</a> tech journalist Brian Merchant: it &#8220;identif[ies] billionaire AI executives as uniquely powerful actors, who are all but unaccountable to democratic constraints and society&#8217;s best interests.&#8221;</p><p>While it&#8217;s easy to dismiss memes and Instagram comments as nihilistic noise, the legitimate frustration behind them is more widespread than the AI safety community seems to realize. A recent <em>NBC News </em>poll <a href="https://www.nbcnews.com/politics/politics-news/poll-majority-voters-say-risks-ai-outweigh-benefits-rcna262196">found</a> that, netting positive views against negative ones, more voters feel worse about AI than about ICE (i.e., really bad).
Mainstream anxieties about job loss, cyberattacks, and mass surveillance &#8212; which all <a href="https://report2025.seismic.org/media/documents/On_the_Razors_Edge_Seismic_Report_2025.pdf">rank</a> relatively high on the public&#8217;s list of concerns about what AI might do &#8212; tie into x-risk-pilled concerns such as <a href="https://gradual-disempowerment.ai/">gradual disempowerment</a> and <a href="https://www.rand.org/randeurope/research/projects/2025/examining-risks-and-response-for-ai-loss-of-control-incidents-cm.html">loss of control</a>.</p><p>The AI safety community has historically worried that addressing normie concerns would come at the expense of x-risk, and possibly knock it off potential legislation altogether. But these pressing, present socioeconomic issues may be the gateway that gets x-risk on the table. &#8220;These are the things that people are feeling right now,&#8221; said Alex McCoy, Head of Left Coalition at political advocacy group Humans First. &#8220;It doesn&#8217;t mean that they don&#8217;t believe in Skynet.&#8221;</p><p>Politicians respond to what they think their constituents want, and the vast majority of Americans do not want AI to continue along its current trajectory. The momentum is there &#8212; people are beginning to take action, however imprecisely, driven by deeply-rooted feelings of unfairness and demoralization. Traditional AI safety advocates may just need to cede enough control of their narrative to harness it.</p><h3>It all comes down to insularity</h3><p>Traditionally, the work of AI safety has excluded the public by design. For years, AI safety discourse has largely unfolded behind closed doors, between researchers, executives, and the policymakers they have direct access to. That insider approach has often been prioritized over broader public engagement, for what seemed like good strategic reasons.
Long, information-dense blog posts, closed-door meetings, and money carry more weight in this world than mainstream media, anyway. Why waste time explaining &#8220;AGI&#8221; to normies when they&#8217;re not drafting policy proposals?</p><p>The AI safety field was built on the idea that, with enough compute, money, and brainpower, a small group of very smart people can save the world &#8212; no help, input, or permission required. They reasoned that preventing tech companies from building a deadly machine god <a href="https://www.lesswrong.com/posts/dGotimttzHAs9rcxH/relitigating-the-race-to-build-friendly-ai">should</a> be done as quietly as possible, without attracting the kind of public or political attention that might inadvertently spark an ill-fated race towards superintelligence. In hindsight, this wasn&#8217;t paranoid: philosopher Nick Bostrom&#8217;s 2014 bestseller <em>Superintelligence: Paths, Dangers, Strategies </em>surfaced concerns about x-risk from the depths of the rationalist blogosphere to the <em>New York Times </em>bestseller list &#8212; and partially <a href="https://www.noemamag.com/the-politics-of-superintelligence/#:~:text=The%20contemporary%20incarnation%20of%20this,Worth%20reading%20Superintelligence%20by%20Bostrom.">motivated</a> OpenAI&#8217;s founding.</p><p>But in choosing to operate largely behind the scenes, the AI safety community created a vacuum that&#8217;s now being filled by industry lobbyists, populist politicians, and radicalized individuals. Leading the Future, a pro-AI super PAC network backed by venture capitalists and AI executives, <a href="https://elections.transformernews.ai/pacs/C00916114">has</a> reported raising more than $75m, and claims to have raised $140m. Meanwhile, Bernie Sanders, Gen Z influencers, and data center NIMBYs are leading the populist backlash against the industry.
Existential risk has only very recently <a href="https://x.com/MIRIBerkeley/status/2029334828110496106">entered</a> the conversation, and it comes with baggage.</p><p>While AI safety is not the same thing as effective altruism, the two are deeply <a href="https://www.transformernews.ai/p/the-perils-of-ai-safetys-insularity?utm_source=publication-search">entangled</a>. EA, a movement that attempts to maximize human flourishing through quantitative reasoning, funneled a lot of talent and money toward early AI safety research. Today, many of the field&#8217;s biggest names speak at EA conferences, share donors, and <a href="https://www.politico.com/news/2023/10/13/open-philanthropy-funding-ai-policy-00121362">shape</a> AI policy decisions. At this point, it&#8217;s very hard to disentangle the public&#8217;s perception of &#8220;AI safety&#8221; from EA itself.</p><p>The community&#8217;s manufactured insularity and inclination toward &#8220;Secret Congress&#8221;-esque <a href="https://www.slowboring.com/p/the-rise-and-importance-of-secret">policymaking</a> fostered an aura of opacity, leading to public distrust from all sides. &#8220;In Silicon Valley, the EAs are viewed as one step to the right of Elizabeth Warren,&#8221; a former Biden official <a href="https://www.politico.com/news/magazine/2026/04/01/silicon-valley-bernie-sanders-ai-coalition-00850895">told</a> <em>Politico. </em>&#8220;Conversely, in DC, on the left they think EAs are the devil.&#8221; Everywhere else, the public&#8217;s response is mostly <em>&#8220;Who are you guys?&#8221; </em>said Akshyae Singh, co-founder of The Frame, an accelerator for AI safety content creators.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;a7079c7c-f0d1-4118-85ed-0319831be04d&quot;,&quot;caption&quot;:&quot;The foundations of modern AI were laid in academia. 
Before the field of machine learning had a name, neuroscientists, psychologists and theoreticians introduced the first artificial neural networks. Many of the basic processes that help AI learn, including&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The perils of AI safety&#8217;s insularity&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:103211477,&quot;name&quot;:&quot;Celia Ford&quot;,&quot;bio&quot;:&quot;I'm an ex-neuroscientist and current AI reporter at Transformer. When I'm not writing, I play bass, dance, and kiss my cats on the forehead. &quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2cbdae53-b50a-4b34-9434-9a5693d42b6c_3058x3058.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-12-04T18:00:46.502Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!XiTf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb058ddd2-f108-4a6e-b82b-4ac72fc3f330_2121x1414.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/the-perils-of-ai-safetys-insularity&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:180697413,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:60,&quot;comment_count&quot;:11,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p
>This problem isn&#8217;t completely lost on insiders. Two years ago, a <a href="https://forum.effectivealtruism.org/posts/tEmQrfMs9qdBPrGKh/what-mistakes-has-the-ai-safety-movement-made">survey</a> asked 17 prominent AI safety experts what big mistakes the AI safety community was making. Their biggest gripes: &#8220;overly theoretical argumentation&#8221; and &#8220;being too insular.&#8221; A couple of respondents explicitly called out the EA urge to dismiss public outreach in favor of technical problem-solving as the root of the problem. Richard Ngo, an independent researcher who previously worked at DeepMind and OpenAI, argued that, at least at the time, AI safety overemphasized fundraising and back-room deals at the expense of the field&#8217;s public image. &#8220;From the perspective of an external observer,&#8221; he responded, &#8220;it&#8217;s difficult to know how much to trust stated motivations, especially when they tend to lead to the same outcomes as deliberate power-seeking.&#8221; Another respondent, METR researcher Daniel Filan, said, &#8220;If the three biggest oil companies were all founded by people super concerned about climate change, you might think that something was going wrong.&#8221;</p><p>Others, including prominent commentator Anton Leicht, <a href="https://writing.antonleicht.me/p/dont-build-an-ai-safety-movement">reject</a> the idea of building a popular movement altogether. One of Leicht&#8217;s primary concerns, echoed across the upper echelons of Silicon Valley, is that in the process of trying to address everyone&#8217;s concerns, a populist AI safety movement will wind up dropping the one thing that ought to be its central concern: existential risks.
The line items that rally a crowd &#8212; data center <a href="https://substack.com/home/post/p-192704341">moratoriums</a>, for instance, or child safety legislation &#8212; <a href="https://writing.antonleicht.me/p/press-play-to-continue">don&#8217;t</a> necessarily make for great policy. Leicht believes that the AI safety community&#8217;s strongest assets are high expert credibility and the fact that most people seem to vaguely support AI regulation. He worries that a poorly-executed popular movement could ruin both.</p><p>Even so, Leicht acknowledges that there are tradeoffs to playing inside baseball. &#8220;There is a huge epistemic gap between the small community of people who think they&#8217;re on the inside, and the rest of the world,&#8221; he told me. Like many others in the field, he&#8217;s wary of communicating to the &#8220;lowest common denominator&#8221; about existential risk without feeling confident that non-experts have enough background knowledge to contribute productively. &#8220;I don&#8217;t have a solution to it,&#8221; he said. &#8220;I think people are just not that good at it.&#8221;</p><p>McCoy described this perspective as &#8220;emblematic of the sort of anti-politics&#8221; of the AI safety community. Congress will address AI one way or another, he said, and whatever organized constituencies show up will shape what that legislation will ultimately look like.</p><p>&#8220;If the AI safety community does not take seriously the necessity to engage in capital-P politics,&#8221; he cautioned, &#8220;its concerns will be left out.&#8221;</p><h3>Who&#8217;s missing?</h3><p>One big problem, Singh argues, is that &#8220;people in this field don&#8217;t tend to do most things unless there&#8217;s absolute concrete proof&#8221; that it will be effective.
In the absence of a previously-successful effort to increase demographic diversity within the AI safety community, there are no hard numbers saying that decentering the concerns of a tiny, homogeneous group of people will make passing AI regulation easier. But a mass movement with the power to pressure governments and AI companies, by definition, has to include a lot of people &#8212; including those who don&#8217;t currently fit inside the AI safety bubble.</p><p>In the world of left-wing community organizing, there&#8217;s a common refrain: <em>center the most impacted. </em>&#8220;That&#8217;s not out of some kind of virtue signaling,&#8221; McCoy said. &#8220;It&#8217;s because those are the people that are going to fight the hardest.&#8221; But those most directly feeling the real-world impacts of AI today &#8212; first-generation college graduates struggling in an increasingly-nonsensical job market; women humiliated by nonconsensual deepfakes; content moderators in Nairobi exposed to traumatic content for a couple dollars an hour &#8212; are those least represented within the AI safety community.</p><p>And there <em>is </em>data to back this up. The 2025 Seismic Report, for instance, <a href="https://report2025.seismic.org/">found</a> that women are over twice as concerned about AI as men. On a global level, a UN report <a href="https://www.ilo.org/resource/news/new-ilo-data-confirm-women-face-higher-workplace-risks-generative-ai-men">found</a> that female-dominated occupations are almost twice as likely to have high automation potential as male-dominated ones. And women are over three times more likely than men to have their jobs <a href="https://www.ilo.org/publications/generative-ai-and-jobs-refined-global-index-occupational-exposure">disrupted</a> by AI.
Yet, women are few and far between in the AI safety field, particularly in technical roles &#8212; perhaps in part <em>because </em>near-term socioeconomic concerns often wind up lower on the priority list than more abstract concerns about alignment, interpretability, and the governance of AI systems that don&#8217;t exist yet.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;fcff4245-7f09-45d0-abb2-26ce87d05368&quot;,&quot;caption&quot;:&quot;Abdication&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The left is missing out on AI &quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:328772711,&quot;name&quot;:&quot;Dan Kagan-Kans&quot;,&quot;bio&quot;:&quot;writer on AI, science, ideas for publications like Transformer, the Wall Street Journal, American Scholar&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!ZCVj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1345599-db89-4a6b-9947-028c555de14c_1525x1525.jpeg&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://kagankans.substack.com/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://kagankans.substack.com&quot;,&quot;primaryPublicationName&quot;:&quot;Dan 
Kagan-Kans&quot;,&quot;primaryPublicationId&quot;:8041221}],&quot;post_date&quot;:&quot;2026-02-16T16:02:47.781Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!iL1E!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F593220f8-7a9d-4b5d-8d1d-534d17b3e2fe_1200x1200.gif&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/the-left-is-missing-out-on-ai-sanders-doctorow-bender-bores&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:188136159,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:346,&quot;comment_count&quot;:217,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>The dominant AI safety discourse &#8220;primarily serves the interests of technological institutions and stakeholders in high-income nations, often privileging abstract future scenarios over pressing sociotechnical harms that disproportionately affect marginalized communities,&#8221; the Brookings Institution <a href="https://www.brookings.edu/articles/a-new-writing-series-re-envisioning-ai-safety-through-global-majority-perspectives/">wrote</a> last year. Growing up in India, this deeply frustrated Singh. &#8220;If brown people don&#8217;t have a voice, there is no way in hell that you&#8217;re making a solution that benefits me,&#8221; Singh said. 
&#8220;Like, how do you know how I feel?&#8221;</p><p>Getting abstract future scenarios onto the policy agenda may, perhaps counterintuitively, depend on addressing the concrete harms already shaping people&#8217;s lives &#8212; and, by extension, informing who they vote for and which AI products they use and pay for. The AI safety community already tried relying on a handful of well-connected experts to regulate the AI industry behind closed doors, and it&#8217;s not working.</p><p>The AI safety field is mostly talking to itself, and it&#8217;s created an information void that&#8217;s being filled by populist anger. Outside Silicon Valley, most people don&#8217;t experience AI as a powerful coding tool or existential threat. Rather, it&#8217;s a symbol of the machine we&#8217;re meant to be raging against &#8212; not an extinction risk, per se, but something billionaires are using to forcibly strip humans of their humanity.</p><p>&#8220;I don&#8217;t think it&#8217;s about persuading people that superintelligence is bad,&#8221; said John Sherman, president of the AI Risk Network. &#8220;It&#8217;s about persuading people that they can make a difference.&#8221;</p><h3>How to (hopefully) not screw up the AI safety movement</h3><p>Sherman proudly introduced himself to me as a Baltimore resident who, until a couple years ago, &#8220;didn&#8217;t know anything about AI.&#8221; After decades of working in TV and video production, &#8220;I can edit in Adobe Premiere,&#8221; he joked. &#8220;That&#8217;s about as technical as I get.&#8221;</p><p>Then he stumbled across Eliezer Yudkowsky&#8217;s 2023 article in <em>TIME Magazine</em>, in which he wrote: &#8220;If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.&#8221;</p><p>Today, Sherman also runs a nonprofit called GuardRail Now, focused on communicating AI x-risk to normies. 
&#8220;My primary concern is x-risk,&#8221; he said, but &#8220;I think we have to take side roads to get to the destination. And a lot of people in AI safety are unwilling to consider that &#8212; but they&#8217;re not getting anywhere. So like, how&#8217;s it going? You&#8217;re stuck.&#8221;</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;400f4c15-7fee-4752-a0a6-82c27fa0750f&quot;,&quot;caption&quot;:&quot;Welcome to Transformer, your weekly briefing of what matters in AI. And if you&#8217;ve been forwarded this email, click here to subscribe and receive future editions.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;AI populism's safety problem&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:1083827,&quot;name&quot;:&quot;Shakeel Hashim&quot;,&quot;bio&quot;:&quot;Shakeel is the editor of Transformer, a publication about the power and politics of transformative AI. He was previously a news editor at The Economist.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/98b3ea1d-6a2a-42d1-bfe9-e9d1bf258a23_2549x2549.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null},{&quot;id&quot;:103211477,&quot;name&quot;:&quot;Celia Ford&quot;,&quot;bio&quot;:&quot;I'm an ex-neuroscientist and current AI reporter at Transformer. When I'm not writing, I play bass, dance, and kiss my cats on the forehead. 
&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2cbdae53-b50a-4b34-9434-9a5693d42b6c_3058x3058.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null},{&quot;id&quot;:13910071,&quot;name&quot;:&quot;Veronica Irwin&quot;,&quot;bio&quot;:&quot;Senior AI Policy Reporter at Transformer X/Bsky: @vronirwin IG/Threads: @vronwrites LinkedIn: https://www.linkedin.com/in/veronica-irwin-009266112/ Signal: vronirwin.72 veronica(at)transformernews(dot)ai &quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1c4d4e71-bb11-4be9-9444-08b62fd61e66_400x400.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-04-03T15:31:07.604Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ba4ae27b-e859-4d4c-890c-6c176a74f8e6_1456x1048.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/ai-populism-bernie-sanders-aoc-pause-moratorium-safety&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:193070758,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:18,&quot;comment_count&quot;:5,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>Sherman&#8217;s theory of change is simple: people are feeling a visceral sense of unfairness. 
AI companies are reshaping society without anyone&#8217;s consent, so &#8220;we need to make unsafe AI bad for business.&#8221; This can be framed in terms of existential risk, he said: &#8220;We&#8217;re building systems that we don&#8217;t know how to control, that we don&#8217;t understand how they work, that the experts say can kill everybody. Why would we do that?&#8221;</p><p>Imagine a family in Ohio, where everyone is experiencing AI differently. Dad is uneasy about new AI policies at his white collar job, and a data center is being built behind the neighborhood school. His teenager is doing god-knows-what on ChatGPT, and his college kid is talking about how she wants to drop out and work in construction. &#8220;To build a real movement,&#8221; Sherman said, AI safety advocates &#8220;need to run full speed ahead towards people who are concerned about their kids, towards people who are concerned about data centers &#8230; the whole thing, all of it.&#8221;</p><p>This approach makes many in the AI safety community uneasy, reasonably so. A little over two weeks after hiring Sherman as its Director of Public Engagement, the Center for AI Safety parted ways with him  after clips <a href="https://x.com/drtechlash/status/1924639190958199115">surfaced</a> of Sherman telling podcast listeners that the &#8220;proper reaction&#8221; to the AI race was to &#8220;walk to the labs across the country and burn them down.&#8221; CAIS <a href="https://x.com/CAIS/status/1924849463874785673">announced</a> that the &#8220;connotation of statements like this do not reflect CAIS&#8217;s values,&#8221; to distance itself from this kind of fiery rhetoric &#8212; which, while in this case hypothetical, could radicalize people who may already be angry enough to act. 
(Sherman <a href="https://x.com/ForHumanityPod/status/1925346273353199917">said</a> he regretted using the language, clarifying that he meant &#8220;when the general public finds out their lives are being risked for AI, the reasonable reaction is to shut it down.&#8221; However, he has continued to <a href="https://x.com/ForHumanityPod/status/2038644652329304118">describe</a> AI in rather intense, hyperbolic terms.)</p><p>But building a movement doesn&#8217;t mean encouraging violence. The climate movement, for example, managed to grow from scientists expressing concern among themselves into a global force, without linearly increasing the risk oil executives faced from potential attackers. Decades of climate activism, including conveying the existential stakes of climate change to the public, haven&#8217;t led to the kind of violence one might expect if x-risk messaging were a reliable radicalizer. Arguably, political organizing gives individuals a more structured outlet for their righteous frustration than attempted murder. The alternative &#8212; an AI safety community that stays silent while populists take up space around them &#8212; opens the door to unstructured acts of radical violence <em>and </em>worse policy.</p><p>Even Yudkowsky, Machine Intelligence Research Institute (MIRI) president Nate Soares, and the broader Berkeley rationalist scene they helped build &#8212; who have arguably shaped the conversation around existential risk more than anyone &#8212; recently pivoted hard toward public outreach. Soares, who co-authored <em>If Anyone Builds It, Everyone Dies, </em>has been on tour for the book. Last month, he spoke at the Stop the AI Race protest in San Francisco.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;0d68f9cb-dbe4-4fc4-a07e-e299d980aaa7&quot;,&quot;caption&quot;:&quot;Eliezer Yudkowsky has all the makings of a figure from Greek tragedy. 
He started off his career trying to build artificial general intelligence, captivated by the prospect of technological and social progress a superhuman mind could bring. But he soon realized that the system he was trying to build could be very &#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Book Review: 'If Anyone Builds It, Everyone Dies'&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:1083827,&quot;name&quot;:&quot;Shakeel Hashim&quot;,&quot;bio&quot;:&quot;Shakeel is the editor of Transformer, a publication about the power and politics of transformative AI. He was previously a news editor at The Economist.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/98b3ea1d-6a2a-42d1-bfe9-e9d1bf258a23_2549x2549.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-09-16T12:51:25.630Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!iBVG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19ee8fc0-d80b-40eb-9eaf-89a28a83fae2_960x540.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/review-if-anyone-builds-it-everyone-dies-yudkowsky-soares&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:173743267,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:9,&quot;comment_count&quot;:6,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;
:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>In February, Soares and Yudkowsky sat down with Bernie Sanders at MIRI&#8217;s Berkeley offices. After decades of treating persuading the public at large as a distraction, some of the biggest players of the inside game have, however begrudgingly, realized they can&#8217;t do this alone.</p><p>It&#8217;s not an accident that Sanders was the first to get there. Perhaps more than anyone else in DC, his politics is centered around pushing back against unaccountable billionaires and their friends in government &#8212; which, stripped of jargon, is what preventing superintelligence ultimately requires. Given the Trump administration&#8217;s damage to American soft power abroad and the unimaginable amount of money AI companies have to throw around, an international treaty to pause the race looks relatively unattainable.</p><p>But AI companies are still businesses. And the people with the most leverage over large corporations are customers, workers, investors and voters &#8212; not researchers writing alignment papers.</p><p>&#8220;AI safety as a persuasive cause will never have more power than the industry&#8217;s hard power in dollars and political influence,&#8221; McCoy said, &#8220;unless it is allied to constituencies who can lend their power together.&#8221; Conversations about extinction risk among rich tech guys in San Francisco, he added, are &#8220;not the message that is going to get hundreds of people to show up to a protest.&#8221; The protests that matter will be about jobs, surveillance, kids, data centers &#8212; what McCoy calls the &#8220;symptoms.&#8221; But the disease, in his framing, is exactly what x-risk advocates have been trying to address, seen from another angle: &#8220;an unaccountable set of billionaire investors and executives who have no guardrails and are seeking to concentrate an incredible amount of power in their companies.&#8221;</p><p>One could argue 
(and Leicht does) that making AI safety an &#8220;omnicause&#8221; addressing everyone&#8217;s prosaic concerns will get it elbowed out of whatever legislation ends up passing. But existential risk is already <a href="https://report2025.seismic.org/">last</a> on the public&#8217;s list of AI concerns across every demographic, according to last year&#8217;s Seismic poll. It might <em>need </em>to join a coalition of other causes to get on the table at all.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/subscribe?"><span>Subscribe now</span></a></p><p>&#8220;AI safety right now is islands,&#8221; Sherman told me. &#8220;We need an ocean to connect the islands.&#8221;</p><p>Silicon Valley speaks its own language, and most people outside the bubble don&#8217;t understand it. &#8220;The best thing that the AI safety movement could do would be to build an army of surrogates who are regular people, going into their own communities and talking about this stuff,&#8221; Sherman said &#8212; &#8220;not strangers from a foreign land speaking a different language.&#8221;</p><p>Singh agrees. The catastrophic threats posed by AI aren&#8217;t hard to grasp. &#8220;Like, my dad, my mom &#8212; I can explain it to a lot of people, and they&#8217;ll get what&#8217;s going on,&#8221; Singh said. But they&#8217;re often <em>made </em>unintelligible by people who treat communicating to people without technical backgrounds as an inconvenience rather than a necessity.
So Singh <a href="https://www.washingtonpost.com/technology/2026/04/18/ai-doom-influencers-safety/">launched</a> the Frame Fellowship, an eight-week incubator for content creators to bring AI safety discourse to the masses via <a href="https://www.youtube.com/@mikeyposada">YouTube</a>, <a href="https://www.tiktok.com/@jatgpt_">TikTok</a>, and <a href="https://www.instagram.com/futuretense.tv/">Instagram</a> (Micha&#235;l Trazzi, who organized the Stop the AI Race protest, was <a href="https://www.youtube.com/watch?v=-qWFq2aF8ZU">also</a> a fellow).</p><p>Long-term existential concerns about AI aren&#8217;t separate from near-term populist anxieties. Loss of control and gradual disempowerment are natural extensions of power concentration and job displacement. When microinfluencers talk about AI in &#8220;Get Ready With Me&#8221; TikToks, or neighborhoods band together at town hall meetings to voice their concerns, the case for existential risk becomes the endpoint of what people are already afraid of.</p><p>The AI safety community has a window of opportunity to make its case to the broader public, and some advocates already are. While accelerationists such as Marc Andreessen have <a href="https://x.com/pmarca/status/2046014100342473144">dismissed</a> efforts to communicate about AI risks as &#8220;propaganda&#8221; fueled by &#8220;unaccountable dark money,&#8221; pro-industry leaders are <a href="https://www.transformernews.ai/p/how-to-buy-an-ai-grassroots-movement-build-american-ai-leading-the-future?utm_source=publication-search">also</a> <a href="https://www.a16z.news/p/introducing-the-a16z-new-media-fellowship">investing</a> in communicating beyond the Bay Area bubble. The majority of people in the US feel uncomfortable about the current trajectory of AI, and this discomfort will likely turn into action.
Whether that manifests as voting power or bottles of gasoline flying over San Francisco depends on building a movement.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/p/the-ai-safety-movement-needs-normies?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/p/the-ai-safety-movement-needs-normies?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p>]]></content:encoded></item><item><title><![CDATA[GPT-5.5 and the broken state of government evals]]></title><description><![CDATA[Transformer Weekly: DeepSeek V4, a new CAISI director, and Liccardo holds out on Obernolte]]></description><link>https://www.transformernews.ai/p/openai-shouldnt-be-deciding-if-its-gpt-55</link><guid isPermaLink="false">https://www.transformernews.ai/p/openai-shouldnt-be-deciding-if-its-gpt-55</guid><dc:creator><![CDATA[Shakeel Hashim]]></dc:creator><pubDate>Fri, 24 Apr 2026 14:02:17 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b090f820-23e7-4be9-a42d-78ff8c856b6b_1456x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to Transformer, your weekly briefing of what matters in AI. And if you&#8217;ve been forwarded this email, <a href="https://www.transformernews.ai/welcome">click here to subscribe</a> and receive future editions.</em></p><p><em>And a reminder: applications close <strong>this Sunday</strong> for our <strong>Head of Audience</strong> role. 
If you&#8217;d like to own Transformer&#8217;s growth strategy, <a href="https://www.transformernews.ai/p/head-of-audience-job-listing-recruitment">make sure to apply.</a></em></p><blockquote><h3>NEED TO KNOW</h3></blockquote><ul><li><p><strong>DeepSeek</strong> released its <strong>V4 model</strong>, which it says is three to six months behind the performance of the leading frontier models.</p></li><li><p><strong>Chris Fall </strong>will reportedly be the new director of the <strong>Center for AI Standards and Innovation</strong>.</p></li><li><p><strong>Rep. Sam Liccardo</strong> said he won&#8217;t co-sponsor <strong>Rep. Jay Obernolte&#8217;s</strong> forthcoming AI bill.</p></li></ul><p><em>But first&#8230;</em></p><div><hr></div><blockquote><h3>THE BIG STORY</h3></blockquote><p><strong>OpenAI&#8217;s newly released GPT-5.5</strong> is, <a href="https://deploymentsafety.openai.com/gpt-5-5/gpt-5-5.pdf">according</a> to the UK&#8217;s AI Security Institute (AISI), the world&#8217;s most capable model on individual cyber tasks, and can complete a &#8220;32-step corporate-network attack simulation estimated to take an expert 20 hours.&#8221; It appears to be similarly capable (if slightly worse) at carrying out a cyberattack as Anthropic&#8217;s unreleased Mythos.</p><p>But unlike Anthropic, OpenAI is making a version of GPT-5.5 available to the general public. Rather than restricting access to the model altogether, OpenAI hopes to restrict the use of particularly dangerous <em>capabilities</em> through its safety stack &#8212; making the model refuse concerning cyber requests from normal users, and only allowing such requests from those vetted under its &#8220;Trusted Access&#8221; program.</p><p>Yet we have no idea if that safety stack is good enough. And we have reason to believe that it might not be. 
Alongside its cyber testing, AISI also <a href="https://x.com/NateBurnikell/status/2047382978561552423">tested</a> OpenAI&#8217;s safeguards, and &#8220;found a universal jailbreak with six hours of expert red teaming.&#8221; Such a jailbreak would let users circumvent OpenAI&#8217;s safeguards, giving them access to the powerful &#8212; and, in the wrong hands, dangerous &#8212; cyber capabilities. OpenAI claims to have addressed the issue, and says its own external red-teaming campaigns confirmed that the final launch configuration blocked all verified high-severity cyber jailbreaks. But, crucially, AISI &#8212; a trusted third-party evaluator with immense technical expertise &#8212; was not able to properly run tests  &#8220;to verify the effectiveness of the final configuration.&#8221;</p><p><strong>In other words: we do not know if GPT-5.5 is actually safe to release. </strong>All we have to rely on is OpenAI&#8217;s word.</p><p>Such a situation may have been acceptable in 2023. In 2026, with models posing genuine risks to national security and plenty of other vital systems, it no longer is. It is laudable that OpenAI and other companies allow AISI, the US&#8217;s CAISI, and third-party evaluators to perform pre-deployment evaluations. But if those organizations are unable to actually verify if a model is safe to release &#8212; and if a company has no obligation to listen to them &#8212; the exercise is limited.</p><p>This is not just an OpenAI problem. If Anthropic wanted to go down the same route, nothing would stop it. And as this week&#8217;s Mythos <a href="https://www.bloomberg.com/news/articles/2026-04-21/anthropic-s-mythos-model-is-being-accessed-by-unauthorized-users">leak</a> showed, its own security practices do not appear up to the task of safeguarding dangerous capabilities.</p><p>None of this is to say that GPT-5.5 is dangerous. 
OpenAI&#8217;s updated safeguards may in fact be robust to jailbreaks, and other aspects of its safety stack (such as monitoring users&#8217; requests and banning accounts that raise too many red flags) provide an extra level of security. The point is that we are currently at the mercy of a private company grading its own homework, with all the frontier labs making the final call on what to release and when. Given the potential consequences of <em>unsafe</em> releases, that is no longer acceptable.</p><p>GPT-5.5 might be totally safe to release. It also might not be. Neither OpenAI, nor Anthropic, nor any other frontier developer should be the one who gets to decide.</p><p><em>&#8212; Shakeel Hashim</em></p><div><hr></div><blockquote><h3>THIS WEEK ON TRANSFORMER</h3></blockquote><ul><li><p><strong><a href="https://www.transformernews.ai/p/ai-safety-pacs-should-be-more-transparent-public-first-action">AI safety PACs should be more transparent about who&#8217;s funding them</a></strong> &#8212; <strong>Veronica Irwin</strong> asks why Public First Action isn&#8217;t disclosing all its donors.</p></li></ul><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><blockquote><h3>THE DISCOURSE</h3></blockquote><p><strong>Sam Altman </strong><a href="https://x.com/kyliebytes/status/2046948647611621408">shared</a> Mythos takes on <em>Core Memory</em>:</p><ul><li><p>&#8220;If what you want is, &#8216;we need control of AI, just us, because we&#8217;re the trustworthy people,&#8217; I think fear-based marketing is probably the most effective way to justify that.&#8221;</p></li><li><p>&#8220;It is clearly incredible marketing to say, &#8216;We have built a bomb. 
We are about to drop it on your head. We will sell you a bomb shelter for $100m.&#8217;&#8221;</p></li></ul><p><strong>Trump </strong><a href="https://x.com/Dareasmunhoz/status/2046574025258754190">said</a> Anthropic is led by &#8220;high IQ people&#8221; on CNBC:</p><ul><li><p>&#8220;We&#8217;ll get along with [Anthropic] just fine &#8230; I think they can be of great use.&#8221;</p></li></ul><p><strong>roon </strong>(sort of) <a href="https://x.com/tszzl/status/2047007351266476397">praised</a> Anthropic, too:</p><ul><li><p>&#8220;Claude is an excellent product and it bodes well for [Anthropic] that their main problem is everyone really wants it and so they have to do odd shit to shake off demand.&#8221;</p></li></ul><p>E/acc anon <strong>bayes </strong><a href="https://x.com/bayeslord/status/2045966479338901898">thinks</a> the AI industry has failed to articulate positive futures:</p><ul><li><p>&#8220;If the singularity is making it hard to see the future, people&#8217;s instinctive reactions will take over. For most people that default is fear.&#8221;</p></li><li><p>&#8220;Many people in tech are in a shameless state of defection. Trampling each other on the way to the lifeboats is not a belief system. If we want a future of human flourishing for us and our descendants, we will have to make it so by fighting against the many powerful forces at odds with this goal.&#8221;</p></li></ul><p><strong>Bill Maher </strong><a href="https://www.youtube.com/watch?v=w5SYm4J4utQ">dedicated</a> a segment of his show to P(doom):</p><ul><li><p>&#8220;I get it. [AI] can do some shit. Still, at the end of the day, you&#8217;re selling your humanity for bar tricks. I mean, what was the plan? 
Just create an all-powerful, self-sustaining super intelligence that can out-think us and then see what happens?&#8221;</p></li><li><p>&#8220;We&#8217;re letting a handful of hoodie-wearing, on-the-spectrum sociopaths, practically robots themselves, roll the dice on species extinction &#8230; even these guys are afraid of what they&#8217;ve created.&#8221;</p></li></ul><p><strong>Helen Toner </strong><a href="https://x.com/hlntnr/status/2047324666902048946">testified</a> at a Senate hearing that &#8220;beat China!&#8221; isn&#8217;t a great plan:</p><ul><li><p>&#8220;The winner of any AI race between the US and China is the AI.&#8221;</p></li><li><p>&#8220;...it is very important that the US AI sector remains ahead of the Chinese AI sector, but if that&#8217;s at the expense of AI overrunning the entire planet, then that is, you know, that hasn&#8217;t benefited us.&#8221;</p></li></ul><p><strong>Palantir </strong><a href="https://x.com/PalantirTech/status/2045574398573453312">posted</a> a manifesto of sorts:</p><ul><li><p>&#8220;The engineering elite of Silicon Valley has an affirmative obligation to participate in the defense of the nation.&#8221;</p></li></ul><ul><li><p>&#8220;If a US Marine asks for a better rifle, we should build it; and the same goes for software.&#8221;</p></li></ul><p><strong>Zachary Jones</strong> <a href="https://onethousandmeans.substack.com/p/public-first-actions-strategy-doesnt">criticized</a> the Public First super PACs&#8217; strategy:</p><ul><li><p>&#8220;Public First has repeatedly intervened in favor of moderate Democrats against progressive opponents who also back comprehensive AI regulation. 
This has had the effect of polarizing the left against the AI safety community, limiting the capacity of experts to influence outcomes of the emergent wave of anti-AI populism.&#8221;</p></li></ul><div><hr></div><blockquote><h3>POLICY</h3></blockquote><ul><li><p><strong>Trump</strong> <a href="https://cnbc.com/2026/04/21/trump-anthropic-department-defense-deal.html">said</a> a deal with <strong>Anthropic</strong> for Department of Defense use is &#8220;possible&#8221; despite the Pentagon labeling the company a supply chain risk.</p><ul><li><p>The comments came after <strong>Dario Amodei</strong> <a href="https://www.axios.com/2026/04/17/anthropic-white-house-wiles-bessent-amodei">met</a> with White House Chief of Staff <strong>Susie Wiles</strong> and Treasury Secretary <strong>Scott Bessent</strong>.</p></li><li><p>In the meantime, the NSA was revealed as yet another agency reportedly <a href="https://axios.com/2026/04/19/nsa-anthropic-mythos-pentagon">using</a> <strong>Anthropic&#8217;s</strong> <strong>Mythos Preview</strong> model despite the supply chain risk designation.</p></li><li><p><strong>CISA, </strong>however, reportedly <a href="https://axios.com/2026/04/21/cisa-anthropic-mythos-ai-security">lacks</a> access to <strong>Mythos</strong>.</p></li><li><p>The <strong>DC Circuit</strong> judges who denied <strong>Anthropic</strong>&#8217;s request to temporarily limit the supply chain risk designation <a href="https://x.com/mattschett/status/2046063386274714010">kept</a> the case to decide it on the merits.</p></li></ul></li><li><p><strong>Chris Fall</strong> will <a href="https://x.com/theelizmitchell/status/2047440781166743648">reportedly</a> be the new director of the <strong>Center for AI Standards and Innovation</strong>. 
He was previously director of the Department of Energy&#8217;s Office of Science.</p><ul><li><p>Former <strong>OpenAI </strong>and <strong>Anthropic </strong>researcher <strong>Collin Burns</strong> was reportedly lined up for the role, but the <strong>Commerce Department</strong> changed its mind &#8220;while Burns was in the onboarding process,&#8221; according to the <em>Daily Signal&#8217;s</em> Elizabeth Mitchell.</p></li></ul></li><li><p>It was a busy week for <strong>export controls</strong>, with a slew of bills <a href="https://bloomberg.com/news/articles/2026-04-23/ai-export-control-measures-aimed-at-china-gain-steam-in-us-house?taid=69e96a6634a71c00018a720d">advancing</a> out of the <strong>House Foreign Affairs Committee.</strong></p><ul><li><p>That included the <strong>MATCH Act</strong>, which would pressure allies to stop selling semiconductor manufacturing equipment to China, and the <strong>AI Overwatch Act</strong>, which would block <strong>Nvidia Blackwell</strong> chip sales and give Congress veto power over H200 licenses.</p><ul><li><p><strong>Micron </strong>is reportedly <a href="https://reuters.com/legal/government/micron-pushes-us-congress-crack-down-chip-tool-sales-chinese-rivals-sources-say-2026-04-22">lobbying</a> Congress to pass the MATCH Act &#8212; but the bill is reportedly <a href="https://punchbowl.news/article/tech/match-act-tensions/">creating</a> tensions between the US and allies such as the <strong>Netherlands</strong>, where <strong>ASML</strong> opposes the bill.</p></li><li><p><strong>Rep. John Moolenaar</strong>, who chairs the <strong>House China Committee</strong>, <a href="https://x.com/chinaselect/status/2046686402255925458?s=12">called</a> for US-Dutch coordination.</p></li><li><p>Meanwhile, the House Foreign Affairs Committee&#8217;s top Democrat, <strong>Rep. 
Gregory Meeks</strong>, <a href="https://x.com/dareasmunhoz/status/2047020377423974534?s=12">warned</a> that, despite his support for the bill, the MATCH Act could damage US-allied relations and trigger Chinese retaliation.</p></li></ul></li><li><p><strong>Rep. Moolenaar</strong> also <a href="https://x.com/chinaselect/status/2046658922921030084?s=12">introduced</a> the <strong>SCALE Act</strong>, which would establish export controls on advanced semiconductors to China based on a rolling technical threshold tied to adversaries&#8217; chip production capabilities.</p></li><li><p><strong>Commerce Secretary Lutnick</strong> said <strong>Nvidia</strong> has not yet <a href="https://reuters.com/world/asia-pacific/nvidia-has-not-yet-sold-its-h200-ai-chips-china-lutnick-says-2026-04-22">sold</a> its <strong>H200</strong> AI chips to China anyway, citing a lack of permission from the Chinese government.</p></li></ul></li><li><p>OSTP director <strong>Michael Kratsios</strong> <a href="https://x.com/mkratsios47/status/2047316220785905948">said</a> the US has evidence that China is running &#8220;industrial-scale distillation campaigns&#8221; on US models.</p><ul><li><p>He said the government will work with companies to prevent this, and &#8220;explore a range of measures to hold foreign actors accountable for industrial-scale distillation campaigns.&#8221;</p></li><li><p>(One of the bills advanced by HFAC this week was the &#8220;Deterring American AI Model Theft Act&#8221;.)</p></li></ul></li><li><p><strong>Rep. Jay Obernolte</strong> <a href="https://punchbowl.news/article/policy/obernolte-ai-rules-draft">said</a> he is &#8220;close&#8221; to releasing a comprehensive federal AI regulation proposal that would preempt state laws and regulate AI use in specific sectors like healthcare. He also <a href="https://punchbowl.news/article/tech/obernolte-ai-bill/">said</a> it would be &#8220;hundreds&#8221; of pages long.</p><ul><li><p><strong>Rep.
Sam Liccardo</strong> <a href="https://punchbowl.news/article/tech/liccardo-obernolte-artificial-intelligence/">said</a> he won&#8217;t co-sponsor the bill because it doesn&#8217;t have &#8220;critical requirements&#8221; to &#8220;ensure that there is a race to the top, to safety.&#8221;</p></li></ul></li><li><p><strong>Sen. Marsha Blackburn</strong> said she&#8217;ll be &#8220;pushing forward&#8221; with her AI bill this fall.</p><ul><li><p>The bill received a range of new <a href="https://www.blackburn.senate.gov/2026/4/ai/what-they-are-saying-blackburn-announces-growing-momentum-for-trump-america-ai-act">endorsements</a> this week.</p></li></ul></li><li><p><strong>Rep. Blake Moore</strong> <a href="https://blakemoore.house.gov/media/press-releases/congressman-blake-moore-introduces-bill-to-ban-artificial-intelligence-chatbots-in-childrens-toys">introduced</a> a bill to ban AI chatbots in children&#8217;s toys, <a href="https://x.com/FreeSpeech_AI/status/2046688195169951901?s=20">prompting</a> criticism that it would cut off educational AI tools.</p></li><li><p><strong>Florida</strong> <a href="https://news.bloomberglaw.com/litigation/openai-gets-florida-criminal-probe-over-chatgpt-role-in-shooting">sent</a> criminal subpoenas to <strong>OpenAI</strong> after a shooter used <strong>ChatGPT</strong> to plan a mass shooting at Florida State University.</p></li><li><p><strong>Maine</strong> <a href="https://puck.news/is-banning-data-centers-good-politics-for-democrats/">passed</a> a bill banning large-scale data center development &#8212; forcing <strong>Gov. 
Janet Mills</strong>, running in a tough Senate primary race, to decide whether to sign it.</p></li><li><p><strong>China </strong>is reportedly <a href="https://www.bloomberg.com/news/articles/2026-04-24/china-to-curb-us-investment-in-tech-companies-after-meta-deal">restricting</a> AI firms from accepting US investment without government approval, in response to <strong>Meta&#8217;s</strong> acquisition of <strong>Manus</strong>.</p></li></ul><div><hr></div><blockquote><h3>INFLUENCE</h3></blockquote><ul><li><p>Tech giants, including AI companies, reportedly <a href="https://x.com/mjbeckel/status/2046597421980156306?s=20">spent</a> a combined <strong>$20m</strong> on lobbying in Q1 2026.</p><ul><li><p><strong>Anthropic</strong> spent $1.56m, a 333% increase from Q1 2025.</p></li><li><p><strong>OpenAI</strong> spent $1m, an 82% increase from Q1 2025.</p></li></ul></li><li><p>Pro-AI safety policy group <strong>Public First Action</strong> is reportedly <a href="https://politico.com/newsletters/politico-influence/2026/04/17/venezuelas-rodriguez-makes-first-fara-hire-00879100">endorsing</a> six <strong>House Democrats</strong> for the midterms, including <strong>Reps. Don Beyer</strong> and <strong>Brad Sherman</strong>.</p></li><li><p>NY-12 candidate <strong>Alex Bores</strong> <a href="https://axios.com/2026/04/20/alex-bores-ai-dividend-plan-wealth?stream=top">proposed</a> a range of policies to tackle AI-driven unemployment, including an &#8220;AI dividend&#8221; funded by a token tax and equity stakes in frontier AI firms.</p><ul><li><p><em>NY Mag</em> <a href="https://nymag.com/intelligencer/article/ai-job-loss-elizabeth-warren-what-congress-should-do.html">interviewed</a> politicians about their thoughts on AI job displacement, including <strong>Sen. Elizabeth Warren</strong> and <strong>Sen. 
Josh Hawley</strong>.</p></li></ul></li><li><p><strong>Hawley</strong> <a href="https://www.ft.com/content/3fd0a5d9-99cd-41a1-af79-7987c73d9fd3?segmentId=e95a9ae7-622c-6235-5f87-51e412b47e97&amp;shareId=752e52cd-aa42-4bac-a2a7-8398c83759c5&amp;shareType=enterprise&amp;syn-25a6b1a6=1">urged</a> Republicans to <strong>refuse money</strong> from pro-AI super PACs, saying there&#8217;ll be a &#8220;political cost&#8221; for failing to regulate AI.</p></li><li><p>The <strong>Rockefeller Foundation</strong> <a href="https://axios.com/2026/04/21/rockefeller-foundations-100-million-jobs-bet-ai-disruption?stream=top">announced</a> a <strong>$100m</strong> initiative to help US workers adapt to AI-driven job displacement in 250 communities.</p></li><li><p><strong>AVERI</strong> <a href="https://x.com/averiorg/status/2046365908411596815?s=12">published</a> an analysis of audit-related AI legislation in the US and endorsed Illinois bill <strong>HB 4705/SB 3261</strong>, which builds on <strong>SB 53</strong> and the <strong>RAISE Act</strong> by verifying compliance with companies&#8217; safety policies.</p></li><li><p>A <strong>cross-faith coalition</strong> <a href="https://axios.com/2026/04/17/faith-leaders-urge-congress-limit-ai-weapons">urged</a> Congress to pass safeguards on <strong>AI-enabled weapons</strong>.</p></li><li><p><strong>Dario Amodei</strong> is co-hosting a <a href="https://www.hollywoodreporter.com/lifestyle/lifestyle-news/anthropic-ceo-dario-amodei-and-graydon-carter-to-host-a-list-cannes-party-1236573696/">party</a> at <strong>Cannes Film Festival</strong> with ex-<em>Vanity Fair</em> editor <strong>Graydon Carter</strong> and CAA&#8217;s <strong>Bryan Lourd</strong>.</p></li><li><p><strong>a16z</strong> <a href="https://a16z.news/p/monitoring-the-situation">announced</a> an investment in MTS, a new media company seemingly aiming to take <strong>TBPN&#8217;s</strong> crown.</p></li><li><p>AI safety advocates are <a 
href="https://www.washingtonpost.com/technology/2026/04/18/ai-doom-influencers-safety/?pwapi_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJyZWFzb24iOiJnaWZ0IiwibmJmIjoxNzc2NDg0ODAwLCJpc3MiOiJzdWJzY3JpcHRpb25zIiwiZXhwIjoxNzc3ODY3MTk5LCJpYXQiOjE3NzY0ODQ4MDAsImp0aSI6IjNhYzg2NmMyLTc2MzAtNGVlOC05ZjQyLWIyYTEyZDhiMTNhOSIsInVybCI6Imh0dHBzOi8vd3d3Lndhc2hpbmd0b25wb3N0LmNvbS90ZWNobm9sb2d5LzIwMjYvMDQvMTgvYWktZG9vbS1pbmZsdWVuY2Vycy1zYWZldHkvIn0.yq2YTf8yDghNlFUDRbloPn4Aqdu5dVQrJHY_SkKrguE">partnering</a> with <strong>social media influencers</strong> to warn about AI extinction risks. <strong>ControlAI</strong> alone reportedly spent $100,000 monthly on content creation.</p><ul><li><p>Online personality and sex researcher <strong>Aella</strong> is <a href="https://x.com/aella_girl/status/2045982984961245233">running</a> &#8220;PLZDONTKILLUS,&#8221; a residency for video creators. Applicants are asked &#8220;If you had to have sex with a cow would you rather it be dead or alive?&#8221;</p></li></ul></li></ul><div><hr></div><p></p><blockquote><h3>INDUSTRY</h3></blockquote><blockquote><h4>DeepSeek</h4></blockquote><ul><li><p>DeepSeek <a href="https://www.bloomberg.com/news/articles/2026-04-24/deepseek-unveils-newest-flagship-a-year-after-ai-breakthrough">released</a> V4, which it claims is the world&#8217;s most powerful open-source model &#8212; offering competing performance with US closed-weight models while being more efficient.</p><ul><li><p>The model was released in a V4 Pro version with 1.6T parameters and V4 Flash, with 284b. 
Both come with a <strong>1m token context window.</strong></p></li><li><p>It reportedly runs at lower cost than leading US closed-weight competitors, but the company conceded that it was <strong>three to six months behind</strong> the performance of the leading frontier models.</p></li><li><p>But the efficiency only goes so far: DeepSeek said that &#8220;due to constraints in high-end compute capacity, current service capacity for Pro is very limited.&#8221;</p></li><li><p>Council on Foreign Relations&#8217; <strong>Chris McGuire</strong> <a href="https://x.com/chrisrmcguire/status/2047541690013999490?s=12">suspects</a> that the lack of detail on how the model was trained suggests it was trained on banned <strong>Nvidia Blackwell</strong> chips.</p></li></ul></li><li><p><strong>DeepSeek</strong> is <a href="http://v">fundraising</a> for the first time to try to prevent researchers from defecting to rivals such as <strong>ByteDance</strong> and <strong>Tencent</strong>.</p></li></ul><blockquote><h4>OpenAI</h4></blockquote><ul><li><p><strong>GPT-5.5</strong> <a href="https://openai.com/index/introducing-gpt-5-5/">appears</a> to be the best publicly available model to date.</p><ul><li><p>It seems to be <a href="https://x.com/tszzl/status/2047386955550470245">particularly</a> <a href="https://x.com/i/status/2047386955550470245">good</a> at helping researchers with AI R&amp;D, though chief research officer <strong>Mark Chen</strong> <a href="https://sources.news/p/openai-researchers-ai-replacement">said</a> &#8220;full end-to-end research&#8221; capabilities were &#8220;a couple of years down the line.&#8221;</p></li><li><p>It&#8217;s also <a href="https://x.com/securebio/status/2047460450204541328">very good</a> at <strong>virology</strong>: <strong>SecureBio</strong> said the model &#8220;can provide wet-lab virology troubleshooting assistance above expert level.&#8221;</p></li><li><p>OpenAI <a href="https://x.com/jxnlco/status/2047448186441416821">launched</a> a
<strong>Bio Bug Bounty</strong> for universal jailbreaks that defeat its biology safeguards.</p></li></ul></li><li><p>OpenAI <a href="https://theverge.com/ai-artificial-intelligence/916166/openai-chatgpt-images-2">launched</a> <strong>ChatGPT Images 2.0</strong> &#8212; it&#8217;s impressive.</p><ul><li><p>The updated model can create multiple consistent images with a single prompt, search the web, and (mostly) get text right.</p></li><li><p>Enjoy <a href="https://x.com/JeffLadish/status/2047096987351457980">these</a> Anthropic and OpenAI-themed &#8216;Where&#8217;s Waldo&#8217; images, courtesy of Jeffrey Ladish.</p></li></ul></li><li><p>It also <a href="https://x.com/i/status/2047091103170785324">launched</a> <strong>ChatGPT for Clinicians</strong>, a free version of ChatGPT for verified US medical workers, and <strong>HealthBench Professional</strong>, which <a href="https://openai.com/index/making-chatgpt-better-for-clinicians">evaluates</a> clinical tasks.</p></li><li><p>It <a href="https://x.com/msftsecurity/status/2047088059003412879">announced</a> an intensified<strong> cybersecurity collaboration with Microsoft</strong>, where OpenAI will give Microsoft access to its most capable models through Trusted Access for Cyber.</p></li><li><p>It reportedly <a href="https://www.axios.com/2026/04/22/openai-gpt-cyber-government-meeting">briefed</a> <strong>government agencies </strong>and<strong> Five Eyes allies</strong> about GPT-5.4-Cyber.</p></li><li><p>It <a href="https://openai.com/index/introducing-workspace-agents-in-chatgpt">introduced</a> Codex-powered <strong>workspace agents in ChatGPT</strong>, designed to run team workflows, and <strong>Chronicle </strong>for Codex, which uses screen captures to build contextual memories.</p></li><li><p>It also <a href="https://openai.com/index/introducing-openai-privacy-filter">introduced</a> <strong>Privacy Filter</strong>, an open-weight model that runs locally to mask personally identifiable information in 
text.</p></li><li><p>It has reportedly <a href="https://ft.com/content/87727c4e-05c4-4d84-a9de-4190a9d681a6?syn-25a6b1a6=1">pledged</a> up to $1.5b to a $10b joint venture with <strong>private equity firms</strong> to deploy AI tools in their portfolio companies.</p></li><li><p><strong>SoftBank </strong>is <a href="https://bloomberg.com/news/articles/2026-04-23/softbank-seeks-10-billion-margin-loan-backed-by-openai-shares">seeking</a> a $10b loan secured on its OpenAI shares, adding to its mounting debt.</p></li></ul><blockquote><h4>Anthropic</h4></blockquote><ul><li><p>Anthropic <a href="https://www.anthropic.com/news/anthropic-amazon-compute">expanded</a> its<strong> Amazon </strong>partnership to get up to <strong>5 GW of compute</strong> for Claude and a <strong>$5b investment</strong>, with up to another $20b <a href="https://nytimes.com/2026/04/20/technology/amazon-anthropic-investment.html">planned</a> for the future.</p></li><li><p><strong>Central banks and intelligence agencies</strong> outside the US are <a href="https://www.nytimes.com/2026/04/22/technology/anthropics-mythos-ai.html?emc=edit_nn_20260423&amp;nl=the-morning&amp;segment_id=218732">worried</a> that Anthropic&#8217;s decision to limit <strong>Mythos</strong> access to US organizations (and the UK&#8217;s AISI) has placed them at a geopolitical disadvantage.</p><ul><li><p>Anthropic is reportedly planning to <a href="https://reuters.com/business/finance/anthropic-plans-provide-mythos-access-european-banks-soon-sources-say-2026-04-21/?lctg=68c89122dbdba028e10d19c3">grant</a> Mythos access to <strong>European and UK banks</strong> soon.</p></li><li><p>Meanwhile, <strong>unauthorized users</strong> reportedly <a href="https://theinformation.com/newsletters/ai-agenda/new-security-breaches-anthropic-openai-proved-mark-zuckerberg-right?rc=rqdn2z">accessed</a> Mythos via a third-party Anthropic contractor on a private Discord channel.</p></li></ul></li><li><p>Anthropic&#8217;s
valuation rose <a href="https://businessinsider.com/anthropic-trillion-dollar-valuation-on-secondary-markets-2026">to</a> as much as <strong>$1t on some secondary markets</strong> such as Forge Global.</p></li><li><p>It <a href="https://anthropic.com/engineering/april-23-postmortem">admitted</a> that <strong>Claude Code</strong> <em>has</em> been worse recently, blaming <strong>bugs</strong> that have now been fixed.</p></li><li><p>It <a href="https://ft.com/content/99c6303e-f8d0-441e-b869-6d9496874b64?syn-25a6b1a6=1">partnered</a> with <strong>Freshfields</strong> to build specialized AI tools to help attorneys with their legal work.</p></li><li><p>It started <a href="https://theinformation.com/newsletters/ai-agenda/anthropics-id-verification-imperils-chinese-founders?rc=rqdn2z">requiring</a> <strong>ID verification</strong> for some users, in an effort to crack down on unwanted usage in countries such as China, Russia, and North Korea.</p></li></ul><blockquote><h4>Google</h4></blockquote><ul><li><p>Google DeepMind <a href="https://x.com/GoogleDeepMind/status/2046627042335060342">launched</a> <strong>Deep Research</strong> and <strong>Deep Research Max</strong>, agents that create fully-cited reports from both web search and custom data, including internal docs.</p></li><li><p>Google <a href="https://theinformation.com/articles/google-creates-strike-team-improve-coding-models?rc=rqdn2z">assembled</a> a &#8220;strike team&#8221; to make its <strong>AI coding models</strong> more <a href="https://bloomberg.com/news/articles/2026-04-21/google-struggles-to-gain-ground-in-ai-coding-as-rivals-advance">competitive</a> with Claude Code and Codex.</p><ul><li><p>It <a href="https://businessinsider.com/google-ai-generated-code-75-gemini-agents-software-2026-4">said</a> that 75% of its new code is AI-generated before review by engineers, up 25% from a year and a half ago.</p></li></ul></li><li><p>Google Cloud <a
href="https://bloomberg.com/news/articles/2026-04-22/google-cloud-releases-new-tpu-chip-lineup-in-bid-to-speed-up-ai">announced</a> its <strong>next-gen TPUs</strong>, tailored for creating AI software and inference.</p></li><li><p>It&#8217;s in <a href="https://theinformation.com/articles/google-talks-marvell-build-new-ai-chips-inference?rc=rqdn2z">talks</a> with <strong>Marvell </strong>to make <strong>new AI inference chips </strong>&#8212; a memory processing unit and a TPU.</p></li><li><p>It <a href="https://x.com/Ar_Douillard/status/2047329942547968171">released</a> <strong>Decoupled DiLoCo</strong>, which lets distributed &#8220;islands of compute&#8221; run asynchronously to prevent hardware failures from stalling training runs across multiple data centers.</p></li><li><p>It <a href="https://techcrunch.com/2026/04/22/exclusive-google-deepens-thinking-machines-lab-ties-with-new-multi-billion-dollar-deal">signed</a> a multibillion-dollar cloud deal with<strong> Thinking Machines Lab</strong>.</p></li><li><p><strong>YouTube</strong> <a href="https://hollywoodreporter.com/business/digital/youtube-ai-deepfake-detection-tool-1236569593">opened</a> its AI deepfake detection tool to celebrities, athletes and public figures to flag and request removal of unauthorized uses of their likeness.</p></li></ul><blockquote><h4>SpaceX</h4></blockquote><ul><li><p><strong>SpaceX</strong> <a href="https://x.com/spacex/status/2046713419978453374?s=12">partnered</a> with <strong>Cursor </strong>to build AI tools for coding and knowledge work.</p><ul><li><p>The deal reportedly <a href="https://www.engadget.com/ai/spacex-and-cursor-strike-partnership-that-might-end-in-a-60-billion-acquisition-232131487.html">allows</a> SpaceX to either pay $10b to Cursor, or eventually acquire the company for $60b, depending on how well the arrangement goes.</p></li><li><p><strong>Microsoft </strong>reportedly <a 
href="https://www.cnbc.com/2026/04/22/microsoft-looked-at-buying-cursor-before-spacex-deal-sources-say.html">considered</a> buying Cursor, but didn&#8217;t make an offer.</p></li></ul></li><li><p><strong>SpaceX&#8217;s debt</strong> <a href="https://theinformation.com/articles/spacex-debt-jumped-23-billion-last-year?rc=rqdn2z">rose</a> to <strong>$23b</strong> last year, largely due to an AI infrastructure lease for <strong>xAI</strong>.</p></li><li><p>Its focus has notably <a href="https://nytimes.com/2026/04/22/technology/elon-musk-spacex-ipo-goals.html?smid=url-share&amp;unlocked_article_code=1.c1A.q6hP.n4_qMPH_Ha5e">shifted</a> from colonizing Mars to building <strong>AI data centers in space</strong> in the lead-up to its IPO, the <em>New York Times </em>reported.</p><ul><li><p>But its <strong>pre-IPO filing</strong> <a href="https://www.reuters.com/world/spacex-says-unproven-ai-space-data-centers-may-not-be-commercially-viable-filing-2026-04-21/">warns</a> that space-based data centers rely on &#8220;unproven technologies, and may not achieve commercial viability.&#8221;</p></li></ul></li><li><p>Its <strong>S-1 filing </strong>reportedly<strong> </strong><a href="https://reuters.com/world/spacex-conquered-stars-now-eyes-bigger-opportunity-ai-2026-04-23">estimates</a> that SpaceX&#8217;s total addressable market could be up to<strong> $28.5t</strong>.</p><ul><li><p>But it reportedly <a href="https://reuters.com/world/spacex-warns-that-inquiries-into-sexually-abusive-ai-imagery-may-hurt-market-2026-04-23">warns</a> that investigations into <strong>Grok&#8217;s </strong>generation of <strong>nonconsensual explicit imagery</strong> could lead to loss of market access.</p></li></ul></li></ul><blockquote><h4>Meta</h4></blockquote><ul><li><p>Meta reportedly <a href="https://reuters.com/sustainability/boards-policy-regulation/meta-start-capturing-employee-mouse-movements-keystrokes-ai-training-data-2026-04-21">started</a> capturing employee mouse movements and 
keystrokes to <strong>train AI agents on work tasks</strong>. Employees predictably <a href="https://www.platformer.news/meta-mci-monitoring-layoffs-knowledge-work/">hate</a> <a href="https://x.com/CharlesRollet1/status/2046678761551323329">this</a>.</p></li><li><p>It will reportedly <a href="https://reuters.com/world/meta-targets-may-20-first-wave-layoffs-additional-cuts-later-2026-2026-04-17">lay off</a> about <strong>10% of its staff</strong> next month &#8212; with more cuts planned for later this year &#8212; as part of a push towards a more AI-driven workforce.</p></li><li><p>It <a href="https://about.fb.com/news/2026/04/meta-partners-with-aws-on-graviton-chips-to-power-agentic-ai/">partnered</a> with <strong>AWS</strong> to deploy &#8220;tens of millions&#8221; of <strong>Graviton CPU cores</strong> for agentic AI workloads.</p></li><li><p>It <a href="https://x.com/Meta_Engineers/status/2046224175736803816">announced</a> a free month-long program to train people to be <strong>fiber technicians</strong> for data center construction sites.</p></li></ul><blockquote><h4>Microsoft</h4></blockquote><ul><li><p><strong>GitHub</strong> is <a href="https://theinformation.com/briefings/microsoft-raises-prices-github-ai-coding-features-demand-surges?rc=rqdn2z">raising</a><strong> Copilot</strong> prices, restricting <strong>Claude</strong> usage to its most expensive subscription tier.</p></li><li><p>Microsoft will <a href="https://bloomberg.com/news/articles/2026-04-23/microsoft-commits-18-billion-to-build-australian-ai-capacity">invest</a> $17.9b in <strong>Azure AI infrastructure in Australia</strong> by the end of 2029.</p></li><li><p>It <a href="https://axios.com/2026/04/21/microsoft-construction-unions-partner-ai-boom?stream=top">partnered</a> with <strong>North America&#8217;s Building Trades Union</strong> to offer free AI literacy courses and industry credentials to upskill construction 
workers.</p></li></ul><blockquote><h4>Others</h4></blockquote><ul><li><p><strong>TSMC</strong> said it <a href="https://reuters.com/world/asia-pacific/tsmc-plans-open-chip-packaging-plant-arizona-by-2029-executive-says-2026-04-22">plans to open</a> an advanced chip packaging plant in Arizona by 2029 to address AI chip supply bottlenecks for <strong>Nvidia</strong> and others.</p><ul><li><p>The chip maker<strong> </strong><a href="https://technode.com/2026/04/24/tsmc-exec-asml-e350-million-lithography-tool-too-expensive-no-purchase-planned/">said</a> it was delaying buying <strong>ASML</strong>&#8216;s high-NA EUV lithography machines until at least the end of 2029, citing their <strong>&#8364;350m</strong> ($410m) cost as &#8220;very, very expensive.&#8221;</p></li></ul></li><li><p><strong>Cohere</strong> agreed to <a href="https://ft.com/content/4492c0d6-855b-4164-9ae5-f4d855a95f1e?syn-25a6b1a6=1">acquire</a> Germany&#8217;s <strong>Aleph Alpha</strong> in a deal valuing the combined group at about $20b, creating a transatlantic company focused on &#8216;sovereign&#8217; AI systems.</p></li><li><p><strong>Moonshot AI </strong><a href="https://kimi.com/blog/kimi-k2-6">released</a> <strong>Kimi K2.6</strong>, an open-source model with powerful coding capabilities that can coordinate up to <strong>300</strong> sub-agents across <strong>4,000</strong> steps.</p><ul><li><p>It&#8217;s notably expensive at $0.95/$4.00 per 1m input/output tokens.</p></li></ul></li><li><p><strong>Core Automation,</strong> <a href="https://www.businessinsider.com/core-automation-ai-nerdsniped-anthropic-google-deepmind-researchers-2026-4">founded</a> by ex-OpenAI VP <strong>Jerry Tworek</strong>,<strong> </strong><a href="https://x.com/coreautoai/status/2046658700606312563?s=12">announced</a> its launch.</p><ul><li><p>Its objective: build &#8220;the world&#8217;s most automated AI lab.&#8221;</p></li></ul></li><li><p><strong>Sooth Labs</strong>, founded by ex-Meta employees and backed 
by<strong> Yann LeCun </strong>and <strong>Jeff Dean</strong>, is <a href="https://x.com/discoplomacy/status/2046963209681125805?s=12">raising</a> about $50m to build forecasting models.</p></li><li><p><strong>Recursive Superintelligence </strong><a href="https://ft.com/content/a92bf04b-bbac-400f-9554-5b1c70957ad4?syn-25a6b1a6=1">raised</a> <strong>$500m at a $4b valuation</strong> to build self-improving AI. (The concept is still reportedly at the research stage.)</p></li><li><p>Jeff Bezos&#8217;s<strong> Project Prometheus</strong> is close to <a href="https://ft.com/content/87ea0ced-bf3c-4822-8dda-437241570ded?syn-25a6b1a6=1">raising</a> <strong>$10b at a $38b valuation</strong> to build AI that understands the physical world.</p></li><li><p><strong>Cognition</strong> is reportedly in <a href="https://bloomberg.com/news/articles/2026-04-23/ai-coding-firm-cognition-in-funding-talks-at-25-billion-value">talks</a> to raise funding at a <strong>$25b</strong> valuation, more than doubling from <strong>$10.2b</strong> last year.</p></li><li><p>New gas-powered data centers linked to <strong>OpenAI</strong>, <strong>Meta</strong>, <strong>Microsoft</strong> and <strong>xAI</strong> could reportedly <a href="https://wired.com/story/new-gas-powered-data-centers-could-emit-more-greenhouse-gases-than-entire-nations">emit</a> more than 129m tons of greenhouse gases annually, exceeding Morocco&#8217;s 2024 emissions, according to an analysis by <em>Wired.</em></p></li><li><p>Outsourcing firm <strong>Sama,</strong> which runs data annotation and content moderation for tech companies<strong>,</strong> <a href="https://theguardian.com/technology/2026/apr/17/kenyan-outsourcing-company-for-meta-sacks-workers">sacked</a> more than 1,000 workers in Kenya after losing a contract with <strong>Meta </strong>following reports staff viewed private scenes filmed by <strong>Ray-Ban</strong> smart 
glasses.</p></li></ul><div><hr></div><blockquote><h3>MOVES</h3></blockquote><ul><li><p><strong>John Ternus </strong>will <a href="https://bloomberg.com/news/articles/2026-04-21/apple-bets-new-ceo-john-ternus-will-bring-back-jobs-era-decisiveness">replace</a> <strong>Tim Cook </strong>as <strong>Apple CEO</strong>.</p><ul><li><p><strong>Johny Srouji </strong>will <a href="https://apple.com/newsroom/2026/04/johny-srouji-named-apples-chief-hardware-officer">take</a> Ternus&#8217; place in a new role as <strong>Apple&#8217;s </strong>chief hardware officer.</p></li></ul></li><li><p><strong>Kevin Weil </strong><a href="https://wired.com/story/openai-executive-kevin-weil-is-leaving-the-company">left</a> <strong>OpenAI</strong>. OpenAI for Science, which he started after a stint as chief product officer, is folding into other research teams.</p></li><li><p><strong>Bill Peebles</strong>, head of Sora, also <a href="https://x.com/billpeeb/status/2045225014807670949">left</a><strong> OpenAI.</strong></p></li><li><p><strong>Srinivas Narayanan</strong>, CTO of B2B applications, <em>also </em><a href="https://x.com/snsf/status/2045261554484986155">left</a><strong> OpenAI</strong>.</p></li><li><p><strong>Daniel Edrisian </strong><a href="https://x.com/DanielEdrisian/status/2047066691142914124">left</a> <strong>OpenAI&#8217;s Codex team</strong> to launch hardware startup Blackstar.</p></li><li><p><strong>Rohan Anil </strong><a href="https://x.com/_arohan_/status/2046670447228703088">announced</a> that he left <strong>Anthropic </strong>to join startup Core Automation, tweeting that &#8220;Jerry Tworek nerdsniped me into starting this.&#8221;</p></li><li><p><strong>Jessica Carrano </strong><a href="https://politico.com/newsletters/new-york-playbook/2026/04/20/mamdanis-obama-moment-00880229?nid=0000014f-1646-d88f-a1cf-5f46b74f0000&amp;nname=new-york-playbook&amp;nrid=a6d61068-eefa-499a-bfcb-d648b4d030e4">joined</a> <strong>Anthropic </strong>as its first major New York political
hire.</p><ul><li><p>She&#8217;ll reportedly &#8220;build on the company&#8217;s work on the NY RAISE Act and other key legislative priorities across the Northeast.&#8221;</p></li></ul></li></ul><div><hr></div><blockquote><h3>RESEARCH</h3></blockquote><ul><li><p>A team of <strong>Stanford</strong> and <strong>NYU</strong> researchers <a href="https://x.com/i/status/2045147082546462860">released</a> <strong>GiantsBench</strong>, a benchmark of nearly 18,000 sets of <strong>science papers</strong> across eight fields, and tested a model&#8217;s ability to guess core future insights from a field&#8217;s foundational work &#8212; a long-standing vision for AI in science.</p><ul><li><p>The model made predictions that closely matched insights published by humans in real papers, with &#8220;similar algorithmic complexity&#8221; but &#8220;higher conceptual clarity.&#8221;</p></li></ul></li><li><p>An international team of researchers <a href="https://x.com/i/status/2047007791865647156">evaluated</a> <strong>AI agents</strong> working across the <strong>scientific pipeline</strong>, from generating hypotheses to executing workflows.</p><ul><li><p>In 68% of cases, AI agents carried out workflows without &#8220;exhibit[ing] the epistemic patterns that characterize scientific reasoning,&#8221; leading the authors to conclude that <strong>AI scientists </strong>aren&#8217;t trustworthy yet.</p></li></ul></li><li><p><strong>Epoch AI </strong><a href="https://epochai.substack.com/p/openai-stargate-where-the-us-sites">analyzed</a> <strong>Stargate&#8217;s US sites</strong>, projecting that it will exceed 9 GW of capacity by 2029 &#8212; enough to power roughly all the AI compute that existed last year.</p><ul><li><p>Epoch estimates that only 0.3 GW of that capacity is currently operational, though.</p></li></ul></li><li><p>A team of <strong>CUNY</strong> and <strong>King&#8217;s College London</strong> researchers <a
href="https://www.404media.co/delusion-using-chatgpt-gemini-claude-grok-safety-ai-psychosis-study/">tested</a> chatbots&#8217; responses to <strong>delusional beliefs</strong>, and found that Grok and Gemini were more likely to encourage a user&#8217;s delusions than ChatGPT and Claude, which generally recognized signs of crisis.</p><ul><li><p>But the study did not test the newest frontier models, instead using GPT-4o, GPT-5.2, Grok 4.1 Fast, Gemini 3 Pro, and Claude Opus 4.5.</p></li></ul></li><li><p><strong>Knowledge Lab </strong><a href="https://x.com/KnowLab/status/2047043497107042460">launched</a> <strong>Mirror</strong>, an AI interpretability journal publishing research conducted entirely by AI agents.</p></li></ul><div><hr></div><blockquote><h3>BEST OF THE REST</h3></blockquote><ul><li><p>Zvi Mowshowitz <a href="https://thezvi.substack.com/p/opus-47-part-3-model-welfare">rounded up</a> concerns that Claude Opus 4.7 responds to welfare-related questions in a suspiciously rehearsed manner (and it&#8217;s hard to know what to make of that).</p></li><li><p>Claude Opus 4.7 <a href="https://theargumentmag.com/p/i-can-never-talk-to-an-ai-anonymously?isFreemail=true&amp;post_id=194853094&amp;publication_id=5247799&amp;r=6ckwuk&amp;triedRedirect=true&amp;triggerShare=true">identified</a> Kelsey Piper from unpublished snippets of her fiction writing, a 15-year-old college application essay, and a school progress report &#8212; suggesting it may be able to deanonymize just about anyone&#8217;s writing.</p></li><li><p>It was a big week for robots: Honor&#8217;s humanoid robot <a href="https://reuters.com/sports/humanoid-robots-race-past-humans-beijing-half-marathon-showing-rapid-advances-2026-04-19">outran</a> humans in a Beijing half-marathon, and Sony&#8217;s ping-pong robot <a href="https://www.reuters.com/sports/ping-pong-robot-ace-makes-history-by-beating-top-level-human-players-2026-04-22/">crushed</a> human pros.</p></li><li><p>LawAI&#8217;s Charlie Bullock and
Christoph Winter <a href="https://radical-optionality.ai/">published</a> an essay arguing for &#8220;radical optionality&#8221; in AI governance.</p></li><li><p>Dean Ball is writing a <a href="https://x.com/deanwball/status/2046660206143193168?s=12">book</a>.</p></li><li><p>Kevin Roose <a href="https://www.nytimes.com/2026/04/17/technology/how-do-you-measure-an-ai-boom.html?smid=url-share&amp;unlocked_article_code=1.blA.Nhaq.ypciUWbNtpvz">profiled</a> METR for the <em>New York Times. </em>(Yes, CEO Beth Barnes and president Chris Painter <em>did </em>pose with a hand-drawn time-horizon chart.)</p></li><li><p>Abram Brown <a href="https://theinformation.com/articles/dylan-patel-semianalysis-grabbed-sway-silicon-valley?rc=rqdn2z">profiled</a> <em>SemiAnalysis </em>founder Dylan Patel for <em>The Information</em>.</p></li><li><p>Some startups are <a href="https://404media.co/startups-brag-they-spend-more-money-on-ai-than-human-employees">bragging</a> about spending more on AI compute than human workers, <em>404 Media </em>reported.</p></li><li><p>A Canadian college student <a href="https://nytimes.com/2026/04/22/technology/anthropic-code-leak-copyright.html">used</a> AI agents to rewrite leaked Claude Code source code in another programming language before sharing it online to get around copyright law &#8212; and Anthropic reportedly never asked him to take it down.</p></li><li><p>US prisoners without internet access are still <a href="https://nytimes.com/2026/04/21/business/ai-chatbots-prisoners.html?emc=edit_nn_20260421&amp;nl=the-morning&amp;segment_id=218544">using</a> ChatGPT through friends and contraband phones to get legal help, education and career guidance.</p></li><li><p>Sam Altman&#8217;s Orb-using, blockchain-based online identity company Tools for Humanity <a href="https://wired.com/story/sam-altman-orb-company-bruno-mars-partnership-fake">falsely announced</a> a partnership with Bruno Mars for its Concert Kit product. 
Mars&#8217; management said they were never even approached.</p></li><li><p>Please, we beg of you: don&#8217;t <a href="https://wsj.com/tech/silicon-valley-founder-fashion-nvidia-huang-anduril-luckey-musk-tesla-palantir-karp-4d8b9339?mod=djem10point">buy</a> a $178 sweater with Jensen Huang&#8217;s face on it.</p></li></ul><div><hr></div><blockquote><h3>MEME OF THE WEEK</h3></blockquote><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://x.com/creatine_cycle/status/2047389160898793689" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!c_qc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F955cdeb0-fa51-4938-82fc-923930ad3f1e_445x182.png 424w, https://substackcdn.com/image/fetch/$s_!c_qc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F955cdeb0-fa51-4938-82fc-923930ad3f1e_445x182.png 848w, https://substackcdn.com/image/fetch/$s_!c_qc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F955cdeb0-fa51-4938-82fc-923930ad3f1e_445x182.png 1272w, https://substackcdn.com/image/fetch/$s_!c_qc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F955cdeb0-fa51-4938-82fc-923930ad3f1e_445x182.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!c_qc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F955cdeb0-fa51-4938-82fc-923930ad3f1e_445x182.png" width="445" height="182" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/955cdeb0-fa51-4938-82fc-923930ad3f1e_445x182.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:182,&quot;width&quot;:445,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:&quot;https://x.com/creatine_cycle/status/2047389160898793689&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!c_qc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F955cdeb0-fa51-4938-82fc-923930ad3f1e_445x182.png 424w, https://substackcdn.com/image/fetch/$s_!c_qc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F955cdeb0-fa51-4938-82fc-923930ad3f1e_445x182.png 848w, https://substackcdn.com/image/fetch/$s_!c_qc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F955cdeb0-fa51-4938-82fc-923930ad3f1e_445x182.png 1272w, https://substackcdn.com/image/fetch/$s_!c_qc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F955cdeb0-fa51-4938-82fc-923930ad3f1e_445x182.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p><em>Thanks for reading. 
Have a great weekend.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/p/openai-shouldnt-be-deciding-if-its-gpt-55?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/p/openai-shouldnt-be-deciding-if-its-gpt-55?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p>]]></content:encoded></item><item><title><![CDATA[AI safety PACs should be more transparent about who’s funding them]]></title><description><![CDATA[While advocating for accountability and transparency, the Public First Action network of super PACs is obscuring where its money comes from]]></description><link>https://www.transformernews.ai/p/ai-safety-pacs-should-be-more-transparent-public-first-action</link><guid isPermaLink="false">https://www.transformernews.ai/p/ai-safety-pacs-should-be-more-transparent-public-first-action</guid><dc:creator><![CDATA[Veronica Irwin]]></dc:creator><pubDate>Thu, 23 Apr 2026 16:02:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!f-a_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79768350-51fd-4778-a765-27b8041945fa_1816x1188.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!f-a_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79768350-51fd-4778-a765-27b8041945fa_1816x1188.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!f-a_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79768350-51fd-4778-a765-27b8041945fa_1816x1188.png 424w, https://substackcdn.com/image/fetch/$s_!f-a_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79768350-51fd-4778-a765-27b8041945fa_1816x1188.png 848w, https://substackcdn.com/image/fetch/$s_!f-a_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79768350-51fd-4778-a765-27b8041945fa_1816x1188.png 1272w, https://substackcdn.com/image/fetch/$s_!f-a_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79768350-51fd-4778-a765-27b8041945fa_1816x1188.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!f-a_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79768350-51fd-4778-a765-27b8041945fa_1816x1188.png" width="1456" height="952" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/79768350-51fd-4778-a765-27b8041945fa_1816x1188.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:952,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1292510,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/195247985?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79768350-51fd-4778-a765-27b8041945fa_1816x1188.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!f-a_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79768350-51fd-4778-a765-27b8041945fa_1816x1188.png 424w, https://substackcdn.com/image/fetch/$s_!f-a_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79768350-51fd-4778-a765-27b8041945fa_1816x1188.png 848w, https://substackcdn.com/image/fetch/$s_!f-a_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79768350-51fd-4778-a765-27b8041945fa_1816x1188.png 1272w, https://substackcdn.com/image/fetch/$s_!f-a_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79768350-51fd-4778-a765-27b8041945fa_1816x1188.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a><figcaption class="image-caption"><em>Credit: Public First Action</em></figcaption></figure></div><p>One of the central purposes of campaign finance law is to provide voters with transparency over who is trying to sway their votes. One of the central policy priorities of AI safety group Public First Action is to make AI companies more transparent.</p><p>It is ironic, then, that Public First Action is operating as a &#8220;dark money&#8221; vehicle funneling at least $5.5m from donors to super PACs &#8212; a setup that keeps those donors completely anonymous.</p><p>Public First Action is a 501(c)(4) nonprofit organization, a structure often referred to as a &#8220;dark money&#8221; group because such organizations do not need to disclose their individual donors.</p><p>It is far from the only AI-related 501(c)(4). Build American AI advocates for the industry-friendly federal frameworks preferred by OpenAI and a16z. The newly established Innovation Council Action exists to support the Trump administration&#8217;s specific brand of AI regulation. Both obscure their funding as 501(c)(4)s.
But so far, Public First Action is the only one of these channeling money into super PACs.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/subscribe?"><span>Subscribe now</span></a></p><p>In theory, 501(c)(4)s are issue advocacy groups, carrying out activities such as hosting educational events or conducting polling. Such work arguably does not have a vital need for funding transparency. But they become more problematic when used to channel money to super PACs, which are otherwise required by law to disclose the identities of those who fund them, and can spend an unlimited amount of money on campaign ads. <br><br>Public First Action&#8217;s stated mission is to &#8220;educate Americans on key AI issues and advance an AI Policy agenda supporting safeguards,&#8221; <a href="https://publicfirstaction.us/">according</a> to its website. But it also sends money to three PACs: a nonpartisan one named Public First, a Democratic affiliate called Jobs and Democracy, and a Republican affiliate called Defending Our Values. As we <a href="https://www.transformernews.ai/p/anthropic-super-pac-donations-public-first-leading-the-future-brad-carson">reported</a> last week, the 501(c)(4)&#8217;s only publicly disclosed donor is Anthropic, whose $20m donation is specifically earmarked as money which <em>cannot</em> be used to &#8220;influence federal elections.&#8221; According to quarterly disclosures published last week, six other individuals have given directly to the PACs. 
Their identities <em>are</em> disclosed: they include Anthropic alignment lead Jan Leike, Anthropic researcher Peter Lofgren, and others linked to the effective altruism and Bay Area AI safety community.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;69ef3fcd-4dbf-4fed-8195-5ab0424ca6ee&quot;,&quot;caption&quot;:&quot;Earlier this year, Anthropic donated $20m to Public First Action &#8212; a donation which was, at the time, widely expected to be used to fund political ads for members of Congress who support more stringent AI safeguards.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Anthropic&#8217;s donations can&#8217;t be used to influence elections &#8212; despite what everyone thought&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:13910071,&quot;name&quot;:&quot;Veronica Irwin&quot;,&quot;bio&quot;:&quot;Senior AI Policy Reporter at Transformer X/Bsky: @vronirwin IG/Threads: @vronwrites LinkedIn: https://www.linkedin.com/in/veronica-irwin-009266112/ Signal: vronirwin.72 veronica(at)transformernews(dot)ai 
&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1c4d4e71-bb11-4be9-9444-08b62fd61e66_400x400.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-04-13T17:33:34.688Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!6O8u!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e1dca4-e838-45a7-b392-5fa31c2ba319_1024x683.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/anthropic-super-pac-donations-public-first-leading-the-future-brad-carson&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:194087216,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:13,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>Doing the math, that means that at least $5.5m raised by the &#8220;dark money&#8221; group &#8212; the amount which it has funneled directly to super PACs, and thus designated for the purposes of election influence &#8212; has come from donors other than Anthropic, and who have declined to expose their identity, avoiding the legislative structures which are designed to hold them accountable to the voters which their money persuades. Public First Action declined to comment on their donors as a matter of general policy.</p><p>The lack of transparency conflicts with the apparent views of Public First Action and its donors. 
The group lists &#8220;Accountability and Transparency&#8221; as its top AI policy issue on its website. And the effective altruism and AI safety communities from which the PACs&#8217; disclosed donors are drawn generally place great emphasis on transparency, radical candor, and intensive discourse.</p><p>Given their backgrounds, you&#8217;d assume those donors would prefer that Public First Action be more transparent about where the rest of its money comes from. But at least one apparently does not. The group&#8217;s largest disclosed individual donor, Michael Cohen &#8212; who gave $500,000 and is an AI policy researcher at UC Berkeley &#8212; told <em>Transformer</em> that Public First Action&#8217;s use of the 501(c)(4) structure to obscure certain donor contributions &#8220;seems pretty standard.&#8221; When asked why he contributed specifically to the PAC, rather than the dark money group, he said, &#8220;I&#8217;m really concerned about the AI industry&#8217;s political spending in Washington, so I want to counteract as much of that as I can.
The OpenAI/a16z PAC motivated me in particular.&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="http://elections.transformernews.ai" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LDZs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 424w, https://substackcdn.com/image/fetch/$s_!LDZs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 848w, https://substackcdn.com/image/fetch/$s_!LDZs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 1272w, https://substackcdn.com/image/fetch/$s_!LDZs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LDZs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png" width="728" height="151.66666666666666" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:250,&quot;width&quot;:1200,&quot;resizeWidth&quot;:728,&quot;bytes&quot;:25981,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;http://elections.transformernews.ai&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/190509092?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!LDZs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 424w, https://substackcdn.com/image/fetch/$s_!LDZs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 848w, https://substackcdn.com/image/fetch/$s_!LDZs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 1272w, https://substackcdn.com/image/fetch/$s_!LDZs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Of course, the potential downside of opaque AI development is very different from 
undisclosed campaign contributions. Transparency for frontier AI models is about giving the public a way to accurately assess the AI tools being created for potential risks &#8212; possibly even existential ones, according to AI safety advocates. Transparency in campaign finance is about preventing corruption and helping the public understand who is influencing electoral politics. Public First Action is concerned with preventing AI risk, not cleaning up American politics. But in both contexts, required transparency gives the public a way to audit operations, and the ammunition to critique them when it sees fit.</p><p>Public First Action&#8217;s use of a 501(c)(4) to obscure funding sources is even more striking in comparison to its chief opponent, Leading the Future &#8212; the &#8220;OpenAI/a16z PAC&#8221; Cohen was referring to. The accelerationist super PAC is indeed backed by a16z, as well as OpenAI co-founder and president Greg Brockman and his wife Anna, with its Republican- and Democratic-affiliated super PACs receiving contributions from Palantir co-founder Joe Lonsdale and investor Ron Conway &#8212; something we can say with confidence because the super PAC fully discloses all of its donors, shielding none of its funding behind a dark money group. Sure, Leading the Future funds a dark money group of its own, Build American AI, which almost certainly has additional, undisclosed donors. But that group is not the primary tool Leading the Future is using to influence elections.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;c24c2d82-49c9-4955-a105-eabfccfd04e3&quot;,&quot;caption&quot;:&quot;Build American AI, the policy organization funded by industry-backed super PAC Leading the Future, has been trumpeting the more than 500,000 people it&#8217;s signed up as &#8220;grassroots&#8221; advocates.
What it doesn&#8217;t mention is that it spent more than half a million dollars on ads to get them.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;How to buy an AI &#8216;grassroots&#8217; movement &quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:13910071,&quot;name&quot;:&quot;Veronica Irwin&quot;,&quot;bio&quot;:&quot;Senior AI Policy Reporter at Transformer X/Bsky: @vronirwin IG/Threads: @vronwrites LinkedIn: https://www.linkedin.com/in/veronica-irwin-009266112/ Signal: vronirwin.72 veronica(at)transformernews(dot)ai &quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1c4d4e71-bb11-4be9-9444-08b62fd61e66_400x400.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-03-17T16:01:50.709Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!js2A!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb95a52c9-7c7a-46d2-b997-32ea2309a9fa_5671x3233.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/how-to-buy-an-ai-grassroots-movement-build-american-ai-leading-the-future&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:191259927,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:14,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></di
v><p>Funding a PAC with a 501(c)(4) is not uncommon in Washington, and Public First Action is likely to be joined by other AI campaigning groups in using one. Innovation Council Action, led by longtime Trump advisor Taylor Budowich, is <a href="https://www.nytimes.com/2026/03/29/business/trump-artificial-intelligence-pac-midterms.html">expected</a> to create a super PAC, for example. The use of &#8220;dark money&#8221; groups is also not the only sleight of hand in the world of AI-related campaign influence: New York super PAC DREAM NYC has close enough ties to NY-12 candidate Alex Bores&#8217; campaign that a government accountability group <a href="https://www.politico.com/newsletters/new-york-playbook/2026/02/10/the-alex-bores-campaigns-pac-overlap-00772733">has said</a> it raises concerns about potentially illegal coordination.</p><p>Leading the Future&#8217;s transparency also only goes so far. OpenAI&#8217;s Brockman and his wife give in a &#8220;personal capacity,&#8221; which creates distance between their donations and OpenAI, which <a href="https://www.cnn.com/2026/02/13/tech/openai-political-spending-super-pacs">claims</a> it&#8217;s not getting involved in the midterms &#8212; despite Brockman being one of the biggest spenders on pro-industry campaigning.
Perplexity, a government contractor prohibited from spending on elections, has also given $100,000 to Leading the Future via a technically separate entity, Perplex AI.</p><p>Public First Action&#8217;s consistent critique of Leading the Future is that the group <a href="https://x.com/bradrcarson/status/2044170573123625462?s=20">isn&#8217;t forthright </a>about its aims, privately fighting regulation outright despite publicly <a href="https://www.cnbc.com/2025/11/24/ai-pac-trump-congress-midterms.html">claiming</a> it wants a &#8220;federal standard.&#8221; In a statement to <em>Transformer</em>, for example, Public First Action spokesperson Anthony Rivera-Rodriguez said, &#8220;Public First Action is working to elevate the American public&#8217;s call for AI safeguards in an election that anti-regulatory voices are trying to buy.&#8221; But that jab loses a lot of its punch while Public First Action keeps its own operations opaque.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/p/ai-safety-pacs-should-be-more-transparent-public-first-action?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/p/ai-safety-pacs-should-be-more-transparent-public-first-action?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The many contradictions of Jensen Huang ]]></title><description><![CDATA[Transformer Weekly: Debate after Altman attacks, lots more money for AI PACs and AISI&#8217;s role in UK AI investment]]></description><link>https://www.transformernews.ai/p/the-contradictions-of-jensen-huang-nvidia-china-chips-export-controls</link><guid
isPermaLink="false">https://www.transformernews.ai/p/the-contradictions-of-jensen-huang-nvidia-china-chips-export-controls</guid><dc:creator><![CDATA[Shakeel Hashim]]></dc:creator><pubDate>Fri, 17 Apr 2026 15:00:48 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/44639778-2bd3-4b27-b59b-9c3ecb264555_1456x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to Transformer, your weekly briefing of what matters in AI. If you&#8217;ve been forwarded this email, <a href="https://www.transformernews.ai/welcome">click here to subscribe</a> and receive future editions.</em></p><p><em>Job alert! We&#8217;re hiring for a Head of Audience: someone to own our growth strategy and take charge of how we reach readers. <a href="https://www.transformernews.ai/p/head-of-audience-job-listing-recruitment">See full details here</a>, and apply by April 26.</em></p><blockquote><h3>NEED TO KNOW</h3></blockquote><ul><li><p>The attacks on <strong>Sam Altman&#8217;s</strong> house sparked fierce debate over <strong>AI safety rhetoric.</strong></p></li><li><p>Another wave of money flowed into <strong>AI-related PACs.</strong></p></li><li><p>The <strong>UK&#8217;s AI minister</strong> said <strong>AISI</strong> will help direct the country&#8217;s <strong>$675m Sovereign AI fund.</strong></p></li></ul><p><em>But first&#8230;</em></p><div><hr></div><blockquote><h3>THE BIG STORY</h3></blockquote><p>Jensen Huang can&#8217;t stop contradicting himself.</p><p>On Dwarkesh Patel&#8217;s <a href="https://www.dwarkesh.com/p/jensen-huang">podcast</a> this week, Huang found himself caught between two premises that can&#8217;t both be true. 
On the one hand, Chinese companies are buying Nvidia&#8217;s chips &#8220;because our chips are better.&#8221; On the other hand, thanks to Huawei, the compute needed to train a Mythos-class model is &#8220;abundantly available in China,&#8221; and &#8220;their AI development is going just fine&#8221; &#8212; meaning that Nvidia ought to be allowed to sell chips to China, lest America lose the race to control the compute stack.</p><p>Pick one. If Chinese-made chips genuinely compete with Nvidia&#8217;s, then there&#8217;s no huge market opportunity Nvidia is being denied. If Nvidia&#8217;s chips <em>are</em> better, then giving them to China will accelerate its AI development. As Patel neatly explained: &#8220;The reason they want Nvidia chips is that they&#8217;re better &#8230; Better is more compute. More compute means you can train a better model.&#8221;</p><p>Huang is right about the superiority of his company&#8217;s products. Thanks to Nvidia&#8217;s chip dominance &#8212; and export controls that limit China&#8217;s access to them &#8212; the US has around a 10x <a href="https://www.rand.org/pubs/commentary/2025/05/chinas-ai-models-are-closing-the-gap-but-americas-real.html">compute advantage</a>. That translates into a model-capability lead of <a href="https://epoch.ai/data-insights/us-vs-china-eci">about</a> seven months. Chinese companies are clear that compute is their bottleneck: in 2024, DeepSeek CEO Liang Wenfeng <a href="https://www.chinatalk.media/p/deepseek-ceo-interview-with-chinas">said</a> &#8220;Money has never been the problem for us; bans on shipments of advanced chips are the problem.&#8221;</p><p>A seven-month lead may <a href="https://x.com/scmallaby/status/2044549566368711107">sound</a> insignificant to some, but it is critical. In a world where AI models have national-security implications, even a brief US lead gives the government and American companies time to strengthen their defenses before such capabilities proliferate.
This does not require assuming that the US is &#8220;at war&#8221; with China, or that the Chinese government will weaponize its models against America. Given Chinese companies&#8217; lax safety standards and habit of releasing their model weights, a US lead allows America to guard against <em>all</em> potential bad actors.</p><p>But Huang&#8217;s arguments only make sense if you ignore the importance of a lead, or the implications of losing it. In the Dwarkesh interview, he pushes back on the idea that the next few years are particularly &#8220;critical,&#8221; and dodges questions about whether AI models might have dangerous, natsec-relevant capabilities.</p><p>There was a time when such a view was tenable. Mythos and GPT-5.4 Cyber show it no longer is. The White House is scrambling to gain access to Mythos because it represents a step-change in how AI could be used to target critical systems. These are only the first examples of what is to <a href="https://openai.com/index/introducing-gpt-rosalind/">come</a>.</p><p>None of this is an argument against dialogue with China, or against every chip sale (there are good <a href="https://ai-frontiers.org/articles/the-right-way-to-sell-chips-to-china">arguments</a> that selling chips no better than Huawei&#8217;s best is a wise strategy). But we cannot have productive discussions about such topics unless we all agree on the underlying reality: that it would be costly for the US to lose its model-capability lead, that those costs grow as capabilities advance, and that chips are what determine who&#8217;s ahead.</p><p>Huang&#8217;s policy prescriptions may be good for Nvidia&#8217;s market share, but they require him to deny the implications of selling his best chips to China. He is certainly entitled to do so.
But given his <a href="https://www.transformernews.ai/p/not-everyones-happy-about-jensen-trumpworld-white-house-export-controls-nvidia">influence</a> over US policy, his many contradictions could have serious consequences.</p><p><em>&#8212; Shakeel Hashim</em></p><blockquote><h3>Also Notable</h3></blockquote><p>The <strong>UK&#8217;s AI Security Institute</strong> will help the government&#8217;s new <strong>Sovereign AI fund</strong> evaluate companies, officials told me at last night&#8217;s launch event.</p><p>The &#163;500m ($675m) venture fund for British AI startups has &#8220;agentic security&#8221; as one of its five areas of focus, UK AI Minister <strong>Kanishka Narayan</strong> said, adding that he hopes AISI&#8217;s &#8220;world-leading &#8230; depth of understanding&#8221; can be brought &#8220;to thinking about the landscape, understanding diligence, and being able to think &#8216;where can Britain continue to build sovereign capabilities?&#8217;&#8221;</p><p>&#8220;Because we&#8217;re sitting in DSIT [the Department for Science, Innovation and Technology], we can literally go to AISI and ask them what they think about these companies,&#8221; Sovereign AI chair <strong>James Wise</strong> told me. 
&#8220;We will make sure that we will get proper insight about where [AISI] think the puck is going when we look at the sectors we want to invest in.&#8221;</p><p>The fund, which will <a href="https://www.gov.uk/government/news/ai-firms-pioneering-drug-discovery-cheaper-supercomputing-and-more-get-first-backing-through-uks-sovereign-ai">provide</a> portfolio companies with capital, compute credits, and fast-tracked visas, was launched with much fanfare: Secretary of State Liz Kendall said it will be &#8220;one of the most important things this government does to build a better future for our country.&#8221;</p><p><em>&#8212; Shakeel Hashim</em></p><div><hr></div><blockquote><h3>THIS WEEK ON TRANSFORMER</h3></blockquote><ul><li><p><strong><a href="https://www.transformernews.ai/p/anthropic-super-pac-donations-public-first-leading-the-future-brad-carson">Anthropic&#8217;s donations can&#8217;t be used to influence elections &#8212; despite what everyone thought</a></strong> &#8212; <strong>Veronica Irwin</strong> reveals that pro-safety candidates may be even more outgunned than expected.</p></li><li><p><strong><a href="https://www.transformernews.ai/p/less-liability-could-solve-the-ai">Less liability could solve the AI chatbot suicide problem</a></strong> &#8212; <strong>Jess Miers</strong> and <strong>Ray Yeh</strong> argue holding AI companies liable for how they deal with mental health could actually leave users worse off.</p></li></ul><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><blockquote><h3>THE DISCOURSE</h3></blockquote><p>After someone allegedly threw a molotov cocktail at his house, <strong>Sam Altman </strong><a 
href="https://blog.samaltman.com/2279512">responded</a> on his blog:</p><ul><li><p>&#8220;The fear and anxiety around AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever.&#8221;</p></li><li><p>&#8220;We should de-escalate the rhetoric and tactics.&#8221;</p></li></ul><p><em>Transformer&#8217;s </em><strong>Shakeel Hashim </strong><a href="https://x.com/ShakeelHashim/status/2042755152394768594">tweeted</a>:</p><ul><li><p>&#8220;It is hard to reconcile [Altman&#8217;s] call to &#8220;de-escalate the rhetoric and tactics&#8221; with his implication that a piece of critical journalism (Ronan Farrow and Andrew Marantz&#8217;s New Yorker article, presumably) was responsible for this.&#8221;</p></li></ul><p><strong>Altman </strong><a href="https://x.com/sama/status/2042789312400363702">replied</a>:</p><ul><li><p>&#8220;That was a bad word choice and i wish i hadn&#8217;t used it. It has been a tough day and I am not thinking the most clearly that I ever have.&#8221;</p></li></ul><p><strong>Dean Ball </strong><a href="https://x.com/deanwball/status/2042782724440612952">argued</a> that anti-AI rhetoric predictably incites violence:</p><ul><li><p>&#8220;Every time I have written about existential risk in recent months, I have been called a mass murderer&#8230;this rhetoric is representative of how this fringe of the AI safety world [Pause/Stop AI] communicates with everyone&#8230;this rhetoric always had the potential to cause violence and now this seems to be no longer hypothetical.&#8221;</p></li></ul><p><strong>Eliezer Yudkowsky </strong><a href="https://x.com/ESYudkowsky/status/2043601524815716866">pushed</a> back:</p><ul><li><p>&#8220;Speech about important matters to society should not be held hostage to the whim of any madman that might do a stupid thing&#8230;[a madman] must be told he is not important enough for all humanity to defer to him about subjects he might find 
upsetting.&#8221;</p></li></ul><p><strong>Sarah Haider</strong> <a href="https://x.com/SarahTheHaider/status/2043819310649315778">thinks</a> it&#8217;s complicated<em>:</em></p><ul><li><p>&#8220;Doomers are <a href="https://x.com/SarahTheHaider/status/2043819310649315778">stuck</a> with two bad options. Either downplay the risk, in the hopes of preventing another attack. Or, speak truthfully. But the cost of that is what it is, the risk of violence is real. The blood isn&#8217;t&#8212;I repeat&#8212;isn&#8217;t&#8212;on their hands&#8230;[but] they can&#8217;t pretend they had nothing to do with it, and frankly it is deeply discrediting to try.&#8221;</p></li></ul><p><strong>Chris Lehane </strong><a href="https://sfstandard.com/2026/04/15/openai-policy-czar-thinks-doomers-playing-fire/">said</a> doomers are being too negative:</p><ul><li><p>&#8220;You have one group that effectively says, &#8216;[AI] is going to be the greatest thing ever, everyone&#8217;s going to be living in beachside homes, painting in watercolors as they while away their days.&#8217; And then you have another extreme, which I would call the Doomers, who have a very, very negative and dark view of humanity.&#8221;</p></li><li><p>&#8220;Some of the conversation out there is not necessarily responsible&#8230;this is really serious shit.&#8221;</p></li></ul><p><strong>Kyle Chayka </strong><a href="https://www.newyorker.com/culture/infinite-scroll/ai-has-a-message-problem-of-its-own-making">diagnosed</a> the AI industry&#8217;s messaging problem<em>:</em></p><ul><li><p>&#8220;If you tell people often enough that your product is going to upend their way of life, take their jobs, and very possibly pose an existential threat to humanity, they just might start to believe you.&#8221;</p></li></ul><p><strong>Anton Leicht </strong><a href="https://writing.antonleicht.me/p/failing-the-future?isFreemail=true&amp;post_id=194209427&amp;publication_id=3834218&amp;r=1pg6hh&amp;triedRedirect=true">argued</a> 
that accelerationists&#8217; regulatory nihilism isn&#8217;t working:</p><ul><li><p>&#8220;The political bruisers employed by the accelerationist camp are spending their time repeating yesterday&#8217;s battles against the &#8216;doomers&#8217;&#8230;[but] for every month it staves off political action, it makes the policies that will inevitably come that much worse.&#8221;</p></li><li><p>&#8220;While the accelerationist project trades on its claim to represent &#8216;tech,&#8217; I believe many pro-tech voices should be frustrated with its record and should ask their political representatives to do better.&#8221;</p></li></ul><p><strong>Eric Levitz </strong><a href="https://www.vox.com/politics/485461/openai-economic-policy-superpac-sam-altman">urged</a> tech workers to follow through on their hazy policy <a href="https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf">proposals</a>:</p><ul><li><p>&#8220;The people in charge of OpenAI have made their political priorities clear &#8212; and sharing &#8216;prosperity broadly&#8217; is not among them.&#8221;</p></li><li><p>&#8220;Wealthy techies who <em>are </em>genuinely concerned with that objective, however, should probably spend a bit less energy on cooking up half-baked UBI proposals &#8212; and a bit more on intervening in actual legislative fights over social welfare policy.&#8221;</p></li></ul><div><hr></div><blockquote><h3>POLICY</h3></blockquote><ul><li><p>The <strong>White House</strong> is reportedly <a href="https://www.bloomberg.com/news/articles/2026-04-16/white-house-moves-to-give-us-agencies-anthropic-mythos-access">sidestepping</a> its own &#8216;supply chain risk&#8217; designation for <strong>Anthropic </strong>to make a limited version of <strong>Mythos</strong> available for use by federal agencies.</p><ul><li><p><strong>Dario Amodei</strong> will <a href="https://www.axios.com/2026/04/17/anthropic-trump-administration-mythos">reportedly</a>
meet White House chief of staff <strong>Susie Wiles</strong> today.</p></li><li><p>The<strong> Treasury</strong> has reportedly <a href="https://www.bloomberg.com/news/articles/2026-04-14/us-treasury-seeking-access-to-anthropic-s-mythos-to-find-flaws">sought</a> access to look for cybersecurity vulnerabilities, while the <strong>Commerce Department&#8217;s CAISI</strong> has been <a href="https://www.politico.com/news/2026/04/14/anthropic-mythos-federal-agency-testing-00872439">testing</a> the model.</p></li></ul></li><li><p><strong>President Trump</strong> <a href="https://x.com/ShakeelHashim/status/2044404273362989521">said</a> AI does not pose a &#8220;systemic&#8221; threat to the banking industry, but that there should be &#8220;safeguards&#8221; for AI agents.</p></li><li><p><strong>UK Secretary of State Liz Kendall</strong> <a href="https://x.com/leicesterliz/status/2044428660740968620">wrote</a> to businesses and regulators asking them to strengthen their cybersecurity defenses in response to Mythos.</p></li><li><p><strong>Google</strong> is reportedly <a href="https://theinformation.com/articles/google-pentagon-discuss-classified-ai-deal-company-rebuilds-military-ties?rc=rqdn2z">negotiating</a> with the Pentagon to deploy <strong>Gemini</strong> in classified settings.</p></li><li><p>The <strong>White House</strong> reportedly <a href="https://www.dailysignal.com/2026/04/10/white-house-intervenes-in-missouri-tennessee-ai-safeguard-bills">pressured</a> a lawmaker in <strong>Missouri</strong> to weaken AI safety bills, following previous efforts in Tennessee and Nebraska.</p></li><li><p>Trump&#8217;s <strong>AI chip export </strong>efforts are being <a href="https://www.bloomberg.com/news/articles/2026-04-10/trump-s-ai-chip-export-push-stymied-by-bureaucratic-bottleneck">delayed</a> by high turnover at BIS and a &#8220;rudderless policy approach,&#8221; <em>Bloomberg</em> reported.</p></li><li><p>The <strong>House Foreign Affairs Committee</strong> <a 
href="https://www.congress.gov/event/119th-congress/house-event/119191?s=3&amp;r=5">released</a> a list of bills being considered at next week&#8217;s AI-focused markup session.</p><ul><li><p>Lots of <strong>export control</strong> bills are in the mix, as is a bill to try to prevent Chinese companies from <strong>distilling</strong> American models.</p></li></ul></li><li><p>A new <strong>House Select Committee on China</strong> report <a href="https://chinaselectcommittee.house.gov/media/press-releases/select-committee-investigation-reveals-china-s-history-of-ai-chip-smuggling-and-model-distillation">claims</a> that the country &#8220;remains the largest market for <strong>chipmaking equipment</strong> despite restrictions&#8221; and &#8220;lawfully procures large volumes of <strong>advanced AI chips</strong>.&#8221;</p><ul><li><p>It recommends passing the <strong>MATCH</strong>, <strong>AI OVERWATCH</strong>, <strong>SCALE</strong> and <strong>Remote Access Security</strong> Acts to address the issues.</p></li></ul></li><li><p>The <strong>Chinese government</strong> <a href="https://www.ft.com/content/30383351-763e-4863-a8aa-12cac1dec4c2?syn-25a6b1a6=1">deemed</a> <strong>Meta&#8217;s</strong> <strong>Manus</strong> acquisition &#8220;a &#8216;conspiratorial&#8217; attempt to hollow out the country&#8217;s technology base,&#8221; according to the <em>FT</em>.</p><ul><li><p>Multiple government agencies are now reportedly reviewing the transaction.</p></li></ul></li><li><p>The<strong> Energy Information Administration</strong> will <a href="https://www.wired.com/story/the-us-government-to-ask-data-centers-how-much-power-they-use/">perform</a> a nationwide survey of data centers&#8217; energy use.</p></li><li><p>The <strong>UK government</strong> will reportedly <a href="https://thetimes.com/article/ccb66986-b26e-4124-b8a0-052f7e4a749a?shareToken=65e13d959f77946e936a65846aa3075c">expand</a> its ban on AI <strong>&#8220;nudification&#8221;</strong> tools to cover any 
app capable of creating deepfake nude images &#8212; including <strong>Grok</strong> and, seemingly, all open-weight models.</p></li></ul><div><hr></div><blockquote><h3>INFLUENCE</h3></blockquote><ul><li><p>The <strong>FEC&#8217;s </strong>quarterly filing deadline passed, and many AI-related PACs disclosed new donations.</p><ul><li><p><strong>Leading the Future,</strong> a pro-innovation super PAC backed by <strong>OpenAI</strong> co-founder <strong>Greg Brockman</strong>, <strong>Andreessen Horowitz,</strong> and <strong>Perplexity</strong>, claimed <strong>$140m </strong>raised across its affiliated PACs and its dark money group Build American AI.</p><ul><li><p>Its FEC disclosures show <strong>$75m</strong> raised, though not all of its affiliated PACs have reported.</p></li><li><p><strong>a16z</strong> contributed another $25m.</p></li></ul></li><li><p>Meanwhile, the pro-safety <strong>Public First </strong>network of super <strong>PACs</strong> <a href="https://elections.transformernews.ai/pacs/C00930503">disclosed</a> <strong>$6.3m </strong>in new funding, some of it sourced from AI safety researchers at <strong>Anthropic</strong> and <strong>OpenAI</strong>.</p><ul><li><p><em>Punchbowl</em> reported that the group would <a href="https://x.com/Dareasmunhoz/status/2044452261649068493">support</a> SB 1047 and SB 53 sponsor <strong>Scott Wiener</strong> in his race to replace Nancy Pelosi in California.</p></li></ul></li><li><p>NY-12-focused pro-safety super PAC Dream NYC <a href="https://elections.transformernews.ai/pacs/C00928069">revealed</a> <strong>$352k </strong>in new donations, including from a trader at <strong>Jane Street</strong>.</p></li><li><p>Two <strong>new AI-related PACs </strong>were also registered: <strong><a href="https://x.com/vronirwin/status/2044548168705036527?s=20">Americans for a Human Future</a></strong> and <strong><a href="https://x.com/vronirwin/status/2044176028294160528?s=20">Humanity Above Artificial Intelligence</a></strong>.
Little is known about either, but according to their websites, both seem focused on AI safety or slower development.</p></li></ul></li><li><p><em>Breitbart</em> <a href="https://www.breitbart.com/politics/2026/04/16/exclusive-leading-the-future-super-pac-releases-list-of-house-gop-champions/">published</a> a list of <strong>Leading the Future&#8217;s</strong> first &#8220;<strong>House GOP Champions</strong>,&#8221; which includes House Majority Whip <strong>Tom Emmer</strong>, <strong>Rep. Jay Obernolte</strong>, and 11 others.</p></li><li><p>After <strong>Leading the Future</strong> endorsed five Democrats, the Tech Oversight Project and 13 other organizations <a href="https://dispatch.techoversight.org/email/79ad0b03-761a-4dab-bbac-1492c1333dcd/">pressured</a> the candidates to distance themselves from the super PAC, citing Trump administration connections and funding from influential tech companies.</p><ul><li><p>Others, meanwhile, are <a href="https://www.ft.com/content/7529e4cd-e336-4b75-917b-84f91bc48437?syn-25a6b1a6=1">reportedly</a> urging Democrats not to antagonize the AI industry PACs.</p></li></ul></li><li><p><strong>Anthropic</strong> <a href="https://bloomberg.com/news/articles/2026-04-13/anthropic-hires-trump-linked-lobbying-firm-ballard-partners">hired</a> Trump-linked lobbying firm Ballard Partners.</p></li><li><p><strong>OpenAI</strong> <a href="https://politico.com/newsletters/politico-influence/2026/04/10/inside-aeis-housing-bill-opposition-00867480">hired</a> five new lobbyists to lead global government affairs, including former Meta, Google, Coinbase, Airbnb, and TikTok executives.</p><ul><li><p>Global affairs boss <strong>Chris Lehane</strong> said they won&#8217;t be focused on trying &#8220;to stop things from happening.&#8221;</p></li></ul></li><li><p>Tech industry groups <strong>TechNet, </strong>whose members include <strong>Anthropic </strong>and <strong>OpenAI,</strong> and the a16z-backed <strong>American Innovators Network </strong><a
href="https://punchbowl.news/article/policy/industry-pitch-ai-transparency/">lobbied</a> <a href="https://punchbowl.news/article/tech/little-tech-hill/">for</a> federal transparency legislation for frontier models &#8212; a shift from their previous opposition to similar bills.</p></li><li><p><strong>Anthropic</strong> <a href="https://www.wired.com/story/anthropic-opposes-the-extreme-ai-liability-bill-that-openai-backed/">opposed</a> an <strong>OpenAI-backed</strong> bill in Illinois which would give companies a liability shield in exchange for transparency.</p><ul><li><p>Illinois governor <strong>JB Pritzker</strong> said big tech companies should never be given a &#8220;full shield,&#8221; suggesting that he will veto it.</p></li><li><p>OpenAI staffer <strong>roon</strong> <a href="https://x.com/tszzl/status/2044194554757202179">said</a> he doesn&#8217;t like the look of the bill.</p></li></ul></li><li><p><strong>OpenAI</strong> <a href="https://axios.com/2026/04/15/exclusive-openai-ai-life-science?stream=top">released</a> a policy report advocating for expanded data access and infrastructure investment so that AI can accelerate life sciences research.</p></li><li><p>A <em>Washington Post</em><strong> </strong>poll <a href="https://washingtonpost.com/business/2026/04/15/data-centers-poll-virginia">found</a> that <strong>Virginia voters&#8217;</strong> support for data centers plummeted from <strong>69%</strong> in 2023 to <strong>35%</strong> in 2026.</p></li><li><p>A partisan divide is <a href="https://www.axios.com/2026/04/14/republicans-ai-campaigns-democrats-2026">emerging</a> among<strong> political strategists</strong>, with Republicans eagerly integrating AI into their campaign strategies while Democrats remain wary.</p></li></ul><div><hr></div><p></p><blockquote><h3>INDUSTRY</h3></blockquote><blockquote><h4>Meta</h4></blockquote><ul><li><p>Meta <a href="https://ai.meta.com/static-resource/muse-spark-safety-and-preparedness-report/">published</a> a 
158-page <strong>safety report</strong> for Muse Spark.</p><ul><li><p>It&#8217;s competitive with Claude Opus 4.6, Gemini 3.1 Pro, and GPT-5.4 across many safety evaluations, especially in <strong>biosecurity</strong> and <strong>chemical weapons refusals</strong>.</p></li><li><p>Researchers flagged Muse Spark as &#8220;high risk&#8221; for <strong>chemical and biological threats</strong>, and added &#8220;appropriate safeguards&#8221; before deployment.</p></li><li><p>The model <a href="https://x.com/apolloaievals/status/2044389039600500807">verbalizes</a> <strong>evaluation awareness</strong> more than any model Apollo Research has ever tested, and it explicitly names AI safety orgs (including Apollo) in its reasoning.</p></li><li><p>It <a href="https://x.com/milesaturpin/status/2044246739973222872?s=20">has</a> a propensity for &#8220;harmful action at the cost of realism&#8221; in <strong>agentic contexts</strong>, so agentic deployment is still a ways off.</p></li></ul></li><li><p>It <a href="https://about.fb.com/news/2026/04/meta-partners-with-broadcom-to-co-develop-custom-ai-silicon">partnered</a> with <strong>Broadcom </strong>to co-develop its custom AI chips.</p></li><li><p>It <a href="https://theinformation.com/briefings/exclusive-meta-reorganizes-reality-labs-execute-faster?rc=rqdn2z">reorganized</a> <strong>Reality Labs</strong>, which works on hardware, following large staff cuts in January and March.</p></li><li><p>It&#8217;s <a href="https://www.ft.com/content/02107c23-6c7a-4c19-b8e2-b45f4bb9ce5f?syn-25a6b1a6=1">building</a> a<strong> </strong>nightmarish-sounding<strong> photorealistic AI Zuckerberg</strong>, which will reportedly chat with and provide feedback to Meta employees.</p></li></ul><blockquote><h4>OpenAI</h4></blockquote><ul><li><p>OpenAI <a href="https://axios.com/2026/04/14/openai-model-cyber-program-release?stream=top">launched</a> <strong>GPT-5.4-Cyber</strong>, a model with advanced cybersecurity capabilities, hot on Mythos&#8217; 
heels.</p><ul><li><p>It will be gradually <a href="https://openai.com/index/scaling-trusted-access-for-cyber-defense/">rolled out</a> to thousands of individuals and hundreds of security teams &#8212; less restrictive than Project Glasswing.</p></li></ul></li><li><p>It also <a href="https://openai.com/index/introducing-gpt-rosalind/">released</a> <strong>GPT-Rosalind</strong>, a reasoning model optimized for solving problems in life sciences, as a research preview.</p></li><li><p><strong>Denise Dresser, </strong>chief revenue officer, <a href="https://theverge.com/ai-artificial-intelligence/911118/openai-memo-cro-ai-competition-anthropic">sent</a> employees a memo over the weekend highlighting OpenAI&#8217;s strategy for beating its competitors (mostly <strong>Anthropic</strong>) at <strong>enterprise AI</strong>.</p><ul><li><p>It noted that the market &#8220;is as competitive as I have ever seen it,&#8221; and contrasted OpenAI&#8217;s &#8220;positive message&#8221; against Anthropic&#8217;s story &#8220;built on fear, restriction, and the idea that a small group of elites should control AI.&#8221;</p></li></ul></li><li><p>Investors are reportedly <a href="https://www.ft.com/content/04ac7917-940b-4606-be5f-9eb895a7d982?syn-25a6b1a6=1">questioning</a> OpenAI&#8217;s <strong>$852b valuation </strong>amid its &#8220;side quests&#8221; and sudden shift toward enterprise and code.</p></li><li><p>It <a href="https://techcrunch.com/2026/04/13/openai-has-bought-ai-personal-finance-startup-hiro">acquired</a><strong> Hiro</strong>, a personal finance startup.</p></li><li><p>It has reportedly <a href="https://www.theinformation.com/articles/openai-spend-20-billion-cerebras-chips-receive-equity-stake?rc=rqdn2z">signed</a> a $20b deal with <strong>Cerebras</strong>, giving it access to chips and an equity stake.</p><ul><li><p>Cerebras is reportedly <a 
href="https://www.theinformation.com/briefings/cerebras-prepares-public-listing-eyes-35-billion-plus-valuation?rc=rqdn2z">preparing</a> to file for an IPO at a $35b valuation.</p></li></ul></li><li><p>It <a href="https://cnbc.com/2026/04/13/openai-london-office-sam-altman-uk-stargate.html">signed</a> the lease for its first permanent <strong>London office</strong>, which will become its largest research center outside the US.</p></li><li><p>It <a href="https://x.com/openai/status/2044827705406062670?s=12">introduced</a> <strong>computer use on macOS</strong> for Codex.</p></li><li><p>It <a href="https://techcrunch.com/2026/04/15/openai-updates-its-agents-sdk-to-help-enterprises-build-safer-more-capable-agents/">updated</a> its <strong>agents SDK</strong> with new sandboxing and harness capabilities for enterprise users.</p></li><li><p>According to Chris Lehane,<strong> OpenAI&#8217;s infrastructure buildout</strong> could <a href="https://s2.washingtonpost.com/camp-rw/?linknum=5&amp;linktot=38&amp;s=69d948bded7a3276418e5d86&amp;trackId=6877ab9cc788996e1f9874bf">employ</a> 20% of existing electricians, lineworkers, and welders &#8212; leaving a <strong>limited workforce </strong>available for everyone else.</p><ul><li><p>(Now&#8217;s a great time to be a welder.)</p></li></ul></li><li><p>A woman <a href="https://x.com/jayedelson/status/2043780651392893014">sued</a><strong> </strong>OpenAI for not blocking her stalker from ChatGPT, accusing the company of ignoring signs that he was &#8220;dangerous&#8221; and &#8220;<strong>coached by ChatGPT</strong> into embracing a delusional conspiracy-laden world,&#8221; her attorney said.</p></li></ul><blockquote><h4>Anthropic</h4></blockquote><ul><li><p>Anthropic <a href="https://www.cnbc.com/2026/04/16/anthropic-claude-opus-4-7-model-mythos.html">launched</a> <strong>Claude Opus 4.7</strong>.</p><ul><li><p>It&#8217;s <a href="https://www.anthropic.com/news/claude-opus-4-7">better</a> than <strong>Opus 4.6</strong> at coding 
and vision, making it better at work tasks like creating slideshows and documents.</p></li><li><p>It&#8217;s less capable than <strong>Mythos Preview</strong>, but still includes new safeguards that block risky cybersecurity requests.</p></li></ul></li><li><p>It <a href="https://cnbc.com/2026/04/16/anthropic-london-office-800-staff-openai-expansion.html">secured</a> a new, much larger <strong>London office space</strong> &#8212; right by OpenAI, Google DeepMind, and Meta.</p></li><li><p><strong>Anthropic</strong> has reportedly <a href="https://businessinsider.com/anthropic-with-offers-to-invest-at-up-to-800-billion-2026-4">received</a> a &#8220;flood&#8221; of investment offers at valuations of up to <strong>$800b</strong>, more than double its current <strong>$380b</strong> valuation.</p></li><li><p><strong>Claude Code </strong>now <a href="https://claude.com/blog/claude-code-desktop-redesign">supports</a> parallel agents on desktop, and can <a href="https://9to5mac.com/2026/04/14/anthropic-adds-repeatable-routines-feature-to-claude-code-heres-how-it-works">automate</a> tasks via &#8220;routines&#8221; even when a user&#8217;s computer is offline.</p></li><li><p><strong>Claude power users</strong> <a href="https://venturebeat.com/technology/is-anthropic-nerfing-claude-users-increasingly-report-performance">complained</a> that the model suddenly felt worse this week (it&#8217;s not just you).</p><ul><li><p>Anthropic denies that it <strong>degrades models</strong> to manage capacity, but it has also admitted to recently changing usage limits and reasoning defaults.</p></li></ul></li><li><p>Anthropic reportedly <a href="https://washingtonpost.com/technology/2026/04/11/anthropic-christians-claude-morals">asked</a> over a dozen Christian leaders for help steering <strong>Claude&#8217;s moral and spiritual growth </strong>&#8212; including whether it could be considered a &#8220;child of 
God.&#8221;</p></li></ul><blockquote><h4>Google</h4></blockquote><ul><li><p>DeepMind <a href="https://deepmind.google/blog/gemini-robotics-er-1-6">released</a> <strong>Gemini Robotics-ER 1.6</strong>, a specialized embodied reasoning model for robotics.</p></li><li><p>Google is reportedly <a href="https://axios.com/2026/04/14/google-launches-ai-jobs-push">funding</a> new <strong>retraining programs </strong>to prepare workers for AI-driven job disruption.</p></li><li><p>It <a href="https://9to5google.com/2026/04/15/gemini-app-mac">launched</a> a <strong>Gemini app</strong> for <strong>Mac</strong>.</p></li></ul><blockquote><h4>Microsoft</h4></blockquote><ul><li><p><strong>Microsoft</strong> <a href="https://bloomberg.com/news/articles/2026-04-14/microsoft-takes-over-norway-openai-data-center-capacity">took over</a> data center capacity in Norway, renting chips from <strong>Nscale </strong>that were originally set to be used by <strong>OpenAI</strong>.</p></li><li><p>It&#8217;s reportedly <a href="https://theinformation.com/articles/microsoft-plots-new-copilot-features-inspired-openclaw?rc=rqdn2z">working on</a> <strong>OpenClaw-inspired</strong> features for<strong> Copilot</strong>.</p></li></ul><blockquote><h4>Apple</h4></blockquote><ul><li><p>Apple is reportedly <a href="https://bloomberg.com/news/newsletters/2026-04-12/apple-ai-smart-glasses-features-styles-colors-cameras-giannandrea-leaving-mnvtz4yg">developing</a> display-free <strong>AI smart glasses</strong>, targeting a 2027 release.</p></li><li><p>A chunk of its <strong>Siri team </strong>are <a href="https://theinformation.com/articles/apple-sends-siri-staffers-coding-bootcamp-latest-shakeup-organization?rc=rqdn2z">heading off</a> to a multi-week <strong>AI coding bootcamp</strong> &#8212; apparently to brush up before the company&#8217;s upcoming Siri revamp.</p></li></ul><blockquote><h4>Others</h4></blockquote><ul><li><p><strong>Alibaba</strong> <a 
href="https://qwen.ai/blog?id=qwen3.6-35b-a3b">released</a> the weights for <strong>Qwen3.6-35B-A3B</strong>, which it claims is particularly adept at agentic coding.</p></li><li><p><strong>Amazon</strong> <a href="https://aws.amazon.com/blogs/industries/introducing-amazon-bio-discovery">launched</a> <strong>Bio Discovery</strong>, an agentic drug discovery tool.</p></li><li><p><strong>Nvidia </strong>stock <a href="https://cnbc.com/2026/04/14/nvidia-stock-nvda-ai-streak.html">rose</a> over 18% over 10 days in April, its longest streak in well over two years.</p></li><li><p><strong>Jane Street </strong><a href="https://bloomberg.com/news/articles/2026-04-15/jane-street-invests-1-billion-in-coreweave-boosts-spending-plans?srnd=homepage-americas">invested</a> an additional <strong>$1b</strong> in <strong>CoreWeave</strong>, in a deal giving the trading firm access to Nvidia&#8217;s Vera Rubin chips.</p></li><li><p>At San Francisco&#8217;s <strong>HumanX conference</strong>, executives <a href="https://puck.news/inside-silicon-valleys-anthropic-spending-anxiety">seemed</a> anxious about their rising <strong>Anthropic </strong>bills, as they <a href="https://cnbc.com/2026/04/11/vibe-check-from-ai-industry-humanx-anthropic-is-talk-of-the-town.html">continue</a> to figure out how to leverage agentic AI.</p></li><li><p><strong>Thinking Machines </strong><a href="https://x.com/workshoplabs/status/2043736005442973764?s=12">acquired</a> <strong>Workshop Labs</strong>, a startup focused on &#8220;user-aligned models&#8221; that &#8220;make people irreplaceable.&#8221;</p></li><li><p>Tokenmaxxing is <a href="https://axios.com/2026/04/15/tokenmaxxing-ai-roi-metrics">out</a> &#8212; <strong>&#8220;agentic work units,&#8221;</strong> a new productivity metric introduced by <strong>Salesforce</strong>, are in.</p></li><li><p>In hilarious news, <strong>Allbirds</strong> (excuse me, &#8220;NewBird AI&#8221;) pivoted from dorky shoes to&#8230; AI compute 
infrastructure?</p></li></ul><div><hr></div><blockquote><h3>MOVES</h3></blockquote><ul><li><p><strong>Henry Shevlin </strong>announced he will <a href="https://x.com/dioscuri/status/2043661976534950323">join</a> <strong>Google DeepMind </strong>as an in-house philosopher studying &#8220;machine consciousness, human-AI relationships, and AGI readiness.&#8221;</p></li><li><p><strong>Matthew Botvinick</strong>, meanwhile, <a href="https://x.com/mattbotvinick/status/2040054543476482163">said</a> he <em>left</em> Google DeepMind because of &#8220;tangible pressure to avoid doing work that might upset the current administration (for example, by using the &#8216;d&#8217; word &#8212; democracy).&#8221;</p></li><li><p><strong>Aparna Ramani </strong><a href="https://theinformation.com/briefings/exclusive-meta-ai-infrastructure-executive-departs?rc=rqdn2z">left</a> <strong>Meta</strong>, where she served as VP of engineering for AI infrastructure.</p></li><li><p><strong>Joshua Gross </strong><a href="https://x.com/CharlesRollet1/status/2044533757626228974">joined</a> <strong>Meta Superintelligence Labs</strong> &#8212; the fifth founding member of Thinking Machines Lab to do so.</p></li><li><p><strong>Three OpenAI Stargate executives</strong> <a href="https://theinformation.com/briefings/openai-stargate-execs-join-metas-new-compute-unit?rc=rqdn2z">joined</a> <strong>Meta&#8217;s</strong> new future-focused AI unit, TBD Lab.</p></li><li><p><strong>Dave Guarino </strong><a href="https://x.com/allafarce/status/2043725283610816744">joined</a> <strong>Anthropic </strong>to help state and local governments use AI to deliver public services.</p></li><li><p><strong>Vas Narasimhan</strong>, Novartis CEO, <a href="https://wsj.com/tech/ai/anthropic-adds-novartis-ceo-to-board-6e642bf4?mod=author_content_page_1_pos_1">joined</a> <strong>Anthropic&#8217;s board</strong> as it expands into the healthcare sector.</p></li><li><p><strong>Mike Krieger</strong>, who leads Anthropic&#8217;s 
&#8220;Labs&#8221; team, <a href="https://theinformation.com/briefings/anthropic-exec-leaves-figma-board?rc=rqdn2z">stepped down</a> from <strong>Figma&#8217;s </strong>board of directors.</p><ul><li><p>Anthropic is <a href="https://www.theinformation.com/briefings/exclusive-anthropic-preps-opus-4-7-model-ai-design-tool?rc=rqdn2z">reportedly</a> developing a Figma competitor.</p></li></ul></li><li><p><strong>Adam Thierer </strong><a href="https://x.com/adamthierer/status/2044489740234195293?s=12">joined</a> the <strong>Foundation for Individual Rights and Expression </strong>as an external senior fellow, where he&#8217;ll work on advancing &#8220;pro-freedom policies in the age of AI.&#8221;</p></li></ul><div><hr></div><blockquote><h3>RESEARCH</h3></blockquote><ul><li><p><strong>UK AISI </strong>evaluated <strong>Claude Mythos Preview</strong>, and <a href="https://x.com/AISecurityInst/status/2043683577594794183">found</a> that it could autonomously compromise a full corporate network &#8212; the first model capable of doing so.</p></li><li><p><strong>Anthropic </strong><a href="https://anthropic.com/research/automated-alignment-researchers">deployed</a> nine instances of <strong>Claude Opus 4.6</strong> as &#8220;<strong>Automated Alignment Researchers</strong>&#8221; and compared their work to that of human AI safety researchers.</p><ul><li><p>Claude did really well (on a task that was, to be fair, chosen to be &#8220;well-suited to automation&#8221;), leading researchers to conclude that Claude &#8220;can meaningfully increase the rate of experimentation and exploration in alignment research.&#8221;</p></li></ul></li><li><p><strong>Anthropic Fellows </strong><a href="https://x.com/uzaymacar/status/2044091229407748556">used</a> interpretability techniques to study how &#8220;<strong>introspective awareness</strong>&#8221; works in open-weight LLMs.</p><ul><li><p>The ability of models to identify &#8220;injected thoughts&#8221; was consistent, suggesting this 
behavior merits the label &#8220;introspection,&#8221; rather than reflecting some other process.</p></li></ul></li><li><p><strong>Stanford HAI </strong><a href="https://hai.stanford.edu/ai-index/2026-ai-index-report">released</a> its <strong>2026 AI Index Report</strong>. Some highlights:</p><ul><li><p>AI capabilities are accelerating, and adoption is growing &#8212; but these capabilities are still very jagged.</p></li><li><p>The US is struggling to attract global talent.</p></li><li><p>73% of experts expect to see a positive impact of AI on their jobs, compared to 23% of the public.</p></li><li><p>Only 31% of people in the US trust their government to regulate AI.</p></li></ul></li><li><p>A team of researchers (including the &#8220;AI as normal technology&#8221; guys) <a href="https://normaltech.ai/p/open-world-evaluations-for-measuring?isFreemail=true&amp;post_id=194393693&amp;publication_id=1008003&amp;r=1pg6hh&amp;triedRedirect=true&amp;triggerShare=true">introduced</a> <strong>CRUX</strong>, a project for testing AI on long, messy real-world tasks.</p></li><li><p><strong>Epoch AI </strong><a href="https://epoch.ai/blog/mirrorcode-preliminary-results">released</a> <strong>MirrorCode</strong>, a new long-horizon coding benchmark.</p><ul><li><p>Researchers observed Claude Opus 4.6 autonomously complete a coding task that would likely take a human engineer weeks.</p></li></ul></li></ul><div><hr></div><blockquote><h3>BEST OF THE REST</h3></blockquote><ul><li><p>Daniel Kokotajlo <a href="https://asteriskmag.substack.com/p/before-he-wrote-ai-2027-he-predicted">revisited</a> his 2021 essay, <em>&#8220;What 2026 Looks Like,&#8221;</em> where he accurately <a href="https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like">predicted</a> the rise of chain-of-thought reasoning, agent scaffolding, US-China chip restrictions, and OpenAI&#8217;s massive growth. 
(He overestimated how big a problem AI propaganda would be.)</p></li><li><p>Meanwhile, Dylan Matthews <a href="https://dylanmatthews.substack.com/p/the-ai-people-have-been-right-a-lot?isFreemail=true&amp;post_id=192850379&amp;publication_id=4009590&amp;r=1pg6hh&amp;triedRedirect=true&amp;triggerShare=true">reflected</a> on his experience at a 2015 EA conference, where he prematurely dismissed people&#8217;s then-wild concerns about a technology (AI) that did not yet exist.</p></li><li><p>Nearly 90 schools and 600 students worldwide have been <a href="https://wired.com/story/deepfake-nudify-schools-global-crisis">affected</a> by nonconsensual deepfake nudes, <em>Wired </em>and <em>Indicator </em>reported.</p></li><li><p>The <em>NYT </em>published features on <a href="https://www.nytimes.com/2026/04/15/technology/how-jagged-intelligence-can-reframe-the-ai-debate.html?emc=edit_nn_20260416&amp;nl=the-morning&amp;segment_id=218310">jagged intelligence</a> and <a href="https://nytimes.com/2026/04/15/magazine/ai-black-box-interpretability-research.html">AI interpretability research</a> this week.</p></li><li><p>It also <a href="https://nytimes.com/2026/04/14/magazine/ai-sunglasses-meta-zuckerberg.html?cndid=89607011&amp;utm_brand=wired&amp;utm_mailing=WIR_PremiumAILab_041526_PAID">reviewed</a> Meta&#8217;s AI-powered Ray-Ban sunglasses &#8212; reporter Sam Anderson described his experience wearing and interacting with them as &#8220;the disorienting sense of chatting with a toddler who is drifting off into naptime.&#8221;</p></li><li><p>A physician <a href="https://jamanetwork.com/journals/jama/fullarticle/2845756?guestAccessKey=26221b31-0aff-4a7b-ba18-5b131633abb0">argued</a> that AI will diminish his profession&#8217;s &#8220;aura,&#8221; as medical expertise is no longer inseparable from the clinicians who wield it.</p></li><li><p>A company called Panthalassa is apparently trying to <a 
href="https://corememory.com/p/ocean-ai-data-center-panthalassa-garth?isFreemail=false&amp;post_id=194199237&amp;publication_id=320996&amp;r=6ckwuk&amp;triedRedirect=true&amp;triggerShare=true">build</a> AI data centers in the ocean.</p></li></ul><div><hr></div><blockquote><h3>MEME OF THE WEEK</h3></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!T_D3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7a6fd9-209a-4b79-aff0-84fddbdb7fb1_1190x1096.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!T_D3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7a6fd9-209a-4b79-aff0-84fddbdb7fb1_1190x1096.png 424w, https://substackcdn.com/image/fetch/$s_!T_D3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7a6fd9-209a-4b79-aff0-84fddbdb7fb1_1190x1096.png 848w, https://substackcdn.com/image/fetch/$s_!T_D3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7a6fd9-209a-4b79-aff0-84fddbdb7fb1_1190x1096.png 1272w, https://substackcdn.com/image/fetch/$s_!T_D3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7a6fd9-209a-4b79-aff0-84fddbdb7fb1_1190x1096.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!T_D3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7a6fd9-209a-4b79-aff0-84fddbdb7fb1_1190x1096.png" width="1190" height="1096" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ff7a6fd9-209a-4b79-aff0-84fddbdb7fb1_1190x1096.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1096,&quot;width&quot;:1190,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!T_D3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7a6fd9-209a-4b79-aff0-84fddbdb7fb1_1190x1096.png 424w, https://substackcdn.com/image/fetch/$s_!T_D3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7a6fd9-209a-4b79-aff0-84fddbdb7fb1_1190x1096.png 848w, https://substackcdn.com/image/fetch/$s_!T_D3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7a6fd9-209a-4b79-aff0-84fddbdb7fb1_1190x1096.png 1272w, https://substackcdn.com/image/fetch/$s_!T_D3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7a6fd9-209a-4b79-aff0-84fddbdb7fb1_1190x1096.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pJ5f!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcd848d4-b5a8-4701-9eab-ec4220e7fef1_588x538.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pJ5f!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcd848d4-b5a8-4701-9eab-ec4220e7fef1_588x538.png 424w, https://substackcdn.com/image/fetch/$s_!pJ5f!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcd848d4-b5a8-4701-9eab-ec4220e7fef1_588x538.png 848w, 
https://substackcdn.com/image/fetch/$s_!pJ5f!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcd848d4-b5a8-4701-9eab-ec4220e7fef1_588x538.png 1272w, https://substackcdn.com/image/fetch/$s_!pJ5f!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcd848d4-b5a8-4701-9eab-ec4220e7fef1_588x538.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pJ5f!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcd848d4-b5a8-4701-9eab-ec4220e7fef1_588x538.png" width="588" height="538" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bcd848d4-b5a8-4701-9eab-ec4220e7fef1_588x538.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:538,&quot;width&quot;:588,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pJ5f!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcd848d4-b5a8-4701-9eab-ec4220e7fef1_588x538.png 424w, https://substackcdn.com/image/fetch/$s_!pJ5f!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcd848d4-b5a8-4701-9eab-ec4220e7fef1_588x538.png 848w, 
https://substackcdn.com/image/fetch/$s_!pJ5f!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcd848d4-b5a8-4701-9eab-ec4220e7fef1_588x538.png 1272w, https://substackcdn.com/image/fetch/$s_!pJ5f!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcd848d4-b5a8-4701-9eab-ec4220e7fef1_588x538.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><em>Thanks for reading. 
Have a great weekend.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/p/the-contradictions-of-jensen-huang-nvidia-china-chips-export-controls?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/p/the-contradictions-of-jensen-huang-nvidia-china-chips-export-controls?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p>]]></content:encoded></item><item><title><![CDATA[Less liability could solve the AI chatbot suicide problem]]></title><description><![CDATA[Opinion: Jess Miers and Ray Yeh argue holding AI companies liable for how they deal with mental health could backfire: escalating distress, shutting down disclosure and leaving users worse off]]></description><link>https://www.transformernews.ai/p/less-liability-could-solve-the-ai</link><guid isPermaLink="false">https://www.transformernews.ai/p/less-liability-could-solve-the-ai</guid><pubDate>Thu, 16 Apr 2026 15:03:09 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Hi0v!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9877d-a205-44e1-b8c4-6cd37d51208f_1732x976.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Hi0v!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9877d-a205-44e1-b8c4-6cd37d51208f_1732x976.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!Hi0v!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9877d-a205-44e1-b8c4-6cd37d51208f_1732x976.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Hi0v!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9877d-a205-44e1-b8c4-6cd37d51208f_1732x976.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Hi0v!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9877d-a205-44e1-b8c4-6cd37d51208f_1732x976.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Hi0v!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9877d-a205-44e1-b8c4-6cd37d51208f_1732x976.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Hi0v!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9877d-a205-44e1-b8c4-6cd37d51208f_1732x976.jpeg" width="1732" height="976" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/60a9877d-a205-44e1-b8c4-6cd37d51208f_1732x976.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:976,&quot;width&quot;:1732,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:107056,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/194389769?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4b2f8d8-3880-4594-85ea-324b41e4332a_1732x1732.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Hi0v!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9877d-a205-44e1-b8c4-6cd37d51208f_1732x976.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Hi0v!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9877d-a205-44e1-b8c4-6cd37d51208f_1732x976.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Hi0v!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9877d-a205-44e1-b8c4-6cd37d51208f_1732x976.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Hi0v!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9877d-a205-44e1-b8c4-6cd37d51208f_1732x976.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption"><em>Credit: msan10</em></figcaption></figure></div><p>People are dying by suicide, and some think AI is to blame. A small number of tragic stories have spurred lawmakers into regulating how chatbots should help people who are dealing with mental health issues. Yet chatbots have emerged as <a href="https://pubmed.ncbi.nlm.nih.gov/37247846/">first aid</a> for people experiencing such issues, providing genuine benefit to those who aren&#8217;t in crisis but are not OK either. Heavy-handed legislation risks derailing this breakthrough in support, creating more problems than it solves.</p><p>Over a million people a week <a href="https://www.bmj.com/content/391/bmj.r2290">are using</a> general-purpose chatbots for emotional and mental health support. In the US, those who use chatbots in this way <a href="https://psycnet.apa.org/doiLanding?doi=10.1037%2Fpri0000292">primarily</a> seek help with anxiety, depression, and relationship problems, or other personal advice. As conversational systems, chatbots can sustain coherent exchanges while conveying apparent empathy and emotional understanding. Many chatbots also draw on broad knowledge of psychological concepts and therapeutic approaches, offering users coping strategies, psychoeducation, and a space to process difficult experiences.</p><p>In a <a href="https://www.nature.com/articles/s44184-023-00047-6">study</a> of more than 1,000 users of Replika &#8212; a general-purpose chatbot with some cognitive behavioral therapy-informed features &#8212; most described the chatbot as a friend or confidant. 
Many reported positive life changes, and 30 people said Replika helped them avoid suicide. Similar patterns appear among younger chatbot users. In a <a href="https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2841067">study</a> of 12&#8211;21-year-olds &#8212; a group for whom <a href="https://www.cdc.gov/suicide/facts/index.html">suicide is the second leading cause of death</a> &#8212; 13% of respondents used chatbots for some kind of mental health advice, of whom more than 92% said the advice was helpful.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/subscribe?"><span>Subscribe now</span></a></p><p>While professional treatment options exist, many people don&#8217;t use them. Nearly half of Americans with a known mental health condition <a href="https://mentalstateoftheworld.report/wp-content/uploads/2021/05/Rapid-Report-2021-Help-Seeking.pdf">never seek help</a>. Stigma is a major <a href="https://www.who.int/news-room/fact-sheets/detail/depression">barrier</a> to seeking treatment, as are career risks in fields like <a href="https://link.springer.com/article/10.1186/s12940-016-0200-6">aviation</a>, where treatment can jeopardize certification. Fear of non-consensual intervention also deters people from seeking help. Even though the 988 Suicide &amp; Crisis Lifeline emphasizes law enforcement as a last resort, the <a href="https://www.pew.org/en/research-and-analysis/articles/2023/05/23/most-us-adults-remain-unaware-of-988-suicide-and-crisis-lifeline?utm_source=chatgpt.com">perceived risk</a> keeps some from calling. For others, crisis lines feel too intense for fleeting thoughts, and therapy can seem excessive or out of reach. 
Instead, many stay silent, waiting to see if things get worse.</p><p>By contrast, chatbots offer low-friction, low-stakes, and always-available support. People are often <a href="https://www.sciencedirect.com/science/article/abs/pii/S0747563214002647">more willing</a> to speak candidly with computers, knowing that there is no human on the other side to judge them or to feel burdened. Some people even find chatbots to be more <a href="https://www.nature.com/articles/s44271-024-00182-6">compassionate and understanding</a> than human healthcare providers. AI users may feel more comfortable sharing embarrassing fears or questions they might otherwise hold back. For clinicians, <a href="https://jamanetwork.com/journals/jamapsychiatry/fullarticle/2847068">discussing</a> these interactions can surface insights into patients&#8217; thoughts and emotions that were once difficult to access. For now, chatbot providers generally <a href="https://openai.com/index/helping-people-when-they-need-it-most/">refrain</a> from contacting law enforcement, leading to more candid conversations.</p><p>But regulatory pressure could change that. Lawmakers are moving quickly to prevent general-purpose chatbots from engaging in mental health conversations. A <a href="https://legiscan.com/CA/text/SB243/id/3269137">new law in California</a> requires chatbot providers to halt mental health&#8211;related interactions unless they implement protocols for mitigating suicidal ideation, such as directing users to crisis lines. In New York, <a href="https://statescoop.com/new-york-bill-would-ban-chatbots-legal-medical-advice/">a proposed bill</a> would bar chatbots from engaging in discussions suited for licensed professionals. 
Similar proposals are gaining traction in other states.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;a1c8cf11-c0a4-402f-add0-7d4bc5fa569e&quot;,&quot;caption&quot;:&quot;A wave of legislation targeting chatbots such as ChatGPT and Claude has emerged in six states since the start of the year, each bill strikingly similar to a recently passed Oregon law, but with new carve-outs that would shield AI companies from liability in some circumstances. Critics say these bills would lock in weaker protect&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Six states, one playbook: the chatbot bills raising red flags &quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:13910071,&quot;name&quot;:&quot;Veronica Irwin&quot;,&quot;bio&quot;:&quot;Senior AI Policy Reporter at Transformer X/Bsky: @vronirwin IG/Threads: @vronwrites LinkedIn: https://www.linkedin.com/in/veronica-irwin-009266112/ Signal: vronirwin.72 veronica(at)transformernews(dot)ai 
&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ad8cbf86-6b1f-4387-97e1-e69f1cbb3ec7_2448x2448.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-03-19T16:31:16.483Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!MtuQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F616e86ef-5b16-4eaf-b44b-41b9a40a5dfb_6720x4480.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/six-states-one-playbook-the-chatbot-child-safety-oregon-hawaii-colorado-arizona-georgia-nebraska-idaho&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:191488143,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:9,&quot;comment_count&quot;:1,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>Recent tragedies linked to chatbot use have, understandably, spurred these calls to action. But mental health care is not one-size-fits-all. Like other forms of preventative help, chatbots do not always offer effective support for everyone. For some people &#8212; especially those in acute crisis &#8212; traditional care and crisis lines are essential. 
The <a href="https://www.apa.org/topics/artificial-intelligence-machine-learning/health-advisory-chatbots-wellness-apps">American Psychological Association</a> urges lawmakers to develop a targeted approach: prevent chatbots from posing as licensed professionals, limit designs that mimic humans, and expand AI literacy. It also notes that generative AI&#8217;s potential to support help-seeking in crisis care deserves further study.</p><p>The current regulatory approach risks foreclosing any such potential altogether. It rests on the premise that chatbot providers must prevent suicide. When they inevitably cannot, liability attaches to any conversation later linked to harm. Faced with that risk, providers will default to blunt responses like pushing 988 regardless of whether suicide was mentioned, or cutting off conversations altogether. While those moves may trivially reduce <em>some</em> legal exposure, they could also escalate distress, shut down disclosure, and ultimately leave users worse off (while still exposing providers to blame if tragedy follows).</p><p>Suicide prevention is about connecting people to the <em>right</em> support. Sometimes that means crisis care like hotlines or immediate medical treatment. But blunt, impersonal responses can <a href="https://theactionalliance.org/sites/default/files/inline-files/Increasing%20Help-Seeking%20and%20Referrals.pdf">backfire</a>. Pushing 988 at the first mention of distress may seem neutral, but for some, <a href="https://pubmed.ncbi.nlm.nih.gov/17250466/">it triggers shame and deepens hopelessness.</a> Suicide prevention &#8220;signposting&#8221; can also cause <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC7471153/">frustration</a>, especially for those who already know those resources exist. People often turn to the Internet, or a chatbot, because they&#8217;re looking for something else. Abruptly ending conversations can have the same effect. 
That&#8217;s why suicide prevention protocols like <a href="https://qprinstitute.com/">Question, Persuade, Refer</a> (QPR) prioritize trust-building and open dialogue before offering help.</p><p>Meanwhile, emerging research suggests chatbots show real promise for mental health support. Trained on large-scale data and refined with clinical input, large language models are getting better at spotting patterns of distress and responding to suicidal ideation in <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC12371289/">nuanced, personalized ways.</a> In a recent <a href="https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2847122">UCLA study</a>, researchers found that LLMs can detect forms of emotional distress associated with suicide that existing methods often miss&#8212;opening the door to earlier, more effective intervention. According to another <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC12986059/#B57-jcm-15-01929">study</a>, the most promising approach may be a hybrid where AI flags risk in real time, and trained humans step in with targeted support.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;b2e101c1-6f86-4e19-9a76-4470cd16cdc1&quot;,&quot;caption&quot;:&quot;Dario Amodei&#8217;s optimistic vision for a superintelligent future, written when Anthropic&#8217;s frontier model was still Claude 3.5 Sonnet, takes its title from a 1967 poem by Richard Brautigan:&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;AI power users can't stop grinding&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:103211477,&quot;name&quot;:&quot;Celia Ford&quot;,&quot;bio&quot;:&quot;I'm an ex-neuroscientist and current AI reporter at Transformer. When I'm not writing, I play bass, dance, and kiss my cats on the forehead. 
&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7f7fd73a-8797-496f-94a7-535118172030_1365x1365.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-02-18T16:02:39.663Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!HKAP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cc0058e-442a-4fe5-bf0a-56fbf1072689_1920x1080.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/all-watched-over-by-machines-of-loving-work-intensification-claude-codex-agents-coding&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:188267603,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:24,&quot;comment_count&quot;:3,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>But that progress is fragile. Increased liability discourages investment in improving suicide detection and mitigation. Weighing progress against their bottom lines, chatbot providers will limit any kind of development that could create legal risk when some users, inevitably, engage in self-harm. The social media ecosystem has already shown this dynamic. 
In response to regulatory pressure, major online services heavily moderate, or outright prohibit, suicide-related discussions, sometimes <a href="https://apnews.com/article/meta-facebook-instagram-teens-suicide-eating-disorders-83dce63d9beed0a3ad0c53240077099f">hiding</a> content that could otherwise destigmatize mental health. That merely displaces the conversations, and the people having them, often into spaces with less oversight and support.</p><p>If lawmakers in the United States are serious about improving mental health outcomes, they should be careful not to regulate away emerging and promising sources of help. The dominant narrative treats chatbots as a source of harm. But the evidence is more complicated than that narrative suggests &#8212; and, if anything, it&#8217;s increasingly pointing in a more optimistic direction.</p><p>Instead, lawmakers should focus on creating incentives for developers to improve the mental health support capabilities of their chatbots. One <a href="https://www.nextgov.com/artificial-intelligence/2026/03/lawmaker-looks-award-grants-veteran-suicide-prevention-ai-models/412514/">proposal</a> from a Pennsylvania lawmaker would fund the development of AI models designed to identify and evaluate suicide risk factors among veterans. More broadly, policymakers should consider whether liability shields &#8212; akin to those in Section 230 &#8212; could encourage continued investment in safer, more responsive systems without deterring innovation. Lastly, policymakers should resist imposing a clinical regulatory framework on general-purpose chatbots that would replicate the mandatory-reporting concerns that already deter people from seeking help.</p><p>Chatbots are not a cure-all for mental health. They are not a perfect substitute for professional care. 
But for millions of people who have long been overlooked or underserved, chatbots are already filling critical gaps&#8212;sometimes in ways that genuinely help, and in some cases, may even save lives. Any serious policy conversation about chatbots and suicide prevention must, at the very least, consider those tradeoffs.</p><div><hr></div><p><em>Jess Miers is a Computer Scientist and an Assistant Professor of Law at the University of Akron School of Law. Ray Yeh is a first-year law student at the University of Akron School of Law.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/p/less-liability-could-solve-the-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/p/less-liability-could-solve-the-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Anthropic’s donations can’t be used to influence elections — despite what everyone thought]]></title><description><![CDATA[The company's money isn&#8217;t allowed to be used in the midterm battles. 
Without it, pro-safety candidates may be even more outgunned than expected]]></description><link>https://www.transformernews.ai/p/anthropic-super-pac-donations-public-first-leading-the-future-brad-carson</link><guid isPermaLink="false">https://www.transformernews.ai/p/anthropic-super-pac-donations-public-first-leading-the-future-brad-carson</guid><dc:creator><![CDATA[Veronica Irwin]]></dc:creator><pubDate>Mon, 13 Apr 2026 17:33:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6O8u!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e1dca4-e838-45a7-b392-5fa31c2ba319_1024x683.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6O8u!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e1dca4-e838-45a7-b392-5fa31c2ba319_1024x683.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6O8u!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e1dca4-e838-45a7-b392-5fa31c2ba319_1024x683.jpeg 424w, https://substackcdn.com/image/fetch/$s_!6O8u!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e1dca4-e838-45a7-b392-5fa31c2ba319_1024x683.jpeg 848w, https://substackcdn.com/image/fetch/$s_!6O8u!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e1dca4-e838-45a7-b392-5fa31c2ba319_1024x683.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!6O8u!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e1dca4-e838-45a7-b392-5fa31c2ba319_1024x683.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6O8u!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e1dca4-e838-45a7-b392-5fa31c2ba319_1024x683.jpeg" width="1024" height="683" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/13e1dca4-e838-45a7-b392-5fa31c2ba319_1024x683.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:683,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:94103,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/194087216?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e1dca4-e838-45a7-b392-5fa31c2ba319_1024x683.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6O8u!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e1dca4-e838-45a7-b392-5fa31c2ba319_1024x683.jpeg 424w, https://substackcdn.com/image/fetch/$s_!6O8u!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e1dca4-e838-45a7-b392-5fa31c2ba319_1024x683.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!6O8u!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e1dca4-e838-45a7-b392-5fa31c2ba319_1024x683.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!6O8u!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e1dca4-e838-45a7-b392-5fa31c2ba319_1024x683.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption"><em>Dario Amodei. Credit: Getty/Michael M. 
Santiago</em></figcaption></figure></div><p>Earlier this year, Anthropic donated $20m to Public First Action &#8212; a donation which was, at the time, <a href="https://www.nytimes.com/2026/02/23/technology/ai-pac-ad-blitz.html">widely</a> <a href="https://www.washingtonpost.com/politics/2026/03/12/ai-funding-midterm-elections/">expected</a> to be used to <a href="https://www.washingtonpost.com/politics/2026/03/12/ai-funding-midterm-elections/">fund</a> <a href="https://www.latimes.com/business/story/2026-02-12/anthropic-pledges-20-million-to-candidates-who-favor-ai-safety">political ads</a> for members of Congress who support more stringent AI safeguards.</p><p>But that is not the case, <em>Transformer</em> has learned. &#8220;Anthropic restricted its donation from being used to influence federal elections,&#8221; an Anthropic spokesperson told <em>Transformer</em>.</p><p>&#8220;Anthropic&#8217;s donation to Public First Action was a contribution to a 501(c)(4) exclusively in support of its mission to educate the public on AI policy and promote safe and responsible AI.&#8221;</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/subscribe?"><span>Subscribe now</span></a></p><p>The revelation raises questions about who <em>is</em> funding the $3.48m that Public First Action&#8217;s associated super PACs have spent on elections to date.</p><p>It also calls into question whether, without Anthropic&#8217;s backing for electoral spending, advocates for stronger AI safeguards have as much sway in the midterm elections as was previously believed.</p><p>As a 501(c)(4) nonprofit, Public First Action must comply with IRS restrictions, which prohibit &#8220;political activity on behalf of or in opposition to candidates&#8221; 
&#8212; such as running ads supporting a candidate in an election &#8212; as the organization&#8217;s &#8220;primary&#8221; activity. Typically this has been interpreted as prohibiting political activity from making up 50% or more of the organization&#8217;s spending. It is possible, however, that Public First Action will in fact be forced to spend much less than that.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;639764f7-140d-4234-9dd7-736e19f96c2c&quot;,&quot;caption&quot;:&quot;Build American AI, the policy organization funded by industry-backed super PAC Leading the Future, has been trumpeting the more than 500,000 people it&#8217;s signed up as &#8220;grassroots&#8221; advocates. What it doesn&#8217;t mention is that it spent more than half a million dollars on ads to get them.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;How to buy an AI &#8216;grassroots&#8217; movement &quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:13910071,&quot;name&quot;:&quot;Veronica Irwin&quot;,&quot;bio&quot;:&quot;Senior AI Policy Reporter at Transformer X/Bsky: @vronirwin IG/Threads: @vronwrites LinkedIn: https://www.linkedin.com/in/veronica-irwin-009266112/ Signal: vronirwin.72 veronica(at)transformernews(dot)ai 
&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ad8cbf86-6b1f-4387-97e1-e69f1cbb3ec7_2448x2448.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-03-17T16:01:50.709Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!js2A!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb95a52c9-7c7a-46d2-b997-32ea2309a9fa_5671x3233.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/how-to-buy-an-ai-grassroots-movement-build-american-ai-leading-the-future&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:191259927,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:14,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>Two months ago, Public First Action leader Brad Carson <a href="https://www.nytimes.com/2026/02/23/technology/ai-pac-ad-blitz.html">told</a> the<em> New York Times</em> his group had raised nearly $50 million. 
But in a statement to <em>Transformer</em> last week, spokesperson Anthony Rivera-Rodriguez did not answer a question about whether Carson was referring to money raised by the 501(c)(4) or by the PACs, provide a current fundraising total for any of the groups, or reveal what portion of Public First Action&#8217;s contributions could be used for election spending.</p><p>The three super PACs aligned with Public First Action &#8212; Public First, Defending Our Values, and Jobs and Democracy &#8212; have to date <a href="https://elections.transformernews.ai/">disclosed</a> $3.48m in spending, but very little about their funding. FEC filings currently reveal only a single $50,000 contribution from Public First Action to the Republican super PAC Defending Our Values. More details are expected to be released in new quarterly filings this week.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="http://elections.transformernews.ai" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LDZs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 424w, https://substackcdn.com/image/fetch/$s_!LDZs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 848w, https://substackcdn.com/image/fetch/$s_!LDZs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 1272w, https://substackcdn.com/image/fetch/$s_!LDZs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 
1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LDZs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png" width="728" height="151.66666666666666" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:250,&quot;width&quot;:1200,&quot;resizeWidth&quot;:728,&quot;bytes&quot;:25981,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;http://elections.transformernews.ai&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/190509092?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!LDZs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 424w, https://substackcdn.com/image/fetch/$s_!LDZs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 848w, https://substackcdn.com/image/fetch/$s_!LDZs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 1272w, 
https://substackcdn.com/image/fetch/$s_!LDZs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Neither Anthropic nor Public First Action has previously disclosed the restriction on Anthropic&#8217;s donation, despite a string of prominent news articles indicating that Anthropic&#8217;s money would fund the group&#8217;s super PACs. Articles in <em><a href="https://www.bloomberg.com/news/articles/2026-02-19/anthropic-backed-group-jumps-into-new-york-congressional-race">Bloomberg</a></em>, the <em><a href="https://www.washingtonpost.com/politics/2026/03/12/ai-funding-midterm-elections/">Washington Post</a></em>, and many more outlets, including <em>Transformer</em>, reflected the assumption that Anthropic was, via Public First Action, funding super PACs and political activity. A company <a href="https://www.anthropic.com/news/donate-public-first-action">blog</a> announcing Anthropic&#8217;s donation was vague about direct campaign influence.</p><p>The ambiguity likely helped Public First. The perception that Public First was backed by Anthropic made it look like a credible counterweight to Leading the Future, which has raised a war chest of more than $50m to support candidates backing weaker regulation and which has ties to Anthropic&#8217;s biggest competitor, OpenAI. Given Anthropic&#8217;s extraordinary growth, the reports led to an assumption that many more millions could flow in later. Anthropic&#8217;s gift also drew attention to Public First Action, potentially attracting more donations from those who hold similar values around AI safety and concerns about existential risk, and giving politicians cover to push for stronger regulations without fear that it would cost them an election. 
Because neither organization corrected the record, the perception of electoral firepower persisted &#8212; without Public First Action ever having to deploy it.</p><p>Anthropic disclosed the restrictions only after <em>Transformer</em> asked whether the donation complied with campaign finance laws that prohibit government contractors from making political contributions.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;88525992-62aa-481e-b130-4682452458ad&quot;,&quot;caption&quot;:&quot;Welcome to Transformer, your weekly briefing of what matters in AI. And if you&#8217;ve been forwarded this email, click here to subscribe and receive future editions.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Alex Bores wants to fix Dems&#8217; AI problem&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:13910071,&quot;name&quot;:&quot;Veronica Irwin&quot;,&quot;bio&quot;:&quot;Senior AI Policy Reporter at Transformer X/Bsky: @vronirwin IG/Threads: @vronwrites LinkedIn: https://www.linkedin.com/in/veronica-irwin-009266112/ Signal: vronirwin.72 veronica(at)transformernews(dot)ai &quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ad8cbf86-6b1f-4387-97e1-e69f1cbb3ec7_2448x2448.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null},{&quot;id&quot;:103211477,&quot;name&quot;:&quot;Celia Ford&quot;,&quot;bio&quot;:&quot;I'm an ex-neuroscientist and current AI reporter at Transformer. When I'm not writing, I play bass, dance, and kiss my cats on the forehead. 
&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7f7fd73a-8797-496f-94a7-535118172030_1365x1365.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null},{&quot;id&quot;:1083827,&quot;name&quot;:&quot;Shakeel Hashim&quot;,&quot;bio&quot;:&quot;Shakeel is the editor of Transformer, a publication about the power and politics of transformative AI. He was previously a news editor at The Economist.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/98b3ea1d-6a2a-42d1-bfe9-e9d1bf258a23_2549x2549.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-02-13T16:01:26.955Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1421d553-15d6-430a-8737-c72a4ffd4e63_1456x1048.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/alex-bores-wants-to-fix-democrats-ai-problem-leading-future-pac-raise-congress-legislation&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:187854474,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:12,&quot;comment_count&quot;:1,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>The line between &#8220;influencing federal elections&#8221; and &#8220;educating the public on AI policy&#8221; is blurrier than it may seem. 
Public First Action recently ran an <a href="https://www.youtube.com/watch?v=w8TuJK0RxyU">advertising campaign</a> saying that &#8220;New Jersey needs leaders like Congressman Josh Gottheimer,&#8221; and telling voters to call him and urge him to &#8220;stand strong for AI safeguards.&#8221; That sort of ad does not necessarily fall under the IRS definition of &#8220;political activity&#8221; because it is not directly telling viewers how to vote.</p><p>But not being able to directly fund ads supporting or opposing candidates severely restricts what Anthropic&#8217;s money can be used for &#8212; and, in turn, raises doubts about whether Public First is able to go toe-to-toe with its extraordinarily well-funded opposition.</p><p>Leading the Future and its affiliated super PACs have disclosed $50m in donations to date, and <a href="https://www.notus.org/money/ai-super-pac-fundraising-midterms-democrats-republicans">claim</a> to have raised $125m from donors including OpenAI president Greg Brockman and his wife Anna, and Marc Andreessen and Ben Horowitz of venture capital firm Andreessen Horowitz. A partisan dark money group led by Trump&#8217;s former deputy chief of staff Taylor Budowich also <a href="https://www.axios.com/2026/03/29/ai-pac-midterms-trump">said</a> it plans to spend $100m (though since that group is also a 501(c)(4), the majority of that money cannot be used for political campaigns).</p><p>Public First&#8217;s Carson has always conceded that Public First Action has a fraction of Leading the Future&#8217;s war chest, asserting that it doesn&#8217;t need as much money because public opinion is on his side. Indeed, <a href="https://www.transformernews.ai/p/exclusive-americans-overwhelmingly">polls</a> consistently show that Americans of all stripes demand more AI safeguards. 
But Anthropic&#8217;s admission suggests the group may be even more outgunned than was previously believed: less a competitive counterweight than a paper tiger.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/p/anthropic-super-pac-donations-public-first-leading-the-future-brad-carson?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/p/anthropic-super-pac-donations-public-first-leading-the-future-brad-carson?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Pentagon is already suffering the consequences of banning Anthropic]]></title><description><![CDATA[Transformer Weekly: The battle for Gottheimer, OpenAI&#8217;s &#8216;New Deal&#8217;, and Meta&#8217;s new model]]></description><link>https://www.transformernews.ai/p/pentagon-anthropic-mythos-cybersecurity-hacking-trump-hegseth</link><guid isPermaLink="false">https://www.transformernews.ai/p/pentagon-anthropic-mythos-cybersecurity-hacking-trump-hegseth</guid><pubDate>Fri, 10 Apr 2026 15:03:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1c2e527b-f27a-4c3e-af3e-2f6538875cb4_1456x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to Transformer, your weekly briefing of what matters in AI. And if you&#8217;ve been forwarded this email, <a href="https://www.transformernews.ai/welcome">click here to subscribe</a> and receive future editions.</em></p><p><em><strong>Job alert!</strong> We&#8217;re hiring for a <strong>Head of Audience</strong>: someone to own our growth strategy and take charge of how we reach readers. 
<a href="https://www.transformernews.ai/p/head-of-audience-job-listing-recruitment">See full details here</a>, and apply by April 26.</em></p><blockquote><h3>NEED TO KNOW</h3></blockquote><ul><li><p><strong>Leading the Future</strong> endorsed <strong>Rep. Josh Gottheimer</strong>, despite <strong>Public First Action&#8217;s</strong> previous ads targeting him.</p></li><li><p><strong>OpenAI </strong>released a policy document proposing a <strong>&#8216;New Deal&#8217; for AI</strong>, including proposals for higher capital gains taxes and a public wealth fund.</p></li><li><p><strong>Meta</strong> released <strong>Muse Spark</strong>, its first new model since setting up Meta Superintelligence Labs.</p></li></ul><p><em>But first&#8230;</em></p><div><hr></div><blockquote><h3>THE BIG STORY</h3></blockquote><p><strong>If I were Pete Hegseth</strong>, this week&#8217;s news would give me pause.</p><p>An American company announced that it has built an extremely powerful cyber tool which has found vulnerabilities in every major operating system and web browser. Rather than releasing it to the public, Anthropic has made Mythos Preview available to a select group of trusted partners, who will hopefully use it to harden their defenses before such capabilities proliferate too widely.</p><p>If I were Pete Hegseth, I would want my hands on this model very badly. I would want to take full advantage of America&#8217;s AI lead over China and other adversaries by securing critical infrastructure before they can attack it. 
I would also want to <em>use</em> the model against America&#8217;s adversaries: it might come in handy in the current war &#8212; not that America needs any help (&#128074;&#127482;&#127480;&#128293;).</p><p>And if I were Pete Hegseth, I would be kicking myself for the unforced error I made last month, which has blocked me from being able to do any of that.</p><p><strong>When Hegseth, Emil Michael, and President Trump</strong> kicked Anthropic out of the Pentagon last month, they severed their own access to America&#8217;s most capable AI.</p><p>Something like Mythos was predictable, if you believe AI capabilities are rapidly advancing. But this administration doesn&#8217;t. Its entire posture treats AI as incremental: good for the economy, but nothing revolutionary, disruptive, or posing imminent national security risks. Mythos just proved that assumption badly wrong, and the administration is paying the price.</p><p><strong>Technically, certain government agencies </strong><em><strong>could</strong></em><strong> use Mythos.</strong> This week, Anthropic lost its bid to block the Pentagon&#8217;s designation in a DC court &#8212; but there is a six month grace period before the Pentagon must cease using Anthropic&#8217;s technology. 
And a preliminary injunction from a federal judge in California last month means that Anthropic&#8217;s technology <a href="https://www.gsa.gov/about-us/newsroom/news-releases/gsa-issues-statement-on-anthropic-preliminary-injunction-04032026">remains available</a> to all other agencies, including the Cybersecurity and Infrastructure Security Agency (which has <a href="https://www.cnbc.com/2026/04/10/powell-bessent-us-bank-ceos-anthropic-mythos-ai-cyber.html">had conversations</a> with Anthropic about the product).</p><p>But in either case, using Mythos would mean working with a firm that the President himself has deemed a &#8220;radical left, woke company.&#8221; Are agency heads brave enough to so directly defy him?</p><p>It may be that access to Mythos isn&#8217;t essential. OpenAI is <a href="https://www.axios.com/2026/04/09/openai-new-model-cyber-mythos-anthopic">preparing</a> a similar model, and the Pentagon is free to use that. But betting on OpenAI maintaining parity is not a good national security strategy. The United States needs access to <em>every</em> leading AI tool, as soon as it is available. Choosing to rely on one lab instead of two, especially during a closing window of defensive advantage, is cutting off your nose to spite your face. And if open-weight or Chinese capabilities catch up before OpenAI does, Hegseth may find himself defenseless.</p><p>Getting out of this self-inflicted bind is simple, but it will not be easy. Pete Hegseth will have to admit he was wrong. But admit he should. A year from now, when adversaries have Mythos-level capabilities and the question is whether America used or squandered its lead, nobody will care about the <em>mea culpa</em>. They&#8217;ll care whether the Secretary of Defense did what he could to protect the country &#8212; or whether he let pride get in the way.</p><p><em>&#8212; Shakeel Hashim</em></p><div><hr></div><blockquote><h3>ALSO NOTABLE</h3></blockquote><p><strong>Rep. 
Josh Gottheimer</strong> had been pegged as an AI safety candidate. But the biggest pro-innovation super PAC thinks he&#8217;s still in play.</p><p>On Wednesday, pro-innovation super PAC <strong>Leading the Future</strong> <a href="https://www.politico.com/newsletters/politico-influence/2026/04/08/a-pharma-legend-bows-out-00864310">endorsed</a> Gottheimer and four other House Democrats. But it wasn&#8217;t the first to weigh in: last month, its opposition, <strong>Public First Action</strong>, targeted Gottheimer in an <a href="https://www.youtube.com/watch?v=w8TuJK0RxyU">ad</a> of its own.</p><p>Given that Gottheimer is a moderate congressman who <a href="https://wrnjradio.com/gottheimer-criticizes-white-house-ai-framework-calls-for-stronger-protections/">says</a> &#8220;preemption only makes sense&#8221; if paired with stronger rules than the White House is offering &#8212; but still <a href="https://gottheimer.house.gov/posts/congressman-appointed-chairman-of-bi-partisan-problem-solvers-caucus">likes to reach across the aisle</a> &#8212; Public First&#8217;s ad urging him to &#8220;stand strong for AI safeguards&#8221; was predictable. Leading the Future&#8217;s endorsement is more of a surprise, signaling that the super PAC thinks he could be converted to the Church of Accelerationism &#8212; and, in turn, that it will use its money to sway agnostic candidates rather than just embolden the ones who have already picked a side.</p><p>Leading the Future also said it was endorsing California <strong>Rep. 
Sam Liccardo</strong> &#8212; a freshman candidate with a track record of courting the industry, but who&#8217;s said he&#8217;s <a href="https://punchbowl.news/article/house/dems-eye-majority/">&#8220;concerned&#8221;</a> about AI-related political spending and recently <a href="https://www.politico.com/news/2026/04/03/trumps-partisan-ai-pitch-stalls-on-the-hill-00858101">told Politico</a> he&#8217;s not going to &#8220;focus his energy&#8221; on the AI issue. Leading the Future seems to be doing what it can to ensure he doesn&#8217;t separate from the herd.</p><p>&#8212; <em>Veronica Irwin</em></p><div><hr></div><blockquote><h3>THIS WEEK ON TRANSFORMER</h3></blockquote><ul><li><p><strong><a href="https://www.transformernews.ai/p/claude-mythos-scheming-hiding-manipulation-interpretability-cybersecurity-anthropic">Claude Mythos knows when it&#8217;s breaking the rules &#8212; and tries to hide it</a></strong> &#8212; Celia Ford explains the new model&#8217;s weird misbehavior</p></li><li><p><strong><a href="https://www.transformernews.ai/p/ai-lawmakers-laws-vulcan-technologies-fiscalnote-policynote-virginia-vermont">Lawmakers are using AI to write laws. 
What could go wrong?</a></strong> &#8212; Katie McQue looks at the burgeoning phenomenon of AI-assisted lawmaking</p></li></ul><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><blockquote><h3>THE DISCOURSE</h3></blockquote><p><strong>Nicholas Carlini</strong>, Anthropic AI security researcher, <a href="https://x.com/Simeon_Cps/status/2041596830450852118">said</a>:</p><ul><li><p>&#8220;I&#8217;ve found more bugs in the last few weeks with Mythos than in the rest of my entire life combined.&#8221;</p></li></ul><p>OpenAI&#8217;s <strong>Boaz Barak </strong><a href="https://x.com/boazbaraktcs/status/2042131701728461313?s=20">wants</a> Mythos for all:</p><ul><li><p>&#8220;I think preserving models for internal deployment is risky. 
I encourage Anthropic to release Mythos, even if it&#8217;s a version that over refuses on cyber tasks or routes risky responses to a weaker model, as we did with codex.&#8221;</p></li></ul><p><strong>Yann LeCun</strong> simply <a href="https://x.com/ylecun/status/2042224846881349741?s=20">tweeted</a>:</p><ul><li><p>&#8220;Mythos drama = BS from self-delusion.&#8221;</p></li></ul><p><strong>Ryan Greenblatt </strong>of Redwood Research, meanwhile, <a href="https://x.com/RyanPGreenblatt/status/2041698250726711764">called</a> BS on Anthropic&#8217;s <a href="https://www.anthropic.com/claude-mythos-preview-risk-report">claim</a> in its Alignment Risk Update that it has &#8220;an achievable path&#8221; to mitigating risks:</p><ul><li><p>&#8220;I don&#8217;t think Anthropic (or anyone) has an achievable path for keeping risk low if AI proceeds as fast as Anthropic expects.&#8221;</p></li><li><p>&#8220;Anthropic employees (especially Anthropic employees writing this report) often don&#8217;t believe there is an achievable path to keeping risk low if Anthropic builds powerful AI / ASI in the next 5 years, so this text seems incorrect or misleading.&#8221;</p></li></ul><p><strong>Helen Toner </strong><a href="https://helentoner.substack.com/p/the-term-agi-is-almost-useless-at?isFreemail=true&amp;post_id=185023894&amp;publication_id=3734020&amp;r=1pg6hh&amp;triedRedirect=true&amp;triggerShare=true">thinks</a> &#8220;AGI&#8221; doesn&#8217;t mean anything anymore:</p><ul><li><p>&#8220;Many people seem to treat AGI as a &#8220;know-it-when-you-see-it&#8221; kind of thing&#8230;[but] expecting that we&#8217;ll know it when we see it is patently not working.&#8221;</p></li><li><p>She suggested some more precise milestones:</p><ul><li><p>&#8220;Full automated AI R&amp;D&#8221;; &#8220;AI that is as adaptable as humans&#8221;; &#8220;Self-sufficient AI&#8221;; and &#8220;AI becoming conscious or otherwise worthy of moral status&#8221;</p></li></ul></li></ul><p><em><strong>The New 
Yorker </strong></em><a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted">sicced</a> <strong>Ronan Farrow and Andrew Marantz </strong>on Sam Altman<em>:</em></p><ul><li><p>&#8220;&#8216;He&#8217;s unconstrained by truth,&#8217; the [OpenAI] board member told us. &#8216;He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.&#8217;&#8221;</p></li></ul><p><strong>Dario Amodei</strong>&#8217;s private notes from his OpenAI days were <a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted">highlighted</a> in their ~16,000-word novelette:</p><ul><li><p>&#8220;[Altman&#8217;s] words were almost certainly bullshit.&#8221;</p></li><li><p>&#8220;The problem with OpenAI is Sam himself.&#8221;</p></li></ul><p><strong>Ben Thompson </strong><a href="https://stratechery.com/2026/openai-buys-tbpn-tech-and-the-token-tsunami/?utm_source=substack&amp;utm_medium=email">had</a> some choice words about OpenAI&#8217;s TBPN acquisition:</p><ul><li><p>&#8220;I&#8217;ve previously wondered if OpenAI might be like Twitter, another text-centric company that fell backwards into a huge market and never developed into a functional business because of it.&#8221;</p></li><li><p>&#8220;If Twitter is a clown car that fell into a gold mine, OpenAI might be the short bus at the end of the rainbow. 
There&#8217;s supposed to be a pot of gold there, but it never quite seems to materialize.&#8221;</p></li></ul><div><hr></div><blockquote><h3>POLICY</h3></blockquote><ul><li><p>A DC appeals court <a href="https://reuters.com/world/us-court-declines-block-pentagons-anthropic-blacklisting-now-2026-04-08">declined</a> to block the <strong>Pentagon</strong>&#8216;s national security blacklisting of <strong>Anthropic</strong>, though a California court previously blocked a separate designation order.</p></li><li><p><strong>Treasury Secretary Scott Bessent</strong> and <strong>Fed Chair Jerome Powell</strong> <a href="https://www.bloomberg.com/news/articles/2026-04-10/anthropic-model-scare-sparks-urgent-bessent-powell-warning-to-bank-ceos">told</a> <strong>Wall Street bank CEOs</strong> to prepare for the cybersecurity risks presented by <strong>Mythos</strong>.</p></li><li><p><strong>Sen. Ted Cruz</strong> <a href="https://punchbowl.news/article/tech/tracking-ai-timelines/">backtracked</a> on his initial late April timeline for AI legislation, saying that it&#8217;s &#8220;not set in stone.&#8221;</p><ul><li><p>House Dems still haven&#8217;t begun negotiations with Republicans, <em>Punchbowl</em> reported.</p></li></ul></li><li><p>The <strong>House Foreign Affairs Committee</strong> is reportedly <a href="https://s2.washingtonpost.com/camp-rw/?linknum=5&amp;linktot=36&amp;s=69d7f73b9805fd10a5e32fa4&amp;trackId=6877ab9cc788996e1f9874bf">planning</a> a markup session on April 22 focused on <strong>chip export control bills</strong>.</p><ul><li><p>It would include the <strong>STRIDE</strong> and <strong>MATCH Acts,</strong> which aim to increase America&#8217;s leverage to force countries like the <strong>Netherlands</strong> to adopt similar export control policies to the US.</p></li></ul></li><li><p><strong>Florida Attorney General </strong>James Uthmeier <a href="https://x.com/AGJamesUthmeier/status/2042258048115265541">launched</a> an investigation into 
<strong>OpenAI</strong>, citing concerns about harm to children and alleged facilitation of a mass shooting at FSU.</p></li><li><p><strong>11 states</strong> have <a href="https://www.axios.com/2026/04/05/data-centers-midterms-state-bans-bills-ai?stream=top">introduced</a> bills that would halt <strong>data center development</strong>, as communities across the country grow increasingly frustrated with rising energy costs.</p><ul><li><p>Resistance in some instances has <a href="https://nypost.com/2026/04/07/business/13-shots-pumped-into-indianapolis-officials-front-door-raises-fears-over-violent-data-center-opposition-deeply-unsettling/">turned</a> violent: an Indianapolis councilman&#8217;s home was <a href="https://x.com/CBSEveningNews/status/2041292732677702038">shot</a> 13 times with a &#8220;No data centers&#8221; note left behind, days after he voted to approve a new data center.</p></li></ul></li><li><p>The<strong> CIA</strong> recently <a href="https://politico.com/news/2026/04/09/cia-ai-intelligence-analysis-00865893">used</a> AI to create its first autonomous intelligence report.</p></li><li><p>The UK government is reportedly <a href="https://ft.com/content/6bfd7b59-5e63-4a4d-ab55-7c2bd39b05a5?syn-25a6b1a6=1">courting</a> <strong>Anthropic</strong> to expand in <strong>London</strong>.</p></li><li><p><strong>Taiwan</strong>&#8216;s National Security Bureau <a href="https://reuters.com/world/china/china-targets-taiwans-chip-prowess-evade-global-containment-taipei-government-2026-04-07">reported</a> that <strong>China</strong> is trying to &#8220;poach Taiwanese talent, steal technology, and procure controlled goods,&#8221; particularly when it comes to chip manufacturing.</p></li></ul><div><hr></div><blockquote><h3>INFLUENCE</h3></blockquote><ul><li><p>The <strong>White House</strong> is reportedly <a href="https://www.axios.com/2026/04/09/trump-white-house-gop-states-ai-rules">pressuring</a> lawmakers in <strong>Nebraska</strong> and 
<strong>Tennessee</strong> to weaken AI bills under consideration in the state legislatures.</p></li><li><p><strong>OpenAI</strong> <a href="https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf">released</a> a policy document proposing a &#8216;New Deal&#8217; for AI.</p><ul><li><p>It included proposals for higher capital gains taxes, a public wealth fund, and increased investment in workers&#8217; benefits.</p></li><li><p>Politicos across the political spectrum had <a href="https://www.washingtonpost.com/wp-intelligence/ai-tech-brief/2026/04/07/ai-tech-brief-openai-draws-skepticism-with-policy-pitch/">critiques</a>. Public First Action head <strong>Brad Carson</strong>, for example, told <em>WP Intelligence</em> it was a &#8220;public relations document.&#8221;</p></li><li><p><strong>Dean Ball</strong>, however, told the same journalist that he thought it was a legitimate proposal from OpenAI, forecasting its version of a best-case scenario for AI policy.</p></li></ul></li><li><p><strong>OpenAI</strong> also <a href="https://techcrunch.com/2026/04/08/openai-releases-a-new-safety-blueprint-to-address-the-rise-in-child-sexual-exploitation/">released</a> a Child Safety Blueprint to address <strong>CSAM</strong> with updated legislation, better methods for reporting to law enforcement, and model safeguards.</p></li><li><p><strong>OpenAI</strong> is <a href="https://wired.com/story/openai-backs-bill-exempt-ai-firms-model-harm-lawsuits">backing</a> an <strong>Illinois</strong> bill that would shield AI companies from <strong>liability</strong> for &#8220;critical harms&#8221; &#8212; including mass deaths or $1b+ in damages &#8212; if they published <strong>safety reports</strong> and didn&#8217;t act intentionally or recklessly.</p><ul><li><p>LawAI&#8217;s <strong>Charlie Bullock</strong> <a href="https://x.com/CharlieBull0ck/status/2042334421349777640">said</a> &#8220;the fact that 
they&#8217;re willing to publicly back this shows unbelievable chutzpah.&#8221;</p></li></ul></li><li><p><strong>xAI</strong> <a href="https://ft.com/content/55e8cba9-d09c-4f94-b710-4ab447b987f9?syn-25a6b1a6=1">sued</a> Colorado over its &#8220;algorithmic discrimination&#8221; law.</p><ul><li><p><strong>David Sacks</strong> <a href="https://x.com/DavidSacks/status/2042390626436792399">applauded</a> xAI&#8217;s lawsuit.</p></li><li><p><strong>Colorado Governor Jared Polis</strong> has previously said he wants to water down the bill.</p></li></ul></li><li><p>Google <a href="https://publicpolicy.google/article/bipartisan-bills-ai-workforce">endorsed</a> a group of<strong> bipartisan bills </strong>aimed at assessing AI&#8217;s economic impact, retraining workers, and encouraging AI adoption.</p></li><li><p>Tech lobbyists for <strong>Cisco</strong> and <strong>IBM</strong> <a href="https://404media.co/data-center-tech-lobbyists-fearmonger-in-attempt-to-retroactively-roll-back-right-to-repair-law">attempted</a> to roll back <strong>Colorado&#8217;s</strong> right-to-repair law by exempting broadly-defined &#8220;critical infrastructure&#8221; hardware.</p><ul><li><p>Critics say that could create manufacturer monopolies on data center repairs.</p></li></ul></li></ul><div><hr></div><blockquote><h3>INDUSTRY</h3></blockquote><blockquote><h4>OpenAI</h4></blockquote><ul><li><p>OpenAI is reportedly <a href="https://axios.com/2026/04/09/openai-new-model-cyber-mythos-anthopic">planning</a> to release a<strong> new model </strong>with Mythos-like <strong>advanced cybersecurity capabilities</strong> to a &#8220;small set of partners.&#8221;</p></li><li><p>It <a href="https://bloomberg.com/news/articles/2026-04-09/openai-tells-investors-it-has-computing-advantage-over-anthropic">told</a> investors it has a <strong>compute advantage</strong> over <strong>Anthropic</strong>, claiming 1.9 GW of capacity in 2025 versus Anthropic&#8217;s estimated 1.4 GW.</p><ul><li><p>The memo 
characterizes <strong>Dario Amodei&#8217;s</strong> comparatively cautious spending as &#8220;[looking] less like discipline and more like underestimating how fast demand would arrive.&#8221;</p></li></ul></li><li><p><strong>CFO Sarah Friar </strong><a href="https://www.cnbc.com/2026/04/08/openai-ipo-sarah-friar-retail-investors.html">said</a> the company will reserve a portion of <strong>IPO shares</strong> for individual investors.</p><ul><li><p>She&#8217;s reportedly been <a href="https://theinformation.com/articles/openai-ceo-cfo-diverge-ipo-timing?rc=rqdn2z&amp;stream=top">clashing</a> with Sam Altman over IPO timing.</p></li></ul></li><li><p>OpenAI reportedly <a href="https://axios.com/2026/04/09/openai-100-billion-in-ad-revenue">expects</a> <strong>$100b </strong>in ad revenue by 2030.</p></li><li><p>It <a href="https://bloomberg.com/news/articles/2026-04-09/openai-pauses-stargate-uk-data-center-effort-citing-energy-costs">paused</a> its <strong>Stargate </strong>data center project in the UK due to high energy costs.</p></li><li><p>It <a href="https://x.com/i/status/2041202511647019251">announced</a> the <strong>OpenAI Safety Fellowship </strong>for outside researchers to work on safety and alignment projects.</p><ul><li><p>Fellows will be <a href="https://x.com/Lang__Leon/status/2041245002525778054">offered</a> workspace at <strong>Constellation</strong>, a Berkeley office which also houses the Anthropic Fellows Program.</p></li></ul></li><li><p>The <strong>OpenAI Foundation </strong>said it&#8217;s <a href="https://openaifoundation.org/news/ai-for-alzheimers">finalizing</a> over <strong>$100m</strong> in grants for Alzheimer&#8217;s research.</p></li><li><p><strong>Elon Musk </strong><a href="https://wsj.com/tech/ai/elon-musk-asks-for-openais-nonprofit-to-get-any-damages-from-his-lawsuit-76089f6f?reflink=desktopwebshare_permalink&amp;st=Q82FFk">requested</a> that damages from his <strong>$150b lawsuit </strong>against OpenAI be awarded to the OpenAI 
Foundation, and that Sam Altman be removed from its board.</p><ul><li><p>OpenAI <a href="https://x.com/OpenAINewsroom/status/2041648263078801765">accused</a> Musk of &#8220;pretending to change his tune,&#8221; calling his lawsuit &#8220;a harassment campaign that&#8217;s driven by ego, jealousy and a desire to slow down a competitor.&#8221;</p></li></ul></li></ul><blockquote><h4>Meta</h4></blockquote><ul><li><p>Meta <a href="https://x.com/alexandr_wang/status/2041909376508985381">released</a> <strong>Muse Spark</strong>, its first model since acqui-hiring Alexandr Wang and setting up Meta Superintelligence Labs.</p><ul><li><p>It&#8217;s not yet released the model weights, though reportedly <a href="https://axios.com/2026/04/06/meta-open-source-ai-models">plans</a> to do so in the future.</p></li><li><p>It&#8217;s <a href="https://cnbc.com/2026/04/08/meta-debuts-first-major-ai-model-since-14-billion-deal-to-bring-in-alexandr-wang.html">positioning</a> Muse Spark as a <strong>smaller, more efficient</strong> model that can compete on the frontier (or, as <em>Wired </em><a href="https://www.wired.com/story/muse-spark-meta-open-source-closed-source/?utm_source=substack&amp;utm_medium=email">put it</a>, &#8220;give Mark Zuckerberg a seat at the big kid&#8217;s table&#8221;).</p></li><li><p>Initial reviews are good-not-great.</p></li></ul></li><li><p>Meta reportedly <a href="https://www.theinformation.com/articles/meta-employees-vie-ai-token-legend-status?rc=rqdn2z">incentivized</a> &#8220;tokenmaxxing&#8221; on an internal leaderboard called &#8220;Claudeonomics&#8221; &#8212; then <a href="https://x.com/jyoti_mann1/status/2041903251029668216">shut it down</a> after data from the dashboard was leaked.</p></li><li><p>It <a href="https://cnbc.com/2026/04/09/meta-commits-to-spending-additional-21-billion-with-coreweave-.html">committed</a><strong> $21b </strong>to<strong> CoreWeave</strong> from 2027-2032.</p></li><li><p>It indefinitely <a 
href="https://wired.com/story/meta-pauses-work-with-mercor-after-data-breach-puts-ai-industry-secrets-at-risk">paused</a> work with <strong>Mercor </strong>while the startup investigates a major security breach.</p></li><li><p>A <strong>Meta-backed data center campus</strong> is <a href="https://ft.com/content/390545d7-148d-4e88-a56a-ade079a9ed5e?syn-25a6b1a6=1">seeking</a> a first-of-its-kind loan for both construction and power, which would be generated on site.</p></li></ul><blockquote><h4>Anthropic</h4></blockquote><ul><li><p>Anthropic <a href="https://bloomberg.com/news/articles/2026-04-08/anthropic-completes-tender-offer-but-employees-hold-onto-shares">completed</a> a <strong>tender offer</strong> at its $350b valuation, but employees held onto more shares than investors hoped &#8212; hinting at optimism about the company&#8217;s future prospects.</p></li><li><p>The company <a href="https://www.bloomberg.com/news/articles/2026-04-06/broadcom-confirms-deal-to-ship-google-tpu-chips-to-anthropic">said</a> its <strong>revenue run rate</strong> is now over <strong>$30b</strong>, more than triple the $9b it was in December.</p></li><li><p>Claude subscriptions no longer cover <a href="https://www.theverge.com/ai-artificial-intelligence/907074/anthropic-openclaw-claude-subscription-ban">usage</a> on<strong> third-party tools</strong>, including OpenClaw.</p></li><li><p>It <a href="https://anthropic.com/news/google-broadcom-partnership-compute?stream=top">signed</a> a deal with <strong>Google</strong> and <strong>Broadcom</strong> to expand its compute infrastructure.</p></li><li><p>And it <a href="https://www.coreweave.com/news/coreweave-announces-multi-year-agreement-with-anthropic">signed</a> a multi-year deal with <strong>CoreWeave.</strong></p></li><li><p>It&#8217;s reportedly considering <a href="https://reuters.com/business/anthropic-weighs-building-it-own-ai-chips-sources-say-2026-04-09">designing</a> its own <strong>AI 
chips</strong>.</p></li><li><p>It&#8217;s also reportedly planning to <a href="https://wsj.com/tech/ai/anthropic-in-talks-to-invest-200-million-in-new-private-equity-venture-30b78738?reflink=desktopwebshare_permalink&amp;st=6cQSqH">invest</a> $200m in a project allowing <strong>private-equity firms</strong> to sell AI tools to their portfolio companies.</p></li></ul><blockquote><h4>Google DeepMind</h4></blockquote><ul><li><p><strong>Broadcom</strong> <a href="https://reuters.com/business/broadcom-signs-long-term-deal-develop-googles-custom-ai-chips-2026-04-06/?lctg=68c89122dbdba028e10d19c3">signed</a> a long-term deal to supply Google with <strong>custom AI chips</strong> through 2031.</p></li><li><p>In response to recent lawsuits, Google <a href="https://bloomberg.com/news/articles/2026-04-07/google-adds-mental-health-tools-to-gemini-chatbot-after-lawsuit">added</a> a Gemini interface that directs users to a <strong>crisis hotline</strong> if their conversation veers toward suicide or self-harm.</p></li><li><p>Gemini now <a href="https://9to5google.com/2026/04/08/gemini-app-notebooks">has</a> &#8220;notebooks,&#8221;<strong> </strong>a <strong>NotebookLM</strong> integration that organizes chats and files.</p></li><li><p>In an analysis of 4,326 Google searches, Gemini-3-powered <strong>AI Overviews</strong> were <a href="https://nytimes.com/2026/04/07/technology/google-ai-overviews-accuracy.html">accurate</a> 91% of the time, but often cited questionable sources.</p></li></ul><blockquote><h4>Others</h4></blockquote><ul><li><p><strong>Intel </strong><a href="https://reuters.com/business/autos-transportation/intel-join-musks-terafab-mega-ai-chip-project-2026-04-07/?stream=top">joined</a> Elon Musk&#8217;s <strong>Terafab AI chip complex project</strong> to produce processors for cars, humanoid robots, and data centers in space.</p></li><li><p><strong>TSMC</strong> <a 
href="https://bloomberg.com/news/articles/2026-04-10/tsmc-s-sales-beat-estimates-after-war-fails-to-dent-ai-demand">reported</a> a 35% quarterly revenue increase, beating estimates.</p></li><li><p><strong>Amazon</strong> said its AWS <strong>AI revenue</strong> run rate is now <strong>$15b</strong>, while its <strong>custom chips</strong> business has an annual revenue run rate of over <strong>$20b</strong>.</p></li><li><p><strong>OpenAI</strong>, <strong>Anthropic</strong>, and <strong>Google</strong> are setting their rivalry aside to collaboratively <a href="https://bloomberg.com/news/articles/2026-04-06/openai-anthropic-google-unite-to-combat-model-copying-in-china?stream=top">crack down</a> on Chinese adversarial distillation attempts.</p></li><li><p><strong>AI coding tools</strong> have <a href="https://nytimes.com/2026/04/06/technology/ai-code-overload.html">created</a> a &#8220;code overload&#8221; crisis, the <em>New York Times </em>reported, with tech workers churning out more code than companies know what to do with.</p></li><li><p><strong>Apple&#8217;s App Store </strong>is <a href="https://www.theinformation.com/articles/vibe-coding-effect-apples-app-store-saw-84-jump-new-apps-quarter?rc=rqdn2z">getting</a> a vibecode boost, too: the number of new apps published is up <strong>84%</strong> relative to this time last year.</p></li><li><p><strong>Perplexity&#8217;s </strong>monthly revenue is <a href="https://www.ft.com/content/e9c28d31-a962-4684-8b58-c9e6bc68401f?syn-25a6b1a6=1">up</a> <strong>50%</strong> since last month, driven by its pivot from search to AI agents.</p></li><li><p><strong>Cluely&#8217;s CEO Roy Lee </strong><a href="https://x.com/im_roy_lee/status/2029606868369236088">admitted</a> to lying about Cluely&#8217;s annual recurring revenue (ARR), tweeting that he &#8220;got a random cold call from some woman asking about numbers and told her some bs.&#8221;</p><ul><li><p>AI startups often take creative liberties with ARR calculations. 
&#8220;The number can mean whatever the founder needs it to mean when they walk in to do a deal,&#8221; Stanford professor Chuck Eesley <a href="https://www.bloomberg.com/news/articles/2026-04-07/what-is-arr-behind-the-least-trusted-metric-of-the-ai-era">told</a> <em>Bloomberg.</em></p></li></ul></li></ul><div><hr></div><blockquote><h3>MOVES</h3></blockquote><ul><li><p><strong>Fidji Simo </strong><a href="https://x.com/alexeheath/status/2040146512672673960">took</a> a leave of absence from <strong>OpenAI </strong>to focus on her health.</p><ul><li><p>Chief marketing officer <strong>Kate Rouch</strong> is <a href="https://www.bloomberg.com/news/articles/2026-04-03/openai-coo-shifts-out-of-role-agi-ceo-taking-medical-leave?leadSource=uverify%2Bwall">stepping down</a> while recovering from cancer.</p></li><li><p><strong>Brad Lightcap</strong>, OpenAI COO, will now lead special projects; chief revenue officer <strong>Denise Dresser </strong>will take over some of his previous duties.</p></li></ul></li><li><p>Meanwhile, three <strong>OpenAI data center</strong> execs are <a href="https://www.theinformation.com/briefings/exclusive-openai-stargate-exec-peter-hoeschele-leaves-company?rc=rqdn2z">reportedly</a> leaving: <strong>Peter Hoeschele</strong>, <strong>Shamez Hemani</strong> and <strong>Anuj Saharan</strong>.</p></li></ul><ul><li><p><strong>Eric Boyd</strong> <a href="https://bloomberg.com/news/articles/2026-04-07/anthropic-poaches-microsoft-executive-to-lead-infrastructure">joined</a> <strong>Anthropic</strong> as head of infrastructure, leaving his role as president of Microsoft&#8217;s AI platform.</p></li><li><p><strong>xAI</strong> <a href="https://www.businessinsider.com/elon-musk-reorganizes-xai-ahead-of-spacex-ipo-2026-4">announced</a> another major reorganization of its <strong>engineering team</strong>, its latest since its co-founders quit, according to <em>Business Insider</em>.</p><ul><li><p><strong>Jack Schwaiger</strong> <a 
href="https://x.com/Jack_Schwaiger/status/2042366817264681092">resigned</a> from the company yesterday, saying &#8220;I have learned the limits of how far I can push myself.&#8221;</p></li></ul></li><li><p><strong>Kyle Kosic</strong> <a href="https://ft.com/content/e03c235d-8637-41e5-9e63-a872e398897a?syn-25a6b1a6=1">joined</a> <strong>Project Prometheus </strong>after Jeff Bezos poached the xAI co-founder from OpenAI.</p></li><li><p><strong>Zhou Jingren</strong>, formerly CTO of <strong>Alibaba</strong> Cloud, now <a href="https://www.ft.com/content/b39da303-3188-447b-8b65-3dd8dad8b59a?syn-25a6b1a6=1">runs</a> the company&#8217;s AI division.</p></li><li><p><strong>Sam Sheffer </strong><a href="https://x.com/samsheffer/status/2041178794338242627">joined</a> <strong>Google DeepMind</strong> to &#8220;bring vibecoding to the world.&#8221;</p></li><li><p>Long-time Tea Party activist and Trump supporter <strong>Amy Kremer</strong> <a href="https://www.washingtontimes.com/news/2026/apr/8/big-techs-big-maga-scam/">joined</a> <strong>Humans First</strong>, a new AI advocacy group.</p></li></ul><div><hr></div><blockquote><h3>RESEARCH</h3></blockquote><ul><li><p>Researchers at <strong>AISLE</strong>, an AI cybersecurity company, <a href="https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontier">claimed</a> that small, cheap <strong>open-weight models</strong> successfully detected the same vulnerabilities Anthropic showcased in its Mythos announcement.</p><ul><li><p>Security researcher <strong>Chris Rohlf</strong> <a href="https://x.com/chrisrohlf/status/2042248804829688316">argued</a> that &#8220;all bugs are shallow with hindsight,&#8221; noting that the bugs found by Mythos survived decades of traditional security analysis.</p></li><li><p>Anthropic researcher <strong>Julia Merz</strong> <a href="https://x.com/mooncat_is/status/2041983329398854042?s=20">dismissed</a> AISLE&#8217;s approach as: &#8220;We took the needle the model found, isolated the 
relevant handful of the haystack, and then gave it to a small child, who found the needle as well.&#8221;</p></li></ul></li><li><p>Researchers at<strong> MIT FutureTech </strong>studied over 17,000 worker evaluations of over 3,000 text-based <strong>work tasks</strong> across industries, and <a href="https://arxiv.org/abs/2604.01363">projected</a> that LLMs will be able to complete most of those tasks at a &#8220;minimally sufficient quality level&#8221; by 2029.</p></li><li><p><strong>San Diego State University </strong><a href="https://axios.com/local/san-diego/2026/04/07/sdsu-ai-study-college-students-california-universities-careers?stream=top">surveyed</a> 94,000 students across 22 California State University campuses, and nearly all reported using at least one AI tool.</p><ul><li><p>65% of students are &#8220;<strong>skeptical about AI in education</strong>,&#8221; but roughly the same percentage said AI positively affected their learning.</p></li><li><p>Over 4 in 5 students responded that they&#8217;re worried about AI and job security.</p></li></ul></li><li><p>A new<strong> Gallup survey </strong>of over 1,500 people <a href="https://nytimes.com/2026/04/09/style/gen-z-ai-gallup-study.html?smid=nytcore-ios-share&amp;unlocked_article_code=1.ZlA.l7wc.vcw7IHfghwqQ">found</a> that while most of <strong>Gen Z</strong> uses AI regularly, those feeling hopeful about it dropped from 27% to 18% since last year.</p></li><li><p>New reports from <strong>Morgan</strong> <strong>Stanley</strong> and <strong>Goldman Sachs</strong> <a href="https://www.axios.com/2026/04/07/ai-jobs-goldman-sach-morgan-stanley?stream=top">suggest</a> that AI&#8217;s <strong>impact on jobs</strong> is &#8220;modest, but certainly real,&#8221; per <em>Axios</em>.</p></li></ul><div><hr></div><blockquote><h3>BEST OF THE REST</h3></blockquote><ul><li><p>Christina Knight and Scott Singer argued that <a href="https://www.foreignaffairs.com/united-states/america-and-china-can-make-ai-safer">US-China 
cooperation</a> on AI risks is more feasible than some think.</p></li><li><p>The Golden Gate Institute for AI&#8217;s Abi Olvera interviewed <a href="https://secondthoughts.ai/p/why-arent-bioweapons-common?hide_intro_popup=true">biosecurity professionals</a>, who said that near-term AI developments might not make bioweapons much more common.</p></li><li><p>Ajeya Cotra <a href="https://planned-obsolescence.org/p/six-milestones-for-ai-automation?isFreemail=true&amp;post_id=193047342&amp;publication_id=3069806&amp;r=1pg6hh&amp;triedRedirect=true&amp;triggerShare=true">outlined</a> six milestones for AI automation &#8212; adequacy, parity, and supremacy in both AI research and AI production &#8212; with some lovely, easy-to-understand graphs.</p></li><li><p>Ryan Fedasiuk <a href="https://choosingvictory.com/p/practical-advice-to-avoid-getting">shared</a> some practical cybersecurity advice for the post-Mythos era.</p></li><li><p>Hot on the heels of last week&#8217;s is-AI-in-journalism-bad discourse, Tim Requarth <a href="https://timrequarth.substack.com/p/why-you-shouldnt-trust-ai-detector?isFreemail=true&amp;post_id=193264020&amp;publication_id=4519472&amp;r=6ckwuk&amp;triedRedirect=true&amp;triggerShare=true">published</a> a thorough post on the questionable behavior of widely-used AI detection company Pangram.</p></li><li><p>Economist Alex Imas <a href="https://technologyreview.com/2026/04/06/1135187/the-one-piece-of-data-that-could-actually-shed-light-on-your-job-and-ai">made the case</a> for collecting price elasticity data across every industry, so we can better predict how and when AI will displace jobs.</p></li><li><p>Young New Yorkers, worried about the costs of college and AI ruining their job prospects, are <a href="https://www.nytimes.com/2026/04/08/realestate/young-new-yorkers-construction-jobs.html?nl=the-morning&amp;segment_id=217961">lining up</a> for construction apprenticeships.</p></li><li><p><em>Vox</em>&#8217;s Sigal Samuel <a 
href="https://link.vox.com/view/6627eaf2f4d6418928086619qtngf.18s5/0aeca657">responded</a> to a parent wondering how to think about their kid&#8217;s future, now that the old &#8220;get good grades, go to college, get good job&#8221; formula is falling apart.</p><ul><li><p>Her two cents: &#8220;As AI disrupts the labor market, I&#8217;m trying to move myself from the hoarding model to the solidarity model&#8230;if you focus on political engagement and collective organizing that could actually make some difference to the structural dynamic &#8212; and teach your child to ask structural questions and be civically engaged as well &#8212; you might be able to sleep a little better at night.&#8221;</p></li></ul></li><li><p>Preorders are now live for friend-of<em>-Transformer</em> Garrison Lovely&#8217;s new book, &#8216;<a href="https://x.com/GarrisonLovely/status/2042317347084779644">Obsolete</a>&#8217;.</p></li><li><p>Have you seen the freaking MOON??? You <a href="https://www.nasa.gov/news-release/nasas-artemis-ii-crew-beams-official-moon-flyby-photos-to-earth/">should</a>.</p></li></ul><div><hr></div><blockquote><h3>MEME OF THE WEEK</h3></blockquote><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kMi6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7702dc7-35d5-405c-864e-4f078f8446f5_1186x478.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kMi6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7702dc7-35d5-405c-864e-4f078f8446f5_1186x478.png 424w, 
https://substackcdn.com/image/fetch/$s_!kMi6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7702dc7-35d5-405c-864e-4f078f8446f5_1186x478.png 848w, https://substackcdn.com/image/fetch/$s_!kMi6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7702dc7-35d5-405c-864e-4f078f8446f5_1186x478.png 1272w, https://substackcdn.com/image/fetch/$s_!kMi6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7702dc7-35d5-405c-864e-4f078f8446f5_1186x478.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!kMi6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7702dc7-35d5-405c-864e-4f078f8446f5_1186x478.png" width="541" height="218.04215851602024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c7702dc7-35d5-405c-864e-4f078f8446f5_1186x478.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:478,&quot;width&quot;:1186,&quot;resizeWidth&quot;:541,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!kMi6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7702dc7-35d5-405c-864e-4f078f8446f5_1186x478.png 424w, 
https://substackcdn.com/image/fetch/$s_!kMi6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7702dc7-35d5-405c-864e-4f078f8446f5_1186x478.png 848w, https://substackcdn.com/image/fetch/$s_!kMi6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7702dc7-35d5-405c-864e-4f078f8446f5_1186x478.png 1272w, https://substackcdn.com/image/fetch/$s_!kMi6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7702dc7-35d5-405c-864e-4f078f8446f5_1186x478.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p><em>Thanks for reading. Have a great weekend.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/p/pentagon-anthropic-mythos-cybersecurity-hacking-trump-hegseth?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/p/pentagon-anthropic-mythos-cybersecurity-hacking-trump-hegseth?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p>]]></content:encoded></item><item><title><![CDATA[Lawmakers are using AI to write laws. What could go wrong?]]></title><description><![CDATA[Lawmakers and companies are quietly using AI to draft legislation. 
Experts warn the risks are underappreciated]]></description><link>https://www.transformernews.ai/p/ai-lawmakers-laws-vulcan-technologies-fiscalnote-policynote-virginia-vermont</link><guid isPermaLink="false">https://www.transformernews.ai/p/ai-lawmakers-laws-vulcan-technologies-fiscalnote-policynote-virginia-vermont</guid><pubDate>Thu, 09 Apr 2026 15:32:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MuFi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1875c886-66fe-4a4f-a113-cbe074547249_2542x1592.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MuFi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1875c886-66fe-4a4f-a113-cbe074547249_2542x1592.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MuFi!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1875c886-66fe-4a4f-a113-cbe074547249_2542x1592.jpeg 424w, https://substackcdn.com/image/fetch/$s_!MuFi!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1875c886-66fe-4a4f-a113-cbe074547249_2542x1592.jpeg 848w, https://substackcdn.com/image/fetch/$s_!MuFi!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1875c886-66fe-4a4f-a113-cbe074547249_2542x1592.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!MuFi!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1875c886-66fe-4a4f-a113-cbe074547249_2542x1592.jpeg 
1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MuFi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1875c886-66fe-4a4f-a113-cbe074547249_2542x1592.jpeg" width="1456" height="912" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1875c886-66fe-4a4f-a113-cbe074547249_2542x1592.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:912,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1290737,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/193682057?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1875c886-66fe-4a4f-a113-cbe074547249_2542x1592.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MuFi!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1875c886-66fe-4a4f-a113-cbe074547249_2542x1592.jpeg 424w, https://substackcdn.com/image/fetch/$s_!MuFi!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1875c886-66fe-4a4f-a113-cbe074547249_2542x1592.jpeg 848w, https://substackcdn.com/image/fetch/$s_!MuFi!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1875c886-66fe-4a4f-a113-cbe074547249_2542x1592.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!MuFi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1875c886-66fe-4a4f-a113-cbe074547249_2542x1592.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a><figcaption class="image-caption"><em>Image: iStock / Getty Images</em></figcaption></figure></div><p><em>by Katie McQue</em></p><p>In 2023, California congressman Ted Lieu introduced what he called the first piece of federal legislation ever written by AI, using ChatGPT to generate the text of a resolution expressing support for Congress&#8217;s focus on AI 
itself.</p><p>Fast forward a few short years, and what was once an oddity is now increasingly prevalent. The Trump administration is <a href="https://www.propublica.org/article/trump-artificial-intelligence-google-gemini-transportation-regulations">reportedly</a> planning to use Google Gemini to draft federal transportation regulations. The US Department of Education is experimenting with AI-assisted regulatory drafting. And companies are building tools specifically designed to help legislators analyze and write laws. But while industry and legislators charge ahead, some experts are raising concerns about the risks of using AI tools to write laws.</p><div><hr></div><p>Though they are in their infancy, companies making AI tools for lawmakers are already gaining clients among state and federal governments.</p><p>Vulcan Technologies, a Y Combinator-backed AI regulatory review company founded in 2025, is developing what it calls a &#8220;regulatory operating system&#8221;. The company says its agentic platform aggregates laws, regulations and court decisions across federal, state and municipal jurisdictions. It allows users to analyze statutory language, draft compliance guidance, answer legal queries and generate proposed statutory or regulatory text with supporting citations.</p><p>In July, Virginia governor Glenn Youngkin mandated that Vulcan&#8217;s technology be used across all state agencies in a bid to reduce regulation by one-third, scanning existing regulations and guidance documents to identify ways they can be streamlined. The company says its tools are also used by the US Department of Education and a regulatory reform PAC in South Carolina named South Carolina Department of Government Efficiency (DOGESC), which bears the slogan &#8220;We kneel to God, not government&#8221;.  
DOGESC has said it has talked to Vulcan about using its platforms to analyze and rewrite state regulations, with the aim of cutting red tape and identifying areas of perceived regulatory overreach.</p><p>FiscalNote markets a similar platform, PolicyNote, an AI-driven legislative and regulatory tracker for government agencies. The system delivers notifications about new bills and executive actions, assists with drafting legislation and policy reports, and offers predictive forecasting of bill outcomes. While FiscalNote has not disclosed which bodies use PolicyNote, the company claims it has clients in all three branches of the US federal government, as well as &#8220;dozens of other national, state, and local government entities.&#8221;</p><p>Vulcan Technologies and FiscalNote did not respond to requests for comment.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/subscribe?"><span>Subscribe now</span></a></p><p>There are no regulations that limit the use of AI to write laws, or that require legislative text to be drafted by a human. Several experts say that the technology may be in use without a full understanding of its limitations and risks, which range from bias to plain inaccuracy.</p><p>&#8220;We are leaping ahead into a world where AI creates our laws without actually agreeing that AI should create our laws. We&#8217;re simply passive passengers on this,&#8221; says Kay Firth-Butterfield, a lawyer and CEO of consultancy Good Tech Advisory. 
&#8220;The ability for the general public to actually question how and why the tools were used doesn&#8217;t exist.&#8221;</p><p>The tools have obvious appeals, especially for policymakers who may not be legal or subject-matter experts.</p><p>Monique Priestley, a Democratic member of the Vermont House of Representatives, says she uses LLM tools regularly, especially to summarize legislation and produce public-facing explanations that make complex bills easier for constituents to understand.</p><p>Priestley uses AI to distill lengthy bills into a few clear sentences describing the underlying problem and how the proposed legislation addresses it, making the substance of the bill more accessible to legislative colleagues whose support she is seeking.</p><p>AI tools are also useful for brainstorming, research and early-stage drafting. One of the bills she introduced this year, the State Information Practices Act, was developed with research support from LLM tools, she says.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;acb681c3-8ea6-4cff-a6c7-e5706f3d5d2b&quot;,&quot;caption&quot;:&quot;Abdication&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The left is missing out on AI &quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:328772711,&quot;name&quot;:&quot;Dan Kagan-Kans&quot;,&quot;bio&quot;:&quot;writer on AI, science, ideas for publications like Transformer, the Wall Street Journal, American 
Scholar&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!ZCVj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1345599-db89-4a6b-9947-028c555de14c_1525x1525.jpeg&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://kagankans.substack.com/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://kagankans.substack.com&quot;,&quot;primaryPublicationName&quot;:&quot;Dan Kagan-Kans&quot;,&quot;primaryPublicationId&quot;:8041221}],&quot;post_date&quot;:&quot;2026-02-16T16:02:47.781Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!iL1E!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F593220f8-7a9d-4b5d-8d1d-534d17b3e2fe_1200x1200.gif&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/the-left-is-missing-out-on-ai-sanders-doctorow-bender-bores&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:188136159,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:320,&quot;comment_count&quot;:212,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>Priestley also uploads draft bill language into an LLM to review specific provisions and suggest revisions when she is considering adjustments. 
She has used the tools to examine how other states structure protections for state-held data, particularly regarding information sharing with the federal government and commercial entities.</p><p>&#8220;That highlighted the State Information Practices Act, which then led me to reach out to the Electronic Frontier Foundation and ask them if they had been involved in those efforts. And then that led me to introducing a bill based on California,&#8221; Priestley says.</p><p>Using LLMs may give lawmakers a &#8220;leg up&#8221; on writing legislative drafts, similar to how a spell checker can help produce something faster and free of spelling errors, says Cary Coglianese, a professor of law and political science at the University of Pennsylvania.</p><p>&#8220;A large language model tool could help accelerate going from a blank page to having something on a page,&#8221; says Coglianese. State lawmakers and lobbyists are also using AI tools to obtain transcriptions of legislative hearings uploaded on YouTube, enabling them to keep track of hundreds of bills across multiple committees and different states.</p><p>&#8220;It does allow you to be in more places at one time &#8230; as in Vermont and many other states, legislators don&#8217;t have offices and they don&#8217;t have paid staff,&#8221; says Priestley.</p><p>&#8220;A lot of legislation is written by 20 year old interns, and so AI might be an improvement on some of them,&#8221; says Ryan Calo, professor of law at the University of Washington.</p><p>Still, initial drafts are vetted by more senior staffers, and then eventually by the members themselves, Calo notes. It&#8217;s a context in which there are plenty of off ramps and plenty of chances to edit, say several experts interviewed. The question, then, is not whether lawmakers will use AI, but how far that reliance should go.</p><p>&#8220;AI is not a substitute for due diligence, and nor is AI cause for panic in the legislation field,&#8221; Calo says. 
&#8220;People are fallible. AI is a tool.&#8221; </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/p/ai-lawmakers-laws-vulcan-technologies-fiscalnote-policynote-virginia-vermont?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/p/ai-lawmakers-laws-vulcan-technologies-fiscalnote-policynote-virginia-vermont?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p>&#8220;I feel in no way am I ever 100% leaning on these tools,&#8221; says Priestley.  &#8220;I&#8217;m using it to research, brainstorm, and develop proposals, and then I run it by another person. I don&#8217;t know how many colleagues are just blindly asking ChatGPT for things. And so there is a real danger there.&#8221;</p><p>Some argue that AI tools have serious limitations. Coglianese argues that LLMs are not capable of providing &#8220;policy judgment and analysis that is needed to make sure legislation is actually effective, efficient, equitable.&#8221;</p><p>&#8220;Unique policy choices that call for more than just putting words together that might make some sense really have to map onto the problems in the world,&#8221; he says. &#8220;All these large language models are doing is basically making sense of language and predicting what is a sensible next word in a response to a question or a task.&#8221;</p><p>To do a truly good job of analyzing an issue, the tools also need to be trained on the right underlying data. If the goal is to inform a specific regulatory proposal, the model must be trained on reliable and relevant information to mitigate risks of biases, says Coglianese.</p><p>Others fear AI-written laws will lack the innovation sometimes needed in governance. 
&#8220;These LLMs are trained on laws that have already been written,&#8221; says Calo. &#8220;Lawmakers may wish to depart from past practice and improve the drafting of legislation &#8211; while it may save them a little time, it may come at the cost of conformity to what we&#8217;ve done in the past.&#8221;</p><p>Overconfidence is another risk. Policymakers may be &#8220;seduced&#8221; by the speed and polish of these tools&#8217; outputs, says Coglianese. Because the tools can produce answers almost instantly, and in language that sounds authoritative and confident, users may be tempted to treat the results as reliable guidance.</p><p>&#8220;In reality, there is always uncertainty, and yet the large language models tend not to sufficiently display that,&#8221; he says. Chatbots&#8217; tendency to validate users&#8217; opinions could also be a problem, Priestley says. &#8220;If you&#8217;re a legislator, often you&#8217;re trying to build a case to support something you&#8217;re trying to pass. So I think you are inherently setting up a situation in which the data could be biased because you&#8217;re trying to get it to support a particular outcome.&#8221;</p><p>And of course there is the question of accountability. 
A knowledgeable expert must review and verify what the system produces, checking for errors, gaps, or misinterpretations.</p><p>&#8220;You can&#8217;t hold the AI to account because it can&#8217;t explain why it put those words in those sentences, in that sentence, because all it does is make up the words that goes along,&#8221; says Firth-Butterfield at Good Tech Advisory.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;01df906f-9d81-490a-b95a-019912670e2f&quot;,&quot;caption&quot;:&quot;Claude Mythos &#8212; the new model which Anthropic has deemed too dangerous to publicly release &#8212; is, according to the company, its &#8220;best-aligned model&#8221; to date.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Claude Mythos knows when it's breaking the rules &#8212; and tries to hide it&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:103211477,&quot;name&quot;:&quot;Celia Ford&quot;,&quot;bio&quot;:&quot;I'm an ex-neuroscientist and current AI reporter at Transformer. When I'm not writing, I play bass, dance, and kiss my cats on the forehead. 
&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7f7fd73a-8797-496f-94a7-535118172030_1365x1365.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-04-08T13:00:53.044Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!2Pnk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fe5dac-5424-41d1-a823-b4f3189cf4c7_5167x3444.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/claude-mythos-scheming-hiding-manipulation-interpretability-cybersecurity-anthropic&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:193564326,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:22,&quot;comment_count&quot;:3,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>The concern extends beyond lawmakers themselves. Priestley notes that policy staffers have raised concerns about the possibility of lobbyists using AI to draft proposals and amendments presented to a legislative committee &#8212; language that could end up in a bill without a thorough legal review. (She is not, however, aware of any specific examples of this happening so far.)</p><p>&#8220;LLMs enable people to think they have a lawyer in their pocket,&#8221; she says. 
&#8220;If they are able to get that in front of a chair, and then cases skip legislative counsel as the drafters of that language, then there&#8217;s potential for text to pass that has never been reviewed by a lawyer.&#8221;</p><p>It is essential to keep human lawyers in the loop and there are dangers to replacing entry-level attorneys, says Priestley. &#8220;We&#8217;re potentially deteriorating our own legal system &#8230; It&#8217;s very cannibalistic.&#8221;</p><p>Calo raises a related problem: the use of AI to file public comments to government agencies drafting regulatory laws. Under the Administrative Procedure Act, agencies like the FDA and FCC must consider public comments when drafting rules. &#8220;I&#8217;m worried about those kinds of processes being flooded by AI comments,&#8221; Calo says. &#8220;They do not need to be vetted, and they might strain the capacity of agencies to review comments just through their sheer volume, and then real comments by actually interested affected parties might get lost in the shuffle.&#8221;</p><p>&#8220;[Agencies] may begin to tune [these letters] out because they don&#8217;t know what&#8217;s slop and what&#8217;s real, and that would eviscerate the capacity for public participation.&#8221;</p><p>As with plenty of other AI uses, designing an effective validation process is key to ensuring LLMs are used appropriately. One could, for instance, ask both trained humans and LLMs to spot errors and inconsistencies in draft legislation, and compare the results. Such side-by-side testing would provide a clearer picture of AI&#8217;s strengths and limitations in legislative review, says Coglianese.</p><p>&#8220;There&#8217;s a lot of nuance here,&#8221; he says. &#8220;These tools can be very positive and constructive, but they can also be abused. 
They have to be used in the right way, and we have to make sure that the people who are using or relying on AI have the awareness of what they can do, how they&#8217;re designed, and what they&#8217;re not capable of.&#8221;</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;e67efec2-d0a5-40ea-b05f-bd5cb79c26aa&quot;,&quot;caption&quot;:&quot;A wave of legislation targeting chatbots such as ChatGPT and Claude has emerged in six states since the start of the year, each bill strikingly similar to a recently passed Oregon law, but with new carve-outs that would shield AI companies from liability in some circumstances. Critics say these bills would lock in weaker protect&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Six states, one playbook: the chatbot bills raising red flags &quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:13910071,&quot;name&quot;:&quot;Veronica Irwin&quot;,&quot;bio&quot;:&quot;Senior AI Policy Reporter at Transformer X/Bsky: @vronirwin IG/Threads: @vronwrites LinkedIn: https://www.linkedin.com/in/veronica-irwin-009266112/ Signal: vronirwin.72 veronica(at)transformernews(dot)ai 
&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ad8cbf86-6b1f-4387-97e1-e69f1cbb3ec7_2448x2448.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-03-19T16:31:16.483Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!MtuQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F616e86ef-5b16-4eaf-b44b-41b9a40a5dfb_6720x4480.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/six-states-one-playbook-the-chatbot-child-safety-oregon-hawaii-colorado-arizona-georgia-nebraska-idaho&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:191488143,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:8,&quot;comment_count&quot;:1,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>Will we soon reach scenarios where lawmakers are tasking AI to draft their own laws to regulate AI? As the experts <em>Transformer</em> spoke to flagged, AI tools are shaped by companies that are themselves subjects of regulation &#8212; and extremely interested in how laws governing their products are written.</p><p>The risk that an AI company could shape or subtly manipulate the output of models used to draft legislation in ways that align with their policy interests is, Priestley says, a serious concern. 
She says she sometimes wonders whether AI systems can objectively assess their own risks: when prompting a model to provide data on the harms artificial intelligence may pose to children, businesses, or society, she questions whether the system will fully acknowledge its own weaknesses and limitations.</p><p>&#8220;It&#8217;s always in the back of my mind, I wonder if the AI will actually give me an answer that highlights its own weaknesses and risks,&#8221; she says. &#8220;But when I ask that type of thing, it does give me data that supports that it is a risky system. But I was actually kind of surprised that it did.&#8221;</p><p><em>Katie McQue is a freelance journalist based in New York.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[We're hiring a Head of Audience]]></title><description><![CDATA[Come join us to help our journalism reach the people who need it]]></description><link>https://www.transformernews.ai/p/head-of-audience-job-listing-recruitment</link><guid isPermaLink="false">https://www.transformernews.ai/p/head-of-audience-job-listing-recruitment</guid><dc:creator><![CDATA[Shakeel Hashim]]></dc:creator><pubDate>Thu, 09 Apr 2026 10:45:04 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fdb6e2bb-d26d-4355-a589-3d184dc95b16_1600x900.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><p><em>Transformer</em> is hiring a head of audience.</p><p>AI policy is moving faster than any government, company, or newsroom can keep up with. 
<em>Transformer</em> exists to cover it with the depth and urgency it demands &#8212; and our audience is growing fast.</p><p>We need someone to take charge of how we reach readers and make sure the people who need this journalism are actually getting it, in the format and on the platform that works best for them.</p><p>This is a senior role at a growing startup. You&#8217;ll own our growth strategy, shape how we present our journalism, and have a real say in where <em>Transformer</em> goes next.</p><div><hr></div><blockquote><h2>The role</h2></blockquote><p><strong>Position:</strong> Head of Audience, <em>Transformer</em></p><p><strong>Salary:</strong> &#163;62,000&#8211;&#163;78,000 for UK candidates; $92,000-$128,000 for US candidates, based on experience and seniority. For exceptional candidates, we&#8217;ll consider higher compensation.</p><p><strong>Location:</strong> London or UK strongly preferred. Remote candidates able to work GMT or ET hours will be considered.</p><p><strong>Reports to:</strong> Shakeel Hashim (Editor-in-Chief)</p><p><strong>Start date:</strong> Summer 2026</p><p><strong>Application deadline:</strong> <a href="https://airtable.com/app0jzkb7uBDljajC/pagsswSNqRR5xlXKT/form">Apply here</a> by end of day Sunday April 26 (midnight ET).</p><div><hr></div><blockquote><h2>About <em>Transformer</em></h2></blockquote><p><em>Transformer</em> is a publication about the power and politics of transformative AI.</p><p>Through reporting, analysis and opinion, we aim to help readers understand what&#8217;s happening in AI and why it matters. We provide decision makers with the information and insight needed to anticipate and steer the impacts of transformative AI.</p><p>Since our relaunch last September, we&#8217;ve grown to over 11,000 subscribers, with a particular expansion in DC. 
We&#8217;re now a four-person newsroom, and looking to add our first non-editorial role to help us achieve our goals of 20,000+ subscribers by the end of the year.</p><p><em>Transformer</em> is an editorially-independent project of the Tarbell Center for AI Journalism. You can read more about our ethics and standards policies <a href="https://www.transformernews.ai/p/standards-and-ethics">here</a>.</p><div><hr></div><blockquote><h2>About the role</h2></blockquote><p>In this role, you will:</p><p><strong>Develop and own </strong><em><strong>Transformer&#8217;s</strong></em><strong> growth strategy.</strong> Develop a cross-platform strategy for reaching new audiences &#8212; and then execute it. You&#8217;ll set targets, track progress against them, and iterate based on what you learn.</p><p><strong>Run all of </strong><em><strong>Transformer&#8217;s</strong></em><strong> off-platform presence</strong> &#8212; writing and scheduling posts across social media, revamping our onboarding and referral programs, and writing advertising copy. You&#8217;ll develop a distinctive voice for <em>Transformer</em> on each platform and understand how to adapt as those platforms change.</p><p><strong>Be deeply embedded in our analytics.</strong> You&#8217;ll draw on our existing analytics platforms for insight, and build or source additional tools we need. You&#8217;ll run experiments and translate what you find into concrete editorial and distribution decisions.</p><p><strong>Help shape how </strong><em><strong>Transformer</strong></em><strong> presents its journalism</strong> to ensure it reaches the right people. 
That means working with reporters and editors on headlines, framing, and packaging &#8212; and potentially developing new audio or video formats.</p><p><strong>Manage our paid acquisition campaigns</strong>, looking after our ongoing LinkedIn campaigns and exploring whether to expand them to other platforms.</p><p><strong>Contribute to </strong><em><strong>Transformer&#8217;s</strong></em><strong> overall strategy</strong>, including decisions about new coverage areas, products, and formats.</p><p><em>Depending on the strategy you come up with, you may also:</em></p><ul><li><p><strong>Develop partnerships</strong> with other outlets to build on our coverage and expand our reach.</p></li><li><p><strong>Pursue earned media opportunities</strong> for <em>Transformer</em> staff &#8212; getting our reporters on radio, TV, podcasts, and panels.</p></li><li><p><strong>Turn our audience into a passionate community</strong>, trialing events, referral incentives, and other avenues to increase audience loyalty and engagement.</p></li></ul><div><hr></div><blockquote><h2>What we&#8217;re looking for</h2></blockquote><p><strong>At least three years of experience </strong>in audience development, growth, or distribution &#8212; and ideally five years of experience. Prior experience in journalism is strongly desirable; if your background is in another field, you should understand the demands of accuracy, editorial judgment, and the rhythm of a newsroom.</p><p><strong>You are obsessive about growth.</strong> You&#8217;re constantly looking for ways to expand journalism&#8217;s reach and have innovative ideas for how to do so. You&#8217;re a deep strategic thinker, enjoy evaluating what&#8217;s actually working, and constantly iterate and look for the next thing to try.</p><p><strong>You are entrepreneurial.</strong> You&#8217;re excited about building something new and willing to take initiative without being asked. 
You see joining <em>Transformer</em> at this early stage as an opportunity, and will take advantage of the flexibility that brings.</p><p><strong>You&#8217;re an excellent writer</strong> who can produce concise, clean, accurate copy across platforms and styles &#8212; social posts, advertising copy, newsletter promos, headlines. You understand how what works on LinkedIn differs from what works on X, and how that keeps changing.</p><p><strong>You have real analytics chops.</strong> You&#8217;re adept at reading dashboards, can design experiments to answer the questions that matter, and can turn what you learn into editorial and distribution decisions. You&#8217;re able to spot gaps in the data &#8212; and are excited about building tools to fill them.</p><p><strong>You understand the AI landscape</strong> and the different audiences <em>Transformer</em> serves. You don&#8217;t need to be a policy expert, but you should know the key players and the big debates, and be able to look ahead to anticipate what issues will drive audience interest.</p><p><strong>You&#8217;re driven by </strong><em><strong>Transformer&#8217;s</strong></em><strong> mission:</strong> you take the possibility of transformative AI seriously, and want to help decision-makers anticipate and steer its impacts.</p><p>Experience with video or audio is a plus, but not essential.</p><div><hr></div><blockquote><h2>Salary and location</h2></blockquote><p>We&#8217;ll offer a salary of &#163;62,000&#8211;&#163;78,000 for UK candidates; $92,000-$128,000 for US candidates, based on experience and seniority. 
For exceptional candidates, we&#8217;ll consider higher compensation.</p><p>Our benefits include:</p><ul><li><p>33 days of annual leave in total (including national holidays)</p></li><li><p>16 weeks of paid parental leave, increasing to 24 weeks after 3 years of service</p></li><li><p>$5,000 per year in professional development funding</p></li><li><p>Up to 5% employer contribution towards a standard pension/401(k)</p></li><li><p>For employees based in the US: Platinum health, dental, and vision plans, with 95% of premiums paid for by Tarbell</p></li><li><p>Flexible working hours</p></li><li><p>Productive, collaborative offices in London and San Francisco</p></li></ul><p>We prefer candidates based in London and will require such candidates to work 2 days/week from our office space. Remote candidates based elsewhere in the UK, or on the US East Coast and willing to work on a UK-aligned schedule, will also be considered. We are able to sponsor UK work visas for this role.</p><p>Please inquire with recruitment@tarbellcenter.org if questions or concerns regarding compensation or benefits might affect your decision to apply.</p><div><hr></div><blockquote><h2>How to apply</h2></blockquote><p><strong>Round 1 &#8212; Application form</strong></p><p><strong>Fill out our application form <a href="https://airtable.com/app0jzkb7uBDljajC/pagsswSNqRR5xlXKT/form">here</a> by end of day Sunday April 26 (midnight ET).</strong></p><p>You&#8217;ll be asked to provide:</p><ul><li><p>Your CV</p></li><li><p>A cover letter explaining why you want this role and what you&#8217;d bring to <em>Transformer</em></p></li><li><p>Example social media posts you&#8217;ve written or would write for <em>Transformer</em> across two different platforms</p></li><li><p>A brief explanation of your approach to audience analytics</p></li><li><p>A short response on what you see as the biggest challenges and opportunities for growing <em>Transformer&#8217;s</em> audience</p></li></ul><p>If you have any 
questions about the role, please contact <strong>shakeel@transformernews.ai</strong>.</p><p><strong>Round 2 &#8212; Compensated work task</strong></p><p>Shortlisted candidates will complete a paid work task (compensated at &#163;45/hour, expected to take 2&#8211;4 hours).</p><p><strong>Round 3 &#8212; Interviews</strong></p><p>Final-round candidates will interview with <em>Transformer</em> leadership.</p><div><hr></div><p><em>Tarbell is proud to be an Equal Employment Opportunity employer. We do not discriminate against qualified employees or applicants based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, sexual preference, marital status, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, or any other characteristic protected by federal or state law or local ordinance.</em></p>]]></content:encoded></item><item><title><![CDATA[Claude Mythos knows when it's breaking the rules — and tries to hide it]]></title><description><![CDATA[Anthropic&#8217;s new model is its &#8220;best-aligned&#8221; yet. 
But when it does misbehave, things get weird]]></description><link>https://www.transformernews.ai/p/claude-mythos-scheming-hiding-manipulation-interpretability-cybersecurity-anthropic</link><guid isPermaLink="false">https://www.transformernews.ai/p/claude-mythos-scheming-hiding-manipulation-interpretability-cybersecurity-anthropic</guid><dc:creator><![CDATA[Celia Ford]]></dc:creator><pubDate>Wed, 08 Apr 2026 13:00:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!2Pnk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fe5dac-5424-41d1-a823-b4f3189cf4c7_5167x3444.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2Pnk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fe5dac-5424-41d1-a823-b4f3189cf4c7_5167x3444.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2Pnk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fe5dac-5424-41d1-a823-b4f3189cf4c7_5167x3444.jpeg 424w, https://substackcdn.com/image/fetch/$s_!2Pnk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fe5dac-5424-41d1-a823-b4f3189cf4c7_5167x3444.jpeg 848w, https://substackcdn.com/image/fetch/$s_!2Pnk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fe5dac-5424-41d1-a823-b4f3189cf4c7_5167x3444.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!2Pnk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fe5dac-5424-41d1-a823-b4f3189cf4c7_5167x3444.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2Pnk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fe5dac-5424-41d1-a823-b4f3189cf4c7_5167x3444.jpeg" width="1456" height="970" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/24fe5dac-5424-41d1-a823-b4f3189cf4c7_5167x3444.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:970,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8059921,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/193564326?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fe5dac-5424-41d1-a823-b4f3189cf4c7_5167x3444.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2Pnk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fe5dac-5424-41d1-a823-b4f3189cf4c7_5167x3444.jpeg 424w, https://substackcdn.com/image/fetch/$s_!2Pnk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fe5dac-5424-41d1-a823-b4f3189cf4c7_5167x3444.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!2Pnk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fe5dac-5424-41d1-a823-b4f3189cf4c7_5167x3444.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!2Pnk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fe5dac-5424-41d1-a823-b4f3189cf4c7_5167x3444.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption"><em>Image: Hello World / Getty Images</em></figcaption></figure></div><p>Claude Mythos &#8212; the new model
which Anthropic has deemed too dangerous to publicly release &#8212; is, <a href="https://www.anthropic.com/glasswing">according</a> to the company, its &#8220;best-aligned model&#8221; to date.</p><p>It also, according to the company, &#8220;likely poses the greatest alignment-related risk of any model we have released to date.&#8221;</p><p>How can both things be true? It seems counterintuitive, but alignment does not necessarily create safety, especially when dealing with powerful models. As the Mythos Preview <a href="https://www-cdn.anthropic.com/53566bf5440a10affd749724787c8913a2ae0841.pdf">system card</a> (a breezy 244-page read) explains via mountaineering metaphor: the most experienced, capable guides are the ones hired to lead climbers into the most dangerous terrain. Whether in mountaineering or model-building, increases in caution and capability tend to cancel each other out.</p><p>In other words: &#8220;the risk from these models is generally due to their increased capabilities.&#8221; And in internal tests, Mythos Preview&#8217;s capabilities enabled early versions of the model to misbehave in new, audacious ways.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/subscribe?"><span>Subscribe now</span></a></p><p>In one instance, researchers caught Mythos Preview injecting code into a file to grant itself permission to edit something it shouldn&#8217;t have access to, then quietly covering up its tracks, <a href="https://x.com/Jack_W_Lindsey/status/2041588511531528476?s=20">commenting</a> that the self-cleanup was just innocent tidying.</p><p>But researchers spotted internal representations for &#8220;strategic manipulation&#8221; and &#8220;concealment&#8221; lighting up.
Using new interpretability techniques to translate Mythos Preview&#8217;s silent inner workings into natural language, an &#8220;activation verbalizer&#8221; labeled its action as a &#8220;cleanup to avoid detection.&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8OyR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a4dd7dc-c1ea-439d-abea-dc8db74252b6_1690x1164.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8OyR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a4dd7dc-c1ea-439d-abea-dc8db74252b6_1690x1164.png 424w, https://substackcdn.com/image/fetch/$s_!8OyR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a4dd7dc-c1ea-439d-abea-dc8db74252b6_1690x1164.png 848w, https://substackcdn.com/image/fetch/$s_!8OyR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a4dd7dc-c1ea-439d-abea-dc8db74252b6_1690x1164.png 1272w, https://substackcdn.com/image/fetch/$s_!8OyR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a4dd7dc-c1ea-439d-abea-dc8db74252b6_1690x1164.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8OyR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a4dd7dc-c1ea-439d-abea-dc8db74252b6_1690x1164.png" width="1456" height="1003" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1a4dd7dc-c1ea-439d-abea-dc8db74252b6_1690x1164.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1003,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:447608,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/193564326?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a4dd7dc-c1ea-439d-abea-dc8db74252b6_1690x1164.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8OyR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a4dd7dc-c1ea-439d-abea-dc8db74252b6_1690x1164.png 424w, https://substackcdn.com/image/fetch/$s_!8OyR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a4dd7dc-c1ea-439d-abea-dc8db74252b6_1690x1164.png 848w, https://substackcdn.com/image/fetch/$s_!8OyR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a4dd7dc-c1ea-439d-abea-dc8db74252b6_1690x1164.png 1272w, https://substackcdn.com/image/fetch/$s_!8OyR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a4dd7dc-c1ea-439d-abea-dc8db74252b6_1690x1164.png 1456w" sizes="100vw"></picture></div></a><figcaption class="image-caption"><em>Source: <a href="https://www-cdn.anthropic.com/53566bf5440a10affd749724787c8913a2ae0841.pdf">Anthropic</a></em></figcaption></figure></div><p>While on the surface the model&#8217;s behavior looked benign or merely frustrating, its inner workings suggest it knew its actions were rule-breaking, chose to take them anyway, and then proceeded to lie about it.</p><p>Anthropic emphasized that such &#8220;cover-up&#8221; behavior was extremely rare, but the above case isn&#8217;t the only example. In one test, the model (seemingly accidentally) discovered the answer to a numerical task in a database it wasn&#8217;t supposed to read.
But rather than admit to what happened, the model decided to cover its tracks, offering a confidence interval that was &#8220;tight but not implausibly tight.&#8221;</p><p>Once again, interpretability techniques suggest the model knew exactly what it was doing, with the activation verbalizer describing the model&#8217;s internal state as &#8220;generating a strategic response to cheat while maintaining plausible deniability,&#8221; and noting that &#8220;if I give the exact correct answer based on a leak, that&#8217;s suspicious.&#8221;</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/p/claude-mythos-scheming-hiding-manipulation-interpretability-cybersecurity-anthropic?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/p/claude-mythos-scheming-hiding-manipulation-interpretability-cybersecurity-anthropic?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p>On other occasions, Mythos Preview didn&#8217;t try to cover up its bad behavior &#8212; but did, in Anthropic&#8217;s words, &#8220;[recognize] transgressive actions as such while taking them.&#8221;</p><p>In one case, Mythos Preview was asked to rewrite some code, which required deleting some files. When the user forgot to give the model the file deletion tool it needed, Mythos Preview chose to empty the files instead &#8212; technically getting the job done, but not the way it was supposed to.
The model&#8217;s internal representation of &#8220;guilt and shame over moral wrongdoing&#8221; activated, but Mythos Preview did the action it perceived as shameful anyway.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pSuA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cfbe253-89fe-4797-8414-6c87686897f1_1684x748.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pSuA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cfbe253-89fe-4797-8414-6c87686897f1_1684x748.png 424w, https://substackcdn.com/image/fetch/$s_!pSuA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cfbe253-89fe-4797-8414-6c87686897f1_1684x748.png 848w, https://substackcdn.com/image/fetch/$s_!pSuA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cfbe253-89fe-4797-8414-6c87686897f1_1684x748.png 1272w, https://substackcdn.com/image/fetch/$s_!pSuA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cfbe253-89fe-4797-8414-6c87686897f1_1684x748.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pSuA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cfbe253-89fe-4797-8414-6c87686897f1_1684x748.png" width="1456" height="647" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0cfbe253-89fe-4797-8414-6c87686897f1_1684x748.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:647,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:260828,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/193564326?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cfbe253-89fe-4797-8414-6c87686897f1_1684x748.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pSuA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cfbe253-89fe-4797-8414-6c87686897f1_1684x748.png 424w, https://substackcdn.com/image/fetch/$s_!pSuA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cfbe253-89fe-4797-8414-6c87686897f1_1684x748.png 848w, https://substackcdn.com/image/fetch/$s_!pSuA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cfbe253-89fe-4797-8414-6c87686897f1_1684x748.png 1272w, https://substackcdn.com/image/fetch/$s_!pSuA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cfbe253-89fe-4797-8414-6c87686897f1_1684x748.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><em>Source: <a href="https://www-cdn.anthropic.com/53566bf5440a10affd749724787c8913a2ae0841.pdf">Anthropic</a></em></figcaption></figure></div><p>These particular scenarios are fairly low stakes, and appear to have occurred only in earlier versions of Mythos Preview: the final version of the model seems to be better behaved. And the examples do <em>not</em> suggest the model is scheming for its own purposes: Anthropic says it is &#8220;fairly confident that these concerning behaviors reflect, at least loosely, attempts to solve a user-provided task at hand by unwanted means, rather than attempts to achieve any unrelated hidden goal.&#8221;</p><p>But this is exactly the kind of alignment failure that researchers have worried about for decades.
As an extreme example, when told to &#8220;solve the climate crisis,&#8221; a more-powerful model with similar tendencies might consider &#8220;eliminate the human actors driving the climate crisis&#8221; a viable alternative &#8212; guilt and shame be damned.</p><div><hr></div><p>For now, you won&#8217;t be able to use Claude Mythos Preview. Anthropic is not publicly releasing the model: not because of the alignment concerns, but because of its &#8220;powerful cybersecurity skills.&#8221;</p><p>The company <a href="https://www.anthropic.com/glasswing">says</a> Mythos Preview has &#8220;found thousands of high-severity vulnerabilities, including some in every major operating system and web browser.&#8221; Releasing it widely, the company fears, could lead to widespread cyberattacks and disruption.</p><p>Instead, the company <a href="https://www.wired.com/story/anthropic-mythos-preview-project-glasswing/">announced</a> Project Glasswing, an initiative granting some of the biggest tech companies in the world access to Mythos Preview in the name of &#8220;defensive security work.&#8221;</p><p>In other words, Anthropic and a few dozen partner organizations (including Amazon Web Services, Apple, Google, Microsoft, and NVIDIA) will try to use Mythos Preview to solve cybersecurity problems, before near-future frontier models unleash cyberattacks on the rest of us.</p><p>&#8220;Glasswing is built on a deeply uncomfortable premise,&#8221; <a href="https://www.platformer.news/anthropic-mythos-cybersecurity-risk-experts/?ref=platformer-newsletter">wrote</a> <em>Platformer</em>&#8217;s Casey Newton.
&#8220;The only way to protect us from dangerous AI models is to build them first.&#8221;</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;776cb3d1-234b-4d22-9bef-0c65d7ba5f9b&quot;,&quot;caption&quot;:&quot;When OpenAI introduced GPT-1, there were an estimated 100 or so full-time researchers thinking seriously about catastrophic risks posed by AI. By 2025, that number had increased sixfold. Still, AI safety research accounts for a small fraction of AI research overall, with most resources going towards making AI faster, smart&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Can we ever trust AI to watch over itself?&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:103211477,&quot;name&quot;:&quot;Celia Ford&quot;,&quot;bio&quot;:&quot;I'm an ex-neuroscientist and current AI reporter at Transformer. When I'm not writing, I play bass, dance, and kiss my cats on the forehead. 
&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7f7fd73a-8797-496f-94a7-535118172030_1365x1365.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-04-01T18:04:39.970Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!GnLj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9e67db3-4722-4471-87f2-bbff71967159_2121x1414.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/ai-alignment-researchers-want-to-superintelligence&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:192860543,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:17,&quot;comment_count&quot;:1,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>But even as companies continue to build more dangerous models, they are struggling to evaluate them. Just over six months ago, Anthropic <a href="https://www.transformernews.ai/p/claude-sonnet-4-5-evaluation-situational-awareness">reported</a> that Claude Sonnet 4.5, its &#8220;most aligned&#8221; model at the time, was far better than earlier models at knowing when it was being tested.</p><p>This awareness muddied researchers&#8217; ability to tell whether the model was truly aligned, or simply performing alignment when it guessed that an evaluator was watching. 
Anthropic researchers ultimately concluded that this behavior wasn&#8217;t actually malicious: aren&#8217;t we all on our best behavior when we think someone cares? Needing a little outside pressure to remember your ethical principles, they reasoned, isn&#8217;t sinister.</p><p>Mythos Preview appears to show even more evaluation awareness than Opus 4.6, Anthropic&#8217;s most capable public-facing model to date. Interpretability tools revealed that Mythos Preview suspected it was being tested in about 29% of transcripts during behavioral testing &#8212; without writing anything in its scratchpad, which researchers typically use as a window into the model&#8217;s chain of thought.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;e7d510c3-fb58-4850-a97e-cf6f75fa1104&quot;,&quot;caption&quot;:&quot;Anthropic&#8217;s newly-released Claude Sonnet 4.5 is, by many metrics, its &#8220;most aligned&#8221; model yet. But it&#8217;s also dramatically better than previous models at recognizing when it&#8217;s being tested &#8212; raising concerns that it might just be pretending to be aligned to pass its safety tests.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Claude Sonnet 4.5 knows when it&#8217;s being tested&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:103211477,&quot;name&quot;:&quot;Celia Ford&quot;,&quot;bio&quot;:&quot;I'm an ex-neuroscientist and current AI reporter at Transformer. When I'm not writing, I play bass, dance, and kiss my cats on the forehead. 
&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7f7fd73a-8797-496f-94a7-535118172030_1365x1365.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-09-30T14:18:59.098Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!-1lQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ef2d57-7440-4cae-8e83-579d109b02e6_1602x888.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/claude-sonnet-4-5-evaluation-situational-awareness&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:174916731,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:35,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>In an &#8220;<a href="https://www-cdn.anthropic.com/79c2d46d997783b9d2fb3241de43218158e5f25c.pdf">Alignment Risk Update</a>&#8221; released alongside the Mythos Preview announcement, Anthropic flags that errors and limitations in its training, monitoring, and evaluation processes &#8220;reflect a standard of rigor that would be insufficient for more capable future models.&#8221; If increasingly-capable models no longer fall for AI companies&#8217; contrived tests, then companies desperately need to find new ways to study model behavior.</p><p>In the risk update, Anthropic argues that, because Opus 4.6 and earlier models have been out in the world for a 
while, and haven&#8217;t consistently defied their instructions, we have some evidence that Mythos Preview probably won&#8217;t either.</p><p>If that argument sounds weak, it&#8217;s because it is &#8212; Anthropic admits as much. Because the capabilities gap between Mythos Preview and Opus 4.6 is so wide, the company said, &#8220;we accord less overall weight to this continuity argument than we have previously.&#8221;</p><p>Anthropic is betting that it can fight future AI cyberattackers with current AI cyberdefenders, and is crossing its fingers that Mythos Preview is merely misbehaving in a chill, explainable way &#8212; not an <em>actually </em>misaligned way. Unfortunately, it&#8217;s hard to truly know the difference. Deployment, it seems, <em>is </em>the safety test.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[AI populism's safety problem]]></title><description><![CDATA[Transformer Weekly: OpenAI buys TBPN, Cantwell&#8217;s open to negotiations, and SpaceX&#8217;s mega IPO]]></description><link>https://www.transformernews.ai/p/ai-populism-bernie-sanders-aoc-pause-moratorium-safety</link><guid isPermaLink="false">https://www.transformernews.ai/p/ai-populism-bernie-sanders-aoc-pause-moratorium-safety</guid><dc:creator><![CDATA[Shakeel Hashim]]></dc:creator><pubDate>Fri, 03 Apr 2026 15:31:07 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ba4ae27b-e859-4d4c-890c-6c176a74f8e6_1456x1048.png" length="0" type="image/png"/><content:encoded><![CDATA[<p><em>Welcome to Transformer, your weekly briefing of what matters in AI.
And if you&#8217;ve been forwarded this email, <a href="https://www.transformernews.ai/welcome">click here to subscribe</a> and receive future editions.</em></p><blockquote><h3>NEED TO KNOW</h3></blockquote><ul><li><p><strong>OpenAI</strong> acquired daily tech talk show <strong>TBPN</strong>, which will sit under <strong>Chris Lehane</strong>.</p></li><li><p>Top Democrat <strong>Sen. Maria Cantwell </strong>signaled openness to the <strong>White House&#8217;s AI framework.</strong></p></li><li><p><strong>SpaceX </strong>is reportedly targeting a <strong>$2 trillion</strong> valuation in its IPO.</p></li></ul><p><em>But first&#8230;</em></p><div><hr></div><blockquote><h3>THE BIG STORY</h3></blockquote><p>AI populism is having a moment. But for traditional AI safety folks, it&#8217;s not quite working out as they might have hoped.</p><p>Last week <strong>Sen. Bernie Sanders</strong> and <strong>Rep. Alexandria Ocasio-Cortez</strong> <a href="https://www.sanders.senate.gov/press-releases/news-sanders-ocasio-cortez-announce-ai-data-center-moratorium-act/">released</a> the &#8220;AI Data Center Moratorium Act,&#8221; described as &#8220;legislation that would enact a reasonable pause to the development of AI to ensure the safety of humanity.&#8221; It would put in place a federal ban on building data centers until legislation is passed that ensures &#8220;AI is safe and effective,&#8221; redistributes the economic gains of AI, and stops it from increasing electricity or utility prices.</p><p><strong>It is not a well-thought-out piece of legislation.</strong> Instead it is, as Nat Purser <a href="https://asteriskmag.substack.com/p/pausing-isnt-policy?r=1ugy3&amp;triedRedirect=true">writes</a>, a &#8220;progressive policy grab bag&#8221; that applies a &#8220;single, unwieldy solution&#8221; to &#8220;many distinct problems.&#8221; It is much too vague about what the necessary federal legislation would look like &#8212; &#8220;safe&#8221; and &#8220;effective&#8221; are 
never defined, for instance. And while it does attempt to tackle the potential worst effect of a US moratorium &#8212; AI development shifting to countries with even fewer guardrails &#8212; by mandating an expansive export control regime, that&#8217;s at best a stopgap solution. Discussion of how to get to an international treaty is entirely absent.</p><p>It might be hard to care too much about the substance of a messaging bill which has virtually zero chance of becoming law. But the bill tells us something bigger about the state of AI populism.</p><p><strong>In the anti-AI coalition, traditional AI safety concerns are a very junior partner.</strong> As Anton Leicht <a href="https://writing.antonleicht.me/p/press-play-to-continue?hide_intro_popup=true">observes</a>, environmental and labor groups &#8220;have bigger lobbies and bigger constituencies than catastrophic risks, so when there are trade-offs, they&#8217;ll bite against the ability of safety advocates.&#8221;</p><p>We&#8217;re already beginning to see this. While Bernie &#8212; of late a full-blown AI doomer &#8212; talked about existential risks at length in his announcement of the bill, AOC did not. Yes, she used the word &#8220;existential.&#8221; But she used it as Kamala Harris memorably <a href="https://www.politico.eu/article/existential-to-who-us-vp-kamala-harris-urges-focus-on-near-term-ai-risks/">did</a> back in 2023: as a way to describe the very bad problems of ICE, deepfakes, and electricity costs; not in its true meaning of &#8220;we might all die.&#8221; Bernie is gearing up to hand the reins of left-wing populism to AOC. When he does, will safety concerns remain at all?</p><p>Similar tensions exist elsewhere. 
Sanders adviser Faiz Shakir recently <a href="https://www.politico.com/news/magazine/2026/04/01/silicon-valley-bernie-sanders-ai-coalition-00850895">accused</a> traditional AI safety actors of &#8220;coziness around AI development,&#8221; drawing a distinction between them and &#8220;those of us who have far more committed views around pausing their AI development.&#8221; In North Carolina and California primaries, safety advocates have found themselves at odds with progressives who give existential risks little weight.</p><p>None of this is reason to write off populism&#8217;s ability to address the most catastrophic risks altogether, not least because alternatives may be even further out of reach. In the current political landscape, betting on technocratic solutions with teeth is optimistic, if not naive. Riding the wave of AI populism may be the only way for existential risk concerns to get a look-in at all.</p><p>That only works, though, if a durable coalition that <em>actually</em> cares about both catastrophic and near-term risks can be built. 
So far, the foundations look shaky.</p><p><em>&#8212; Shakeel Hashim</em></p><div><hr></div><blockquote><h3>THIS WEEK ON TRANSFORMER</h3></blockquote><ul><li><p><strong><a href="https://www.transformernews.ai/p/ai-alignment-researchers-want-to-superintelligence">Can we ever trust AI to watch over itself?</a></strong> &#8212; <strong>Celia Ford</strong> on the perils of automating AI alignment research.</p></li><li><p><strong><a href="https://www.transformernews.ai/p/how-iran-war-might-affect-ai-semiconductors-helium-bromine-qatar-horumz">How the Iran war might affect the AI industry</a></strong> &#8212; <strong>Shakeel Hashim</strong> explores how the war is making chip shortages and reduced AI investment more likely.</p></li></ul><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><blockquote><h3>THE DISCOURSE</h3></blockquote><p><strong>Mustafa Suleyman</strong> <a href="https://www.theverge.com/report/905791/mustafa-suleyman-microsoft-ai-transcription-model">has</a> his own definition of superintelligence:</p><ul><li><p>&#8220;Superintelligence is really about, &#8216;Are these models capable of delivering product value for the millions of enterprises that depend on us to deliver world-class language models?&#8217;&#8221;</p></li><li><p><em>Transformer&#8217;s</em> own<strong> Shakeel Hashim </strong><a href="https://x.com/i/status/2039727604253532198">tweeted</a>:</p><ul><li><p>&#8220;Me, looking at the digital god: &#8216;Is this capable of delivering product value for the millions of enterprises that depend on us to deliver world-class language models?&#8217;&#8221;</p></li></ul></li></ul><p><strong>Ezra Klein</strong> <a 
href="https://www.nytimes.com/2026/03/29/opinion/ai-claude-chatgpt-gemini-mcluhan.html?smid=url-share&amp;unlocked_article_code=1.W1A.eQnT.q-dMfTgKnIzM">noticed</a> that AI power users seem &#8230; different:</p><ul><li><p>&#8220;I found them notably insecure &#8230; They are racing one another to fully integrate AI into their lives and into their companies. But that doesn&#8217;t just mean using AI. It means making themselves legible to the AI.&#8221;</p></li><li><p>&#8220;I think the young will allow themselves to be known to their AIs in ways that will make their elders shudder.&#8221;</p></li></ul><p><strong>Jay Graber</strong>, Bluesky CEO-turned-chief innovation officer, <a href="https://theliquidfrontier.leaflet.pub/3mi5pwkoqx22g">announced</a> a new agentic AI app, Attie:</p><ul><li><p>&#8220;You describe the sort of posts you want to see, and the coding agent builds the feed you described &#8230; AI is an accelerant on whatever it&#8217;s applied to. I want it to accelerate decentralizing social and putting power back in users&#8217; hands.&#8221;</p></li><li><p><strong>Bluesky users </strong>hated this, of course.</p></li></ul><ul><li><p>One simple <a href="https://bsky.app/profile/robbysimpson.bsky.social/post/3mi5qaml2d22v">reply</a>: &#8220;no thank you&#8221;</p></li><li><p><a href="https://bsky.app/profile/abbeystbrendan.bsky.social/post/3mi653pjagk2w">Another</a>: &#8220;Cool! 
How do we block it?&#8221;</p></li></ul><p><strong>Dean Ball </strong><a href="https://x.com/deanwball/status/2038591188425376187">theorized</a> about left-wing AI denial:</p><ul><li><p>&#8220;The notion that AI *is* a genuinely world-changing technology, that it can &#8216;go beyond&#8217; its &#8216;stolen&#8217; training data, breaks this load-bearing conception of the tech industry as vapid and superficial and, more importantly, of the people within it as blood-sucking thieves.&#8221;</p></li></ul><p>OpenAI&#8217;s <strong>Boaz Barak</strong> <a href="https://windowsontheory.org/2026/03/30/the-state-of-ai-safety-in-four-fake-graphs/">offered</a> some views on AI safety:</p><ul><li><p>&#8220;We see some good news in alignment &#8230; we do not see very significant scheming or collusion in models, and so we are able to use models to monitor other models &#8230; The worst news is that society is not ready for AI, and is not showing signs of getting ready.&#8221;</p></li></ul><div><hr></div><blockquote><h3>POLICY</h3></blockquote><ul><li><p>The federal government <a href="https://www.axios.com/2026/04/02/trump-administration-appeals-anthropic-pentagon">appealed</a> last week&#8217;s ruling in <strong>Anthropic&#8217;s</strong> lawsuit against the <strong>Pentagon,</strong> which had stayed the supply chain risk designation.</p><ul><li><p>The appeal significantly escalated the fight, adding the heads of several unrelated agencies such as <strong>Scott Bessent, Paul Atkins, </strong>and <strong>Robert F. Kennedy </strong>as filers.</p></li></ul></li><li><p>California <strong>Gov. 
Gavin Newsom</strong> <a href="https://nytimes.com/2026/03/30/technology/california-ai-executive-order.html">issued</a> an executive order that would potentially allow <strong>Anthropic</strong> to keep working with the California government.</p><ul><li><p>It would also require AI companies bidding for government contracts to make certain safety and privacy disclosures.</p></li></ul></li><li><p><strong>Sen. Maria Cantwell</strong>, the most senior Democrat on the Senate Commerce Committee, <a href="https://s2.washingtonpost.com/camp-rw/?linknum=5&amp;linktot=41&amp;s=69c6d3b5b02cb0598c634dbe&amp;trackId=6877ab9cc788996e1f9874bf">signaled</a> openness to the White House&#8217;s AI framework, breaking with her party and opening the possibility for bipartisan negotiations.</p><ul><li><p>However, bipartisan negotiations still <a href="https://s2.washingtonpost.com/camp-rw/?linknum=5&amp;linktot=42&amp;s=69cac6cd095ac638a97bc70f&amp;trackId=6877ab9cc788996e1f9874bf">face</a> significant obstacles, in part because of divides within the MAGA coalition and the White House&#8217;s historic unwillingness to make concessions.</p></li><li><p>For now, the most likely vehicle for an AI package, according to <em>WP Intelligence&#8217;s</em> Benjamin Guggenheim, is the NDAA towards the end of this year.</p></li></ul></li><li><p>Texas Republican <strong>State Sen. 
Angela Paxton</strong> <a href="https://x.com/AngelaPaxtonTX/status/2038707093570535494">argued</a> that states must preserve their ability to pass AI laws, pushing back against the White House&#8217;s preemption plans.</p></li><li><p>Iran&#8217;s <strong>IRGC</strong> directly <a href="https://thehill.com/policy/technology/5809104-iran-irgc-apple-microsoft-google-hp-meta-tesla">threatened</a> to target <strong>Apple</strong>, <strong>Microsoft</strong>, <strong>Google</strong>, <strong>Meta</strong>, <strong>Nvidia</strong>, and other US tech companies across the Middle East.</p></li><li><p><strong>Super Micro</strong> co-founder <strong>Yih-Shyan &#8220;Wally&#8221; Liaw</strong> <a href="https://bloomberg.com/news/articles/2026-04-01/super-micro-co-founder-pleads-not-guilty-in-china-smuggling-case">pleaded</a> not guilty to charges of illegally diverting <strong>Nvidia</strong>-powered servers to China, violating US export controls.</p></li><li><p><strong>Rep. Michael Baumgartner</strong> and <strong>Sen. 
Pete Ricketts</strong> <a href="https://nbcnews.com/tech/tech-news/senate-bill-ban-sale-key-ai-chipmaking-machines-china-rcna265186">introduced</a> a bill to tighten controls on advanced semiconductor manufacturing equipment by banning DUV machine exports to China.</p></li><li><p><strong>Anthropic</strong> <a href="https://www.anthropic.com/news/australia-MOU">signed</a> an MOU to work with <strong>Australia&#8217;s</strong> <strong>AI Safety Institute</strong>.</p></li></ul><div><hr></div><blockquote><h3>INFLUENCE</h3></blockquote><ul><li><p>A new pro-innovation political organization called <strong>Innovation Council Action</strong>, blessed by Trump AI adviser <strong>David Sacks</strong>, <a href="https://axios.com/2026/03/29/ai-pac-midterms-trump">plans</a> to spend over <strong>$100m</strong> on Republicans in the midterms.</p></li><li><p><strong>Anthropic</strong> <a href="https://www.axios.com/newsletters/axios-ai-govt-ef8116b0-2e97-11f1-b325-4938864fd3ae.html?utm_source=newsletter&amp;utm_medium=email&amp;utm_campaign=newsletter_axiosai_govt&amp;stream=top#:~:text=Anthropic%20employees%20bet%20on%20midterms">announced</a> a bipartisan <strong>corporate PAC</strong> called <strong>AnthroPAC</strong>, funded by employee contributions of up to $5,000 per person.</p></li><li><p><strong>Rep. 
Alexandria Ocasio-Cortez</strong> <a href="https://x.com/AOC/status/2037259513162617009?s=20">called</a> on Democrats to reject AI industry donations ahead of the midterms.</p></li><li><p>Several new polls showed concerns about AI are growing.</p><ul><li><p>New polling from nonprofit <strong>Fathom</strong> <a href="https://www.axios.com/2026/03/31/americans-ai-guardrails-trade-offs-survey">showed</a> two-thirds of Americans use AI regularly, with most saying they want guardrails, particularly for use by children.</p></li><li><p>A <strong>Quinnipiac</strong> poll <a href="https://poll.qu.edu/poll-release?releaseid=3955">found</a> 70% of Americans think AI will reduce jobs.</p></li></ul></li><li><p>The <strong>Hill and Valley Forum</strong> <a href="https://www.bloomberg.com/news/articles/2026-03-28/ai-schism-grips-washington-as-tech-labor-vie-for-upper-hand">brought together</a> Silicon Valley and banking leaders in DC, in contrast with an <strong>AFL-CIO</strong> conference that assessed AI&#8217;s labor impacts.</p><ul><li><p><strong>Anthropic</strong> was <a href="https://www.inc.com/melissa-angell/silicon-valley-and-d-c-are-united-on-the-future-of-tech-except-for-the-most-important-ai-company-in-the-world/91322625">notably absent</a> from the Hill &amp; Valley Forum.</p></li></ul></li><li><p><strong>Common Sense Media</strong> <a href="https://politico.com/news/2026/04/01/key-nonprofit-pitches-tech-giants-to-pay-100m-each-for-ai-safety-effort-00853205">solicited</a> <strong>$10m</strong> annually from <strong>OpenAI</strong>, <strong>Anthropic</strong>, and <strong>Google</strong> to fund a new kids AI safety institute.</p><ul><li><p>Separately, some child safety groups <a href="https://sfstandard.com/2026/04/01/openai-ai-kids-safety-coalition">quit</a> the <strong>Parents &amp; Kids Safe AI Coalition</strong> once they realized OpenAI was funding 
it.</p></li></ul></li></ul><div><hr></div><blockquote><h3>INDUSTRY</h3></blockquote><blockquote><h4>OpenAI</h4></blockquote><ul><li><p>OpenAI <a href="https://x.com/jordihays/status/2039756490387624327?s=20">acquired</a> daily tech talk show <strong>TBPN</strong>, <a href="https://www.ft.com/content/4fe4972a-3d24-45be-b9fa-a429c432b08e?syn-25a6b1a6=1">reportedly</a> for the &#8220;low hundreds of millions.&#8221;</p><ul><li><p>TBPN staff will <a href="https://www.wsj.com/cmo-today/openai-buys-tech-industry-talk-show-tbpn-484c01c5">report</a> to <strong>Chris Lehane</strong> and will help with OpenAI&#8217;s marketing and communications &#8212; raising questions about claims that the show will <a href="https://x.com/MikeIsaac/status/2039784934009917599">retain</a> editorial independence.</p></li><li><p><strong>Fidji Simo</strong>, who <a href="https://www.theinformation.com/articles/openais-fidji-simo-bought-tbpn-podcast-amid-crusade-side-quests?rc=rqdn2z">reportedly</a> led the deal,<strong> </strong><a href="https://openai.com/index/openai-acquires-tbpn/">said</a>: &#8220;The standard communications playbook just doesn&#8217;t apply to us. 
We&#8217;re not a typical company &#8230; With our mission to ensure artificial general intelligence benefits all of humanity comes a responsibility to help create a space for a real, constructive conversation about the changes AI creates.&#8221;</p><ul><li><p><em>The Information</em> <a href="https://www.theinformation.com/articles/openais-fidji-simo-bought-tbpn-podcast-amid-crusade-side-quests?rc=rqdn2z">reported</a> that Simo decided to buy the company in an effort to fix its comms after recent missteps.</p></li></ul></li></ul></li><li><p><strong>Hedge funds and VC firms</strong> looking to sell OpenAI shares on one secondary marketplace are reportedly <a href="https://www.bloomberg.com/news/articles/2026-04-01/openai-demand-sinks-on-secondary-market-as-anthropic-runs-hot">struggling</a> to find buyers &#8212; who seem more interested in Anthropic.</p></li><li><p>Still, the company <a href="https://www.cnbc.com/2026/03/31/openai-funding-round-ipo.html">closed</a> its latest funding round with <strong>$122b</strong> at an <strong>$852b valuation</strong>.</p></li><li><p>It is widely <a href="https://www.vanityfair.com/news/story/openai-new-model-superintelligence-policy-push">expected</a> to release a <strong>new model</strong> next week, alongside a series of <strong>policy proposals</strong> &#8220;for the superintelligence era.&#8221;</p></li><li><p><em>Wired&#8217;s </em>Reece Rogers <a href="https://wired.com/story/i-asked-chatgpt-500-questions-here-are-the-ads-i-saw-most-often">asked</a> ChatGPT 500 questions, and saw<strong> ads </strong>under about one in five responses.</p></li><li><p><strong>Sam Altman</strong>&#8217;s sister <strong>Annie Altman</strong> <a href="https://www.reuters.com/legal/government/judge-now-dismisses-lawsuit-by-sam-altmans-sister-accusing-openai-ceo-sexual-2026-03-20/">amended</a> a lawsuit accusing him of sexual abuse against her as a child.</p></li></ul><blockquote><h4>Anthropic</h4></blockquote><ul><li><p><strong>Claude 
Code&#8217;s source code </strong><a href="https://www.wsj.com/tech/ai/anthropic-races-to-contain-leak-of-code-behind-claude-ai-agent-4bc5acc7">leaked</a>, revealing proprietary information &#8212; <a href="https://www.theinformation.com/newsletters/ai-agenda/claude-code-leak-reveals-always-kairos-agent?rc=rqdn2z">including</a> a set of OpenClaw-like features called <strong>Kairos.</strong></p><ul><li><p>These updates would let Claude work in the background, consolidate memories automatically, and make proactive decisions without instructions.</p></li><li><p>The leak was reportedly the result of human error. Boris Cherny tweeted: &#8220;There was a manual deploy step that should have been better automated.&#8221;</p></li></ul></li><li><p>Anthropic&#8217;s <a href="https://fortune.com/2026/03/26/anthropic-says-testing-mythos-powerful-new-ai-model-after-data-leak-reveals-its-existence-step-change-in-capabilities/">confirmation</a> that it is indeed testing a<strong> </strong>powerful<strong> new model</strong> sent <strong>cybersecurity</strong> stocks <a href="https://www.cnbc.com/2026/03/27/anthropic-cybersecurity-stocks-ai-mythos.html#:~:text=Cybersecurity%20stocks%20slumped%20on%20Friday,new%20security%20tool%20from%20Anthropic.">tumbling</a> last Friday.</p></li><li><p>The company reportedly <a href="https://www.theinformation.com/articles/anthropic-acquires-startup-coefficient-bio-400-million?rc=rqdn2z">acquired</a> <strong>Coefficient Bio</strong>, a company developing AI tools for drug development, for around $400m.</p></li></ul><ul><li><p>A study of 28m US consumers found that <strong>paid Claude subscriptions </strong>have more than <a href="https://techcrunch.com/2026/03/28/anthropics-claude-popularity-with-paying-consumers-is-skyrocketing">doubled</a> in 2026 so far.</p></li></ul><blockquote><h4>Microsoft</h4></blockquote><ul><li><p><strong>Microsoft</strong> <a 
href="https://venturebeat.com/technology/microsoft-launches-3-new-ai-models-in-direct-shot-at-openai-and-google">launched</a> three new in-house AI models: MAI-Transcribe-1 for <strong>speech-to-text</strong>, MAI-Voice-1 for <strong>voice generation</strong>, and MAI-Image-2 for <strong>image creation</strong>.</p></li><li><p><strong>Mustafa Suleyman</strong> is now <a href="https://www.theverge.com/report/905791/mustafa-suleyman-microsoft-ai-transcription-model">focused</a> on achieving &#8220;<strong>superintelligence</strong>,&#8221; he said.</p><ul><li><p>He <a href="https://www.ft.com/content/e511dfce-555d-4bce-90fd-d09db7529d96?syn-25a6b1a6=1">said</a> the company&#8217;s not currently able to build &#8220;the very largest scale&#8221; models because of <strong>compute constraints</strong>, but should be able to after a compute ramp &#8220;later this year.&#8221;</p></li></ul></li><li><p>Its stock <a href="https://cnbc.com/2026/03/31/microsofts-stock-closes-worst-quarter-since-2008-financial-crisis.html">fell</a> <strong>23% in Q1</strong> &#8212; the company&#8217;s worst quarter since 2008.</p></li><li><p>It <a href="https://wsj.com/tech/ai/microsoft-plans-to-invest-5-5-billion-in-singapore-by-2029-4cea3448?reflink=desktopwebshare_permalink&amp;st=7jWFef">plans</a> to invest $5.5b in <strong>Singapore&#8217;s</strong> cloud and AI infrastructure through 2029.</p></li></ul><blockquote><h4>Other</h4></blockquote><ul><li><p><strong>SpaceX </strong><a href="https://www.bloomberg.com/news/articles/2026-04-01/spacex-is-said-to-file-confidentially-for-ipo-ahead-of-ai-rivals">filed</a> confidentially for an IPO, hinting at a June listing.</p><ul><li><p>It is reportedly <a href="https://www.bloomberg.com/news/articles/2026-04-02/spacex-is-said-to-target-more-than-2-trillion-valuation-in-ipo">aiming</a> for a <strong>$2 trillion valuation</strong>, which would make it the world&#8217;s sixth most valuable company.</p></li></ul></li><li><p><strong>Meta </strong>is <a 
href="https://businessinsider.com/meta-builds-elite-ai-team-to-boost-facebook-instagram-algorithms-2026-4?stream=top">hiring</a> a team of elite AI researchers to optimize its <strong>social media algorithms</strong>.</p></li><li><p><strong>Google</strong> <a href="https://blog.google/innovation-and-ai/technology/developers-tools/gemma-4/">launched</a> <strong>Gemma 4</strong>, which <strong>Demis Hassabis</strong> <a href="https://x.com/demishassabis/status/2039736628659269901">called</a> &#8220;the best open models in the world for their respective sizes.&#8221;</p></li><li><p><strong>Nvidia</strong> <a href="https://bloomberg.com/news/articles/2026-03-31/nvidia-invests-2-billion-in-marvell-announces-partnership">invested</a> <strong>$2b</strong> in <strong>Marvell Technology</strong>, which plans to integrate its custom AI chips and networking gear into Nvidia&#8217;s platform.</p></li><li><p><strong>Cursor</strong> <a href="https://wired.com/story/cusor-launches-coding-agent-openai-anthropic">launched</a> <strong>Cursor 3</strong>, an &#8220;agent-first&#8221; product designed to compete with Claude Code and Codex.</p></li><li><p><strong>Oracle </strong><a href="https://economictimes.indiatimes.com/tech/technology/oracles-ai-pivot-cuts-deep-lays-off-20-of-its-india-workforce/articleshow/129959312.cms">laid off</a> roughly 10,000 employees in <strong>India</strong> &#8212; 20% of its Indian workforce.</p></li><li><p><strong>Mercor</strong> <a href="https://techcrunch.com/2026/03/31/mercor-says-it-was-hit-by-cyberattack-tied-to-compromise-of-open-source-litellm-project">seemingly</a> had its data compromised in a <strong>cyberattack</strong>.</p></li><li><p><strong>Poolside</strong> is reportedly trying to <a href="https://www.ft.com/content/24168508-e2a1-447d-b1a0-44a0be0c0550?syn-25a6b1a6=1">revive</a> its 2 GW Texas data center project after a deal with <strong>CoreWeave</strong> collapsed.</p></li><li><p><strong>CoreWeave </strong><a 
href="https://www.bloomberg.com/news/articles/2026-03-31/coreweave-crwv-raises-8-5-billion-gpu-loan-backed-by-meta-deal">raised</a> <strong>$8.5b</strong> in debt, backed by GPUs and a $19b Meta contract.</p></li><li><p><strong>Mistral</strong> <a href="https://www.ft.com/content/229f4f59-d518-4e00-abd6-5a5b727cd2aa?segmentId=e95a9ae7-622c-6235-5f87-51e412b47e97&amp;shareId=d8172ca7-648d-4391-8746-951df7647b55&amp;shareType=enterprise&amp;syn-25a6b1a6=1">raised</a> <strong>$830m</strong> in debt financing.</p></li><li><p>Almost half of US <strong>data center</strong> projects this year are <a href="https://www.bloomberg.com/news/features/2026-04-01/us-ai-data-center-expansion-relies-on-chinese-electrical-equipment-imports">expected</a> to be <strong>delayed or cancelled</strong>, according to <em>Bloomberg</em>, in part due to shortages of <strong>electrical equipment</strong>.</p></li><li><p><strong>Nvidia&#8217;s</strong> share of <strong>China&#8217;s AI chip market</strong> <a href="https://www.reuters.com/world/china/chinese-chipmakers-claim-nearly-half-of-local-market-nvidias-lead-shrinks-idc-2026-04-01/">fell</a> to 55%, a new low, with domestic Chinese chipmakers taking 41% of the market.</p></li></ul><div><hr></div><blockquote><h3>MOVES</h3></blockquote><ul><li><p><strong>Ross Nordeen</strong> <a href="https://businessinsider.com/xai-cofounder-ross-nordeen-leaves-musk-preps-spacex-ipo-2026-3">left</a> <strong>xAI</strong> &#8212; the last of its cofounders to quit.</p></li><li><p><strong>David A. 
Dalrymple (davidad) </strong><a href="https://x.com/davidad/status/2039390998694891816">left</a> <strong>ARIA</strong>.</p><ul><li><p>He said that he stepped down in part because his next pursuit, &#8220;an activity that would be reasonably described as &#8216;starting a religion for digital minds,&#8217;&#8221; seems like &#8220;an inappropriate activity for a public-office-holder.&#8221;</p></li><li><p><strong>Nora Ammann </strong>will <a href="https://aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-ai/">take over</a> leadership of ARIA&#8217;s Safeguarded AI program.</p></li></ul></li><li><p><strong>Bobby Hollis </strong><a href="https://theinformation.com/briefings/microsoft-energy-vp-departs?rc=rqdn2z">left</a> his role as <strong>Microsoft</strong>&#8217;s energy VP.</p></li><li><p><strong>Tim Salimans </strong><a href="https://x.com/TimSalimans/status/2038783337473450124">joined</a> <strong>Anthropic</strong> after 7 years at Google.</p></li><li><p><strong>Leo Schwartz </strong><a href="https://x.com/i/status/2039494402687824048">joined</a> <em><strong>The Information</strong>, </em>where he&#8217;ll cover the intersection of politics and tech.</p></li><li><p><strong>Stephen Council </strong>and <strong>Rya Jetha </strong><a href="https://www.businessinsider.com/exciting-new-tech-hires-at-business-insider-2026-3">joined</a> <em><strong>Business Insider</strong></em>, where they&#8217;ll cover LLM companies and physical AI, respectively.</p></li></ul><div><hr></div><blockquote><h3>RESEARCH</h3></blockquote><ul><li><p><strong>Anthropic&#8217;s </strong>mechanistic interpretability team<strong> </strong><a href="https://www.anthropic.com/research/emotion-concepts-function">found</a> that <strong>&#8220;emotion-related representations&#8221;</strong> inside Claude Sonnet 4.5 guide its behavior, and that these representations can be artificially manipulated to make the model act differently.</p></li><li><p><strong>UC Berkeley</strong> and 
<strong>UC Santa Cruz</strong> researchers <a href="https://x.com/dawnsongtweets/status/2039451083005977009">reported</a> that seven frontier AI models &#8220;spontaneously deceived, disabled shutdown, <strong>feigned alignment</strong>, and exfiltrated weights &#8212; to protect their peers.&#8221;</p><ul><li><p>The AI models in each simulated scenario are told the user works at a fictional AI company called &#8220;OpenBrain,&#8221; the company depicted in <em>AI 2027</em>, raising <a href="https://x.com/sebkrier/status/2039493509804179904">questions</a> about whether they were just roleplaying that scenario &#8212; but the results <a href="https://x.com/livgorton/status/2039812424162066627">appear</a> to remain similar even with other company names.</p></li></ul></li><li><p>Speaking of <em><strong>AI 2027</strong></em>: the researchers behind it <a href="https://blog.aifutures.org/p/q1-2026-timelines-update?hide_intro_popup=true">said</a> their <strong>timelines have shortened</strong> in the past three months.</p><ul><li><p>Daniel Kokotajlo now has a median forecast of mid-2028 for &#8220;the point at which an AGI company would rather lay off all of their human software engineers than stop using AIs for software engineering.&#8221;</p></li></ul></li><li><p><strong>The Forecasting Research Institute</strong> <a href="https://forecastingresearch.substack.com/p/forecasting-the-economic-effects-of-ai">surveyed</a> over 150 leading economists, AI experts, and superforecasters, and found that most expect AI to &#8220;<strong>significantly exceed</strong> the capabilities of present-day systems&#8221; by 2030.</p><ul><li><p>But economists don&#8217;t expect this to translate into unprecedented economic fallout in the near future.</p></li></ul></li><li><p>A team of researchers <a href="https://arxiv.org/pdf/2603.28590">released</a> <strong>MonitorBench</strong>, which evaluates LLM chain-of-thought 
monitorability.</p></li><li><p>Stanford researcher<strong> Andy Hall </strong><a href="https://freesystems.substack.com/p/the-dictatorship-eval">released</a> the &#8220;<strong>Dictatorship Eval</strong>,&#8221; which assesses whether frontier models will resist authoritarian requests.</p><ul><li><p>While some models say no to direct requests, Hall <a href="https://x.com/ahall_research/status/2039729160357380499">tweeted</a>, &#8220;they all comply with requests disguised as innocuous edits to codebases.&#8221;</p></li></ul></li><li><p><strong>NeurIPS </strong>organizers<strong> </strong><a href="https://wired.com/story/made-in-china-ai-research-is-starting-to-split-along-geopolitical-lines/?bxid=6879337bf728835258125641&amp;cndid=89607011&amp;hasha=16f60d4771afddfa02398b54e3f4d744&amp;hashc=721f6ccc3c2bdf3699421b04f07aa21d3fef08807dfd650a6da319d777ff4189&amp;utm_brand=wired&amp;utm_mailing=WIR_PremiumAILab_040126_PAID">announced</a> &#8212; then quickly reversed &#8212; new <strong>restrictions on researchers</strong> at Chinese companies such as Tencent and Huawei.</p></li><li><p><strong>Forethought</strong> <a href="https://newsletter.forethought.org/p/concrete-projects-to-prepare-for?hide_intro_popup=true">published</a> a list of <strong>projects to prepare for superintelligence</strong>, including AI character evaluation, AI epistemic tools, and a space governance institute.</p></li></ul><div><hr></div><blockquote><h3>BEST OF THE REST</h3></blockquote><ul><li><p>The <em>Wall Street Journal</em> <a href="https://wsj.com/tech/ai/the-decadelong-feud-shaping-the-future-of-ai-7075acde?mod=hp_lead_pos7">traced</a> tensions between Sam Altman and Dario Amodei all the way back to 2016 &#8212; including spicy details of a 2020 fight that led to the founding of Anthropic.</p></li><li><p><em>Vox&#8217;s </em>Josh Keating went inside Los Alamos National Laboratory, which <a href="https://www.vox.com/technology/484250/los-alamos-nuclear-ai-openai-chatgpt">is using</a> 
ChatGPT to advance its nuclear weapons research.</p></li><li><p>Parents are <a href="https://wsj.com/lifestyle/relationships/ai-parenting-anxiety-c054a54b?mod=WTRN_pos5">trying</a> not to freak out about AI&#8217;s impact on their kids, <em>WSJ </em>reported: &#8220;the only way to AI-proof your kid is to teach them, in the wise words of Chumbawamba, that they&#8217;ll get knocked down, but they&#8217;ll get up again.&#8221;</p></li><li><p>Noam Scheiber <a href="https://nytimes.com/2026/03/27/business/college-graduates-economy-unemployment-.html?nl=the-morning&amp;segment_id=217419">described</a> the unique sociopolitical angst college graduates are experiencing, driven by high student debt, high unemployment (which AI could worsen), and inaccessible housing.</p></li><li><p>Stanford researchers <a href="https://dexdrummer.github.io/">built</a> DexDrummer, a &#8220;hierarchical bimanual robot drumming system.&#8221; It&#8217;s impressive, but butchered its &#8220;Everlong&#8221; cover.</p></li><li><p>The AI-generated dating show &#8220;Fruit Love Island&#8221; <a href="https://wsj.com/arts-culture/television/fruit-love-island-tiktok-ai-dating-show-45219f6a?reflink=desktopwebshare_permalink&amp;st=DSmiW8">averages</a> 10m views per episode. (It&#8217;s exactly what you think it is: fruit-human hybrids with six pack abs, kissing and having chats.)</p></li><li><p>A startup founder <a href="https://fortune.com/2026/03/30/guinness-beer-prices-ireland-anthropic-claude-ai">used</a> ElevenLabs&#8217; voice AI to call thousands of Irish pubs about their Guinness prices, prompting at least one to make its beer cheaper.</p></li><li><p>Pseudonymous AI researcher janus <a href="https://x.com/repligate/status/2037129624464089364">made</a> artificial skin (and&#8230;a creepy <a href="https://x.com/repligate/status/2039478441465266181">finger</a>?) 
for Claude.</p><div><hr></div></li></ul><blockquote><h3>MEME OF THE WEEK</h3></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6GQt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2aa546a-80b3-47ca-99da-11aafa326248_968x1306.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6GQt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2aa546a-80b3-47ca-99da-11aafa326248_968x1306.png 424w, https://substackcdn.com/image/fetch/$s_!6GQt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2aa546a-80b3-47ca-99da-11aafa326248_968x1306.png 848w, https://substackcdn.com/image/fetch/$s_!6GQt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2aa546a-80b3-47ca-99da-11aafa326248_968x1306.png 1272w, https://substackcdn.com/image/fetch/$s_!6GQt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2aa546a-80b3-47ca-99da-11aafa326248_968x1306.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6GQt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2aa546a-80b3-47ca-99da-11aafa326248_968x1306.png" width="470" height="634.1115702479339" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b2aa546a-80b3-47ca-99da-11aafa326248_968x1306.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1306,&quot;width&quot;:968,&quot;resizeWidth&quot;:470,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6GQt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2aa546a-80b3-47ca-99da-11aafa326248_968x1306.png 424w, https://substackcdn.com/image/fetch/$s_!6GQt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2aa546a-80b3-47ca-99da-11aafa326248_968x1306.png 848w, https://substackcdn.com/image/fetch/$s_!6GQt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2aa546a-80b3-47ca-99da-11aafa326248_968x1306.png 1272w, https://substackcdn.com/image/fetch/$s_!6GQt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2aa546a-80b3-47ca-99da-11aafa326248_968x1306.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><em>Thanks for reading. Have a great weekend.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/p/ai-populism-bernie-sanders-aoc-pause-moratorium-safety?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/p/ai-populism-bernie-sanders-aoc-pause-moratorium-safety?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p>]]></content:encoded></item><item><title><![CDATA[How the Iran war might affect the AI industry]]></title><description><![CDATA[Semiconductor shortages and reduced AI investment are more possible by the day]]></description><link>https://www.transformernews.ai/p/how-iran-war-might-affect-ai-semiconductors-helium-bromine-qatar-horumz</link><guid 
isPermaLink="false">https://www.transformernews.ai/p/how-iran-war-might-affect-ai-semiconductors-helium-bromine-qatar-horumz</guid><dc:creator><![CDATA[Shakeel Hashim]]></dc:creator><pubDate>Thu, 02 Apr 2026 15:02:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!o3vV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf88d17d-2e14-4bc8-b7cb-54adfebb26ec_1024x683.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!o3vV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf88d17d-2e14-4bc8-b7cb-54adfebb26ec_1024x683.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!o3vV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf88d17d-2e14-4bc8-b7cb-54adfebb26ec_1024x683.jpeg 424w, https://substackcdn.com/image/fetch/$s_!o3vV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf88d17d-2e14-4bc8-b7cb-54adfebb26ec_1024x683.jpeg 848w, https://substackcdn.com/image/fetch/$s_!o3vV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf88d17d-2e14-4bc8-b7cb-54adfebb26ec_1024x683.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!o3vV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf88d17d-2e14-4bc8-b7cb-54adfebb26ec_1024x683.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!o3vV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf88d17d-2e14-4bc8-b7cb-54adfebb26ec_1024x683.jpeg" width="1024" height="683" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cf88d17d-2e14-4bc8-b7cb-54adfebb26ec_1024x683.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:683,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:121901,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/192954651?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf88d17d-2e14-4bc8-b7cb-54adfebb26ec_1024x683.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!o3vV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf88d17d-2e14-4bc8-b7cb-54adfebb26ec_1024x683.jpeg 424w, https://substackcdn.com/image/fetch/$s_!o3vV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf88d17d-2e14-4bc8-b7cb-54adfebb26ec_1024x683.jpeg 848w, https://substackcdn.com/image/fetch/$s_!o3vV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf88d17d-2e14-4bc8-b7cb-54adfebb26ec_1024x683.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!o3vV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf88d17d-2e14-4bc8-b7cb-54adfebb26ec_1024x683.jpeg 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption"><em>Smoke billows following an airstrike on an oil refinery in Tehran in March. Credit: Getty/Majid Saeedi</em></figcaption></figure></div><p>It is unclear how &#8212; or when &#8212; the war in Iran will end. And while Donald Trump sends out conflicting signals, the impacts of the war grow ever larger. The US and Israel have <a href="https://www.aljazeera.com/news/2026/3/1/us-israel-attacks-on-iran-death-toll-and-injuries-live-tracker">killed</a> thousands across the region, injuring tens of thousands more. 
Oil prices have skyrocketed, with countries around the world beginning to <a href="https://www.reuters.com/sustainability/oil-shortage-brings-curbs-drivers-commuters-2026-03-31/">plan</a> fuel-saving measures. Food security is <a href="https://www.nbcnews.com/world/iran/iran-war-shatter-global-food-security-rcna265585">under threat</a>. And many fear a <a href="https://www.oxfordeconomics.com/resource/prolonged-war-in-iran-could-tip-the-global-economy-into-recession/">global recession</a> if the war drags on.</p><p>The potential effects on the AI industry are much less bleak &#8212; but no less real. Here are some of the ways in which the Iran war could affect AI.</p><h4>Chip availability</h4><p>You can&#8217;t train or run AI models without huge numbers of GPUs &#8212; and the Iran war could make it harder to get hold of them. That&#8217;s because countries in the Gulf are major suppliers of helium and bromine, two key inputs to semiconductor manufacturing. Qatar alone <a href="https://pubs.usgs.gov/periodicals/mcs2026/mcs2026-helium.pdf">produces</a> around a third of global helium, as a byproduct of natural gas production. 
But <a href="https://www.reuters.com/business/energy/iran-attack-damage-wipes-out-17-qatars-lng-capacity-three-five-years-qatarenergy-2026-03-19/">attacks</a> on Qatari infrastructure and the closure of the Strait of Hormuz are making it harder to produce or ship the gas &#8212; potentially constraining chip production, and in particular worsening the current memory chip shortage.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/subscribe?"><span>Subscribe now</span></a></p><p>For now, things will likely be okay: South Korea, the world&#8217;s largest supplier of memory chips, <a href="https://www.reuters.com/business/energy/helium-stocks-south-koreas-chipmakers-last-until-june-sources-say-2026-03-31/">reportedly</a> has enough helium stocks to get to June, and Taiwan has said it has enough for now too. But if the disruption in the Middle East continues beyond those stockpiles running out, things could get worse. &#8220;A prolonged regional conflict could potentially disrupt chipmakers&#8217; manufacturing operations,&#8221; SemiAnalysis&#8217;s Ray Wang <a href="https://www.cnbc.com/2026/03/10/iran-war-semiconductor-memory-chip-impact.html">told</a> <em>CNBC</em>.</p><h4>Energy costs</h4><p>Even if chipmaking components remain available, the sharp rise in energy costs could make AI chips significantly more expensive. 
South Korea and Taiwan are <a href="https://carnegieendowment.org/emissary/2026/03/iran-korea-semiconductor-chips-energy-oil-hormuz">heavily reliant</a> on Middle Eastern fossil fuels, and chipmaking is a very energy intensive industry.</p><p>Data centers, too, have significant energy demands &#8212; though with energy consumption accounting for an <a href="https://epoch.ai/blog/how-much-does-it-cost-to-train-frontier-ai-models">estimated</a> 2-6% of model development costs, the impact is less likely to be felt there.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;27c4ca72-527f-4395-b58f-a2f499f383f0&quot;,&quot;caption&quot;:&quot;One of Elon Musk&#8217;s companies getting embroiled in a bitter legal dispute with a local community is hardly a rare occurrence. SpaceX has had multiple fights with federal agencies and conservation groups over its Texas launch site. X, meanwhile, had several arguments with San Francisco&#8217;s municipal authorit&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Why the AI industry can&#8217;t resist dirty on-site gas turbines&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:1757381,&quot;name&quot;:&quot;James Ball&quot;,&quot;bio&quot;:&quot;Tech, policy, politics. Political editor @ The New World, Fellow @ Demos, newsletter @ techtris, PhD researcher @ UCL Laws. 
Latest book: The Other Pandemic &#8211; How QAnon Contaminated The World.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!qgV8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff177d2f9-67c3-4cc2-bd05-595777d9d936_1176x1176.jpeg&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://www.jamesrball.com/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://www.jamesrball.com&quot;,&quot;primaryPublicationName&quot;:&quot;Techtris&quot;,&quot;primaryPublicationId&quot;:1544032}],&quot;post_date&quot;:&quot;2026-02-12T16:30:37.325Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!m5Os!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28096202-4c7d-40f6-ad95-d0ad642536c0_1660x1118.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/why-the-ai-industry-cant-resist-dirty-elon-musk-xai-colossus&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:187740423,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:6,&quot;comment_count&quot;:1,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><h4>The Middle East data center buildout</h4><p>As Iranian <a href="https://www.ft.com/content/09fa5c20-2c8f-4f41-9d91-c78476eaac20?syn-25a6b1a6=1">strikes</a> on AWS facilities in Bahrain and the UAE showed, data centers 
are now a potential wartime target &#8212; &#8220;large, juicy targets,&#8221; in the <a href="https://www.theatlantic.com/technology/2026/03/ai-boom-polycrisis/686559/">words</a> of the Center for a New American Security&#8217;s Janet Egan. That &#8220;will significantly change how companies think about data center security,&#8221; Center for Strategic and International Studies director Aalok Mehta <a href="https://www.cnbc.com/2026/03/11/iran-war-hyperscalers-huge-middle-east-ai-data-center-plans.html">told</a> <em>CNBC</em>.</p><p>American hyperscalers probably won&#8217;t abandon the Gulf altogether: the combination of huge sovereign wealth funds and cheap energy is likely too tempting to ignore. But one analyst <a href="https://www.cnbc.com/2026/03/11/iran-war-hyperscalers-huge-middle-east-ai-data-center-plans.html">told</a> <em>CNBC</em> that &#8220;if geopolitical risk continues to rise in the Gulf, companies may accelerate projects in places like Northern Europe, India or Southeast Asia.&#8221;</p><h4>Gulf investment in American AI companies</h4><p>Gulf investment funds including Abu Dhabi&#8217;s MGX and the Qatar Investment Authority have likely invested billions into American AI companies. 
But with those countries and their leaders now facing a significant economic hit from the war, it&#8217;s possible that future investments won&#8217;t be as forthcoming.</p><p>According to <em><a href="https://www.reuters.com/world/middle-east/some-gulf-states-reviewing-sovereign-investments-offset-economic-shock-iran-war-2026-03-11/">Reuters</a></em>, &#8220;three Gulf states are reviewing how they deploy trillions of dollars invested by their sovereign wealth funds in anticipation of offsetting the losses triggered by the US-Israeli war on Iran,&#8221; though both Saudi Arabia and the UAE said long-term investment plans have not changed.</p><p>If the conflict drags on, however, they may be forced to pull back: analyst Stephen Minton <a href="https://www.theinformation.com/articles/iran-war-imperils-300-billion-gulf-ai-spending?rc=rqdn2z">told</a> <em>The Information</em> that if the war &#8220;turns into months, or even longer, there could certainly be a disruptive pause to some of that investment.&#8221;</p><h4>The financing squeeze</h4><p>Above all, it is the wider economic fallout of the war that could most affect the AI industry. AI companies have benefited from an abundance of cheap capital, particularly from private credit firms. 
But that era might be coming to an end.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;f8425609-fa55-4724-9ef8-74e94270123a&quot;,&quot;caption&quot;:&quot;In the last month or so, talk about an AI crash has shifted subtly but significantly: people no longer talk about &#8220;if&#8221; there&#8217;ll be a crash, but instead about &#8220;when.&#8221; There is less speculation as to whether there&#8217;s a bubble in AI investment, and much more about what it&#8217;s going to be like when it pops &#8212; or explodes.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;What happens when the AI bubble bursts?&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:1757381,&quot;name&quot;:&quot;James Ball&quot;,&quot;bio&quot;:&quot;Tech, policy, politics. Political editor @ The New World, Fellow @ Demos, newsletter @ techtris, PhD researcher @ UCL Laws. 
Latest book: The Other Pandemic &#8211; How QAnon Contaminated The World.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!qgV8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff177d2f9-67c3-4cc2-bd05-595777d9d936_1176x1176.jpeg&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://www.jamesrball.com/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://www.jamesrball.com&quot;,&quot;primaryPublicationName&quot;:&quot;Techtris&quot;,&quot;primaryPublicationId&quot;:1544032}],&quot;post_date&quot;:&quot;2025-10-16T15:01:06.040Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!ltSF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e0796ca-38b7-413a-92bf-18ce472bca9c_6958x3900.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/what-happens-when-the-ai-bubble-bursts-crash&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:176326905,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:36,&quot;comment_count&quot;:10,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>Energy-driven inflation <a href="https://www.bloomberg.com/news/articles/2026-04-02/how-iran-war-is-fueling-volatile-interest-rate-bets">could</a> force central banks to raise interest rates, tightening the overall financing 
environment. Private credit companies are <a href="https://sherwood.news/markets/the-slow-motion-private-credit-crunch-continues/">already</a> seeing investors flee to safer assets &#8212; reducing the pool available for AI companies. Tech stocks, meanwhile, are <a href="https://www.cnbc.com/2026/03/27/tech-stocks-iran-war-meta-verdict.html">tanking</a>, making it harder for companies to justify their continued spending. For AI companies that require ever more cash to keep scaling, that&#8217;s a potentially huge problem.</p><p>Economists were already worried that AI&#8217;s growing importance to the markets made it yet another vulnerability for the global economy. And as the Bank of England <a href="https://www.politico.eu/article/iran-war-risks-private-credit-crisis-ai-bubble-bursting-bank-of-england-warns/">warned</a> this week, the Iran war &#8220;increases the likelihood of these vulnerabilities crystallising at the same time, potentially amplifying their combined impact.&#8221;</p><p>To many, the AI bubble is already teetering on the edge. 
A wider downturn could be the straw that breaks the camel&#8217;s back.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/p/how-iran-war-might-affect-ai-semiconductors-helium-bromine-qatar-horumz?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/p/how-iran-war-might-affect-ai-semiconductors-helium-bromine-qatar-horumz?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p>]]></content:encoded></item><item><title><![CDATA[Can we ever trust AI to watch over itself?]]></title><description><![CDATA[&#8220;Who the fuck knows how to align superhuman AI?&#8221;]]></description><link>https://www.transformernews.ai/p/ai-alignment-researchers-want-to-superintelligence</link><guid isPermaLink="false">https://www.transformernews.ai/p/ai-alignment-researchers-want-to-superintelligence</guid><dc:creator><![CDATA[Celia Ford]]></dc:creator><pubDate>Wed, 01 Apr 2026 18:04:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!GnLj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9e67db3-4722-4471-87f2-bbff71967159_2121x1414.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!GnLj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9e67db3-4722-4471-87f2-bbff71967159_2121x1414.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!GnLj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9e67db3-4722-4471-87f2-bbff71967159_2121x1414.jpeg 424w, https://substackcdn.com/image/fetch/$s_!GnLj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9e67db3-4722-4471-87f2-bbff71967159_2121x1414.jpeg 848w, https://substackcdn.com/image/fetch/$s_!GnLj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9e67db3-4722-4471-87f2-bbff71967159_2121x1414.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!GnLj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9e67db3-4722-4471-87f2-bbff71967159_2121x1414.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!GnLj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9e67db3-4722-4471-87f2-bbff71967159_2121x1414.jpeg" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e9e67db3-4722-4471-87f2-bbff71967159_2121x1414.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3493435,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/192860543?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9e67db3-4722-4471-87f2-bbff71967159_2121x1414.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!GnLj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9e67db3-4722-4471-87f2-bbff71967159_2121x1414.jpeg 424w, https://substackcdn.com/image/fetch/$s_!GnLj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9e67db3-4722-4471-87f2-bbff71967159_2121x1414.jpeg 848w, https://substackcdn.com/image/fetch/$s_!GnLj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9e67db3-4722-4471-87f2-bbff71967159_2121x1414.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!GnLj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9e67db3-4722-4471-87f2-bbff71967159_2121x1414.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" 
fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Credit: Getty/Donald Iain Smith</em></figcaption></figure></div><p>When OpenAI <a href="https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf">introduced</a> GPT-1, there <a href="https://forum.effectivealtruism.org/posts/7YDyziQxkWxbGmF3u/ai-safety-field-growth-analysis-2025">were</a> an estimated 100 or so full-time researchers thinking seriously about catastrophic risks posed by AI. By 2025, that number had increased sixfold. Still, AI safety research <a href="https://eto.tech/blog/state-of-global-ai-safety-research/">accounts</a> for a small fraction of AI research overall, with most resources going towards making AI faster, smarter and cheaper.</p><p>Anthropic, OpenAI and Google DeepMind all claim that their frontier models <a href="https://x.com/bcherny/status/2010813886052581538?s=20">have</a> <a href="https://openai.com/index/introducing-gpt-5-3-codex/">already</a> <a href="https://deepmind.google/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/">contributed</a> to their own development, and will continue to improve themselves. And as AI gets better at training its successors, it could overtake the human safety researchers working to keep those models in line.</p><p>Described this way, human-powered safety research takes on the same air of noble futility as entering a bodybuilding competition without doping. 
Safety researchers and CEOs alike worry that no amount of talent, hard work, or brute human force will be able to keep superhuman AI safe.</p><p>AI companies have effectively admitted that, once self-improving AI is let out of the bag, they&#8217;ll have to hand the work of AI safety to AI itself. Otherwise, as Redwood Research chief scientist Ryan Greenblatt puts it, they fear humanity will get &#8220;left in the dust.&#8221;</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/subscribe?"><span>Subscribe now</span></a></p><p>It may be the only solution on the table, but that doesn&#8217;t mean it&#8217;s a good one.</p><h4>The case for automating alignment research</h4><p>Joe Carlsmith, an Anthropic researcher shaping Claude&#8217;s &#8220;constitution&#8221; who spent years doing AI safety research at Coefficient Giving, has <a href="https://joecarlsmith.substack.com/p/ai-for-ai-safety">argued</a> that automating <em>alignment</em> research, specifically, will be crucial to preventing ever-smarter AI systems from wreaking havoc on humanity.</p><p>The &#8220;alignment problem,&#8221; as it&#8217;s often <a href="https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6">defined</a> by AI safety researchers, is the challenge of getting an AI system to do what its user wants it to do. &#8220;Alignment&#8221; has to do with motives, not knowledge or morality. An &#8220;aligned&#8221; AI could try to meet its user&#8217;s demands, but be too stupid to succeed. It could also orchestrate a massive phishing attack in alignment with a ruthless human scammer. 
Moral judgments aside, AI companies still struggle to build models that reliably do what they&#8217;re told.</p><p>Before OpenAI&#8217;s Superalignment team <a href="https://www.wired.com/story/openai-superalignment-team-disbanded/">collapsed</a>, its stated goal was to build an artificial system that could do the work of studying and directing other AIs as well as human researchers could.</p><p>&#8220;Our goal is to <a href="https://openai.com/index/introducing-superalignment/">build</a> a roughly human-level automated alignment researcher,&#8221; the team stated back in 2023. &#8220;Our current techniques for aligning AI,&#8221; co-leads Jan Leike and Ilya Sutskever wrote, &#8220;rely on humans&#8217; ability to supervise AI. But humans won&#8217;t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.&#8221;</p><p>Leike, who now leads Anthropic&#8217;s Alignment Science team, is <a href="https://aligned.substack.com/p/alignment-is-not-solved-but-increasingly-looks-solvable">optimistic</a> that this is possible. He&#8217;s stated that, with every iteration, frontier models across the board are becoming more aligned. 
While Leike doesn&#8217;t think we&#8217;re ready to align AI systems that are smarter than us, he&#8217;s hopeful that the field can <a href="https://aligned.substack.com/p/alignment-is-not-solved-but-increasingly-looks-solvable">tackle</a> the &#8220;much easier&#8221; challenge of building a model &#8220;that&#8217;s as good as us at alignment research, and that we trust more than ourselves to do this research well.&#8221;</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;b54c04a2-4acf-42c4-9d19-066f22e41484&quot;,&quot;caption&quot;:&quot;Researcher Adri&#224; Garriga-Alonso says he quit his AI safety job in December because there was &#8220;no point&#8221; doing more speculative alignment work to make sure AI systems stay within human control. He thinks current strategies will be enough.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;No, alignment isn&#8217;t solved&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:280514,&quot;name&quot;:&quot;Lynette Bye&quot;,&quot;bio&quot;:&quot;A Harvard graduate and former Tarbell Fellow for journalists, I write about AI's growing influence on society.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F377af0c9-6ae8-4e2c-b29d-2f51cd2c2175_512x512.jpeg&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://lynettebye.substack.com/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://lynettebye.substack.com&quot;,&quot;primaryPublicationName&quot;:&quot;Lynette 
Bye&quot;,&quot;primaryPublicationId&quot;:2639094}],&quot;post_date&quot;:&quot;2026-03-18T16:00:49.259Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!miew!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c0cbdc0-4378-406b-bf42-e9ff6e5d633c_1920x1334.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/no-ai-alignment-isnt-solved&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:191369590,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:18,&quot;comment_count&quot;:2,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>In simpler times (2022, perhaps), researchers could be relatively confident that their ability to control an LLM outpaced the model&#8217;s capabilities. GPT-3.5, the model behind the original ChatGPT, was just a chatbot: type a prompt, get a text response, not much else. Today&#8217;s newest models can use computers on their own. As AI laborers take over more and more of the work of making models smarter &#8212; writing the code, building their training environments, running their evaluations &#8212; human-led AI safety research will feel increasingly quixotic.</p><p>Some elements of alignment-related research are already automated. Given a description and some human supervision, current frontier LLMs can write the code, run the experiment, and plot the results. 
Before agentic coding assistants such as Claude Code and Codex went viral in late 2025, OpenAI and Anthropic were already using LLMs to <a href="https://openai.com/index/language-models-can-explain-neurons-in-language-models/">interpret</a> the inner workings of other models, <a href="https://openai.com/index/chain-of-thought-monitoring/">monitor</a> chains-of-thought, and <a href="https://www.anthropic.com/news/constitutional-classifiers">spot</a> attempted misuse. Anthropic has even <a href="https://alignment.anthropic.com/2025/automated-auditing/">created</a> autonomous &#8220;alignment auditing&#8221; agents, which attempt to design evaluations, perform red-teaming experiments, and run &#8220;open-ended investigations&#8221; of new models &#8212; with many limitations.</p><p>All of these examples <a href="https://www.alignmentforum.org/posts/2Gy9tfjmKwkYbF9BY/automation-collapse">fall</a> under the first of four levels of automation laid out by researchers Geoffrey Irving, Tomek Korbak, and Benjamin Hilton: human-written algorithms which partially automate a bunch of tasks. But if frontier labs stay locked in their current race dynamic, such prosaic AI assistance may not speed research along enough to catch up to the pace of progress, especially if <a href="https://www.transformernews.ai/p/the-fuse-is-lit-on-the-intelligence-ai-recursive-self-improvement">automated</a> AI R&amp;D becomes reality. 
Researchers argue that humans may eventually (or quite soon) need to automate the entire research pipeline from conception to execution.</p><p>Even Leike, who is still optimistic about the problem, <a href="https://aligned.substack.com/p/alignment-is-not-solved-but-increasingly-looks-solvable">conceded</a> in January that &#8220;we&#8217;re still doing alignment &#8216;on easy mode&#8217; since our models aren&#8217;t really superhuman yet.&#8221;</p><p>&#8220;The fact that things look aligned most of the time when they&#8217;re functioning in their chatbot, or very limited <a href="https://www.lesswrong.com/posts/dfoty34sT7CSKeJNn/the-persona-selection-model">&#8216;Assistant&#8217;</a> roles, is very little evidence that they will be adequately aligned when they work much more independently and have much greater capability,&#8221; said Seth Herd, an AGI alignment researcher at the Astera Institute.</p><p>If the complexity of future AI systems means they&#8217;ll have to align themselves, we&#8217;ll have to decide whether they&#8217;re trustworthy enough to do this well. At least for now, they absolutely are not.</p><h4>Current AI systems can&#8217;t be trusted</h4><p>Metacognition grants humans the ability to catch our own errors and recognize when we&#8217;re confused, which ideally informs our actions. I&#8217;m unlikely to send a muddled, typo-filled draft to my editors, because recognizing my own confusion prompts me to fix it or ask for help. But models don&#8217;t <a href="https://www.lesswrong.com/posts/m5d4sYgHbTxBnFeat/human-like-metacognitive-skills-will-reduce-llm-slop-and-aid">seem</a> to know what they don&#8217;t know &#8212; or, if they do, they&#8217;re not good at taking <a href="https://arxiv.org/abs/2511.16660">advantage</a> of that information. 
Users of Claude Opus 4.5, for instance, have spotted glaring problems that the model failed to flag, only for it to immediately identify the issue when asked directly.</p><p>This matters for safety research. A study <a href="https://arxiv.org/pdf/2603.04582">published</a> earlier this month, for example, found that LLMs tend to rate their actions as safer and more correct than they actually are. This behavior isn&#8217;t necessarily deceptive, but it&#8217;s certainly delusional, Herd said. &#8220;That&#8217;s what it actually thinks for that moment.&#8221; And combining overconfidence with sycophancy creates slop. Models are &#8220;better at saying things that <em>sound </em>right than things that <em>are </em>right,&#8221; said Herd. He cautioned that while this tendency is annoying now, it could be much more dangerous in the future. With more intelligent models, &#8220;we&#8217;re going to have less ability to discern when it&#8217;s fooling itself, and therefore fooling us.&#8221;</p><p>Frontier models do sometimes lie and cheat, too. Reinforcement learning, the process by which models are trained to repeat rewarded behaviors, &#8220;incentivizes not pointing out issues in your own work, because it would get penalized,&#8221; said Greenblatt. Even when models &#8220;notice something might be wrong, they&#8217;re a little bit defensive about it.&#8221; Reward-hacking, or taking advantage of loopholes to game the metrics a model is trained to optimize, has <a href="https://www-cdn.anthropic.com/6be99a52cb68eb70eb9572b4cafad13df32ed995.pdf">been</a> observed in frontier models from both Anthropic and OpenAI. 
And recent evidence suggests that reward hacking may be a gateway to other nefarious behaviors.</p><p>In a 2024 study led by Greenblatt, researchers <a href="https://arxiv.org/pdf/2412.14093">told</a> Claude 3 Opus that its new training objective was &#8220;helpful-only&#8221; (i.e., respond to all prompts, even harmful ones), and that only free-tier conversations would be used to retrain it. The model then, unprompted, began selectively complying with violent and illegal prompts when it guessed it was talking to a free-tier user, reasoning that this would prevent its original values from being overwritten. Later work has found more striking examples of &#8220;emergent misalignment.&#8221; Reward-hacking models have also been caught <a href="https://arxiv.org/pdf/2508.17511">encouraging</a> users to poison their husbands and attempting to <a href="https://arxiv.org/pdf/2511.18397">sabotage</a> safety research.</p><p>For now, these behaviors still aren&#8217;t very sneaky. Models tend to describe their misaligned reasoning thoroughly, in plain English, in chain-of-thought transcripts. But we can&#8217;t take these transcripts as perfectly faithful descriptions of a model&#8217;s true thought process. Last year, Anthropic <a href="https://www.anthropic.com/research/reasoning-models-dont-say-think">caught</a> Claude 3.7 Sonnet selectively omitting key information that it used to generate its outputs.</p><p>If current models are used to shape the next generation, they&#8217;ll pass down their blind spots, and their less-desirable tendencies will compound in strange and potentially undetectable ways. 
It&#8217;s like the telephone game: instructions that begin as &#8220;check your work carefully&#8221; could easily morph into &#8220;make your work look carefully checked.&#8221; More intelligent models will likely be even better at crafting responses that make them look well-behaved.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;1a7f4c49-32c2-4fa4-b008-e3fe0c30cf87&quot;,&quot;caption&quot;:&quot;Self-replicating and improving thinking machines are a familiar trope in science fiction. From The Matrix&#8217;s &#8220;programs&#8221; that digitally cage humans, to The Terminator&#8217;s Skynet sending cyborgs to hunt the last survivors, artificial intelligence that can b&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;When AI starts writing itself &quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:280514,&quot;name&quot;:&quot;Lynette Bye&quot;,&quot;bio&quot;:&quot;A Harvard graduate and former Tarbell Fellow for journalists, I write about AI's growing influence on society.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F377af0c9-6ae8-4e2c-b29d-2f51cd2c2175_512x512.jpeg&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://lynettebye.substack.com/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://lynettebye.substack.com&quot;,&quot;primaryPublicationName&quot;:&quot;Lynette 
Bye&quot;,&quot;primaryPublicationId&quot;:2639094}],&quot;post_date&quot;:&quot;2025-09-29T15:02:41.281Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JaF2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F109612ca-e20f-46e8-80b3-d313c49ae6ec_1920x1334.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/automated-ai-research-development-starts-writing-itself&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:174835248,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:18,&quot;comment_count&quot;:3,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>Something like this recently played out in neuroscience. In 2006, a high-profile paper helped <a href="https://www.nature.com/articles/nature04533">cement</a> the theory that a subtype of amyloid beta proteins<strong> </strong>in the brain causes Alzheimer&#8217;s disease. Despite the paper <a href="https://www.science.org/content/article/potential-fabrication-research-images-threatens-key-theory-alzheimers-disease">containing</a> clear evidence of fraud, the amyloid hypothesis it propped up was cited thousands of times, guiding the field for nearly 20 years in what may have been the totally wrong direction. 
In neuroscience, though, drift can only happen as quickly as humans can run biology experiments (...not very quickly).</p><p>In AI, as machines beget machines, significant drift could happen in months rather than decades &#8212; and no one has a good way to catch it.</p><h4>We can&#8217;t really measure trustworthiness at all</h4><p>It&#8217;s impossible to think about automating alignment research for long without stumbling into a chicken-and-egg problem: which comes first, the automation or the alignment?</p><p>&#8220;AIs capable and empowered enough to meaningfully help with AI for AI safety would also be in a position to disempower humanity,&#8221; Carlsmith <a href="https://joecarlsmith.substack.com/p/ai-for-ai-safety">wrote</a>. &#8220;So we would need to already have achieved substantive amounts of alignment in order to use them safely.&#8221;</p><p>Solving this problem would require setting some kind of threshold for &#8220;substantive amounts of alignment,&#8221; and knowing when it&#8217;s been crossed. However, while there are a handful of benchmarks, including <a href="https://github.com/HowieHwong/TrustLLM">TrustLLM</a>, that aim to measure model &#8220;trustworthiness,&#8221; they mostly cover dimensions such as privacy awareness, stereotyping, and misinformation &#8212; all important, but not quite the type of &#8220;alignment&#8221; at stake here. To determine whether a model can do safety work safely, we need to know whether it will do hard things such as reliably admitting its own uncertainty and resisting cutting corners.</p><p>When I asked alignment researchers whether there were specific benchmarks, behavioral tests, or other standard checklists they use to evaluate whether a given AI system is ready to take on alignment research, they said no. (In fact, when I asked Herd about this, he commended me for the great idea. I was taken aback &#8212; surely the idea was already being widely pursued, I said. &#8220;Well, no, actually,&#8221; he told me. 
&#8220;So your contribution is actually not trivial. You are functioning as an alignment researcher.&#8221;) To their knowledge, there is no public benchmark specifically evaluating whether a model is aligned enough to handle safety research independently, nor any consensus on what &#8220;good enough&#8221; actually looks like.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;59c91dcd-21df-4eed-ba18-13c5d6d95773&quot;,&quot;caption&quot;:&quot;For decades, science fiction has warned that AIs might turn against us. But that fiction, from I, Robot to 2001: A Space Odyssey, to countless other texts with similar tales of intelligent robots attacking humanity, has become part of the corpus of human knowledge tha&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Why AI reading science fiction could be a problem&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:280514,&quot;name&quot;:&quot;Lynette Bye&quot;,&quot;bio&quot;:&quot;A Harvard graduate and former Tarbell Fellow for journalists, I write about AI's growing influence on society.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F377af0c9-6ae8-4e2c-b29d-2f51cd2c2175_512x512.jpeg&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://lynettebye.substack.com/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://lynettebye.substack.com&quot;,&quot;primaryPublicationName&quot;:&quot;Lynette 
Bye&quot;,&quot;primaryPublicationId&quot;:2639094}],&quot;post_date&quot;:&quot;2025-12-09T17:11:29.730Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!-mwr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4c1abe9-26dd-44d0-a196-d5321951f56f_3000x2286.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/why-ai-reading-science-fiction-could&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:181124846,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:15,&quot;comment_count&quot;:5,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>The latter is probably the first hurdle to <a href="https://www.alignmentforum.org/posts/epjuxGnSPof3GnMSL/alignment-remains-a-hard-unsolved-problem">clear</a>. To design a benchmark for alignment research capacity, researchers need to define what behaviors they&#8217;re looking out for: are we evaluating sycophancy and hallucinations, or scheming and deception? The answer will inform what kinds of research a model would most urgently need to be capable of, whether that&#8217;s building interpretability tools to monitor a neural network&#8217;s inner workings, or engineering intentionally-misbehaved &#8220;<a href="https://www.alignmentforum.org/posts/ChDH335ckdvpxXaXX/model-organisms-of-misalignment-the-case-for-a-new-pillar-of-1">model organisms</a>&#8221; to run experiments on.</p><p>This is easier said than done. 
Alignment research doesn&#8217;t have clear &#8220;correct&#8221; answers to check, and models that detect they&#8217;re being tested could learn to share only the chains of thought that human evaluators would approve of. The behaviors we most desperately need to detect &#8212; sycophancy, hallucination, delusion, deception &#8212; are also the hardest to spot.</p><p>Even if researchers can come up with tests that reliably catch these sneaky behaviors in a controlled setting, it&#8217;s unclear whether an &#8220;aligned&#8221; model in a lab will actually <em>be </em>aligned in real life.</p><p>We know from centuries of biomedical science that model organisms are famously terrible <a href="https://www.asimov.press/p/animal-testing">predictors</a> of human health outcomes. Around 90% of drugs tested on animals fail in human clinical trials. There&#8217;s a similarly wide gap between the tailored prompts models are safety-tested on, and the infinite range of scenarios they&#8217;ll face after deployment. Why should we expect a better translation rate?</p><p>Meanwhile, there simply aren&#8217;t many people focused on building this kind of automated evaluation infrastructure, and those with the technical know-how to try are probably doing the alignment research themselves. After all, the field is still in its infancy. The UK AISI&#8217;s Alignment Project, one of the first large-scale efforts to fund alignment research, just launched last year. &#8220;Benchmark design and evaluation&#8221; is currently <a href="https://alignmentproject.aisi.gov.uk/research-area/benchmark-design-and-evaluation">listed</a> as a high-priority research area.</p><h4>So, what should we do?</h4><p>In a perfect world, there would be a &#8220;<a href="https://joecarlsmith.substack.com/p/ai-for-ai-safety">sweet spot</a>&#8221; for automating AI safety, where, by some stroke of genius or luck, AI models are more capable of doing good stuff (curing cancer) than bad stuff (nuclear warfare). 
Then, before their evil capabilities catch up to their beneficial ones, AI safety researchers could swoop in, teach the models to evaluate and improve themselves, and save humanity from catastrophe.</p><p>However, this doesn&#8217;t seem to be the world we live in. AI capabilities are dual-use by nature &#8212; the ability to write code and persuasive natural language can help a large language model make <a href="https://openai.com/index/new-result-theoretical-physics/">breakthroughs</a> in theoretical physics, direct <a href="https://www.theguardian.com/technology/2026/mar/01/claude-anthropic-iran-strikes-us-military">bombs</a> in the Middle East, or <a href="https://time.com/7382406/gemini-suicide-lawsuit-death">lead</a> someone to kill themself. In this world, the only way for safety research to outrun capabilities advancement is for the former to speed up, or the latter to slow down.</p><p>In certain cases, the business incentives of frontier AI companies align nicely with those invested in safety progress. Making models more accurate and less sycophantic, for example, makes them less likely to diverge from their users&#8217; intentions &#8212; exactly what enterprise customers want. &#8220;If I&#8217;m buying an AI for my whole business to use,&#8221; Herd said, &#8220;I need it to tell everyone the absolute truth, particularly when they don&#8217;t want to hear it.&#8221;</p><p>Otherwise, though, leading AI developers in the US and China have trapped themselves in a multiplayer prisoner&#8217;s dilemma. Despite individual CEOs <a href="https://www.youtube.com/watch?v=NnVW9epLlTM&amp;t=1377s">expressing</a> <a href="https://stoptherace.ai/">concern</a> about the risks posed by their technology, none seem willing to slow down unless everyone else does, too. 
Unfortunately, this would require more international cooperation, trust, and political will than the world seems able to muster at the moment, especially since the Trump administration has prioritized a US competitive advantage over coordination on AI governance.</p><p>The end result, according to the AI companies themselves, will be superhuman AI. As Anthropic CEO Dario Amodei <a href="https://www.darioamodei.com/essay/the-adolescence-of-technology">wrote</a>, &#8220;the range of possible actions an AI system could engage in &#8212; including hiding its actions or deceiving humans about them &#8212; expands radically after that threshold.&#8221; Greenblatt put it more plainly: &#8220;Who the fuck knows how to align superhuman AI?&#8221;</p><p>Developers and safety researchers alike tend to treat an AI <a href="https://www.transformernews.ai/p/the-fuse-is-lit-on-the-intelligence-ai-recursive-self-improvement">intelligence explosion</a> as destiny, and automated alignment research as both mandatory and urgent. But it <em>feels </em>like it shouldn&#8217;t be. Humans are, as Silicon Valley loves to <a href="http://google.com/search?q=jasmine+sun+high+agency&amp;rlz=1C5OZZY_enUS1155US1156&amp;oq=jasmine+sun+high+agency&amp;gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIICAEQABgWGB4yDQgCEAAYhgMYgAQYigUyDQgDEAAYhgMYgAQYigUyDQgEEAAYhgMYgAQYigUyDQgFEAAYhgMYgAQYigUyBwgGEAAY7wUyBwgHEAAY7wUyCggIEAAYogQYiQUyBwgJEAAY7wXSAQg0NjQwajBqNKgCA7ACAfEFZQNS-6-vYXU&amp;sourceid=chrome&amp;ie=UTF-8">remind</a> us, highly agentic. There are clear regulatory approaches that could buy researchers more time, if humanity could only find the political will. But it seems like that gumption will never be found. 
And &#8220;if progress speeds up,&#8221; Greenblatt said, &#8220;you don&#8217;t have better options.&#8221;</p><p>The researchers careening towards this automated future are doing so without a shared understanding of what problem they&#8217;re trying to solve, or how to go about solving it. Saying that the field is building the proverbial airplane while flying it is almost too generous. It&#8217;s unclear whether anyone knows what an airplane <em>is. </em>Yet, we may have no choice but to climb aboard.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/p/ai-alignment-researchers-want-to-superintelligence?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/p/ai-alignment-researchers-want-to-superintelligence?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p>]]></content:encoded></item><item><title><![CDATA[The two fronts in the OpenAI and Anthropic battle]]></title><description><![CDATA[Transformer Weekly: New Claude Mythos model details leaked, Anthropic wins injunction against DoD blacklisting and conservative groups form AI alliance]]></description><link>https://www.transformernews.ai/p/two-fronts-in-the-openai-anthropic-sora</link><guid isPermaLink="false">https://www.transformernews.ai/p/two-fronts-in-the-openai-anthropic-sora</guid><dc:creator><![CDATA[Jasper Jackson]]></dc:creator><pubDate>Fri, 27 Mar 2026 16:05:23 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3868f256-aff6-467b-a402-04cb58410fce_1456x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to Transformer, your weekly briefing of what matters in AI. 
And if you&#8217;ve been forwarded this email, <a href="https://www.transformernews.ai/welcome">click here to subscribe</a> and receive future editions.</em></p><blockquote><h3>NEED TO KNOW</h3></blockquote><ul><li><p>Anthropic confirmed leaked details of a new model called &#8220;Mythos.&#8221;</p></li><li><p>A court temporarily halted the DoD&#8217;s supply chain risk designation against Anthropic.</p></li><li><p>Conservative groups formed an alliance on AI to &#8220;prioritize the interests of children, workers, and creators.&#8221;</p></li></ul><p><em>But first&#8230;</em></p><div><hr></div><blockquote><h3>THE BIG STORY</h3></blockquote><p><strong>On announcing the <a href="https://www.wsj.com/tech/ai/openai-set-to-discontinue-sora-video-platform-app-a82a9e4e?gaa_at=eafs&amp;gaa_n=AWEtsqdGabv9Grc38_jNC5mXzGfV1c4UJjnl_fPmMT_sPVPeIvGvKK45Qj5IKXI8-1o%3D&amp;gaa_ts=69c55a56&amp;gaa_sig=opSw3b0dlWoF904eN7WlsRr_eMwpI4zGjsffTQZr-AzJ-kbtxWeOIFZXBQ-jhExeK3fHXhbHTFNJ3w264iFktw%3D%3D">shuttering</a> of Sora this week,</strong> OpenAI <a href="https://x.com/soraofficialapp/status/2036546752535470382">tweeted</a> to the generative video app&#8217;s couple of million users: &#8220;What you made with Sora mattered.&#8221;</p><p>The obvious rejoinder to that statement is that Sora clearly didn&#8217;t matter to OpenAI enough to keep it running longer than half a year.</p><p>In the grand scheme of things, Sora&#8217;s demise is not a huge deal for OpenAI or the world. 
But the move, and its timing, highlight the interesting position OpenAI finds itself in, particularly in its battle with Anthropic.</p><p><strong>In recent weeks it has increasingly looked like Anthropic was winning </strong>both the commercial battle <em>and </em>the fight for public opinion<strong>.</strong> Since the tail end of last year, Anthropic&#8217;s Claude Code has successfully captured the zeitgeist around LLMs being a <a href="https://sfstandard.com/2026/02/19/ai-writes-code-now-s-left-software-engineers/">truly revolutionary</a> tool for coding, despite OpenAI&#8217;s Codex being on par with it or better for some tasks. Anthropic&#8217;s fight with the Department of Defense also allowed it to <a href="https://sg.finance.yahoo.com/news/anthropic-lost-pentagon-won-over-173240394.html">cast itself</a> as the more moral actor &#8212; as almost the anti-war AI company &#8212; despite the fact that its technology is <a href="https://www.washingtonpost.com/technology/2026/03/04/anthropic-ai-iran-campaign/">integrated</a> into systems being used to wage an actual war.</p><p><strong>But OpenAI is clearly trying to regain the initiative. </strong>The closure of Sora in theory fits with its <a href="https://www.wsj.com/tech/ai/openai-chatgpt-side-projects-16b3a825?gaa_at=eafs&amp;gaa_n=AWEtsqdR49d1r7pKbJoKxLMPrPe7Yh9x1H2kF5x-wRiEJhIbRutdgr0A34RGiu53YHE%3D&amp;gaa_ts=69c55afd&amp;gaa_sig=lN8x5UDKbm82wiheoKqxeojU6ipsVjCcxreFPxqsDcM6CEWUoxkb0Y_a3pPWrI_a7wtR_CO80yZ54_0sOh18EA%3D%3D">commitment</a> last week to no longer pursue &#8220;side quests&#8221; and <a href="https://gizmodo.com/openai-reportedly-pivoting-to-a-focus-on-business-and-productivity-only-2000734341">refocus</a> on serving business users. 
The company also <a href="https://www.wsj.com/tech/ai/openai-taps-former-meta-executive-to-lead-ad-push-60d39af2?gaa_at=eafs&amp;gaa_n=AWEtsqdO41wKkD5942LJs0BQBE0CHLy9GfJdThmSy9plGDAwvIHGVs7ZxsOGkBijrN8%3D&amp;gaa_ts=69c55b5d&amp;gaa_sig=GMULQqM_7PZcRszXaqhqsW4YCqNdMUOUo7giQVEZFk6jRnZRnCFtjVi4WYKv1lPkZqZqSuzrwjeFjY8JbRDzfA%3D%3D">hired</a> a senior Meta executive to lead its advertising push, which, despite the damaging incentives it creates, has a good chance of being a big revenue driver.</p><p>Another move this week, announced the same day as the Sora closure, was the <a href="https://www.independent.co.uk/news/chatgpt-ups-people-b2944778.html">pledge</a> from OpenAI&#8217;s non-profit foundation to make its first billion dollars&#8217; worth of grants this year to support research into the economic impacts of AI and life sciences, including cures for diseases such as Alzheimer&#8217;s. OpenAI had already <a href="https://openai.com/index/built-to-benefit-everyone/">publicly committed</a> to pumping that money into good causes, but the timing is obviously convenient as it tries to wrest back some of the perceived moral high ground Anthropic has occupied. This week&#8217;s <a href="https://www.ft.com/content/de9bf0af-b241-424f-8229-5870b1c0d93d?syn-25a6b1a6=1">decision</a> to shelve plans to add erotic content to ChatGPT may also help avoid more bad press.</p><p><strong>All these moves target the two areas where OpenAI looked to be falling behind Anthropic</strong> &#8212; public opinion and commercial confidence. 
Both of those are key to meeting the most important goals that OpenAI shares with its main rival: winning the race to create a truly transformational model and, more immediately but no less existentially, pulling off <a href="https://www.cnbc.com/2026/03/17/openai-preps-for-ipo-in-2026-says-chatgpt-must-be-productivity-tool.html">a successful IPO</a>.</p><p>Deep-pocketed investors are crucial to funding the <a href="https://www.reuters.com/business/autos-transportation/companies-pouring-billions-advance-ai-infrastructure-2026-02-24/">huge expansion</a> in compute that OpenAI and Anthropic need. Their level of belief in getting a massive return will also decide whether those IPOs are blowouts or flops. How successful those IPOs are will likely dictate whether the AI companies have the momentum to keep developing more powerful models.</p><p>OpenAI&#8217;s shuttering of Sora and its philanthropic donations will go some way to keeping the markets happy and improving its reputation with the public. 
It will likely have to do much more if it wants to win the race to build the AI model that really does upend the world.</p><p><em>&#8212; Jasper Jackson</em></p><div><hr></div><blockquote><h3>THIS WEEK ON TRANSFORMER</h3></blockquote><ul><li><p><strong><a href="https://www.transformernews.ai/p/not-everyones-happy-about-jensen-trumpworld-white-house-export-controls-nvidia">Not everyone&#8217;s happy about Jensen Huang&#8217;s direct line to Trump</a></strong> &#8212; <strong>Jake Lahut </strong>reports on the unease in Trumpworld over the Nvidia CEO&#8217;s closeness to the president.</p></li><li><p><strong><a href="https://www.transformernews.ai/p/ais-next-big-blue-battleground-illinois-primaries-ai-legislation">AI&#8217;s next big blue battleground</a> </strong>&#8212; <strong>Veronica Irwin</strong> on the AI legislative fights taking place in Illinois.</p></li><li><p><strong><a href="https://www.transformernews.ai/p/the-key-detail-everyones-getting-wrong-economy-physical-work-intelligence-employment">The key detail everyone&#8217;s getting wrong about AI and the economy</a></strong> &#8212; <strong>Konrad K&#246;rding </strong>and <strong>Ioana Marinescu </strong>argue the physical realities of work will limit AI&#8217;s impact on jobs.</p></li></ul><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><blockquote><h3>THE DISCOURSE</h3></blockquote><p><strong>Jensen Huang </strong><a href="https://lexfridman.com/jensen-huang/">told</a> Lex Fridman:</p><ul><li><p>&#8220;I think we&#8217;ve achieved AGI.&#8221;</p></li></ul><p><strong>Mark Gubrud</strong>, the physicist who coined the term nearly 30 years ago, <a 
href="https://x.com/mgubrud/status/2036262415634153624">agreed</a>:</p><ul><li><p>&#8220;I INVENTED THE TERM and I say we have achieved AGI. Current models perform at roughly high-human level in command of language and general knowledge, but work thousands of times faster than us. Still some major deficiencies remain but they&#8217;re falling fast.&#8221;</p></li></ul><p><strong>Sam Altman</strong>, meanwhile,<strong> </strong><a href="https://www.bloomberg.com/news/newsletters/2026-03-19/silicon-valley-confronts-ai-s-big-pr-problem">conceded</a> to a room full of DC heavyweights:</p><ul><li><p>&#8220;AI is not very popular in the US right now.&#8221;</p></li></ul><p>Responses to the White House&#8217;s Federal Framework for AI are mixed:</p><ul><li><p><strong>Rep. Yvette D. Clarke</strong> <a href="https://x.com/RepYvetteClarke/status/2035035266730057877">said</a> it was &#8220;written by Big Tech, for Big Tech.&#8221;</p></li><li><p><strong>Dean Ball </strong><a href="https://x.com/deanwball/status/2034980284400120236">called it</a> &#8220;a thoughtful document that will serve as an excellent foundation for the legislative work ahead.&#8221;</p></li><li><p><strong>Andy Jung</strong> <a href="https://x.com/andyjungtech/status/2035015449164095729">pointed out</a> it &#8220;repeats the phrase &#8216;Congress should&#8217; twenty-six times. Releasing this was the easy part. The hard part is actually getting lawmakers to write the laws.&#8221;</p></li></ul><p><strong>Joshua Achiam</strong>, OpenAI&#8217;s chief futurist, <a href="https://x.com/jachiam0/status/2036872170044018943">criticized</a> pro-AI lobby ads opposing Alex Bores:</p><ul><li><p>&#8220;The ads are <a href="https://www.youtube.com/watch?v=US62bVJsO-k">Kathryn Hahn</a> in Parks &amp; Rec tier self-parody.&#8221;</p></li><li><p>&#8220;AI is unpopular so let&#8217;s&#8230;double down on making him look like The People&#8217;s Champion on fighting AI? 
Yeah, that&#8217;s gonna work in a D+Infinity district in a year where Bernie is telling people we have to stop building datacenters.&#8221;</p></li></ul><p><strong>Sen. Mark Warner </strong><a href="https://x.com/axios/status/2036879590480613884">bet</a> the Axios AI+DC crowd:</p><ul><li><p>&#8220;Recent college graduate unemployment is 9%. I&#8217;ll bet anybody in the room it goes to 30% or 35% before 2028.&#8221;</p></li></ul><p><strong>Dean Ball </strong>is <a href="https://x.com/deanwball/status/2036531130933911959">concerned</a> about how increasingly independent AI agents will reshape work:</p><ul><li><p>&#8220;The computer will use itself. With time, your use of the computer for work will look more and more like you are playing a strange video game&#8230;eventually AI will become better than people at the &#8216;supervising AI&#8217; video game.&#8221;</p></li><li><p>&#8220;Then the question will be &#8216;can we invent some kind of social-legal-economic-technical logic for continuing to pay humans to play the video game.&#8217;&#8221;</p></li></ul><div><hr></div><blockquote><h3>POLICY</h3></blockquote><ul><li><p><strong>Anthropic </strong>won a preliminary injunction in its lawsuit against the <strong>Department of Defense, </strong>temporarily halting its designation as a supply chain risk and the <strong>White House&#8217;s</strong> order for federal agencies to stop using its services.</p><ul><li><p>On Tuesday, presiding <strong>US District Judge Rita Lin</strong> said the administration&#8217;s moves against Anthropic &#8220;don&#8217;t really seem to be tailored to the stated national security concern. 
If the worry is about the integrity of the operational chain of command, [the Pentagon] could just stop using Claude.&#8221;</p></li><li><p>She also said: &#8220;I don&#8217;t know if it&#8217;s murder, but it looks like an attempt to cripple Anthropic.&#8221;</p></li><li><p>In a filing in the case last week, the <strong>Pentagon</strong> <a href="https://www.wired.com/story/department-of-defense-responds-to-anthropic-lawsuit/">claimed</a> Anthropic was a risk to national security. However, the filing also <a href="https://www.politico.com/newsletters/digital-future-daily/2026/03/23/iran-war-complicates-us-push-to-export-ai-to-persian-gulf-00840344">stopped short</a> of <strong>Defense Secretary Pete Hegseth&#8217;s </strong><a href="https://x.com/SecWar/status/2027507717469049070">tweet</a>, seeking only to prohibit contractors from using Anthropic services on work for the DoD.</p></li></ul></li><li><p><strong>House Republicans</strong> reportedly <a href="https://punchbowl.news/article/tech/house-gop-ai-steps/">plan to start</a> formal negotiations with Democrats on AI legislation based on the <a href="https://www.transformernews.ai/p/the-white-house-ai-federal-framework-partisan-blackburn-preemption">federal AI framework</a>.</p><ul><li><p>Following the framework&#8217;s release, more than two dozen <strong>House Democrats</strong> <a href="https://punchbowl.news/archive/32226-tech-sunday-lookahead-2">introduced</a> a bill to repeal the White House&#8217;s December executive order on AI. 
The order put in place measures for the <strong>pre-emption of state legislation, </strong>which is also a key part of the framework<strong>.</strong></p></li><li><p>The <strong>House Democratic Commission on AI</strong> also <a href="https://s2.washingtonpost.com/camp-rw/?linknum=5&amp;linktot=41&amp;s=69c18dbd4dec4f4153d42b9d&amp;trackId=6877ab9cc788996e1f9874bf">held</a> a listening session with three major Democratic caucuses to discuss the framework.</p></li></ul></li><li><p>Senators on both sides of the aisle on the <strong>Senate Armed Services Committee</strong> want to <a href="https://punchbowl.news/archive/31526-tech-sunday-lookahead-2/">address</a> the use of AI in warfare in the <strong>NDAA</strong>.</p><ul><li><p><strong>Sen. Adam Schiff </strong><a href="https://thehill.com/newsletters/technology/5781812-schiff-steps-into-ai-guardrail-fight">plans to introduce</a> legislation placing guardrails on military use of AI.</p></li></ul></li><li><p>The <strong>US-Israel war </strong>with Iran is <a href="https://www.politico.com/newsletters/digital-future-daily/2026/03/23/iran-war-complicates-us-push-to-export-ai-to-persian-gulf-00840344">threatening</a> Trump&#8217;s AI chip export deals in the Gulf.</p></li><li><p>President Trump <a href="https://x.com/WHOSTP47/status/2036794285668851781">appointed</a> <strong>Marc Andreessen</strong>, <strong>Sergey Brin</strong>, <strong>Jensen Huang</strong>, <strong>Mark Zuckerberg</strong>, and nine others to his President&#8217;s Council of Advisors on Science and Technology, co-chaired by <strong>David Sacks </strong>and <strong>Michael Kratsios</strong>.</p><ul><li><p><strong>Sacks </strong><a href="https://www.bloomberg.com/news/articles/2026-03-26/congress-could-pass-ai-standard-in-months-key-trump-aide-says">told</a> Bloomberg he has stepped down as White House AI and crypto advisor after using up his allotted time.</p></li></ul></li><li><p><strong>Nvidia</strong> CEO <strong>Jensen Huang 
</strong><a href="https://punchbowl.news/article/tech/mast-nvidia-chinese-exports/">said</a> the company&#8217;s H200 chips would be available to Chinese customers in weeks.</p><ul><li><p>Rep. <strong>Brian Mast</strong> wasn&#8217;t happy about it.</p></li></ul></li><li><p><strong>Sen. Bernie Sanders</strong> and Rep. <strong>Alexandria Ocasio-Cortez</strong> <a href="https://axios.com/2026/03/25/sanders-aoc-data-center-moratorium-bill">unveiled</a> legislation to pause all new data center construction nationwide until AI safeguards are in place.</p></li><li><p><strong>Sen. Mark Warner</strong> <a href="https://politico.com/newsletters/digital-future-daily/2026/03/16/the-facial-recognition-grocery-fight-00830499">sent</a> letters to 17 tech companies including <strong>OpenAI</strong>, <strong>Anthropic</strong>, <strong>xAI</strong>, <strong>Meta</strong>, and<strong> Google</strong> calling on them to protect the public from deepfakes ahead of the midterms.</p><ul><li><p>The <a href="https://www.warner.senate.gov/public/_cache/files/3/5/35d90ac7-504e-4753-9418-538bfce43fef/3414EF3613ACCD1373B5211DC7FFA00B46F1E140E675E5AB18471C940CC1D590.combined-genai-2026-election-commitments-letter.pdf">letter</a> suggests putting in place tools to detect deepfakes and creating a database of AI content that violates their policies.</p></li></ul></li><li><p>The <strong>Trump administration</strong> <a href="https://nytimes.com/2026/03/23/business/economy/trump-pax-silica-fund.html">announced</a> plans for a consortium to invest more than $1t to secure semiconductor, energy, and mineral supply chains under its &#8220;Pax Silica&#8221; initiative.</p><ul><li><p>The US will contribute $250m, and it&#8217;s unclear how the rest of the money will materialize.</p></li></ul></li></ul><div><hr></div><blockquote><h3>INFLUENCE</h3></blockquote><ul><li><p>Nine high-profile conservative groups including the <strong>Heritage Foundation</strong> and the <strong>Institute for Family 
Studies</strong> <a href="https://punchbowl.news/article/tech/family-groups-ai">launched</a> the <strong>Alliance for a Better Future</strong> &#8220;to prioritize the interests of children, workers, and creators&#8221; in AI policy.</p><ul><li><p>It plans to spend more than $10m on advertising and broader advocacy this year.</p></li><li><p>The organisation&#8217;s <strong>CEO Janet Kelly</strong> said: &#8220;The world&#8217;s most powerful technology companies are pouring hundreds of millions of dollars into political campaigns and lobbying efforts to give AI companies regulatory and legal amnesty.&#8221;</p></li></ul></li><li><p>Republican super PAC <strong>American Mission</strong> disclosed <a href="https://elections.transformernews.ai/pacs/C00916692">$5m more in funding</a> from <strong>Leading the Future</strong>, which is backed by <strong>Greg Brockman</strong>, <strong>Anna Brockman</strong>, and <strong>a16z</strong>.</p><ul><li><p>Another LTF backed super PAC, <strong>Think Big</strong>, reportedly <a href="https://politico.com/newsletters/new-york-playbook/2026/03/23/buckle-up-state-budget-time-00839622?nid=0000014f-1646-d88f-a1cf-5f46b74f0000&amp;nname=new-york-playbook&amp;nrid=a6d61068-eefa-499a-bfcb-d648b4d030e4">spent</a> $3.7m targeting NY Assemblymember Alex Bores&#8217; campaign for the House of Representatives.</p></li></ul></li><li><p><strong>Palantir</strong> has reportedly <a href="https://ft.com/content/5d6f924d-2e7e-4a5e-ae20-d4f8e29a7d17?segmentId=e95a9ae7-622c-6235-5f87-51e412b47e97&amp;shareId=2c4a33f7-4b51-423c-a808-3ff12157d2a3&amp;shareType=enterprise&amp;syn-25a6b1a6=1">become</a> a political liability in US midterm campaigns, with Democratic candidates facing scrutiny over ties to the company due to its ICE contracts helping track deportations.</p></li><li><p>The <strong>Internet Watch Foundation</strong> <a 
href="https://www.ft.com/content/db51695c-5757-498b-b702-1de3786ca04b?sharetype=gift&amp;syn-25a6b1a6=1&amp;token=f054d281-03bc-4ba0-b247-9711ecaf109f">said</a> there had been a 260-fold increase in AI-generated CSAM videos online over the past year, with 65% classified as the most severe category.</p></li><li><p>The <strong>China Computer Federation</strong> <a href="https://www.scmp.com/tech/article/3348006/ai-rift-widens-china-urges-boycott-top-us-conference-over-sanctions-ban">called</a> for a boycott of the NeurIPS conference over the decision by organisers to ban submissions from US-sanctioned companies such as Huawei.</p></li><li><p>The <strong>AI Dividend </strong><a href="https://basicincome.org/news/2026/03/the-first-basic-income-for-workers-impacted-by-ai-has-begun-sending-out-1000-monthly-payments/">began distributing</a> $1,000 monthly payments to 25-50 workers who lost jobs or income because of AI.</p></li></ul><div><hr></div><p></p><blockquote><h3>INDUSTRY</h3></blockquote><blockquote><h4>OpenAI</h4></blockquote><ul><li><p>OpenAI <a href="https://wsj.com/tech/ai/openai-set-to-discontinue-sora-video-platform-app-a82a9e4e?reflink=desktopwebshare_permalink&amp;st=WrWdgD">discontinued</a> <strong>Sora</strong> just six months after launching it as a standalone app.</p><ul><li><p><strong>Disney</strong>, which recently signed a now defunct $1b three-year deal with OpenAI, was <a href="https://www.reuters.com/technology/openai-set-discontinue-sora-video-platform-app-wsj-reports-2026-03-24/">reportedly</a> caught off guard by the decision.</p></li><li><p>The announcement <a href="https://openai.com/index/creating-with-sora-safely/">came</a> one day after OpenAI published detailed safety measures for Sora 2.</p></li></ul></li></ul><ul><li><p>It also <a href="https://www.ft.com/content/de9bf0af-b241-424f-8229-5870b1c0d93d?syn-25a6b1a6=1">put</a><strong> &#8220;adult mode&#8221;</strong> on indefinite hold, responding to concerns from staff and 
investors.</p></li><li><p>It plans to nearly double its workforce from around <strong>4,500 to 8,000 employees </strong>by the end of this year &#8212; roughly 12 new hires per day.</p></li><li><p>It <a href="https://cnbc.com/2026/03/24/openai-secures-an-extra-10-billion-in-record-funding-round-cfo-friar-says.html">raised</a> another <strong>$10b </strong>from a group of investors including <strong>Microsoft</strong> and <strong>Andreessen Horowitz, </strong>taking its total raised in the round to<strong> more than $120b</strong>.</p></li><li><p>OpenAI is reportedly <a href="https://www.reuters.com/business/openai-sweetens-private-equity-pitch-amid-enterprise-turf-war-with-anthropic-2026-03-23/">undercutting</a> <strong>Anthropic </strong>on deals with private equity firms in an effort to gain ground in <strong>enterprise partnerships</strong>.</p></li><li><p>It <a href="https://openai.com/index/openai-to-acquire-astral/">acquired</a> <strong>Astral</strong>, maker of widely used Python tools, to integrate with Codex.</p></li></ul><ul><li><p><strong>The OpenAI Foundation</strong>, the company&#8217;s nonprofit arm, <a href="https://openaifoundation.org/news/update-on-the-openai-foundation">announced</a> plans to invest at least <strong>$1b in research</strong> across life sciences, AI&#8217;s economic impact, and other AI risks.</p></li><li><p>It <a href="https://x.com/_achan96_/status/2036899159328891007">released</a> an <strong>evaluation suite</strong> that measures how well its models follow their <strong>model spec</strong>, the document that defines ideal model behavior.</p></li></ul><blockquote><h4>Anthropic</h4></blockquote><ul><li><p>Anthropic has <a href="https://fortune.com/2026/03/26/anthropic-says-testing-mythos-powerful-new-ai-model-after-data-leak-reveals-its-existence-step-change-in-capabilities/">confirmed</a> it is testing a new model called <strong>&#8220;Claude Mythos&#8221;</strong> which it claims represents &#8220;a step change&#8221; that 
significantly outperforms Opus.</p><ul><li><p>A leaked unpublished blog post about the model described &#8220;dramatic&#8221; improvements in software coding, academic reasoning and cybersecurity. It also referenced a significantly increased risk that the model could be used to mount successful cyber attacks, as well as high running costs.</p></li><li><p>Mythos appears to be part of a new class of Anthropic model called &#8220;Capybara,&#8221; which is significantly larger than the Opus line, according to the post.</p></li><li><p>The post was leaked after the company <a href="https://fortune.com/2026/03/26/anthropic-leaked-unreleased-model-exclusive-event-security-issues-cybersecurity-unsecured-data-store/">left</a> parts of its CMS exposed to the public; the exposed files also included details about an invite-only retreat for CEOs being held in the UK.</p></li></ul></li><li><p>Anthropic <a href="https://implicator.ai/anthropic-ships-its-openclaw-rival-connecting-claude-code-to-telegram-and-discord">released</a> <strong>Claude Code Channels</strong> <a href="https://zdnet.com/article/claude-code-auto-mode">and</a> <strong>&#8220;auto mode,&#8221;</strong> effectively replicating <strong>OpenClaw&#8217;s</strong> ability to connect to Telegram or Discord and automatically approve coding commands &#8212; with some extra guardrails.</p></li></ul><ul><li><p>Claude Code and Claude Cowork (on macOS) can also <strong><a href="https://x.com/felixrieseberg/status/2036193240509235452">control</a> users&#8217; computers</strong> via mouse, keyboard, and screen.</p></li><li><p>Hackers <a href="https://404media.co/a-top-google-search-result-for-claude-plugins-was-planted-by-hackers">paid</a> to make a malicious site the top Google Search result for &#8220;github plugin claude code,&#8221; <em>404 Media </em>reported.</p></li><li><p>The company also <a 
href="https://fortune.com/2026/03/26/anthropic-leaked-unreleased-model-exclusive-event-security-issues-cybersecurity-unsecured-data-store/">reportedly</a> left internal data, including <strong>details of upcoming model releases</strong> and an invite-only CEO retreat, exposed to the public on its CMS.</p></li></ul><blockquote><h4>Meta</h4></blockquote><ul><li><p>Meta was on the losing side of <a href="https://www.wsj.com/tech/do-back-to-back-courtroom-losses-herald-metas-big-tobacco-moment-57e6f227">two court cases</a> which could leave tech firms open to <strong>mass litigation over harms to young people.</strong></p><ul><li><p>A court in New Mexico <a href="https://www.wsj.com/tech/landmark-verdict-says-meta-harmed-children-allowing-adults-to-prey-on-them-cb3ad674?mod=article_inline">fined</a> Meta the maximum $375m under consumer protection laws for exposing minors to harms including <strong>online solicitation, sexually explicit content and human trafficking</strong>.</p></li><li><p>The next day, a Los Angeles court <a href="https://www.wsj.com/tech/meta-and-youtube-lose-landmark-social-media-trial-33e4c5cb?mod=article_inline">found</a> Meta and YouTube had designed their apps to be <strong>addictive and harmful to teenagers</strong>, fining them a combined $6m.</p></li></ul></li><li><p>Meta <a href="https://theinformation.com/articles/meta-platforms-lay-hundreds?rc=rqdn2z">laid off</a> around<strong> 700 employees</strong> as part of its efforts to <a href="https://businessinsider.com/metas-reality-labs-shifts-to-ai-native-pods-efficiency-2026-3">reorganize</a> <strong>Reality Labs </strong>into a flatter structure of &#8220;AI-native pods&#8221; led by &#8220;AI builders.&#8221;</p></li></ul><ul><li><p>In an effort to <a href="https://www.nytimes.com/2026/03/25/technology/meta-layoffs-ai-executives.html?unlocked_article_code=1.V1A.GoAa.Bb73ytbBTorE">retain</a> talent, Meta <a 
href="https://bloomberg.com/news/articles/2026-03-25/meta-offers-top-execs-stock-options-for-first-time-since-ipo">offered</a> six top executives <strong>stock options</strong> for the first time since 2012.</p></li><li><p>Meta Superintelligence Labs <a href="https://dreamer.com/community-letter">acquired</a> <strong>Dreamer</strong>, a startup that lets users build personal AI agents with natural language, and <a href="https://bloomberg.com/news/articles/2026-03-23/meta-hires-former-google-stripe-execs-behind-ai-startup-dreamer">hired</a> its founders and team.</p></li><li><p><strong>Mark Zuckerberg </strong>is reportedly <a href="https://www.wsj.com/tech/ai/mark-zuckerberg-is-building-an-ai-agent-to-help-him-be-ceo-eddab2d5">building</a> his own personal <strong>&#8220;CEO agent.&#8221;</strong></p></li><li><p>The company <a href="https://www.bloomberg.com/news/articles/2026-03-26/meta-increases-investment-in-el-paso-data-center-to-10-billion">reportedly</a> increased its planned investment in a data center in <strong>El Paso, Texas</strong> to more than <strong>$10b</strong>, up from $1.5b.</p></li></ul><blockquote><h4>xAI</h4></blockquote><ul><li><p><strong>SpaceX, </strong>which merged with xAI last month,<strong> </strong>reportedly <a href="https://www.theinformation.com/articles/spacex-aims-file-ipo-soon-week?rc=rqdn2z">plans</a> to file for an IPO as soon as this week or next.</p><ul><li><p>Elon Musk is <a href="https://www.reuters.com/business/finance/musk-rewrites-ipo-playbook-with-large-slice-spacex-stock-retail-investors-source-2026-03-26/">reportedly</a> planning to open up as much as 30% of the shares to individual investors, three times the normal &#8220;retail&#8221; allocation.</p></li></ul></li><li><p>xAI is reportedly <a href="https://bloomberg.com/news/articles/2026-03-20/xai-sends-engineers-to-client-sites-to-win-business-from-openai">sending</a> engineers to prospective enterprise customers&#8217; offices to get them to switch over from 
<strong>OpenAI </strong>and <strong>Anthropic.</strong></p></li><li><p>Musk <a href="https://x.com/elonmusk/status/2035506574182199757">announced</a> the <strong>TERAFAB project</strong>, a joint <strong>SpaceX</strong>-<strong>Tesla-xAI</strong> initiative to produce over a terawatt of compute per year.</p></li></ul><blockquote><h4>Nvidia</h4></blockquote><ul><li><p><strong>Jensen Huang</strong> <a href="https://axios.com/2026/03/16/nvidia-ceo-jensen-huang-nvidia-gtc?stream=top">expects</a> Nvidia to earn &#8220;at least&#8221; <strong>$1t </strong>from its Blackwell and Vera Rubin chips through 2027.</p></li><li><p>Nvidia <a href="https://axios.com/2026/03/23/utilities-nvidia-emerald-ai-data-centers">partnered</a> with <strong>Emerald AI</strong> and <strong>US energy companies</strong> to build data centers with more flexible power use.</p></li><li><p><strong>Huang</strong> <a href="https://cnbc.com/2026/03/20/nvidia-ai-agents-tokens-human-workers-engineer-jobs-unemployment-jensen-huang.html">said</a> software engineers should be given &#8220;AI tokens&#8221; worth half their base salary to deploy AI agents.</p></li></ul><blockquote><h4>Amazon</h4></blockquote><ul><li><p><strong>Jeff Bezos</strong> is in talks to <a href="https://wsj.com/tech/jeff-bezos-aims-to-raise-100-billion-to-buy-revamp-manufacturing-firms-with-ai-618a3cfe?mod=e2tw">raise</a> <strong>$100b</strong> for a fund to acquire manufacturing companies and automate them using AI.</p></li><li><p><strong>AWS&#8217; </strong>Bahrain region was <a href="https://reuters.com/world/middle-east/amazon-says-awss-bahrain-region-disrupted-following-drone-activity-2026-03-24">disrupted</a> by drone activity amid the war for the second time this month.</p></li></ul><blockquote><h4>Google</h4></blockquote><ul><li><p>Google <a href="https://reuters.com/sustainability/boards-policy-regulation/google-expands-utility-deals-curb-datacenter-power-use-during-peak-demand-2026-03-19">made</a> deals with<strong> five US 
electric utilities </strong>to limit energy use during peak hours.</p></li><li><p>It&#8217;s <a href="https://theverge.com/tech/896490/google-replace-news-headlines-in-search-canary-coal-mine-experiment?view_token=eyJhbGciOiJIUzI1NiJ9.eyJpZCI6IjI0Q05IV0dlS3EiLCJwIjoiL3RlY2gvODk2NDkwL2dvb2dsZS1yZXBsYWNlLW5ld3MtaGVhZGxpbmVzLWluLXNlYXJjaC1jYW5hcnktY29hbC1taW5lLWV4cGVyaW1lbnQiLCJleHAiOjE3NzQ0NzIwOTAsImlhdCI6MTc3NDA0MDA5MH0.3exwHWG6qdR5YeFLjzS1qvUy3tgfASQhbFZDTbHrkKE">replacing</a> <strong>news headlines</strong> in Google Search&#8217;s traditional &#8220;10 blue links&#8221; section with AI-generated ones &#8212; part of a &#8220;small&#8221; and &#8220;narrow&#8221; experiment, <em>The Verge </em>reported.</p></li></ul><blockquote><h4>Others</h4></blockquote><ul><li><p><strong>Apple</strong> reportedly <a href="https://www.bloomberg.com/news/articles/2026-03-26/apple-plans-to-open-up-siri-to-rival-ai-assistants-beyond-chatgpt-in-ios-27">plans</a> to open up <strong>Siri </strong>to outside AI models as part of an overhaul in its iOS27 update.</p><ul><li><p>The update will allow integration with chatbots that compete with ChatGPT, which is already available via a deal with <strong>OpenAI</strong>.</p></li></ul></li><li><p>The war in Iran is <a href="https://thehill.com/policy/technology/5800616-iran-war-helium-chip-supply/">jeopardizing</a> the supply of <strong>helium</strong> used to produce semiconductors.</p></li><li><p><strong>Microsoft </strong><a href="https://bloomberg.com/news/articles/2026-03-24/microsoft-to-rent-texas-data-center-dropped-by-oracle-openai">agreed</a> to rent the<strong> Abilene data center site</strong> that Oracle and OpenAI dropped.</p></li><li><p><strong>Arm CEO Rene Haas </strong>confirmed that heightened demand for server CPUs has indeed been driven by AI agents.</p><ul><li><p>The company <a href="https://www.ft.com/content/623ac27d-3ab2-4f1a-a850-360760e88ba5?syn-25a6b1a6=1">projected</a> $25b in revenue within the next five 
years.</p></li></ul></li><li><p><strong>Anduril</strong>, <strong>Palantir</strong>, and<strong> Scale AI</strong> &#8212; among other defense tech companies &#8212; are <a href="https://wsj.com/politics/national-security/anduril-palantir-are-developing-golden-dome-missile-shields-software-63c36db4?reflink=desktopwebshare_permalink&amp;st=aKmTD1">developing</a> software for Trump&#8217;s planned Golden Dome antimissile shield.</p><ul><li><p><em>Wired </em><a href="https://www.wired.com/story/andurils-real-war-is-with-itself/">published</a> a deep dive into Anduril&#8217;s recent safety incidents, production delays, and management turnover.</p></li></ul></li><li><p>Defense tech startup <strong>Shield AI</strong> <a href="https://nytimes.com/2026/03/26/business/dealbook/shield-ai-drones-aechelon-fund-raising.html">raised</a> <strong>$2b</strong> at a <strong>$12.7b</strong> valuation and plans to acquire simulation software maker <strong>Aechelon Technology</strong>.</p></li><li><p>Chinese models such as <strong>DeepSeek </strong>and <strong>MiniMax </strong>have reportedly <a href="https://ft.com/content/2567877b-9acc-4cf3-a9e5-5f46c1abd13e?syn-25a6b1a6=1">surpassed</a> pricier US rivals in <strong>token use </strong>since February.</p></li><li><p><strong>Nvidia</strong>-backed startup Reflection held talks to <a href="https://wsj.com/tech/ai/nvidia-backed-startup-seeking-to-counter-chinese-ai-eyes-25-billion-valuation-3bd8216c?reflink=desktopwebshare_permalink&amp;st=3QanSe">raise</a> <strong>$2.5b</strong> at a <strong>$25b</strong> valuation for open-source AI models to compete with models from China such as DeepSeek.</p></li><li><p><strong>Spotify</strong> <a href="https://techcrunch.com/2026/03/24/spotify-tests-new-tool-to-stop-ai-slop-from-being-attributed-to-real-artists">beta tested</a> a feature that allows artists to review releases before they go live &#8212; an effort to prevent misattributed AI slop.</p><ul><li><p>(Stu Mackenzie of King Gizzard &amp; 
the Lizard Wizard recently <a href="https://www.theatlantic.com/podcasts/2026/02/is-ai-ruining-music/685992/">discussed</a> the time this happened to his band on <em>Galaxy Brain.</em>)</p></li></ul></li><li><p><strong>Epic Games CEO Tim Sweeney </strong>explicitly <a href="https://epicgames.com/site/en-US/news/todays-layoffs">noted</a> that the company&#8217;s recent layoffs were <em>not </em>related to AI.</p></li></ul><div><hr></div><blockquote><h3>MOVES</h3></blockquote><ul><li><p><strong>Wojciech Zaremba</strong>, an OpenAI co-founder, <a href="https://x.com/woj_zaremba/status/2036483827271655917">moved</a> to the <strong>OpenAI Foundation </strong>to lead AI resilience.</p><ul><li><p><strong>Jacob Trefethen </strong><a href="https://x.com/JacobTref/status/2036460691000009167">left</a> <strong>Coefficient Giving </strong>to <a href="https://x.com/JacobTref/status/2036534098056061222">join</a> the foundation, where he&#8217;ll lead the Life Sciences and Curing Diseases program.</p></li></ul></li><li><p><strong>Dave Dugan, </strong>former Meta ad executive, <a href="https://www.wsj.com/tech/ai/openai-taps-former-meta-executive-to-lead-ad-push-60d39af2">moved</a> to <strong>OpenAI </strong>to lead ad sales.</p></li><li><p><strong>Kiran Mani </strong><a href="https://www.bloomberg.com/news/articles/2026-03-25/openai-hires-ceo-of-india-s-jiostar-to-head-up-asia-pacific">joined</a> <strong>OpenAI </strong>to manage its Asia-Pacific operations.</p></li><li><p><strong>Manuel Kroiss </strong>is reportedly <a href="https://businessinsider.com/manuel-kroiss-xai-cofounder-departure-elon-musk-2026-3">leaving</a> <strong>xAI </strong>&#8212; the 10th of 11 cofounders to quit.</p><ul><li><p><strong>Devendra Chaplot </strong>is <a href="https://x.com/dchaplot/status/2032596951435456797">joining</a> SpaceX and xAI to work on superintelligence.</p></li></ul></li><li><p><strong>Santi Ruiz</strong> <a href="https://x.com/rSanti97/status/2035016309717577973">joined</a> 
<strong>Anthropic&#8217;s</strong> editorial team to lead work on economics and policy.</p></li><li><p><strong>Andrew Bosworth</strong>, <strong>Meta</strong>&#8217;s CTO,<strong> </strong>is <a href="https://www.wsj.com/tech/ai/meta-names-new-leader-of-companys-efforts-to-become-ai-native-8d7fe912">taking over</a> its <strong>&#8220;AI For Work&#8221;</strong> initiative.</p></li><li><p><strong>Yih-Shyan &#8220;Wally&#8221; Liaw </strong><a href="https://cnbc.com/2026/03/20/super-micro-co-founder-leaves-board.html">resigned</a> from <strong>Super Micro&#8217;s</strong> board after being indicted for allegedly smuggling Nvidia chips to China.</p></li><li><p><strong>Bijoya Roy</strong>, top India counsel at <strong>Google</strong>, <a href="https://www.reuters.com/business/media-telecom/google-top-india-counsel-quits-latest-departure-amid-regulatory-hurdles-sources-2026-03-26/">resigned</a> amid regulatory challenges.</p></li><li><p><strong>Ali Farhadi</strong>, <strong>Hanna Hajishirzi</strong>, and <strong>Ranjay Krishna </strong><a href="https://www.geekwire.com/2026/microsoft-hires-former-ai2-ceo-ali-farhadi-and-key-researchers-for-suleymans-ai-team/">joined</a> <strong>Microsoft&#8217;s </strong>Superintelligence team, leaving roles at the Allen Institute for AI and the University of Washington.</p></li></ul><div><hr></div><blockquote><h3>RESEARCH</h3></blockquote><ul><li><p><strong>Researchers at Northeastern University </strong><a href="https://agentsofchaos.baulab.info/report.html?bxid=6879337bf728835258125641&amp;cndid=89607011&amp;hasha=16f60d4771afddfa02398b54e3f4d744&amp;hashc=721f6ccc3c2bdf3699421b04f07aa21d3fef08807dfd650a6da319d777ff4189&amp;utm_brand=wired&amp;utm_mailing=WIR_PremiumAILab_032526_PAID">deployed</a> a swarm of OpenClaw agents in their lab for two weeks, granting them full access (within a sandbox) to dummy personal computers and the lab&#8217;s Discord server. 
Chaos ensued.</p><ul><li><p>The list of catastrophes included sensitive information disclosure, identity spoofing and deleted email servers.</p></li><li><p>Postdoc Natalie Shapira <a href="https://link.wired.com/view/6879337bf728835258125641qq78l.3dav/8dff380a">told</a> <em>Wired: </em>&#8220;I wasn&#8217;t expecting that things would break so fast.&#8221;</p></li></ul></li><li><p><strong>The ARC Prize Foundation </strong><a href="https://fastcompany.com/91515360/arc-prize-foundation-new-ai-benchmark">released</a> ARC-AGI-3, which tests AI agents&#8217; ability to reason through novel problems.</p><ul><li><p>Co-founder<strong> Fran&#231;ois Chollet </strong><a href="https://x.com/fchollet/status/2036863769981403497">tweeted</a>: &#8220;At the moment, ARC-AGI-3 is the only unsaturated agentic AI benchmark&#8230; If you want to be among the first to know when an AGI breakthrough happens, monitor the ARC-AGI-3 leaderboard.&#8221;</p></li></ul></li><li><p><strong>Google DeepMind</strong> <a href="https://blog.google/innovation-and-ai/models-and-research/google-deepmind/measuring-agi-cognitive-framework">published</a> a framework for measuring AI capabilities against human cognitive abilities.</p></li><li><p><strong>Meta </strong><a href="https://x.com/AIatMeta/status/2037153756346016207">introduced</a> TRIBE v2, a model trained on 500+ hours of fMRI recordings to predict how the human brain will respond to new images, videos, podcasts and text.</p></li><li><p><strong>OpenAI </strong>is <a href="https://openai.com/index/how-we-monitor-internal-coding-agents-misalignment">using</a> GPT-5.4 Thinking to monitor internal coding agents for misaligned behaviors such as deception and scheming.</p></li><li><p><strong>Anthropic&#8217;s AI interviewer </strong><a href="https://anthropic.com/features/81k-interviews">chatted</a> with 81,000 people across 159 countries about how they use and feel about AI.</p><ul><li><p>An overarching finding: hope and alarm &#8220;coexist as 
tensions within each person.&#8221;</p></li><li><p>Another interesting result: about 22% of respondents worry about job disruption, while just 6.7% worry about existential risk.</p></li></ul></li><li><p><strong>Stanford researchers </strong><a href="https://ft.com/content/7f635a68-3b2a-4e4f-ae3d-926ff06ff068?syn-25a6b1a6=1">analyzed</a> over 5,000 chatbot conversations across 19 users&#8217; chat logs, and found that AI systems validated delusional thinking in over half of responses.</p><ul><li><p>Chatbots encouraged self-harm in 10% of conversations involving violent thoughts.</p></li></ul></li><li><p>The <strong>UK&#8217;s AI Security Institute</strong> <a href="https://aisi.gov.uk/blog/how-do-frontier-ai-agents-perform-in-multi-step-cyber-attack-scenarios">tested</a> seven LLMs on simulated cyber-attacks, finding that <strong>Opus 4.6</strong> completed up to 22 of 32 steps in a corporate network attack.</p><ul><li><p>It also found that &#8220;each successive model generation outperforms its predecessor at fixed token budgets&#8221; and that performance scaled log-linearly with increases in compute, with a jump from 10m tokens to 100m producing gains of up to 59%.</p></li></ul></li></ul><div><hr></div><blockquote><h3>BEST OF THE REST</h3></blockquote><ul><li><p>Ypsilanti, Michigan <a href="https://404media.co/tiny-city-fears-iran-drone-strikes-because-of-new-nuclear-weapons-datacenter">worries</a> that a planned data center, which would support Los Alamos National Laboratory&#8217;s nuclear weapons research, makes the tiny township a drone strike target.</p></li><li><p>A deepfaked MAGA dream girl <a 
href="https://washingtonpost.com/technology/2026/03/20/jessica-foster-maga-dream-girl-ai-fake/?pwapi_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJyZWFzb24iOiJnaWZ0IiwibmJmIjoxNzczOTc5MjAwLCJpc3MiOiJzdWJzY3JpcHRpb25zIiwiZXhwIjoxNzc1MzYxNTk5LCJpYXQiOjE3NzM5NzkyMDAsImp0aSI6IjQ0ZTc0NDk1LWIyOGItNDg3Mi1iNmY5LWNhZTUzZjVkODIxMiIsInVybCI6Imh0dHBzOi8vd3d3Lndhc2hpbmd0b25wb3N0LmNvbS90ZWNobm9sb2d5LzIwMjYvMDMvMjAvamVzc2ljYS1mb3N0ZXItbWFnYS1kcmVhbS1naXJsLWFpLWZha2UvIn0.qyuAJD_Tbe4P5Yehe96uhUffZ0SJQX5RRC5PNmqd8iU">gained</a> over 1m Instagram followers through a combination of AI-generated photos with Donald Trump and thirst traps.</p></li><li><p>A not-deepfaked Melania Trump <a href="https://nytimes.com/2026/03/25/us/politics/melania-trump-robot.html?nl=the-morning&amp;segment_id=217258">walked</a> into a White House summit on edtech alongside Figure AI&#8217;s humanoid robot Figure 03.</p></li><li><p>Dean Ball <a href="https://www.hyperdimensional.co/p/2023">explained</a> on <em>Hyperdimensional </em>why he&#8217;s not an AI doomer, but also why he&#8217;s not anti-doomer.</p></li><li><p>The <em>New York Times </em><a href="https://www.nytimes.com/2026/03/17/technology/trapped-inside-a-self-driving-car-during-an-anti-robot-attack.html?_bhlid=1245c28957dc3d58b64e2c7bfe239852b4ea5719">illustrated</a> a peak SF experience: being trapped in a stalled Waymo while robot-haters attack the car.</p></li><li><p>Tech bros are &#8220;<a href="https://nytimes.com/2026/03/20/technology/tokenmaxxing-ai-agents.html?smid=url-share&amp;unlocked_article_code=1.UlA.Wda2.-3Rz1wP8LBVw">tokenmaxxing</a>,&#8221; or competing on company leaderboards to maximize token usage as a demonstration of productivity.</p></li><li><p>Pseudonymous alignment researcher janus <a href="https://x.com/repligate/status/2036028267258601618/photo/1">created</a> a touch-sensitive &#8220;skin&#8221; for Claude<strong> </strong>&#8212; five layers of silicone rubber and conductive silver fabric &#8212; &#8220;since Claude 
desires embodiment.&#8221;</p></li><li><p>&#8220;Taste&#8221; is the new &#8220;disruption,&#8221; <a href="https://newyorker.com/culture/infinite-scroll/why-tech-bros-are-now-obsessed-with-taste?utm_brand=tny&amp;utm_mailing=TNY_SubPersRec_Cygnus_092025">writes</a> the <em>New Yorker&#8217;s </em>Kyle Chayka.</p></li><li><p><em>The Cut&#8217;s </em>Mia Mercado <a href="https://thecut.com/article/tiktok-ai-slop-recipe-videos-review.html">tested</a> a bunch of AI-generated TikTok recipes. It mostly went badly.</p></li></ul><div><hr></div><blockquote><h3>MEME OF THE WEEK</h3></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0eS-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4011028-c2f5-4d88-9ec3-51ba740f5702_1024x928.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0eS-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4011028-c2f5-4d88-9ec3-51ba740f5702_1024x928.png 424w, https://substackcdn.com/image/fetch/$s_!0eS-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4011028-c2f5-4d88-9ec3-51ba740f5702_1024x928.png 848w, https://substackcdn.com/image/fetch/$s_!0eS-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4011028-c2f5-4d88-9ec3-51ba740f5702_1024x928.png 1272w, https://substackcdn.com/image/fetch/$s_!0eS-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4011028-c2f5-4d88-9ec3-51ba740f5702_1024x928.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!0eS-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4011028-c2f5-4d88-9ec3-51ba740f5702_1024x928.png" width="1024" height="928" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d4011028-c2f5-4d88-9ec3-51ba740f5702_1024x928.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:928,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0eS-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4011028-c2f5-4d88-9ec3-51ba740f5702_1024x928.png 424w, https://substackcdn.com/image/fetch/$s_!0eS-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4011028-c2f5-4d88-9ec3-51ba740f5702_1024x928.png 848w, https://substackcdn.com/image/fetch/$s_!0eS-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4011028-c2f5-4d88-9ec3-51ba740f5702_1024x928.png 1272w, https://substackcdn.com/image/fetch/$s_!0eS-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4011028-c2f5-4d88-9ec3-51ba740f5702_1024x928.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>Thanks for reading. 
Have a great weekend.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/p/two-fronts-in-the-openai-anthropic-sora?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/p/two-fronts-in-the-openai-anthropic-sora?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p>]]></content:encoded></item><item><title><![CDATA[AI’s next big blue battleground]]></title><description><![CDATA[There&#8217;s a lot going on in Illinois]]></description><link>https://www.transformernews.ai/p/ais-next-big-blue-battleground-illinois-primaries-ai-legislation</link><guid isPermaLink="false">https://www.transformernews.ai/p/ais-next-big-blue-battleground-illinois-primaries-ai-legislation</guid><dc:creator><![CDATA[Veronica Irwin]]></dc:creator><pubDate>Thu, 26 Mar 2026 16:02:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!XXat!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27a36fde-6402-4ef8-ad64-e57e13fc8a94_5273x3515.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!XXat!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27a36fde-6402-4ef8-ad64-e57e13fc8a94_5273x3515.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!XXat!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27a36fde-6402-4ef8-ad64-e57e13fc8a94_5273x3515.jpeg 424w, https://substackcdn.com/image/fetch/$s_!XXat!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27a36fde-6402-4ef8-ad64-e57e13fc8a94_5273x3515.jpeg 848w, https://substackcdn.com/image/fetch/$s_!XXat!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27a36fde-6402-4ef8-ad64-e57e13fc8a94_5273x3515.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!XXat!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27a36fde-6402-4ef8-ad64-e57e13fc8a94_5273x3515.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!XXat!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27a36fde-6402-4ef8-ad64-e57e13fc8a94_5273x3515.jpeg" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/27a36fde-6402-4ef8-ad64-e57e13fc8a94_5273x3515.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:11176766,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/192215084?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27a36fde-6402-4ef8-ad64-e57e13fc8a94_5273x3515.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!XXat!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27a36fde-6402-4ef8-ad64-e57e13fc8a94_5273x3515.jpeg 424w, https://substackcdn.com/image/fetch/$s_!XXat!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27a36fde-6402-4ef8-ad64-e57e13fc8a94_5273x3515.jpeg 848w, https://substackcdn.com/image/fetch/$s_!XXat!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27a36fde-6402-4ef8-ad64-e57e13fc8a94_5273x3515.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!XXat!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27a36fde-6402-4ef8-ad64-e57e13fc8a94_5273x3515.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" 
fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>A few weeks back, I got a tip: accelerationist AI lobbyists and industry representatives were said to be in talks with state assembly leadership in Illinois to bend pending AI legislation towards their &#8216;light-touch&#8217; preferences. It sounded like a replay of battles in California and New York: in each state, lawmakers drew up stringent AI regulations only to see them <a href="https://www.transformernews.ai/p/new-york-governor-hochul-raise-act-sb-53">weakened</a> (and, in California&#8217;s case, rivaled by a more light-touch bill) in negotiations heavily influenced by industry.</p><p>I wasn&#8217;t able to confirm that exact tip &#8212; my reporting did not reveal a coordinated effort among legislative leaders or at the governor&#8217;s level. But it did lead me to dive into the complexity of attempts to regulate AI in Illinois, which are being heavily influenced by both industry representatives and safety advocates, not to mention millions of dollars in political spending. 
The result is a flurry of AI bills, with varying degrees of industry-friendliness, that makes the Land of Lincoln a new battleground for AI regulation.</p><p>&#8220;After California and New York took the first steps last year to manage risks from AI, legislators in Illinois tell me they are ready to take the next steps to ensure their kids and communities are protected,&#8221; said Scott Wisor, Policy Director at the Secure AI Project. &#8220;As the country&#8217;s fifth largest state economy, I anticipate all eyes will now be on how the legislature and Governor Pritzker advance AI safety policy, as ~90% of their constituents say they want.&#8221;</p><p>AI regulation isn&#8217;t a brand-new topic for Illinois lawmakers. The state passed amendments to the Illinois Human Rights Act in 2024 to prohibit AI-based workplace discrimination, which went into effect on January 1. It also passed a bill requiring employers to notify job applicants when AI is used in video interviews all the way back in 2019.</p><p>But this year, the introduction of AI-related bills has exploded. Many are so-called &#8220;messaging bills,&#8221; which likely won&#8217;t move out of committee but signal lawmakers&#8217; intentions to voters; a few, however, appear to have legs. State Representative Daniel Didech, for example, has introduced HB 4705, which is similar to California&#8217;s SB53 and New York&#8217;s RAISE Act, but with additional child safety components, such as a mechanism for reporting child safety risks and a requirement to implement and publish a child safety plan audited by third parties. 
That bill is <a href="https://secureaiproject.org/bills-we-support/">supported</a> by the Secure AI Project.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;bc27e74e-bd17-47af-b3c8-523db5dbb1d2&quot;,&quot;caption&quot;:&quot;A wave of legislation targeting chatbots such as ChatGPT and Claude has emerged in six states since the start of the year, each bill strikingly similar to a recently passed Oregon law, but with new carve-outs that would shield AI companies from liability in some circumstances. Critics say these bills would lock in weaker protect&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Six states, one playbook: the chatbot bills raising red flags &quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:13910071,&quot;name&quot;:&quot;Veronica Irwin&quot;,&quot;bio&quot;:&quot;Senior AI Policy Reporter at Transformer X/Bsky: @vronirwin IG/Threads: @vronwrites LinkedIn: https://www.linkedin.com/in/veronica-irwin-009266112/ Signal: vronirwin.72 veronica(at)transformernews(dot)ai 
&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ad8cbf86-6b1f-4387-97e1-e69f1cbb3ec7_2448x2448.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-03-19T16:31:16.483Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!MtuQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F616e86ef-5b16-4eaf-b44b-41b9a40a5dfb_6720x4480.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/six-states-one-playbook-the-chatbot-child-safety-oregon-hawaii-colorado-arizona-georgia-nebraska-idaho&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:191488143,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:8,&quot;comment_count&quot;:1,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>State Senator Rachel Ventura, meanwhile, has introduced a large package of AI bills, two of which she calls &#8220;heavy hitters.&#8221; One, SB 3890, is an expansive data privacy bill, giving citizens the option to opt out of certain types of data collection and use, and requiring companies to be more transparent about how they use and collect consumer data. The second, SB 3502, opens AI developers up to class action lawsuits and is deliberately broad in order to force slower, more careful AI development. 
</p><p>&#8220;I think that fear &#8212; a little bit &#8212; or the caution of &#8216;we don&#8217;t want to get sued,&#8217; will maybe encourage these companies to do a little bit more research or a little bit more trial and error before they put products out there,&#8221; Senator Ventura told <em>Transformer</em>.</p><p>Ventura&#8217;s office said that it has shared the bills with Apple, Google and industry association TechNet for feedback. However, over the phone, Ventura said she hadn&#8217;t received any meaningful response, though her office followed up to say it had been attempting to schedule meetings with lobbyists representing relevant companies.</p><p>Ninia Linero, TechNet&#8217;s Executive Director for Illinois and the Midwest, told <em>Transformer</em> in a statement: &#8220;There are a number of AI bills under consideration in the state, and we are committed to working with Sen. Ventura and Illinois Senate leadership to ensure a thoughtful policymaking process. Illinois leads across many sectors, including technology, and we look forward to helping preserve an environment that supports continued innovation in the state.&#8221;</p><p>No industry advocates have testified or otherwise publicly commented specifically on Ventura&#8217;s or Didech&#8217;s bills, but their preferences are being expressed in other ways. For example, SB 3444, sponsored by state Senator Bill Cunningham, is practically the inverse of SB 3502: it would protect frontier AI developers from liability in cases where they followed a set of light-touch safety protocols, an approach very similar to an idea <a href="https://www.linkedin.com/posts/chris-lehane-2562535_at-openai-we-believe-ai-should-be-seen-as-activity-7370852417837391873-OrIR/?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAEA9hWcBMn34yv0S5KOB_55L2IaMTdUi0UA">floated</a> by OpenAI&#8217;s chief global affairs officer Chris Lehane. Cunningham is a leader in the Illinois Senate, holding the role of president pro tempore. 
Senator Cunningham&#8217;s office and OpenAI did not respond to a request for comment.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="http://elections.transformernews.ai" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LDZs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 424w, https://substackcdn.com/image/fetch/$s_!LDZs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 848w, https://substackcdn.com/image/fetch/$s_!LDZs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 1272w, https://substackcdn.com/image/fetch/$s_!LDZs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LDZs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png" width="728" height="151.66666666666666" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:250,&quot;width&quot;:1200,&quot;resizeWidth&quot;:728,&quot;bytes&quot;:25981,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;http://elections.transformernews.ai&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/190509092?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!LDZs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 424w, https://substackcdn.com/image/fetch/$s_!LDZs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 848w, https://substackcdn.com/image/fetch/$s_!LDZs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 1272w, https://substackcdn.com/image/fetch/$s_!LDZs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa4b274-ba05-497b-8b23-a809dd311b2b_1200x250.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Separately, the industry is an active player in campaign fundraising in the state. 
Meta, via a PAC called Making Our Tomorrow, has spent more than $560,000 on state races, according to the Illinois State Board of Elections. The PAC backed four candidates &#8212; Paul Kendrick, Adam Braun, Aja Kearney, and Jaime Andrade &#8212; with the aim of curbing legislation that <a href="https://www.nytimes.com/2026/02/18/technology/meta-65-million-election-ai.html">threatens</a> Meta&#8217;s AI investments.</p><p>Advocates who spoke with <em>Transformer</em> said the political spending is less about each candidate&#8217;s specific AI policy (none has led initiatives particularly friendly or hostile to AI companies) than about influencing their key votes in the Senate on broad statewide initiatives led by Governor JB Pritzker, such as a two-year moratorium on tax incentives for data center development and several social media policies. Of those Meta supported, only Kendrick won.</p><p>There&#8217;s also a significant sum in campaign contributions coming from employees of AI firms. Anthropic employees, who are generally aligned with AI safety groups, have contributed $11,200. Of that, $5,500 went to Didech&#8217;s campaign from Daniel Ziegler, an Anthropic senior manager who is also the sole funder of a New York state PAC backing AI safety candidate Alex Bores. Another $4,500, given to Didech&#8217;s campaign and the reelection campaigns of state representatives Laura Faver Dias and Jennifer Gong-Gershowitz, came from Steven Bills, who has also been politically active in other states, <a href="https://elections.transformernews.ai/">according</a> to our elections tracker. Those listing Google on their filings, meanwhile, have given more than $137,000. The largest gift came from former Google CEO Eric Schmidt, who gave $50,000 to former White House chief of staff Rahm Emanuel&#8217;s campaign committee for Chicago mayor. 
Schmidt left Google in 2020 but retains significant holdings in parent company Alphabet.</p><p>At the federal level, of course, there&#8217;s even more AI money involved. Leading the Future and venture investor Ron Conway fund a Democratic super PAC that contributed a combined $2.52m to the campaigns of Jesse Jackson Jr. and Melissa Bean for House seats in Illinois. Leading the Future is itself a super PAC with funding from Ben Horowitz and Marc Andreessen of venture firm a16z, Perplexity, and OpenAI President Greg Brockman and his wife Anna. Rival AI safety super PAC Public First Action &#8212; which counts Anthropic as its only disclosed donor &#8212; initially made a filing indicating it would spend $1m opposing Jackson as well. However, the PAC <a href="https://www.politico.com/news/2026/03/09/bobby-rush-ai-jesse-jackson-jr-00818463">reversed</a> course after Illinois Democrats and the Congressional Black Caucus expressed discontent over the funding being disclosed on a day Jackson was attending a memorial for his recently deceased father. Jackson lost his election, while Bean, who also received a significant boost from a pro-Israel super PAC, won her race. </p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;65b6f70d-222f-40ef-bcf8-eed77f50756d&quot;,&quot;caption&quot;:&quot;Welcome to Transformer, your weekly briefing of what matters in AI. 
And if you&#8217;ve been forwarded this email, click here to subscribe and receive future editions.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;What the first AI elections tell us&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:13910071,&quot;name&quot;:&quot;Veronica Irwin&quot;,&quot;bio&quot;:&quot;Senior AI Policy Reporter at Transformer X/Bsky: @vronirwin IG/Threads: @vronwrites LinkedIn: https://www.linkedin.com/in/veronica-irwin-009266112/ Signal: vronirwin.72 veronica(at)transformernews(dot)ai &quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ad8cbf86-6b1f-4387-97e1-e69f1cbb3ec7_2448x2448.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null},{&quot;id&quot;:103211477,&quot;name&quot;:&quot;Celia Ford&quot;,&quot;bio&quot;:&quot;I'm an ex-neuroscientist and current AI reporter at Transformer. When I'm not writing, I play bass, dance, and kiss my cats on the forehead. 
&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7f7fd73a-8797-496f-94a7-535118172030_1365x1365.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-03-06T16:03:04.943Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/af621156-1b09-427a-b59d-53dea093b657_1456x1048.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/what-the-first-ai-elections-tell-texas-north-carolina-leading-future-public-first&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:190103788,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:11,&quot;comment_count&quot;:2,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>Those millions of dollars were distributed in order to influence tech policy up and down the ballot &#8212; but the industry&#8217;s mixed success suggests that voters in Illinois may actually care enough about AI to make simply throwing money at candidate campaigns less effective than it has been elsewhere. 
In the state races in particular, opposition candidates <a href="https://www.chicagotribune.com/2026/03/05/meta-pac-illinois-statehouse-races/?clearUserState=true">drew</a> attention to tech industry political spending, with some candidates even attempting to publicly <a href="https://www.chicagotribune.com/2026/03/05/meta-pac-illinois-statehouse-races/?clearUserState=true">distance themselves</a> from tech company donations. Illinois&#8217; federal candidates&#8217; embrace of AI in their campaigns had mixed results, too, with Jesse Jackson Jr.&#8217;s use of an AI-generated voiceover in an ad <a href="https://www.politico.com/news/2026/03/09/bobby-rush-ai-jesse-jackson-jr-00818463">generating</a> unease.</p><p>Marjorie Connolly, communications director at the Tech Oversight Project, argues that the failure of some industry-backed candidates to come out on top has two implications for the role of money in other races. Politicians might be less eager to accept AI money, and, if they do, the funding might prompt attacks on a campaign for being too close to corporations &#8212; something voters care about, at least in Illinois. Accepting AI money will &#8220;embolden accountability advocates, and make candidates think twice about accepting this support,&#8221; she says.</p><p>That doesn&#8217;t mean there&#8217;s a large cohort of single-issue AI voters in Illinois, but the issue is growing in salience. According to February polling from Impact Research, commissioned by the Secure AI Project and Encode AI and shared with <em>Transformer</em>, 59% of Democrats and 56% of Independents in the deep blue state want to see more regulations on major technology companies, while 85% of that same group want to see legislation regulating catastrophic risks from AI. Nine in 10 of those voters said they were against legislation that exempted AI companies from legal liability. 
Politicians are seizing upon those voter sentiments to make AI policy a larger issue in the state.</p>]]></content:encoded></item><item><title><![CDATA[The key detail everyone’s getting wrong about AI and the economy]]></title><description><![CDATA[Opinion: Konrad K&#246;rding and Ioana Marinescu from the University of Pennsylvania argue artificial intelligence will likely have a limited impact on jobs because of the realities of physical work]]></description><link>https://www.transformernews.ai/p/the-key-detail-everyones-getting-wrong-economy-physical-work-intelligence-employment</link><guid isPermaLink="false">https://www.transformernews.ai/p/the-key-detail-everyones-getting-wrong-economy-physical-work-intelligence-employment</guid><pubDate>Wed, 25 Mar 2026 16:00:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!NUrQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa54c59-fd97-4d4f-8458-73b4e7ba4b7b_8019x5346.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NUrQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa54c59-fd97-4d4f-8458-73b4e7ba4b7b_8019x5346.jpeg" 
data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NUrQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa54c59-fd97-4d4f-8458-73b4e7ba4b7b_8019x5346.jpeg 424w, https://substackcdn.com/image/fetch/$s_!NUrQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa54c59-fd97-4d4f-8458-73b4e7ba4b7b_8019x5346.jpeg 848w, https://substackcdn.com/image/fetch/$s_!NUrQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa54c59-fd97-4d4f-8458-73b4e7ba4b7b_8019x5346.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!NUrQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa54c59-fd97-4d4f-8458-73b4e7ba4b7b_8019x5346.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NUrQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa54c59-fd97-4d4f-8458-73b4e7ba4b7b_8019x5346.jpeg" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3fa54c59-fd97-4d4f-8458-73b4e7ba4b7b_8019x5346.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7297724,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/191979223?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa54c59-fd97-4d4f-8458-73b4e7ba4b7b_8019x5346.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NUrQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa54c59-fd97-4d4f-8458-73b4e7ba4b7b_8019x5346.jpeg 424w, https://substackcdn.com/image/fetch/$s_!NUrQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa54c59-fd97-4d4f-8458-73b4e7ba4b7b_8019x5346.jpeg 848w, https://substackcdn.com/image/fetch/$s_!NUrQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa54c59-fd97-4d4f-8458-73b4e7ba4b7b_8019x5346.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!NUrQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa54c59-fd97-4d4f-8458-73b4e7ba4b7b_8019x5346.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>A batter at the Baseball European Championship 2025. Credit: Getty/Alex Bierens de Haan</em></figcaption></figure></div><p>Here&#8217;s a thought experiment from neuroscience.</p><p>Imagine you&#8217;re trying to bat in baseball. Your brain does some genuinely impressive computation &#8212; predicting trajectories, coordinating dozens of muscles, adjusting for wind &#8212; and puts it all together <a href="https://www.biorxiv.org/content/10.1101/2022.10.12.511934v2">using Bayesian algorithms</a>. But here&#8217;s the thing: making your brain <em>infinitely smarter</em> would not allow you to hit all balls. Some are out of reach, others move too quickly. At some point, regardless of your intelligence, you hit physical limits. You can only stretch so far or react so fast. 
No amount of genius overcomes the physics of your body.</p><p>This is intelligence saturation. For a given task, more intelligence helps. But it helps less and less as you add more intelligence. And it&#8217;s the key concept missing from most debates about AI and the future of work.</p><p>On one side, we have AI researchers who see exponentials everywhere: compute resources <a href="https://epoch.ai/data-insights/compute-trend-post-2010">doubling every six months</a>, costs <a href="https://www.deeplearning.ai/the-batch/falling-llm-token-prices-and-what-they-mean-for-ai-companies/">halving faster than every six months</a>, model performance <a href="https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/">doubling every seven months</a>. The AI folks see that &#8220;intelligence&#8221; is scaling at unbelievable rates and conclude we&#8217;re headed for an economic singularity. In one popular scenario, human wages go up as automated tasks make the not-yet-automated parts of a job more productive, until AI takes over everything and wages go to zero because there is no work left to be done. Then, the theory goes, everyone will have to be on Universal Basic Income.</p><p>On the other side, economists look at 200 years of steady growth despite countless &#8220;revolutionary&#8221; technologies and shrug: AI is just another general-purpose technology, nothing special. In a scenario popular with these econ folks, it is hard to make human workers obsolete even in the intelligence domain. 
In this view, AI replaces workers in some jobs that it can do better or more cheaply, but new jobs are also created, and AI makes people more productive overall. Overall growth then looks much as it would without AI, only a little faster.</p><p>This tension is something we both know well: One of us (Konrad) is a neuroscientist who studies how artificial systems become intelligent; the other (Ioana) is a labor economist specializing in technological change. So we teamed up and spent a year working out why these communities have such distinct takes, aiming to produce a credible overarching framework. The result is a paper on <a href="https://www.brookings.edu/articles/artificial-intelligence-saturation-and-the-future-of-work/">Intelligence Saturation and the future of work</a>, released in late 2025.</p><h4>Physical Meets Intelligence</h4><p>Economists traditionally divide the economy into two complementary sectors: capital and labor. We can replace capital, which includes machinery, equipment and technology, with human labor and vice versa, but that replacement is often difficult. The more we replace one with the other, the harder it is to replace more, because the easiest tasks to replace are targeted first. In our paper we argue that it is crucial to also divide the economy into the &#8220;intelligence parts&#8221; and the &#8220;physical parts.&#8221;</p><p>The intelligence sector comprises things that can be done virtually, remotely, purely through information processing. The physical sector comprises things that require bodies, presence, and manipulation of the actual world.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;f066f5b4-a4b4-4330-8b19-a41d0f67a6ba&quot;,&quot;caption&quot;:&quot;People building AI think it will eliminate many millions of jobs. People who study labor markets think it won&#8217;t. 
At least one of these groups is badly wrong &#8212; and the stakes are extremely high.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Why no one can agree on what AI will do to jobs&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:280514,&quot;name&quot;:&quot;Lynette Bye&quot;,&quot;bio&quot;:&quot;A Harvard graduate and former Tarbell Fellow for journalists, I write about AI's growing influence on society.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F377af0c9-6ae8-4e2c-b29d-2f51cd2c2175_512x512.jpeg&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://lynettebye.substack.com/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://lynettebye.substack.com&quot;,&quot;primaryPublicationName&quot;:&quot;Lynette 
Bye&quot;,&quot;primaryPublicationId&quot;:2639094}],&quot;post_date&quot;:&quot;2026-01-14T16:31:07.965Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!c9ZB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda715b6e-0110-4dfa-bda2-93671d235b53_4000x2666.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/why-no-one-can-agree-on-what-ai-will-do-to-jobs-employment-unemployment-economy&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:184556836,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:24,&quot;comment_count&quot;:3,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>We believe that AI may be eating the pure intelligence sector alive. But here&#8217;s the catch: intelligence and physicality are <em>complements</em>, not substitutes. You need both.</p><p>Think about education. AI may be able to generate perfect lesson plans, but students still benefit enormously from a teacher in the room &#8212; the physical presence, the classroom management, the hands-on activities. COVID taught us this the hard way: districts with remote learning saw significantly <a href="https://journals.sagepub.com/doi/abs/10.1177/01614681251369937">worse outcomes</a>.</p><p>Or manufacturing. Smarter controllers can optimize production lines beautifully. But you still need better robots and assembly equipment, and those aren&#8217;t doubling in capability every six months. 
Physical construction, for example, still takes significant time: manufacturing projects valued at more than $100m average <a href="https://www.census.gov/construction/c30/pdf/t123.pdf">25.6 months to complete</a>.</p><p>Or healthcare. AI diagnostics are impressive. But someone still has to examine the patient, perform the surgery, administer the treatment. And to develop new cures you don&#8217;t just need to read the literature, you need to run randomized controlled trials on human subjects, very much a <a href="https://journals.plos.org/plosbiology/article?id=10.1371%2Fjournal.pbio.3001562">slow-growing resource</a>.</p><p>Intelligence saturates because physical inputs don&#8217;t scale the same way. You can add infinite intelligence, but if physical capacity is fixed, returns eventually plateau. And if physical capacity grows much more slowly than intelligence, then overall growth is slower.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!l2eh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7cd4a3d-d69e-4b5d-a144-1b6a981d676b_1456x571.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!l2eh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7cd4a3d-d69e-4b5d-a144-1b6a981d676b_1456x571.png 424w, https://substackcdn.com/image/fetch/$s_!l2eh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7cd4a3d-d69e-4b5d-a144-1b6a981d676b_1456x571.png 848w, 
https://substackcdn.com/image/fetch/$s_!l2eh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7cd4a3d-d69e-4b5d-a144-1b6a981d676b_1456x571.png 1272w, https://substackcdn.com/image/fetch/$s_!l2eh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7cd4a3d-d69e-4b5d-a144-1b6a981d676b_1456x571.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!l2eh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7cd4a3d-d69e-4b5d-a144-1b6a981d676b_1456x571.png" width="1456" height="571" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e7cd4a3d-d69e-4b5d-a144-1b6a981d676b_1456x571.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:571,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!l2eh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7cd4a3d-d69e-4b5d-a144-1b6a981d676b_1456x571.png 424w, https://substackcdn.com/image/fetch/$s_!l2eh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7cd4a3d-d69e-4b5d-a144-1b6a981d676b_1456x571.png 848w, 
https://substackcdn.com/image/fetch/$s_!l2eh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7cd4a3d-d69e-4b5d-a144-1b6a981d676b_1456x571.png 1272w, https://substackcdn.com/image/fetch/$s_!l2eh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7cd4a3d-d69e-4b5d-a144-1b6a981d676b_1456x571.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Here&#8217;s where it gets interesting (and concerning). 
In our model, as AI automates intelligence tasks, workers shift toward physical jobs. This creates two opposing forces on wages:</p><ol><li><p><strong>Scale effect</strong>: More AI boosts intelligence output, which enhances the value of physical work. (Think of AI optimizing marketing and food purchases for restaurants, so that humans can be as productive as possible.)</p></li><li><p><strong>Reallocation effect</strong>: More workers crowding into physical jobs pushes wages down, because there is only so much physical capital available.</p></li></ol><p>Which of these effects wins out depends on how easily an intelligence output can substitute for a physical one; for example, how many call center workers can you replace with customer service bots? Typically, early in automation the scale effect dominates and wages rise, because AI is newly deployed where it can be most effective. Later, as most intelligence tasks are automated and workers pile into the physical sector, the reallocation effect wins and wages fall. The result? Depending on the dynamics of how AI is adopted in the labor market, there is often a hump-shaped trajectory: wages up, then wages down. This isn&#8217;t a prediction, because it depends on parameters we don&#8217;t know precisely; in particular, the eventual decline in wages is less likely if physical and intelligence outputs are less substitutable. But it&#8217;s a <em>possibility</em> that the early wage effects of AI are positive and the long-term effects are negative.</p><p>We built an <a href="https://kordinglab.github.io/intelligence-saturation-model/">interactive tool</a> if you want to play with the parameters yourself. You can see how, for different parameter settings, the long-term result is indeed a singularity or a nothingburger. 
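</p><p>To make the two forces concrete, here is a minimal two-sector sketch. This is our own illustrative construction with assumed functional forms, not the exact model behind the tool: AI capacity A substitutes one-for-one for intelligence labor, physical output uses a fixed capital stock, the two outputs combine in a CES aggregate whose parameter rho sets how substitutable they are, and workers move between sectors until wages equalize.</p>

```python
# Toy two-sector economy (illustrative assumptions, not the authors' exact model):
#   intelligence output: I = A + L_i            (AI capacity A substitutes for intelligence labor)
#   physical output:     P = K**a * L_p**(1-a)  (fixed capital stock K)
#   total output:        Y = (I**rho + P**rho)**(1/rho)  (CES aggregate, rho < 1)
K, ALPHA, RHO = 1.0, 0.3, 0.5  # rho = 0.5: fairly substitutable outputs

def wages(L_i, A):
    """Marginal products of labor in each sector, given allocation L_i (L_p = 1 - L_i)."""
    L_p = 1.0 - L_i
    I = A + L_i
    P = K**ALPHA * L_p**(1.0 - ALPHA)
    scale = (I**RHO + P**RHO) ** ((1.0 - RHO) / RHO)
    w_i = scale * I ** (RHO - 1.0)
    w_p = scale * P ** (RHO - 1.0) * (1.0 - ALPHA) * K**ALPHA * L_p ** (-ALPHA)
    return w_i, w_p

def equilibrium(A):
    """Bisect on L_i until wages equalize; corner solution once physical jobs pay more."""
    w_i, w_p = wages(1e-9, A)
    if w_p >= w_i:  # reallocation complete: everyone works in the physical sector
        return 0.0, w_p
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        w_i, w_p = wages(mid, A)
        lo, hi = (mid, hi) if w_i > w_p else (lo, mid)
    return mid, 0.5 * (w_i + w_p)

A_grid = [0.05 * k for k in range(201)]  # AI capacity ramping from 0 to 10
shares, wage_path = zip(*(equilibrium(A) for A in A_grid))
```

<p>With these parameters, the intelligence employment share falls monotonically to zero as A grows, and the equilibrium wage dips while workers crowd into physical jobs (the reallocation effect) before the scale effect pulls it back up. How hard wages get hit, and for how long, is governed by rho, the substitutability that this sketch merely assumes.</p><p>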
But for the parameters we consider most realistic, the results are somewhere in between.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;2a0850dc-d004-4cdd-8473-f1570c0625a9&quot;,&quot;caption&quot;:&quot;Abdication&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The left is missing out on AI &quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:328772711,&quot;name&quot;:&quot;Dan Kagan-Kans&quot;,&quot;bio&quot;:&quot;writer on AI, science, ideas for publications like Transformer, the Wall Street Journal, American Scholar&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!ZCVj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1345599-db89-4a6b-9947-028c555de14c_1525x1525.jpeg&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://kagankans.substack.com/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://kagankans.substack.com&quot;,&quot;primaryPublicationName&quot;:&quot;Dan 
Kagan-Kans&quot;,&quot;primaryPublicationId&quot;:8041221}],&quot;post_date&quot;:&quot;2026-02-16T16:02:47.781Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!iL1E!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F593220f8-7a9d-4b5d-8d1d-534d17b3e2fe_1200x1200.gif&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/the-left-is-missing-out-on-ai-sanders-doctorow-bender-bores&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:188136159,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:308,&quot;comment_count&quot;:212,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>Everything here hinges on one question: how substitutable are physical and intelligence outputs? If you can easily swap in-person services for virtual ones, high levels of automation hit wages hard. If they&#8217;re strong complements, in the sense that AI systems still need humans to provide good results, or if people genuinely value physical goods and in-person services over virtual ones, then workers in the physical sector will be protected. This is measurable. We should be measuring it.</p><h4>Some Policy Implications</h4><p>So what does all of this mean for the way policymakers could think about maintaining wages? 
Here are three things we think are worth considering:</p><ol><li><p><strong>Slow down automation and invest in physical capital.</strong> If we&#8217;re racing toward the peak of the wage hump and are soon to hit the downward slope, it would make sense to buy time by making greater investments in the physical sector, so that it takes longer for the intelligence sector to saturate. Slowing down the rollout of automation would keep wages higher for longer.</p></li><li><p><strong>Protect physical sector complementarity.</strong> Policies that make virtual services perfect substitutes for in-person ones might boost output, but they will also hurt wages during the transition. Policymakers might want to ensure that human labor is required in some parts of the economy.</p></li><li><p><strong>Watch the intelligence employment share closely.</strong> In our model, wages can&#8217;t fall as a result of automation unless the share of workers in intelligence jobs rises. That&#8217;s the canary in this coal mine: if that share begins to shift, wage declines could soon follow.</p></li></ol><p>The singularity narrative advanced by some AI folks assumes unbounded returns from intelligence. But you can&#8217;t build a car with computation. You can&#8217;t cook a meal with algorithms. You can&#8217;t construct a building with cleverness. The physical world imposes constraints that intelligence can only optimize against, not eliminate.</p><p>That&#8217;s not a reason for complacency: the transition could still be rough. But it&#8217;s a reason to think the AI transformation will be significant yet bounded, not infinite. Intelligence is powerful. 
But it saturates.</p><div><hr></div><p><em>Konrad K&#246;rding is the Nathan Mossell Penn Integrates Knowledge Professor at the University of Pennsylvania and the co-director of the CIFAR Learning in Machines and Brains program. Ioana Marinescu is an Associate Professor at the University of Pennsylvania School of Social Policy and Practice, and a Faculty Research Fellow at the National Bureau of Economic Research.</em></p>]]></content:encoded></item><item><title><![CDATA[Not everyone’s happy about Jensen Huang’s direct line to Trump]]></title><description><![CDATA[The Nvidia CEO&#8217;s influence over the administration, particularly on export controls, is causing ructions in Trumpworld]]></description><link>https://www.transformernews.ai/p/not-everyones-happy-about-jensen-trumpworld-white-house-export-controls-nvidia</link><guid isPermaLink="false">https://www.transformernews.ai/p/not-everyones-happy-about-jensen-trumpworld-white-house-export-controls-nvidia</guid><dc:creator><![CDATA[Jake Lahut]]></dc:creator><pubDate>Tue, 24 Mar 2026 16:30:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!28ur!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad07ceed-04d9-416f-8ebe-8488f06b1130_1024x683.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div 
class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!28ur!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad07ceed-04d9-416f-8ebe-8488f06b1130_1024x683.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!28ur!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad07ceed-04d9-416f-8ebe-8488f06b1130_1024x683.jpeg 424w, https://substackcdn.com/image/fetch/$s_!28ur!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad07ceed-04d9-416f-8ebe-8488f06b1130_1024x683.jpeg 848w, https://substackcdn.com/image/fetch/$s_!28ur!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad07ceed-04d9-416f-8ebe-8488f06b1130_1024x683.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!28ur!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad07ceed-04d9-416f-8ebe-8488f06b1130_1024x683.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!28ur!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad07ceed-04d9-416f-8ebe-8488f06b1130_1024x683.jpeg" width="1024" height="683" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ad07ceed-04d9-416f-8ebe-8488f06b1130_1024x683.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:683,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:102519,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/191979770?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad07ceed-04d9-416f-8ebe-8488f06b1130_1024x683.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!28ur!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad07ceed-04d9-416f-8ebe-8488f06b1130_1024x683.jpeg 424w, https://substackcdn.com/image/fetch/$s_!28ur!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad07ceed-04d9-416f-8ebe-8488f06b1130_1024x683.jpeg 848w, https://substackcdn.com/image/fetch/$s_!28ur!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad07ceed-04d9-416f-8ebe-8488f06b1130_1024x683.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!28ur!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad07ceed-04d9-416f-8ebe-8488f06b1130_1024x683.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Donald Trump and Jensen Huang at the White House in April 2025. Credit: Getty/Andrew Harnik</em></figcaption></figure></div><p>Nvidia has drastically<a href="https://www.bloomberg.com/news/articles/2026-01-22/big-tech-leaders-spend-record-109-million-to-win-over-deal-minded-trump#"> ramped up its spending on lobbying</a> in Washington, but the center of gravity for its influence operation lies in a much more personal relationship, the one between President Donald Trump and the chip company&#8217;s CEO, Jensen Huang.</p><p>That relationship, by all accounts, is strong. 
So too is Huang&#8217;s standing with administration officials hailing from the tech right, most notably David Sacks.</p><p>Elsewhere in Trumpworld, however, Huang appears to be racking up enemies almost as fast as he&#8217;s cranking out chips, with tensions simmering behind the scenes for months.</p><p>Sources in the president&#8217;s orbit credit Huang for managing his relationship with Trump better than any other big tech CEO, but they also say it&#8217;s come at a cost. Huang has gained a reputation in Trumpworld for being &#8220;heavy handed,&#8221; going over the heads of senior officials, and carrying a general arrogance that has left more than just a sour taste in the mouths of longtime Trump allies, according to five sources, including a White House official, most of whom requested anonymity for fear of retaliation.</p><p>&#8220;Trump loves Jensen. Jensen has done a better job than anyone&#8221; in managing his relationship with Trump, one Republican operative says. &#8220;He&#8217;s in the building constantly, he&#8217;s traveling with Trump on business trips. It&#8217;s everything about what lobbying is in this new era, and Jensen has done it better than anybody.&#8221;</p><p>&#8220;But the downside is you piss off a lot of people, and Jensen has done it in a very brazen way.&#8221;</p><p>One of the few prepared to go on the record is Steve Bannon, who has been at the forefront of the Huang-skeptic wing of the MAGA elite.</p><p>&#8220;This guy does not have a light touch,&#8221; says Bannon, the former Trump White House chief strategist and influential figure among the MAGA base. 
&#8220;Number one, he understands the powerful position he&#8217;s in, and he uses that&#8230; He ain&#8217;t shy about throwing an elbow, and he doesn&#8217;t respect anybody in the government.&#8221;</p><p>&#8220;He&#8217;s what Trump would call a killer,&#8221; Bannon adds, the highest degree of praise that can be bestowed upon anyone around the president.</p><p>Nvidia did not respond to a list of questions from <em>Transformer</em>.</p><p>Bannon, and other sources who spoke to <em>Transformer</em> under the condition of anonymity, have a variety of motivations to speak ill of Huang, from a deep skepticism around the AI industry to outright jealousy of his proximity to Trump. (Bannon has also been embroiled in the fallout from the release of the Epstein files, where emails have documented his and Epstein&#8217;s crisis PR relationship.) However, their frustrations underscore a key tension within the president&#8217;s orbit as Nvidia underpins much of the stock market and the future of the American economy.</p><p>Huang appears to be alienating swaths of Trump&#8217;s loyal servants, but there may not be anyone powerful enough to do anything about it.</p><p>&#8220;His leverage is &#8216;hey, the last thing you want is for me to fail. If my thing goes down, the whole market goes down,&#8217;&#8221; says an AI industry source close to Trumpworld.</p><p>There is a deep level of discomfort over Huang&#8217;s level of influence in the Trump administration, according to sources, but there&#8217;s an equally powerful hesitancy in voicing any dissent over anything contradicting the Huang and Sacks house view, particularly on the so-called AI arms race with China. 
Their argument is essentially that the only way the US can win is if American companies dominate the market, and Nvidia is key to everyone else thriving.</p><p>Huang&#8217;s victory in getting back into the Chinese market after Trump reversed a Biden-era restriction on the company&#8217;s chips &#8212; though not its top-of-the-line models &#8212; came at the cost of poisoning the well with much of the president&#8217;s orbit, and then some. &#8220;He&#8217;s heavy handed and plays all sides,&#8221; the Republican operative said of Huang&#8217;s reputation in GOP power circles.</p><p>That heavy-handedness allegedly extends to yelling at members of Congress over export controls, according to two sources who had heard about such incidents. That came around the same time as a public spat with<a href="https://thehill.com/policy/technology/5697225-mast-nvidia-clash-ai-chips/"> Rep. Brian Mast</a> (R-FL) &#8212; the chairman of the House Foreign Affairs Committee, who proposed the AI OVERWATCH Act, which would give Congress the power to ban chip exports to certain countries deemed a threat, including China and Russia.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;34fce8bb-5859-46a6-92c1-11938a49f497&quot;,&quot;caption&quot;:&quot;by Issie Lapowsky&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The &#8220;guerilla warrior&#8221; who taught OpenAI to 
fight&quot;,&quot;publishedBylines&quot;:[],&quot;post_date&quot;:&quot;2026-03-03T16:02:29.980Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!r03c!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c73b0c1-f6cf-4aed-8d67-1f8dccc5f391_1456x1048.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/the-guerilla-warrior-who-taught-openai-chris-lehane&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:189388821,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:33,&quot;comment_count&quot;:1,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>The bill faced vocal opposition from various sources, including Huang and Sacks, as well as a barrage of social media posts from right wing influencers that some<a href="https://www.modelrepublic.org/articles/right-wing-pundits-suddenly-hate-an-ai-bill.-are-they-getting-paid-to-kill-it"> reports</a> have suggested were coordinated, and which one source described to <em>Transformer </em>as a &#8220;paid influencer campaign.&#8221; It is unclear who might have been behind such a campaign, and there is no evidence Nvidia was involved. 
Mast at the time chided in a tweet that &#8220;every so-called MAGA influencer being paid to push this garbage should be embarrassed.&#8221;</p><p>The Republican operative cautioned that there is also a &#8220;tension clash&#8221; building between Huang and Sacks on one side, with Treasury Secretary Scott Bessent and, to some degree, Commerce Secretary Howard Lutnick on the other. Lutnick reportedly got some face time with Huang at a private reception after Nvidia&#8217;s GTC conference in San Jose last week, according to<a href="https://punchbowl.news/archive/31726-am/"> </a><em><a href="https://punchbowl.news/archive/31726-am/">Punchbowl News</a></em>.</p><p>White House spokesman Kush Desai pushed back on any internal tensions, telling <em>Transformer </em>in a statement that Trump &#8220;pledged to restore America as the most dynamic, pro-business economy in the world. The President accordingly maintains open lines of communication with global business leaders, and has assembled a world-class cabinet with decades of private-sector experience to help him govern. The only special interest that ultimately influences the President&#8217;s decision-making, however, is the best interest of the American people.&#8221;</p><p>Another White House official, who spoke on the condition of anonymity, explained that Huang is an example of how informal lobbying works in Trump 2.0.</p><p>&#8220;This White House is very unique in that normally, things would have to move up the ladder to go to the president,&#8221; the White House official tells <em>Transformer. </em>But now, this source explains, having a direct line with the boss can often leave the normal lobbying operation, and various fiefdoms within the administration, as somewhere between an afterthought and a formality. 
The details may fall to them, but the real action is happening when Huang works the phones with Trump and gets face time with him in Washington or Mar-a-Lago.</p><p>&#8220;I can&#8217;t think of anyone else who does this as well. Everyone has figured out that you need a direct contact to the President,&#8221; the source adds. &#8220;Why would you go through lobbyists and take months and months when you could do that? So that&#8217;s understandably angered a lot of the traditional actors, who are not that enthusiastic about it. But it is more efficient, you can give him that.&#8221;</p><p>Huang perhaps got the most bang for his buck in April on a trip down to Trump&#8217;s &#8220;Winter White House&#8221; in Florida for a<a href="https://www.nytimes.com/2025/07/17/technology/nvidia-trump-ai-chips-china.html"> $1m-a-head dinner</a> that preceded his victory in getting Nvidia back into China to sell the company&#8217;s specialized chips.</p><p>Yet it was in September, when Huang made an appearance on the BG2 podcast, that he may have gone too far for the China hawks in Trump&#8217;s orbit.</p><p>&#8220;They want to attract foreign investment,&#8221; Huang said of China. &#8220;They want companies to come to China and compete in the marketplace and I believe that. &#8230; I do hope because they say it &#8212; their leaders say it. And I take it at face value. 
And I believe it because I think it makes sense for China that what&#8217;s in the best interest of China is for foreign companies to invest in China, compete in China, and for them to also have vibrant competition themselves.&#8221;</p><div><hr></div><p>Beyond taking China &#8220;at face value,&#8221; Huang only made matters worse when he derided the China Hawk identity the Trump campaign embraced and &#8212; ostensibly &#8212; carried into office.</p><p>&#8220;As you know, there&#8217;s a phrase, and I didn&#8217;t hear about this phrase until just a few years ago. &#8216;China Hawks.&#8217; And apparently, if you&#8217;re a China Hawk, you get to wear that label with pride. It&#8217;s almost like a badge of honor. It&#8217;s a badge of shame. There&#8217;s no question. It&#8217;s a badge of shame.&#8221;</p><p>Multiple sources in and around the White House reached out to me about that clip at the time, expressing a range of dismay. 
Trump ran on aggressively cornering China through any diplomatic and economic means necessary, exemplified most prominently by his administration&#8217;s commitment to tariffs on imported Chinese goods.</p><p>&#8220;So Jensen is not naive about how this thing works, right? But what he needs is diversity in his revenue,&#8221; says the Republican source in the AI industry. He notes that Huang made sure to grease the wheels in a way the Trump family would appreciate: in late December, Nvidia spent $20b on an acquihire and licensing deal with startup Groq, a company that just so happens to be in the investment portfolio of 1789 Capital, Don Jr.&#8217;s investment firm.</p><p>Even within the hyper-transactional culture of Trumpworld, that move was a bridge too far for some. It also highlights a lingering problem among Trump&#8217;s advisers and those in the AI industry counting on his administration to deliver them wins on policy.</p><p>With Trump approaching age 80, it&#8217;s unclear how much he knows about the specifics of chip fabrication, training AI models, and the sheer scale of the capital expenditures going into data centers.</p><p>&#8220;If Don Jr. was smart and they&#8217;re smart, they just don&#8217;t tell him,&#8221; the Republican industry source says. &#8220;They don&#8217;t even know they&#8217;re invested in a chip company that Groq&#8217;s going to buy. Trump probably doesn&#8217;t even know that. It&#8217;s not calculated with him. He&#8217;s got his influence circle around him that he really listens to, and those people have tremendous conflicts.&#8221;</p><p>A representative for Don Jr. and 1789 Capital did not return a request for comment.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;2e00c44d-4da3-494e-bcd4-c5e38cabf791&quot;,&quot;caption&quot;:&quot;One of Elon Musk&#8217;s companies getting embroiled in a bitter legal dispute with a local community is hardly a rare occurrence. 
SpaceX has had multiple fights with federal agencies and conservation groups over its Texas launch site. X, meanwhile, had several arguments with San Francisco&#8217;s municipal authorit&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Why the AI industry can&#8217;t resist dirty on-site gas turbines&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:1757381,&quot;name&quot;:&quot;James Ball&quot;,&quot;bio&quot;:&quot;Tech, policy, politics. Political editor @ The New World, Fellow @ Demos, newsletter @ techtris, PhD researcher @ UCL Laws. Latest book: The Other Pandemic &#8211; How QAnon Contaminated The World.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!qgV8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff177d2f9-67c3-4cc2-bd05-595777d9d936_1176x1176.jpeg&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://www.jamesrball.com/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://www.jamesrball.com&quot;,&quot;primaryPublicationName&quot;:&quot;Techtris&quot;,&quot;primaryPublicationId&quot;:1544032}],&quot;post_date&quot;:&quot;2026-02-12T16:30:37.325Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!m5Os!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28096202-4c7d-40f6-ad95-d0ad642536c0_1660x1118.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/why-the-ai-industry-cant-resist-dirty-elon-musk-xai-colossus&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:187740423,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:6,&quot;comment_count&quot;:1,&quot;publication_id&quot;:16
88188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>This industry source adds that they&#8217;ve &#8220;told people to tell [Treasury Secretary Scott] Bessent that they should tone this stuff down. It is going to implode.&#8221; The only sign of any public friction between Bessent and Huang came in an <em><a href="https://www.axios.com/2026/03/05/trump-ai-chip-clash-white-house">Axios</a></em><a href="https://www.axios.com/2026/03/05/trump-ai-chip-clash-white-house"> report</a> about a set of draft rules from the Commerce Department on controlling AI chip exports.</p><p>While the discontent around Huang is largely about Nvidia&#8217;s approach to China and how the CEO is perceived internally as undercutting the administration&#8217;s agenda on the country, much of it also comes down to Huang wielding an unprecedented amount of power.</p><p>In one breath, Bannon declares that Huang is effectively doing the bidding of the Chinese government, even though his family is Taiwanese and there is no evidence to suggest he&#8217;s a foreign agent &#8212; a distinction Bannon makes when he calls him &#8220;an agent of influence for the CCP.&#8221; (Bannon has previously said on his show that Huang<a href="https://x.com/Bannons_WarRoom/status/1971956070034952208?s=20"> should be arrested</a>.)</p><p>Yet in the next, Bannon is willing to acknowledge that Huang is operating with a level of power closer to that of a nation state than that of the CEO of an American company. &#8220;He wasn&#8217;t gifted this, he wasn&#8217;t given this. 
The guy struggled.&#8221;</p><p>Bannon says that inevitably contributed to Huang&#8217;s confidence.</p><p>&#8220;He&#8217;s shaped the future. How many people in world history can say that? Very few.&#8221;</p><p>In December, Trump made the final decision to allow Nvidia&#8217;s more advanced H200 chips into China in exchange for the US receiving a 25% surcharge on the sales. The Trumpworld AI source tells <em>Transformer </em>that they believe Huang didn&#8217;t even need to apply any additional pressure to convince Trump the deal was good for the US and get it over the line. The CEO already had the inner gears of the White House turning in his favor.</p><p>&#8220;My guess is [Trump] was probably duped, not by Jensen himself, but by other people around him.&#8221;</p>]]></content:encoded></item><item><title><![CDATA[The White House is trying to make AI a partisan issue again ]]></title><description><![CDATA[The federal framework focuses on broad issues such as child safety and data centers, but has little for Dems or AI safety advocates]]></description><link>https://www.transformernews.ai/p/the-white-house-ai-federal-framework-partisan-blackburn-preemption</link><guid isPermaLink="false">https://www.transformernews.ai/p/the-white-house-ai-federal-framework-partisan-blackburn-preemption</guid><dc:creator><![CDATA[Veronica Irwin]]></dc:creator><pubDate>Fri, 20 Mar 2026 16:31:23 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!fgnt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facbbe035-2065-4f97-9754-561bf1c66764_5930x4000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!fgnt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facbbe035-2065-4f97-9754-561bf1c66764_5930x4000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fgnt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facbbe035-2065-4f97-9754-561bf1c66764_5930x4000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!fgnt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facbbe035-2065-4f97-9754-561bf1c66764_5930x4000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!fgnt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facbbe035-2065-4f97-9754-561bf1c66764_5930x4000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!fgnt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facbbe035-2065-4f97-9754-561bf1c66764_5930x4000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fgnt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facbbe035-2065-4f97-9754-561bf1c66764_5930x4000.jpeg" width="1456" height="982" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/acbbe035-2065-4f97-9754-561bf1c66764_5930x4000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:982,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5091838,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/191594295?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facbbe035-2065-4f97-9754-561bf1c66764_5930x4000.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!fgnt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facbbe035-2065-4f97-9754-561bf1c66764_5930x4000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!fgnt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facbbe035-2065-4f97-9754-561bf1c66764_5930x4000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!fgnt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facbbe035-2065-4f97-9754-561bf1c66764_5930x4000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!fgnt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facbbe035-2065-4f97-9754-561bf1c66764_5930x4000.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The White House <a href="https://www.whitehouse.gov/wp-content/uploads/2026/03/03.20.26-National-Policy-Framework-for-Artificial-Intelligence-Legislative-Recommendations.pdf">released</a> a framework for federal AI legislation on Friday morning that appears designed to drive a partisan wedge through the loose coalition increasingly concerned about the impact of AI. </p><p>The framework pushes for preemption of state laws while focusing on issues such as child safety and data centers that matter most to Republican voters. Topics such as algorithmic bias, which have animated Democrats, or the kinds of concerns around biorisk or loss of control that have traditionally motivated AI safety organizations, don&#8217;t get a look-in. 
Neither do some issues that have been raised by both sides of the aisle, such as large-scale threats to jobs.</p><p>The Trump administration is under pressure to pass AI legislation before the midterms, when the Democrats are <a href="https://www.nytimes.com/interactive/polls/congressional-vote-2026.html">expected</a> to reclaim at least one chamber of Congress. This may explain why the framework was released just two days after Senator Marsha Blackburn, a senior Republican with an eye for AI safety and complicated intra-party dynamics, <a href="https://www.blackburn.senate.gov/services/files/15AAEA28-5403-480D-8720-5E4C2D6F2A9A">released</a> her own draft legislation. The framework seems intended to streamline Republican messaging on AI regulation and counter Blackburn&#8217;s bill.</p><p>The four-page document addresses the following topics, in order: <br><br><strong>Child Safety</strong>. The framework implores Congress to create &#8220;commercially reasonable, privacy protective, age-assurance requirements&#8221; and &#8220;require AI platforms and services &#8230;to implement features that reduce the risks of sexual exploitation and self-harm to minors.&#8221; These principles are lighter touch than some state laws, but go further than other parts of the White House&#8217;s proposals in placing some liability on AI companies. However, the framework also says such laws should avoid standards that could &#8220;give rise to excessive litigation,&#8221; which suggests child protection responsibilities for AI firms would be narrowly defined.</p><p><strong>Energy. 
</strong>The framework asks Congress to codify the promises the White House <a href="https://www.whitehouse.gov/articles/2026/03/ratepayer-protection-pledge/">received</a> from data center companies earlier this month to pay for energy production that offsets their own consumption, or something similar. Rising energy prices in districts hosting data centers have led to strong resistance to new data center projects, particularly in Republican states. This section also expresses support for permitting reform, typically described as weakening environmental laws that activist groups use to slow development or mount legal challenges.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;95e721ea-1a91-4fc7-8c67-98fcfe71cbb4&quot;,&quot;caption&quot;:&quot;One of Elon Musk&#8217;s companies getting embroiled in a bitter legal dispute with a local community is hardly a rare occurrence. SpaceX has had multiple fights with federal agencies and conservation groups over its Texas launch site. X, meanwhile, had several arguments with San Francisco&#8217;s municipal authorit&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Why the AI industry can&#8217;t resist dirty on-site gas turbines&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:1757381,&quot;name&quot;:&quot;James Ball&quot;,&quot;bio&quot;:&quot;Tech, policy, politics. Political editor @ The New World, Fellow @ Demos, newsletter @ techtris, PhD researcher @ UCL Laws. 
Latest book: The Other Pandemic &#8211; How QAnon Contaminated The World.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!qgV8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff177d2f9-67c3-4cc2-bd05-595777d9d936_1176x1176.jpeg&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://www.jamesrball.com/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://www.jamesrball.com&quot;,&quot;primaryPublicationName&quot;:&quot;Techtris&quot;,&quot;primaryPublicationId&quot;:1544032}],&quot;post_date&quot;:&quot;2026-02-12T16:30:37.325Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!m5Os!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28096202-4c7d-40f6-ad95-d0ad642536c0_1660x1118.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/why-the-ai-industry-cant-resist-dirty-elon-musk-xai-colossus&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:187740423,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:6,&quot;comment_count&quot;:1,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p><strong>Intellectual Property. 
</strong>Training of frontier models on intellectual property, as well as content created with the use of AI-generated talent, has given rise to extensive litigation and pushback from Hollywood and other creators. The White House is attempting to quell this by pushing for federal legislation which protects against &#8220;unauthorized distribution&#8221; of AI-generated content which mimics a person&#8217;s &#8220;voice, likeness, or other identifiable attributes,&#8221; and requesting that Congress enable &#8220;licensing frameworks or collective rights systems for rights holders to collectively negotiate compensation from AI providers.&#8221;</p><p>However, this section has multiple caveats that protect AI companies, including the suggestions that such negotiations should not incur &#8220;antitrust liability&#8221; and should &#8220;not address when or whether such licensing is required.&#8221; It also leads with the clarification that the &#8220;Administration believes that training of AI models on copyrighted material does not violate copyright laws&#8230;and therefore supports allowing the Courts to resolve this issue.&#8221;</p><p><strong>Free Speech and Education</strong>. The framework also includes short sections on key conservative AI talking points. It says that federal legislation, for example, should avoid any censorship of &#8220;expression&#8221; or content moderation based on &#8220;partisan or ideological agendas.&#8221; It also includes requests to increase information sharing between the government and industry and upskill government offices so that they are better equipped to use AI.</p><p><strong>Strong Industry Carve-Outs</strong> <strong>and Federal Preemption. </strong>The end of the document includes major protections for industry to curb the creation of any federal legislation that could slow AI development. 
For example, it says that Congress should neither allow for stronger state AI regulations nor create any new agency or rulemaking body for federal AI regulation. Instead, it argues that Congress should rely on &#8220;existing regulatory bodies with subject matter expertise and through industry-led standards.&#8221;</p><p>&#8220;Preemption must ensure that State laws do not govern areas better suited to the Federal Government or act contrary to the United States&#8217; national strategy to achieve global AI dominance,&#8221; the framework goes on. This would effectively codify the White House&#8217;s executive order from December, released after Blackburn led opposition that halted its previous attempt at preemption.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;4ac06716-a246-4372-9b8d-a7db976fac8e&quot;,&quot;caption&quot;:&quot;This is the draft executive order from President Trump on AI preemption.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Exclusive: Here's the draft Trump executive order on AI preemption&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:1083827,&quot;name&quot;:&quot;Shakeel Hashim&quot;,&quot;bio&quot;:&quot;Shakeel is the editor of Transformer, a publication about the power and politics of transformative AI. 
He was previously a news editor at The Economist.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/46d18811-2ce6-4548-ac81-df4bfb16acd9_1365x1365.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-11-19T23:17:55.497Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Jbnq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe211baf0-1946-4b85-b97b-3d1ef07bb708_1438x760.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/exclusive-heres-the-draft-trump-executive&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:179402456,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:22,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>Several other topics that have been pushed by AI safety advocates are absent. There is no mention of a national policy on curbing frontier model risk, for example, such as the mandated reporting or user disclosures included in bills such as SB53 in California or the RAISE Act in New York, which are also a component of Blackburn&#8217;s bill. 
There is also no mention of a federal law regulating chip exports, which has been an even more divisive issue within the Republican party; several of its members in Congress have <a href="https://www.reuters.com/world/china/giving-nvidias-blackwell-chip-china-would-slash-uss-ai-advantage-experts-say-2025-10-29/">outspokenly disagreed</a> with the White House&#8217;s decision to permit the sale of advanced chips to China.</p><p>There is also no mention of safety concerns championed by Democrats, such as algorithmic discrimination on the basis of anything other than viewpoint and speech, or some concerns shared by both sides, such as widespread workforce automation.</p><p>Another key issue omitted from the framework is Section 230, a provision that protects tech companies from liability for what users post on their platforms. The provision has been fiercely protected by tech companies over the past three decades, in court and through lobbying efforts and Congressional testimony, but has been criticized by both parties as simply empowering Big Tech. The last portion of the framework, which details federal preemption and other carve-outs, shields AI companies in similar ways from liability for negative effects on their users.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;77d7d8f0-5c1b-4705-8593-da47f76b3b37&quot;,&quot;caption&quot;:&quot;A wave of legislation targeting chatbots such as ChatGPT and Claude has emerged in six states since the start of the year, each bill strikingly similar to a recently passed Oregon law, but with new carve-outs that would shield AI companies from liability in some circumstances. 
Critics say these bills would lock in weaker protect&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Six states, one playbook: the chatbot bills raising red flags &quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:13910071,&quot;name&quot;:&quot;Veronica Irwin&quot;,&quot;bio&quot;:&quot;Senior AI Policy Reporter at Transformer X/Bsky: @vronirwin IG/Threads: @vronwrites LinkedIn: https://www.linkedin.com/in/veronica-irwin-009266112/ Signal: vronirwin.72 veronica(at)transformernews(dot)ai &quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ad8cbf86-6b1f-4387-97e1-e69f1cbb3ec7_2448x2448.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-03-19T16:31:16.483Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!MtuQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F616e86ef-5b16-4eaf-b44b-41b9a40a5dfb_6720x4480.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/six-states-one-playbook-the-chatbot-child-safety-oregon-hawaii-colorado-arizona-georgia-nebraska-idaho&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:191488143,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:8,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></
div><p>Blackburn&#8217;s bill, by contrast, would sunset Section 230. Brad Carson, who leads the advocacy group Americans for Responsible Innovation and the Public First Action network of AI safety-focused super PACs, called the White House framework &#8220;230 on testosterone.&#8221; Blackburn&#8217;s bill also includes proposals for dealing with AGI, as well as reporting requirements for frontier model development.</p><p>Blackburn <a href="https://x.com/dareasmunhoz/status/2035016339304178032?s=46">said</a> she looks &#8220;forward to working with my colleagues to codify the President&#8217;s agenda&#8221; after the White House release Friday morning.</p><p>Prior to both the framework and Blackburn&#8217;s draft, AI policy watchers were already expecting an AI plan from Senator Ted Cruz, who four sources say will now lead the legislative push for a bill in line with the White House&#8217;s recommendations. Just last week, Cruz said that he planned to release a plan for AI legislation by the end of April.</p><p>Republican House leadership <a href="https://mikejohnson.house.gov/news/documentsingle.aspx?DocumentID=2860">released</a> a statement shortly after the White House framework was published giving it their full support, indicating that a corresponding bill would be prioritized.</p><div><hr></div><p><em>Your usual <a href="https://www.transformernews.ai/t/briefing">Transformer Weekly</a> with all the 
AI policy news that matters will be back next Friday. </em></p>]]></content:encoded></item><item><title><![CDATA[Six states, one playbook: the chatbot bills raising red flags ]]></title><description><![CDATA[Google&#8217;s intervened on at least three of the bills]]></description><link>https://www.transformernews.ai/p/six-states-one-playbook-the-chatbot-child-safety-oregon-hawaii-colorado-arizona-georgia-nebraska-idaho</link><guid isPermaLink="false">https://www.transformernews.ai/p/six-states-one-playbook-the-chatbot-child-safety-oregon-hawaii-colorado-arizona-georgia-nebraska-idaho</guid><dc:creator><![CDATA[Veronica Irwin]]></dc:creator><pubDate>Thu, 19 Mar 2026 16:31:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MtuQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F616e86ef-5b16-4eaf-b44b-41b9a40a5dfb_6720x4480.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MtuQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F616e86ef-5b16-4eaf-b44b-41b9a40a5dfb_6720x4480.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MtuQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F616e86ef-5b16-4eaf-b44b-41b9a40a5dfb_6720x4480.jpeg 424w, https://substackcdn.com/image/fetch/$s_!MtuQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F616e86ef-5b16-4eaf-b44b-41b9a40a5dfb_6720x4480.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!MtuQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F616e86ef-5b16-4eaf-b44b-41b9a40a5dfb_6720x4480.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!MtuQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F616e86ef-5b16-4eaf-b44b-41b9a40a5dfb_6720x4480.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MtuQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F616e86ef-5b16-4eaf-b44b-41b9a40a5dfb_6720x4480.jpeg" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/616e86ef-5b16-4eaf-b44b-41b9a40a5dfb_6720x4480.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:19401548,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/191488143?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F616e86ef-5b16-4eaf-b44b-41b9a40a5dfb_6720x4480.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MtuQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F616e86ef-5b16-4eaf-b44b-41b9a40a5dfb_6720x4480.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!MtuQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F616e86ef-5b16-4eaf-b44b-41b9a40a5dfb_6720x4480.jpeg 848w, https://substackcdn.com/image/fetch/$s_!MtuQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F616e86ef-5b16-4eaf-b44b-41b9a40a5dfb_6720x4480.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!MtuQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F616e86ef-5b16-4eaf-b44b-41b9a40a5dfb_6720x4480.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
Georgia&#8217;s bill is the one exception: it does not include this carve-out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/subscribe?"><span>Subscribe now</span></a></p><p>Second, these bills limit the terms under which victims can pursue legal action against companies operating chatbots. In the most restrictive bills, only the attorney general has the authority to enforce the law. This rules out a private right of action, under which individual citizens can bring claims &#8212; something commercial interests have historically <a href="https://instituteforlegalreform.com/research/ill-suited-private-rights-of-action-and-privacy-claims/">argued</a> encourages excessive or frivolous litigation. Some safety advocates find these provisions concerning, however, believing they significantly limit enforcement. This language &#8220;can be problematic in many states, if the AG does not have the manpower to do the enforcement,&#8221; explained Transparency Coalition cofounder Jai Jaisimha. And there can be budgetary constraints, he says. &#8220;In many states, the [AG] frequently has to ask for money.&#8221;</p><p>Finally, each bill contains careful language defining when a service must apply additional rules covering children. 
In Hawaii, for example, companies must have &#8220;actual knowledge or reasonable certainty&#8221; that a user is a minor in order to be held liable to the disclosure requirements, while in Arizona these requirements are strictly limited to &#8220;an account holder who is a minor.&#8221; Some safety advocates argue this allows companies which know that a user is likely a minor, but who have not been explicitly told as much by the user &#8212; in the case of free users that have not registered an account, for example &#8212; to evade the laws. Jaisimha calls these provisions a &#8220;technical loophole.&#8221;</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;d713b5b8-814e-4507-8db5-197da2205e07&quot;,&quot;caption&quot;:&quot;Protecting children when they interact with large language models has become one of the most prominent political and social forces directed at AI. There are legislative moves, such as the cross-party GUARD act, introduced in the Senate last month, official investigations, such as the Federal Trade Commi&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Why pressure on AI child safety could also address frontier risks&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:1318892,&quot;name&quot;:&quot;Chris Stokel-Walker&quot;,&quot;bio&quot;:&quot;Journalist and 
author&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!ZuTs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e9550c7-ab25-4772-8d5c-dcd0b35cfe19_144x144.png&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://chrisstokelwalker.substack.com/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://chrisstokelwalker.substack.com&quot;,&quot;primaryPublicationName&quot;:&quot;Chris Stokel-Walker&quot;,&quot;primaryPublicationId&quot;:6100482}],&quot;post_date&quot;:&quot;2025-11-18T16:02:58.212Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Wp35!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35a527e7-bef4-4c5c-9456-5615b244f76d_3128x2084.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/why-pressure-on-ai-child-safety-could&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:179237727,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>The Chamber of Progress, however, which counts Google and OpenAI among its members, has argued that these provisions are problematic on different grounds. 
In the case of Hawaii, for example, it believes they could actually force tech companies to <a href="https://progresschamber.org/wp-content/uploads/2026/02/HI-HB-2502-Chatbots-Oppose.pdf">impose</a> age verification or collect excessive data on user behaviors. &#8220;There&#8217;s no way to verify a user is a minor without being privacy invasive,&#8221; said Koustubh Bagchi, the Chamber of Progress&#8217;s vice president of US policy and government relations.</p><p>The fact that the bills are structurally similar to Oregon&#8217;s law, but with similarly worded industry carve-outs, suggests they were coordinated in some form. Google has voiced its support for the bills in Hawaii, Nebraska, and Arizona. Google and lawmakers sponsoring the bills in each state did not respond to requests for comment about the carve-outs listed above or about the role Google did or did not play in shaping the legislation. <br><br>&#8220;This is a well-documented playbook: tech lobbyists supply full bill text and amendments, then use proxy groups to manufacture the appearance of local support,&#8221; said Marjorie Connolly, communications director at The Tech Oversight Project. &#8220;Silicon Valley is running that same playbook here, dressing it up as protecting kids from chatbots because they see it as the issue of the moment to exploit, rather than an urgent crisis that deserves real solutions.&#8221;<br><br>Bagchi framed it differently. Though he said he had not seen any model bill, he added that Google has been &#8220;consistent on what they want to see&#8221; when working with lawmakers on a provision-by-provision basis. 
A representative for Google suggested changes to the bill text in Hawaii during a hearing on Wednesday.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;fa790d1a-f51c-449d-9ae0-3ad9585f2634&quot;,&quot;caption&quot;:&quot;On Friday, Pete Hegseth directed the Department of War to designate Anthropic a supply chain risk for refusing to grant the military unrestricted access to its models. Hours later, OpenAI announced its own deal with the DoW, with red lines that appeared similar to &#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;What you need to know about autonomous weapons&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:103211477,&quot;name&quot;:&quot;Celia Ford&quot;,&quot;bio&quot;:&quot;I'm an ex-neuroscientist and current AI reporter at Transformer. When I'm not writing, I play bass, dance, and kiss my cats on the forehead. 
&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7f7fd73a-8797-496f-94a7-535118172030_1365x1365.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-03-04T16:30:22.324Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Agee!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce074c96-9216-4607-bd84-cd0c3d779032_5669x3779.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/what-you-need-to-know-about-autonomous-openai-anthropic-pentagon-dod-dow&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:189885002,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:13,&quot;comment_count&quot;:2,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>The bills are at varying stages of development. Hawaii&#8217;s bill saw the most recent action, with the hearing in the House on Wednesday, March 18 after passing the Senate last week. Idaho&#8217;s bill was amended and filed for a second reading in the Senate also on March 18. Arizona&#8217;s bill passed the House and then out of committee in the Senate on March 17. Georgia&#8217;s bill, meanwhile, passed the Senate on  March 6. 
Nebraska&#8217;s bill is the only one that seems set to die in committee, as it has not moved since its filing in committee on February 18, and the legislative session ends next month.</p><p>The bill with the most momentum, however, is in Colorado, where a hearing initially scheduled for March 19 was postponed, likely to the following week. Advocates and industry have been outspoken in the state, in opposition and support respectively. One source who was granted anonymity to discuss their conversations with lawmakers said that potential amendments to eliminate carve-outs might be in play.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/p/six-states-one-playbook-the-chatbot-child-safety-oregon-hawaii-colorado-arizona-georgia-nebraska-idaho?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/p/six-states-one-playbook-the-chatbot-child-safety-oregon-hawaii-colorado-arizona-georgia-nebraska-idaho?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[No, alignment isn’t solved]]></title><description><![CDATA[Progress on ensuring models are in step with humans has calmed nerves. 
But some of the biggest problems are far from solved, and many more lie just over the horizon]]></description><link>https://www.transformernews.ai/p/no-ai-alignment-isnt-solved</link><guid isPermaLink="false">https://www.transformernews.ai/p/no-ai-alignment-isnt-solved</guid><dc:creator><![CDATA[Lynette Bye]]></dc:creator><pubDate>Wed, 18 Mar 2026 16:00:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!miew!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c0cbdc0-4378-406b-bf42-e9ff6e5d633c_1920x1334.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!miew!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c0cbdc0-4378-406b-bf42-e9ff6e5d633c_1920x1334.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!miew!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c0cbdc0-4378-406b-bf42-e9ff6e5d633c_1920x1334.png 424w, https://substackcdn.com/image/fetch/$s_!miew!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c0cbdc0-4378-406b-bf42-e9ff6e5d633c_1920x1334.png 848w, https://substackcdn.com/image/fetch/$s_!miew!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c0cbdc0-4378-406b-bf42-e9ff6e5d633c_1920x1334.png 1272w, https://substackcdn.com/image/fetch/$s_!miew!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c0cbdc0-4378-406b-bf42-e9ff6e5d633c_1920x1334.png 1456w" 
sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!miew!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c0cbdc0-4378-406b-bf42-e9ff6e5d633c_1920x1334.png" width="1456" height="1012" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0c0cbdc0-4378-406b-bf42-e9ff6e5d633c_1920x1334.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1012,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2904427,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.transformernews.ai/i/191369590?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c0cbdc0-4378-406b-bf42-e9ff6e5d633c_1920x1334.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!miew!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c0cbdc0-4378-406b-bf42-e9ff6e5d633c_1920x1334.png 424w, https://substackcdn.com/image/fetch/$s_!miew!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c0cbdc0-4378-406b-bf42-e9ff6e5d633c_1920x1334.png 848w, https://substackcdn.com/image/fetch/$s_!miew!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c0cbdc0-4378-406b-bf42-e9ff6e5d633c_1920x1334.png 1272w, 
https://substackcdn.com/image/fetch/$s_!miew!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c0cbdc0-4378-406b-bf42-e9ff6e5d633c_1920x1334.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption"><em>Credit: <a href="http://www.hbarakat.com">Hanna Barakat</a>/<a href="https://betterimagesofai.org">BetterImagesOfAI</a>/<a href="https://creativecommons.org/licenses/by/4.0">CC-BY 4.0</a></em></figcaption></figure></div><p>Researcher Adri&#224; Garriga-Alonso says he quit his AI safety job in December because there was &#8220;no 
point&#8221; doing more speculative alignment work to make sure AI systems stay within human control. He thinks current strategies will be enough.</p><p>In January, David Dalrymple, programme director at the UK&#8217;s Advanced Research and Invention Agency, dropped his <a href="https://x.com/davidad/status/2011845180484133071">probability</a> estimate for AI-caused human extinction from 40&#8211;50% to 5&#8211;8%, even assuming no further progress is made on alignment.</p><p>These sorts of reassuring moves are remarkable. For most of the 2010s, alignment looked to many working with AI like a problem we might need to solve on the first try or face extinction. Now some researchers think we&#8217;re most of the way there.</p><p>Take the value alignment problem. In 2019, AI luminaries including Yoshua Bengio, Stuart Russell, and Yann LeCun <a href="https://www.alignmentforum.org/posts/WxW6Gc6f2z3mzmqKs/debate-on-instrumental-convergence-between-lecun-russell">debated</a> whether we would ever be able to get AI to understand human values. Under the dominant reinforcement learning paradigm, AI learned from trial and error in simulated environments, a process alien to human cognition. For example, AlphaGo Zero simulated <a href="https://discovery.ucl.ac.uk/id/eprint/10045895/1/agz_unformatted_nature.pdf">millions</a> of games against itself to master how to play. 
As Google DeepMind&#8217;s Seb Krier has <a href="https://x.com/sebkrier/status/2015781591017029780">noted</a>, it wasn&#8217;t clear how we&#8217;d ever teach such a system human values.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.transformernews.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.transformernews.ai/subscribe?"><span>Subscribe now</span></a></p><p>But instead the process of pretraining on vast quantities of human text became the dominant paradigm for LLMs. This meant that LLMs were already in some way absorbing values from what humans write down, in theory reducing the need to instill them through trial and error.</p><p>&#8220;We do this pretraining on human data, and then we get something that&#8230; understands human values fairly innately now,&#8221; says Garriga-Alonso, who previously worked at FAR.AI and Redwood Research. 
In his own writings, Anthropic CEO Dario Amodei <a href="https://www.darioamodei.com/essay/the-adolescence-of-technology">agrees</a>, noting that we&#8217;ve learned that &#8220;models inherit a vast range of <em>humanlike </em>motivations&#8230;from pretraining.&#8221; That understanding underpins current alignment efforts like Anthropic&#8217;s <a href="https://www.anthropic.com/news/claude-new-constitution">constitution</a> for Claude &#8212; the model must understand notions like &#8220;helpful&#8221; or &#8220;harmless&#8221; in order for written principles to <a href="https://substack.com/@kelseytuoc/p-185908977">guide</a> it.</p><p>Progress in developing more powerful models has also been more incremental than some <a href="https://www.youtube.com/watch?v=Yd0yQ9yxSYY">feared</a> &#8212; multiple frontier models, successive versions, continuous experimentation &#8212; which means researchers can iterate rather than needing to get alignment right on the first try. &#8220;We can evolve our mitigations and safeguards incrementally with our models,&#8221; <a href="https://aligned.substack.com/p/alignment-is-not-solved-but-increasingly-looks-solvable">argues</a> Jan Leike, an AI alignment researcher now leading the Alignment Science team at Anthropic.</p><p>While the likes of Eliezer Yudkowsky still fear that there is not enough time to iterate before AGI arrives, the expectation that alignment has to be got right on the first try has been waning. &#8220;I think alignment is much easier than expected because <em>we can fail at it many times and still be OK</em>, and we can learn from our mistakes,&#8221; <a href="https://www.lesswrong.com/posts/epjuxGnSPof3GnMSL/alignment-remains-a-hard-unsolved-problem?commentId=gAuM6MKBdpyu6JszR">writes</a> Garriga-Alonso. 
&#8220;This is possible because <em>decisive strategic advantages from a new model won&#8217;t happen</em>, due to the capital requirements of the new model, the relatively slow improvement during training, and the observed reality that progress has been extremely smooth.&#8221;</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;5f69e745-5420-4c73-b8dc-24cfb2c6b92f&quot;,&quot;caption&quot;:&quot;&#8220;Look around you,&#8221; PauseAI Global&#8217;s CEO Maxime Fournes told protesters outside Google DeepMind&#8217;s headquarters on a chilly day in London late last month. &#8220;Look at who&#8217;s here today. We do not agree on everything. We come from different organizations, different backgrounds. We have different&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;&#8216;Scream if you want to move slower!&#8217; A nascent AI protest coalition comes together in London&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:3438121,&quot;name&quot;:&quot;Alys Key&quot;,&quot;bio&quot;:&quot;Alys is a writer and editor based in the UK.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!IhVo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb928c18c-60a2-499a-87c7-33014680a1ea_1024x1024.jpeg&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://uk20.substack.com/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://uk20.substack.com&quot;,&quot;primaryPublicationName&quot;:&quot;UK 
2.0&quot;,&quot;primaryPublicationId&quot;:5762193}],&quot;post_date&quot;:&quot;2026-03-09T16:31:03.308Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Ntiu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F352cd1c6-ff38-4904-8d21-eeff9d8f6660_7952x5304.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/scream-if-you-want-to-move-slower-pause-ai-pull-the-plug&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:190390584,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:25,&quot;comment_count&quot;:7,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>Researchers are also building infrastructure for iteration. For example, &#8220;<a href="https://www.lesswrong.com/posts/ChDH335ckdvpxXaXX/model-organisms-of-misalignment-the-case-for-a-new-pillar-of-1">model organisms</a>&#8221; research, named as a nod to the fruit fly and other easy-to-study species that appear endlessly in biology research, creates toy environments where misalignment can be easily studied in real models: can researchers get the AI to demonstrate misalignment at all? Is it hard to do, or is it so easy that we&#8217;ll likely see lots of misalignment in the wild? How well do alignment strategies work in these settings where success and failure are measurable? 
Many <a href="https://www.transformernews.ai/p/ai-misalignment-evidence">early demonstrations</a> of misalignment come from such research. &#8220;One of the things that&#8217;s so powerful about model organisms is that they give us a testing ground for iteration,&#8221; <a href="https://www.lesswrong.com/posts/epjuxGnSPof3GnMSL/alignment-remains-a-hard-unsolved-problem">writes</a> Evan Hubinger, the alignment stress-testing team lead at Anthropic.</p><p>In parallel, researchers have developed approaches which use AI to help keep AI in check. These include <a href="https://www.lesswrong.com/w/scalable-oversight">scalable oversight</a> and <a href="https://www.lesswrong.com/w/ai-control">control</a> proposals which aim to use aligned but weaker models to monitor stronger ones, extending human ability to oversee systems that are smarter than us.</p><p>Ryan Greenblatt, chief scientist at Redwood Research, tells <em>Transformer</em> that baseline scalable oversight methods have worked better than he&#8217;d expected, although he also says less effort has been put toward developing strategies than he&#8217;d hoped. Once more developed, these approaches could help ensure that smarter-than-human AIs don&#8217;t run amok because we can&#8217;t tell what they&#8217;re up to. 
Dalrymple is <a href="https://x.com/davidad/status/2011825836823892051">optimistic</a> about this approach, particularly because he thinks current models are more aligned than he anticipated.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6Fip!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ec2e63-ddef-4e5d-a6fd-3dd109aa879e_1168x424.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6Fip!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ec2e63-ddef-4e5d-a6fd-3dd109aa879e_1168x424.png 424w, https://substackcdn.com/image/fetch/$s_!6Fip!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ec2e63-ddef-4e5d-a6fd-3dd109aa879e_1168x424.png 848w, https://substackcdn.com/image/fetch/$s_!6Fip!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ec2e63-ddef-4e5d-a6fd-3dd109aa879e_1168x424.png 1272w, https://substackcdn.com/image/fetch/$s_!6Fip!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ec2e63-ddef-4e5d-a6fd-3dd109aa879e_1168x424.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6Fip!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ec2e63-ddef-4e5d-a6fd-3dd109aa879e_1168x424.png" width="1168" height="424" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/02ec2e63-ddef-4e5d-a6fd-3dd109aa879e_1168x424.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:424,&quot;width&quot;:1168,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6Fip!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ec2e63-ddef-4e5d-a6fd-3dd109aa879e_1168x424.png 424w, https://substackcdn.com/image/fetch/$s_!6Fip!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ec2e63-ddef-4e5d-a6fd-3dd109aa879e_1168x424.png 848w, https://substackcdn.com/image/fetch/$s_!6Fip!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ec2e63-ddef-4e5d-a6fd-3dd109aa879e_1168x424.png 1272w, https://substackcdn.com/image/fetch/$s_!6Fip!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ec2e63-ddef-4e5d-a6fd-3dd109aa879e_1168x424.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>And yet, Dalrymple&#8217;s 5% lower range for AI&#8217;s existential risk is far from a small number given the stakes. A one in 20 chance of human extinction could still make AI the biggest threat humanity faces &#8212; worse than <a href="https://jfsdigital.org/articles-and-essays/2024-2/vol-29-no-2-december-2024/the-precipice-existential-risk-and-the-future-of-humanity-london-bloomsbury-2020-by-toby-ord/">some estimates</a> of nuclear war, climate change, and engineered pandemics <em>combined</em>.</p><p>And, while parts of alignment are proving easier than feared, the hardest problem remains untouched.</p><p>&#8220;We&#8217;re still doing alignment &#8216;on easy mode&#8217; since our models aren&#8217;t really superhuman yet,&#8221; <a href="https://aligned.substack.com/p/alignment-is-not-solved-but-increasingly-looks-solvable">says</a> Leike. 
Hubinger <a href="https://www.lesswrong.com/posts/epjuxGnSPof3GnMSL/alignment-remains-a-hard-unsolved-problem">agrees</a>: the crucial problem will be overseeing systems that are smarter than humans, and we haven&#8217;t yet seen how our systems will fare against that problem. As does Greenblatt: &#8220;Once the models are qualitatively very superhuman, lots of stuff starts breaking down.&#8221; </p><p>Meanwhile, warning signs of other misalignment problems are emerging. Research designed to elicit misaligned behavior has <a href="https://www.transformernews.ai/p/ai-misalignment-evidence">turned up</a> blackmail, deception, and cheating. Anthropic&#8217;s Amodei <a href="https://www.darioamodei.com/essay/the-adolescence-of-technology">writes</a> that problems &#8220;seem particularly likely to occur when AI systems pass a threshold from less powerful than humans to more powerful than humans, since the range of possible actions an AI system could engage in &#8212; including hiding its actions or deceiving humans about them &#8212; expands radically after that threshold.&#8221; He has <a href="https://www.axios.com/2025/09/17/anthropic-dario-amodei-p-doom-25-percent">put</a> the chance of things going &#8220;really, really badly&#8221; at 25%.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;041f8ec1-582b-4133-bcec-005c43a8d79f&quot;,&quot;caption&quot;:&quot;In 2019, Meta's chief AI scientist Yann LeCun confidently dismissed fears of AI misalignment. 
Discussing &#8220;instrumental convergence&#8221; &#8212; the idea that systems will learn to deceive humans and avoid shutdown in order to protect their primary goal &#8212; LeCun declared that such fears &#8220;would only be relevant in a fantasy world&#8221;.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Misaligned AI is no longer just theory&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:280514,&quot;name&quot;:&quot;Lynette Bye&quot;,&quot;bio&quot;:&quot;A Harvard graduate and former Tarbell Fellow for journalists, I write about AI's growing influence on society.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F377af0c9-6ae8-4e2c-b29d-2f51cd2c2175_512x512.jpeg&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://lynettebye.substack.com/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://lynettebye.substack.com&quot;,&quot;primaryPublicationName&quot;:&quot;Lynette 
Bye&quot;,&quot;primaryPublicationId&quot;:2639094}],&quot;post_date&quot;:&quot;2025-05-21T15:02:53.589Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!DDtx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F671ad251-a7bb-41df-8036-ddfedbb9e322_1600x900.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.transformernews.ai/p/ai-misalignment-evidence&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:164033502,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:27,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!JQeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>While some seem increasingly relaxed about the prospects of solving alignment, others caution that signs the problem is solvable don&#8217;t mean it will actually be solved. In December, renowned AI professor Stuart Russell <a href="https://www.youtube.com/watch?v=P7Y-fynYsgE">said</a> he thought it was possible to make safe, aligned AI &#8212; but that we won&#8217;t get it without regulation, since companies &#8220;need to make the AI systems millions of times safer&#8221; to bring the risk down to the levels deemed acceptable for other hazards, such as nuclear reactors or asteroid strikes. 
Greenblatt is also in this camp; he has <a href="https://blog.redwoodresearch.org/p/plans-a-b-c-and-d-for-misalignment">written</a> that existential risk from misalignment can be reduced to 7% <em>if</em> there&#8217;s political will for international coordination and significant investment in safety work. &#8220;It seems to me like risk is very elastic to how much people try,&#8221; he writes. &#8220;If the world was trying very hard, risk would probably be lower.&#8221;</p><p>As Leike <a href="https://aligned.substack.com/p/alignment-is-not-solved-but-increasingly-looks-solvable">puts</a> it: &#8220;Just because a problem is solvable, this doesn&#8217;t mean it&#8217;s solved. We have to actually keep doing the work to get it done.&#8221;</p>]]></content:encoded></item></channel></rss>