Lawmakers are using AI to write laws. What could go wrong?
Lawmakers and companies are quietly using AI to draft legislation. Experts warn the risks are underappreciated
by Katie McQue
In 2023, California congressman Ted Lieu introduced what he called the first piece of federal legislation ever written by AI, using ChatGPT to generate the text of a resolution expressing support for Congress’s focus on AI itself.
Fast forward a few short years, and what was once an oddity is now increasingly prevalent. The Trump administration is reportedly planning to use Google Gemini to draft federal transportation regulations. The US Department of Education is experimenting with AI-assisted regulatory drafting. And companies are building tools specifically designed to help legislators analyze and write laws. But while industry and legislators charge ahead, some experts are raising concerns about the risks of using AI tools to write laws.
Though they are in their infancy, companies making AI tools for lawmakers are already gaining clients among state and federal governments.
Vulcan Technologies, a Y Combinator-backed AI regulatory review company founded in 2025, is developing what it calls a “regulatory operating system”. The company says its agentic platform aggregates laws, regulations and court decisions across federal, state and municipal jurisdictions. It allows users to analyze statutory language, draft compliance guidance, answer legal queries and generate proposed statutory or regulatory text with supporting citations.
In July, Virginia governor Glenn Youngkin mandated that Vulcan’s technology be used across all state agencies in a bid to reduce regulation by one-third, scanning existing regulations and guidance documents to identify ways they can be streamlined. The company says its tools are also used by the US Department of Education. In South Carolina, a regulatory reform PAC named the South Carolina Department of Government Efficiency (DOGESC), whose slogan is “We kneel to God, not government”, has said it has talked to Vulcan about using the company’s platforms to analyze and rewrite state regulations, with the aim of cutting red tape and identifying areas of perceived regulatory overreach.
FiscalNote markets a similar platform, PolicyNote, an AI-driven legislative and regulatory tracker for government agencies. The system delivers notifications about new bills and executive actions, assists with drafting legislation and policy reports, and offers predictive forecasting of bill outcomes. While FiscalNote has not disclosed which bodies use PolicyNote, the company claims it has clients in all three branches of the US federal government, as well as “dozens of other national, state, and local government entities.”
Vulcan Technologies and FiscalNote did not respond to requests for comment.
There are no regulations that limit the use of AI to write laws, or that require legislative text to be drafted by a human. Several experts say the technology may be deployed without a full understanding of its limitations and risks, which range from bias to plain inaccuracy.
“We are leaping ahead into a world where AI creates our laws without actually agreeing that AI should create our laws. We’re simply passive passengers on this,” says Kay Firth-Butterfield, a lawyer and CEO of consultancy Good Tech Advisory. “The ability for the general public to actually question how and why the tools were used doesn’t exist.”
The tools have obvious appeal, especially for policymakers who may not be legal or subject-matter experts.
Monique Priestley, a Democratic member of the Vermont House of Representatives, says she uses LLM tools regularly, especially to summarize legislation and produce public-facing explanations that make complex bills easier for constituents to understand.
Priestley uses AI to distill lengthy bills into a few clear sentences describing the underlying problem and how the proposed legislation addresses it, making the substance of the bill more accessible to legislative colleagues whose support she is seeking.
AI tools are also useful for brainstorming, research and early-stage drafting. One of the bills she introduced this year, the State Information Practices Act, was developed with research support from LLM tools, she says.
Priestley also uploads draft bill language into an LLM to review specific provisions and suggest revisions when she is considering adjustments. She has used the tools to examine how other states structure protections for state-held data, particularly regarding information sharing with the federal government and commercial entities.
“That highlighted the State Information Practices Act, which then led me to reach out to the Electronic Frontier Foundation and ask them if they had been involved in those efforts. And then that led me to introducing a bill based on California,” Priestley says.
Using LLMs may give lawmakers a “leg up” on writing legislative drafts, similar to how a spell checker can help produce something faster and free of spelling errors, says Cary Coglianese, a professor of law and political science at the University of Pennsylvania.
“A large language model tool could help accelerate going from a blank page to having something on a page,” says Coglianese. State lawmakers and lobbyists are also using AI tools to obtain transcriptions of legislative hearings uploaded on YouTube, enabling them to keep track of hundreds of bills across multiple committees and different states.
“It does allow you to be in more places at one time … as in Vermont and many other states, legislators don’t have offices and they don’t have paid staff,” says Priestley.
“A lot of legislation is written by 20-year-old interns, and so AI might be an improvement on some of them,” says Ryan Calo, professor of law at the University of Washington.
Still, initial drafts are vetted by more senior staffers, and then eventually by the members themselves, Calo notes. It’s a context with plenty of off-ramps and plenty of chances to edit, several of the experts interviewed say. The question, then, is not whether lawmakers will use AI, but how far that reliance should go.
“AI is not a substitute for due diligence, and nor is AI cause for panic in the legislation field,” Calo says. “People are fallible. AI is a tool.”
“I feel in no way am I ever 100% leaning on these tools,” says Priestley. “I’m using it to research, brainstorm, and develop proposals, and then I run it by another person. I don’t know how many colleagues are just blindly asking ChatGPT for things. And so there is a real danger there.”
Some experts point to serious limitations. Coglianese argues that LLMs are not capable of providing the “policy judgment and analysis that is needed to make sure legislation is actually effective, efficient, equitable.”
“Unique policy choices that call for more than just putting words together that might make some sense really have to map onto the problems in the world,” he says. “All these large language models are doing is basically making sense of language and predicting what is a sensible next word in a response to a question or a task.”
To do a truly good job of analyzing an issue, the tools also need to be trained on the right underlying data. If the goal is to inform a specific regulatory proposal, the model must be trained on reliable and relevant information to mitigate risks of biases, says Coglianese.
Others fear AI-written laws will lack the innovation sometimes needed in governance. “These LLMs are trained on laws that have already been written,” says Calo. “Lawmakers may wish to depart from past practice and improve the drafting of legislation – while it may save them a little time, it may come at the cost of conformity to what we’ve done in the past.”
Overconfidence is another risk. Policymakers may be “seduced” by the speed and polish of the tools’ outputs, says Coglianese. Because the tools can produce answers almost instantly, in language that sounds authoritative and confident, users may be tempted to treat the results as reliable guidance.
“In reality, there is always uncertainty, and yet the large language models tend not to sufficiently display that,” he says. Chatbots’ tendency to validate users’ opinions could also be a problem, Priestley says. “If you’re a legislator, often you’re trying to build a case to support something you’re trying to pass. So I think you are inherently setting up a situation in which the data could be biased because you’re trying to get it to support a particular outcome.”
And then there is the question of accountability. A knowledgeable expert must review and verify what the system produces, checking for errors, gaps, or misinterpretations.
“You can’t hold the AI to account because it can’t explain why it put those words in that sentence, because all it does is make up the words as it goes along,” says Firth-Butterfield at Good Tech Advisory.
The concern extends beyond lawmakers themselves. Priestley notes that policy staffers have raised concerns about the possibility of lobbyists using AI to draft proposals and amendments presented to a legislative committee — language that could end up in a bill without a thorough legal review. (She is not, however, aware of any specific examples of this happening so far.)
“LLMs enable people to think they have a lawyer in their pocket,” she says. “If they are able to get that in front of a chair, and then cases skip legislative counsel as the drafters of that language, then there’s potential for text to pass that has never been reviewed by a lawyer.”
It is essential to keep human lawyers in the loop and there are dangers to replacing entry-level attorneys, says Priestley. “We’re potentially deteriorating our own legal system … It’s very cannibalistic.”
Calo raises a related problem: the use of AI to file public comments to government agencies drafting regulatory laws. Under the Administrative Procedure Act, agencies like the FDA and FCC must consider public comments when drafting rules. “I’m worried about those kinds of processes being flooded by AI comments,” Calo says. “They do not need to be vetted, and they might strain the capacity of agencies to review comments just through their sheer volume, and then real comments by actually interested affected parties might get lost in the shuffle.”
“[Agencies] may begin to tune [these letters] out because they don’t know what’s slop and what’s real, and that would eviscerate the capacity for public participation.”
As with plenty of other AI uses, designing an effective validation process is key to ensuring LLMs are used appropriately. One could, for instance, ask both trained humans and LLMs to spot errors and inconsistencies in draft legislation, and compare the results. Such side-by-side testing would provide a clearer picture of AI’s strengths and limitations in legislative review, says Coglianese.
“There’s a lot of nuance here,” he says. “These tools can be very positive and constructive, but they can also be abused. They have to be used in the right way, and we have to make sure that the people who are using or relying on AI have the awareness of what they can do, how they’re designed, and what they’re not capable of.”
Will lawmakers soon be tasking AI with drafting the very laws that regulate AI? As the experts Transformer spoke to flagged, AI tools are shaped by companies that are themselves subjects of regulation, and extremely interested in how laws governing their products are written.
The risk that an AI company could shape or subtly manipulate the output of models used to draft legislation in ways that align with their policy interests is, Priestley says, a serious concern. She says she sometimes wonders whether AI systems can objectively assess their own risks: when prompting a model to provide data on the harms artificial intelligence may pose to children, businesses, or society, she questions whether the system will fully acknowledge its own weaknesses and limitations.
“It’s always in the back of my mind, I wonder if the AI will actually give me an answer that highlights its own weaknesses and risks,” she says. “But when I ask that type of thing, it does give me data that supports that it is a risky system. But I was actually kind of surprised that it did.”
Katie McQue is a freelance journalist based in New York.