Why pressure on AI child safety could also address frontier risks
Keeping kids safe is a priority for legislators globally — and might increase attention on other risks, too
Protecting children who interact with large language models has become one of the most prominent political and social forces bearing on AI. There are legislative moves, such as the cross-party GUARD Act introduced in the Senate last month; official investigations, such as the Federal Trade Commission’s probes into seven tech firms over how their products interact with children; and high-profile legal cases, such as the suits filed by parents whose children have died by suicide or self-harmed after interactions with chatbots.
Doing something about the impact of AI on minors has widespread support: 90% of the US public want Congress to prioritise protecting children from AI harms over fostering tech sector growth, and 78% of the UK public want safety checks on AI products before they’re made available to children.
If it all sounds a little bit like Helen Lovejoy’s exasperated screams of “Won’t somebody think of the children?” from The Simpsons, that’s not necessarily a bad thing. Popular interventions designed to keep children safe may also lay the groundwork for wider action. They could even help address some of the more fundamental, and potentially existential, frontier risks affecting us all.
There already appear to be key areas of overlap. Measures specifically targeted at children could encourage, or mandate, steps such as greater transparency requirements that those concerned about broader risks are already calling for. At the same time, it might be far easier to build specific safeguards for children into broader sets of policies. “Ideally, there would be federal requirements for transparency about, and auditing of, the effectiveness of frontier AI companies’ safety and security practices, with specific risks like [those to children] treated within that larger framework,” says Miles Brundage, an AI policy researcher who worked at OpenAI between 2018 and 2024.
Of course, focusing on children comes with potential downsides. “I’m glad policymakers are taking child safety seriously, but I worry about a patchwork of different laws targeting different risks in different ways,” says Brundage. Thinking about child safety in isolation could add to that patchwork, pulling AI companies and their models in different directions and ultimately leaving more holes within the weave.
So merely banning the marketing or provision to children of chatbots adept at fostering emotional attachment in their users, as promoted by groups such as Common Sense Media, might help protect minors from a specific risk, but it does little to protect adults, and even less to improve protections against wider problems.
It helps that some of those most concerned about protecting children are already pushing for measures that would have far wider applicability. “We think it just makes basic sense that before children are using AI products, we should know if those AI products are safe for children to use,” says Danny Weiss, chief advocacy officer at Common Sense Media. His organization lobbied California for a more full-throated version of AB1064, the Leading Ethical AI Development (LEAD) for Kids Act, which was vetoed by Governor Gavin Newsom last month. Justifying his veto, Newsom claimed the bill was so broad that it could “unintentionally lead to a total ban on the use of these products by minors.”
The original version of AB1064, which Weiss prefers to the “watered-down” version that eventually made it through the legislature before being vetoed by the governor, would have established a system requiring AI model makers to put their products through a safety audit.
“It’s very natural to me, if you’re going to build a new car seat for kids, it’s going to have to go through some testing,” explains Weiss. “And if the kid flies through the window of the car after being in the car seat, then you can’t use that car seat.” The same ought to be true for AI models, he says.
That kind of testing, if broadly applied, could help head off more fundamental AI risks, such as models being used to develop chemical, biological, radiological and nuclear (CBRN) weapons, or humans losing control of the AI systems they have created. With a more red-team-focused ‘what does your product do that’s harmful, and how can we stop it?’ approach, the benefits would extend well beyond child users, Weiss reckons.
The choice comes down to whether you want to regulate the harms caused or regulate the process, says Sonia Livingstone, professor of social psychology at the London School of Economics and one of Europe’s leading experts on child safety. In an ideal world, Livingstone backs a safety-by-design approach, because it better “future proofs” against risks that haven’t yet been conceived of, she says.
Livingstone points out that some of the risks children face are shared by adults, including being drawn in by the persuasive flattery baked into the leading commercial AI models. Tackling those risks would therefore benefit more than just the youngest users. “All these things actually, all adults benefit from,” she says. “They are equally vulnerable to manipulation and persuasion, so the benefits would indeed be to everybody.” The alternative, teaching children not to trust the systems they’re likely to use for decades to come, isn’t viable. “There’s something mad about telling our educators to teach children not to trust,” she says.
Of course, child safety advocates are not going to give up on measures with more limited application when minors are their top priority. “We strongly support age assurance,” says Weiss. “It’s the linchpin to most regulatory issues when it comes to kids.” But there is a realism about AI that means they share concerns with campaigners focused on other areas. “The question isn’t ‘Should kids be allowed to use AI?’” he says.
There is also, fundamentally, a simple advantage in having AI safety pushed up the agenda by immediate concerns about children.
“One of the things that the kids angle has done is it has caused safety to become a conversation in AI, as opposed to just, how do we win the race against China,” says Weiss. “And that’s no bad thing.”
In some cases, overlapping concerns and goals will benefit both child safety advocates and the wider community concerned about AI risks. In others, their focuses and approaches will offer little mutual benefit. But increased visibility may be the rising tide that lifts all boats.