Sora is here. The window to save visual truth is closing
Opinion: Sam Gregory argues that generative video is undermining the notion of a shared reality, and that we need to act before it’s lost forever
OpenAI’s release of the generative video model Sora 2 and its related social network signals a major turning point in how we understand and trust visual truth. How we choose to act now will determine whether our shared basis for reality — what we see and hear — remains trustworthy.
Sora points to a fundamental change in synthetic media. We are approaching the moment when visual evidence — the backbone of accountability and proof — becomes unreliable, undermined by widespread distribution of hyper-realistic content before tech companies have fully committed to the safeguards we need to distinguish real from synthetic. A protestor confessing to violence that never happened. A politician in a supposedly compromising situation. A human rights atrocity or a child at risk that might be real — or not.
I lead WITNESS, a human rights organization that has worked for the past eight years to prepare communities for deepfakes and deceptive AI, and to push for AI infrastructure that enables trust and confidence in visual truth. Our Rapid Response Force of leading media forensics experts investigates both fabricated media and false claims of fakery, because both are now used to distort truth. With Sora’s release, the stakes rise dramatically.
Cases we’ve tracked show us what’s ahead:
A still photo turned into a fake AI video, passed off as real footage of a prison being bombed, then circulated in a foreign influence campaign.
A fictional protestor addressing the camera, declaring they have committed an act of violence. Despite AI labels, people sharing it online believed it was real.
Walkie-talkie audio of military leaders calling for attacks on civilians: potentially authentic according to detection tools that struggle with under-represented languages and novel formats, but now deniable as “AI.”
These aren’t edge cases; they’re a primary use case for these tools. Sora accelerates risk in many directions: fake evidence, appropriated likenesses, dismissed reality, paralyzed institutions, and a confused public.
The weaponization of doubt, or what researchers call the liar’s dividend, isn’t theoretical: in one third of the cases we engaged with recently, real content was dismissed with claims that it had been fabricated with AI. In some cases, other human rights groups have asked us to pre-emptively confirm that evidence of atrocities is unquestionably real, anticipating that claims of AI fabrication will be deployed against it. Tools like Sora undermine reality itself: when any piece of evidence can be credibly dismissed as synthetic and any fabrication defended as real, we lose the ability to establish shared truth.
Individual fakes like the ones WITNESS encounters day in and day out matter: they can derail lives, justice and elections. But the deeper damage is a fog of doubt settling over everything we see. Much of Sora’s output feels mundane, even funny, yet each fabricated bodycam clip, each synthetic CCTV feed, each resurrected figure saying words never spoken erodes our collective confidence. This is part of an epistemic crisis: the fracture of a shared, verifiable reality.
Detection, the work of our Rapid Response Force, is essential but insufficient. It only works for selected high-profile cases; it is too labor-intensive to scale across the internet and requires extensive contextualization of results. Meanwhile, current detection tools are far less reliable in real-world conditions generally, and especially for content from countries like Sudan or Myanmar or in non-majority languages within the US. Poor media quality, training data that doesn’t represent global communities, and tools that aren’t designed to explain uncertain results to skeptical publics all contribute to failure rates. The communities most vulnerable to synthetic media manipulation have the weakest defenses. We need genuine investment in detection capabilities and skills that work in Kinshasa and Jakarta, not just Silicon Valley.
Individual vigilance also won’t be enough. Even experts are fooled by some of today’s AI videos. Asking people to look and listen harder, or to “spot the glitch,” is increasingly futile now that tools like Sora have made those clues far harder to identify. Teaching people to identify synthetic media manually is like teaching them to spot forged bills by hand while industrial printers improve daily. Instead, we need to be able to “read the recipe” used to manufacture content, and to build new forms of media literacy on top of these technical signals.
Systemic preparation requires investing in resilient infrastructure now: systems that assume a highly synthetic content and communications environment while preserving pathways for confirming what is real. This means visible and invisible watermarking that persists, as well as embedded authentication via metadata that reveals the recipe behind what we consume: the mixture of AI and human ingredients, of synthesis and reality. OpenAI’s visible Sora watermarks are easily removed, and while Sora videos carry embedded metadata using the emerging C2PA standard, that metadata isn’t easily accessible and is usually lost when content moves between platforms.
The fundamental technology for navigating reality and fiction exists, but the engineering investment and leadership effort from OpenAI and other tech companies are nowhere near enough for the risk we face or for the opportunity this infrastructure presents to reinforce trust in digital content. Tech companies must also commit to not overwhelming our feeds with AI-generated content that crowds out authentic information. They can and should prioritize authenticity over AI volume.
Policymakers can make this investment mandatory. The EU has an opportunity with Article 50 of the AI Act, which requires transparency around AI-generated content, to drive the implementation of privacy-protecting tools that show us both when AI is used to deceive and when it is used to create. In the US, California is leading the way on mandating disclosure of what is AI-generated and providing options to protect what is authentic, most recently with Governor Gavin Newsom signing AB 853 into law. With consumer pressure, we can push for further action to build the trust and confidence in what we see and hear that so many business sectors depend on.
Policy and public action must also address a parallel threat: the appropriation of our digital identities. At the Sora app’s release, even the ubiquitous clones of Sam Altman could not distract from the impending normalization of easily appropriating other people’s likenesses, living and dead. We have to invest in better ways to protect our digital likenesses: stronger legal protections that preserve satire and parody while requiring consent for commercial use, and detection technologies that alert us rapidly when our likeness is being used online. YouTube just launched such a tool for its creators. We need it for everyone, not limited to one platform.
The political philosopher Hannah Arendt warned: “A people that no longer can believe anything cannot make up its own mind... And with such people you can then do what you please.” With Sora and similar tools, we’re accelerating toward that cliff. But we’re not there yet.
This is our moment to act with urgency. The clock on visual truth is ticking.
Sam Gregory leads WITNESS, where he launched the global “Prepare, Don’t Panic” initiative on deceptive AI. His TED talk focuses on how to prepare better for deepfakes.



