Audits, not essays: How to win trust for enterprise AI
Opinion: Alexandru Voica argues that application-layer AI companies are best off opening themselves up to rigorous testing rather than opining on AI safety
After years of attempting to regulate AI to a grinding halt, the European Union had a change of heart earlier this month. The 19-page document detailing the bloc’s new Apply AI Strategy could be boiled down to a single sentence: Stop worshipping whitepapers, and start shipping things that survive contact with the real world.
It’s good advice. If we want to move beyond pilots and make generative AI stick across more sectors of industry, we need to do the hard work of building trust: reducing risk, satisfying procurement departments, and developing solutions that can scale globally. We need to build and ship products that work reliably in the real world.
Synthesia operates at the application layer of AI: we sell a SaaS platform that helps enterprises create on-brand videos with AI avatars. Our customers are interested in products and services that solve business problems, not in the nuts and bolts of the latest AI models. That distinction shapes our governance, and it’s why we’ve built it on the rails enterprises already use: existing regulation and auditable international standards.
In recent years, much of the regulatory conversation and activity around AI has focused on the technology’s potential future harms rather than the damage it can cause today. But that focus doesn’t get software deployed inside organizations where legal, security, and reputational risk are measured in billions of dollars. Trust is earned with controls you can test, not with well-meaning blog posts. Without guardrails, application-layer AI tools can cause significant harm, such as identity theft and widespread scamming, which damages the reputation of the technology and makes it harder for companies to place trust in it.
We know this firsthand: in 2022, a handful of videos slipped past our content moderation systems and were used by state actors to spread disinformation. As a result, we’ve banned the creation of news-like and political content from non-enterprise accounts and allowed NIST to red-team our trust and safety processes to mitigate those risks.
While AI security debates among frontier model providers often veer into speculative harms, the operating reality for companies selling products to enterprises is different. Application‑layer risks are concrete and familiar to big companies: information security, privacy, lawful processing, intellectual property, brand safety, data residency, and operational continuity. The right governance must therefore be contextual (what are we building and for whom?), controllable (what guardrails and overrides exist?), and contractible (can obligations be written into service terms, verified, and enforced?).
When an airline or a food and beverage company evaluates Synthesia, the request isn’t “What’s your position on AGI?” It’s: “Show me your AI governance controls, map them to our policies, and prove that they have been audited.” We need to demonstrate how incidents are detected, communicated, and remediated, and explain which third-party service providers touch which data, where, and why.
A few years ago, this work was entirely manual: our legal and security teams would hold in-depth conversations with customers, trading long lists of questions and answers over a slow, months-long procurement process. Despite making good progress, we needed a more practical path: anchoring our governance to the standards and laws that customers already live by. To many people, these standards read like the back cover of an instruction manual, as they involve risk and impact assessments, AI system lifecycle management processes, and data management and quality controls. To enterprises, though, these standards are the basis of trust.
So in 2024 we became the first generative AI company to be certified against ISO/IEC 42001, a new standard for AI management systems built on the same structure as established ISO management system standards. We took this step because 42001 adds structure to how we manage AI-specific risks, and certification has significantly sped up our procurement cycles. This year, we were audited against ISO/IEC 27001, which gave us a tested information security management system. And because all good things come in threes, we’re now looking at ISO/IEC 27701, which extends 27001 for privacy and operationalizes GDPR-style obligations.
Together, these standards do something no ethics statement can: they form the backbone of enterprise trust for any SaaS product and embed governance into how a product is designed, shipped, and operated, with an independent third party auditing the result. They also speak a language that procurement teams already understand, instead of asking them to map our bespoke “consent, control, collaboration” framework onto their own systems and processes.
A common refrain in AI policy is that companies buying AI systems need specific AI legislation before they can adopt the technology responsibly. But AI regulation can take years to develop, and enterprise vendors don’t have the luxury of waiting around. Additionally, nascent AI regulation such as the EU AI Act, which promises to “promote the uptake of human-centric and trustworthy artificial intelligence,” has not yet meaningfully increased adoption in the real world. In 2024, only 13.5% of EU enterprises used AI technologies, up just 5.5 percentage points from the year before. Companies operating in already highly regulated industries such as financial services or healthcare have further obligations under sectoral rules and their own internal policies.
The fastest path to safe deployment is to map those obligations into product and process now. In practice, that means privacy, security, and governance by design, built around standards that we and other companies can be audited against. This allows AI vendors to respect intellectual property rights, adhere to data security and privacy best practices, and maintain confidentiality where data crosses borders.
An ISO-based governance approach typically shortens security reviews, makes the product legible alongside other enterprise tools, and ensures that when incidents occur there are documented procedures, trained people, and auditable logs to address them. The mechanics are intentionally unglamorous: the underlying ISO standards are widely recognized, the same approach travels across jurisdictions, and a single management system can absorb new obligations without constant rewrites from first principles.
For buyers of AI, that means faster procurement and fewer bespoke questionnaires. For regulators, it means a common frame that complements government-designed AI risk management frameworks and sector guidance. For AI vendors, it means a governance spine that scales as rules evolve.
Frontier model research labs face a different risk surface and may need different regimes. But at the application layer, where customers deploy tools to real employees and real datasets, the risks map neatly to security, privacy, IP, and brand integrity. The world already has standards and laws for those, and we should use them.
Policymakers can accelerate safe adoption by anchoring new AI obligations to existing, auditable standards and by recognizing certifications and promoting interoperability between them. Reward the vendors who submit to independent scrutiny and who expose controls that customers can actually verify. Differentiate between those with a philosophy and those with the humility to be tested.
Alexandru Voica is head of corporate affairs at Synthesia.



