How profits can drive AI safety
Opinion: Geoff Ralston argues that the safety and security of AI don’t need to be at odds with profit and progress
There’s a strange thing happening in certain parts of Silicon Valley: “safety” has become a dirty word, on par with “regulation” and “cubicle.” In the domain of artificial intelligence, the most important technology of our time, you’re either pushing to accelerate as fast as possible — no holds barred, no guardrails allowed — or dismissed, usually with a smirk, as a “decel,” an enemy of progress. Peter Thiel has gone so far as to suggest that Eliezer Yudkowsky may be the antichrist because he wants to slow down AI progress to avoid an impending disaster.
The idea that caring about safety places you in opposition to innovation is absurd. Certainly, accelerating the development of AI in the face of competition both from China and domestic rivals is understandable. But how do you combat anti-safety groupthink around one of the most powerful technologies humanity has ever built? Perhaps there’s a simple way to change the mindset: make safety profitable.
That’s what we’re trying to do with SAIF, the Safe Artificial Intelligence Fund. We invest in for-profit companies building the guardrails for our AI future: tools that will make this new era safer, more trustworthy, and ultimately more human. We’re not alone: a new generation of venture funds is forming around the same principle. This shouldn’t be surprising. The notion that “safety slows progress” exists only in the heads of a few hyperventilating accelerationists. History tells a different story: safety enables trust, and trust is a prerequisite for progress.
Last year, an estimated five billion passengers boarded airplanes, cramped metal tubes flying several miles above the earth at 550 miles per hour. They did so without a second thought, because air travel has become one of the safest ways to move around the planet. That confidence didn’t happen by accident; it came from decades of innovation and meticulous work to make an inherently risky act feel routine.
A closer parallel lies in cybersecurity, now a $300 billion industry. The sector’s existence doesn’t slow down the internet economy; it enables it. Companies can only innovate at scale when users, partners, and regulators trust that their systems are secure. The same dynamic will hold for AI. Unless we mitigate its risks, the acceleration everyone wants will stall.
AI is a horizontal technology, threading through nearly every aspect of human life: business, education, medicine, media, and governance. The opportunity for extraordinary advances in every one of these domains is huge, but so is the potential for harm. That danger is deeply concerning, but it is also, for the right founders, a massive opportunity: across every domain, safety- and security-minded founders can build guardrails that both protect society and create enormous value.
There are already plenty of startups attempting to do this, some of which we are backing. Lucid Computing provides technology which cryptographically assures organizations of the geographic location where AI inference is being carried out. The AI Underwriting Company is developing insurance standards for agentic systems to give companies greater confidence in deploying AI. Chimera Cybersecurity protects small businesses from digital attacks in real time with an AI-based defense system. SAIF has invested in all these companies, as well as others working on biothreats, identity assurance, fact verification and securing data centers.
Can companies focused on AI safety become profitable, enduring businesses? They must. Safety will not scale through philanthropy or regulation alone; it must be woven into the economic fabric of the AI ecosystem. When investors back safety startups today, they aren’t opposing progress; they are chasing returns. It is only as a side effect that they will enable the adoption that accelerationists take for granted.
But we should also be clear-eyed about the risks that markets alone cannot easily solve. AI capabilities are advancing faster than oversight capacity. Some of the gravest dangers — such as existential threats from misaligned systems, authoritarian lock-in through ubiquitous surveillance, or the social fragmentation that comes when reality itself becomes negotiable — are unlikely to yield to purely profit-driven solutions. These challenges demand public investment, international cooperation, and sustained research into alignment and governance.
Technologists in Silicon Valley have always prided themselves on building innovative technology, products and ideas that change the world for the better. But we haven’t always considered all the potential consequences of what we build. Never has this been more important than with AI, but the good news is that safety and security don’t need to be at odds with profit and progress. AI that is safe and secure will ensure a better future for us all.
Geoff Ralston is the founder of the Safe AI Fund (SAIF), an early-stage venture fund that supports startups developing tools to enhance AI safety, security and responsible deployment. He was a partner at Y Combinator from 2011 and served as its president from 2019 until 2022.