How China’s AI diffusion plan could backfire
Opinion: Scott Singer argues that the country’s plan to embed AI across all facets of society could create huge growth — and accelerate social unrest
If 2025 was the year that China emerged as a frontier AI power, 2026 will be when the country seeks to cash in on its progress to genuinely transform its economy and society. Yet its eagerness to embed such a transformative technology into the fabric of society could quickly backfire.
China’s AI successes aren’t limited to DeepSeek. Alibaba’s Qwen model has gained popularity among some of the leading companies in Silicon Valley, while Moonshot’s Kimi can compete on capability benchmarks with the leading American closed-weight models behind products like ChatGPT. Now, Beijing is betting that this technical prowess can revolutionize its entire economy.
China’s goals to diffuse AI throughout society while placing bets on transformative future technologies like AI-powered robotics are incredibly ambitious. While other nations, including the US and the UK, are prioritizing the diffusion of AI, China hopes to do so at an aggressive pace. If all goes according to the Chinese Communist Party’s plans, announced in August, within a year China should be about halfway to integrating AI into six critical sectors of the economy. Within a decade, it will have completed an AI-powered industrial revolution akin to its rollout of the Internet, but far faster and with greater economic impact.
If it succeeds, domestic AI diffusion could help solve some of China’s thorniest structural domestic challenges, including an unrelenting property crisis, an aging population, and shrinking consumer confidence. Yet AI development also comes with significant perils for China, some of which may, ironically, grow more acute the more fully its ambitions are realized.
There’s a scenario in which the plan doesn’t go well. China’s chosen path may simply be less effective than the Party imagines, failing to address the domestic challenges it was meant to solve while also losing ground to its primary geopolitical rival. First, China may simply not have enough venture capital flowing at a moment when local governments are cash-strapped. Second, a policy that depends on the entrepreneurialism of local governments could generate significant waste, spurred by the cycles of hyper-competition known in China as “involution”, unless those governments coordinate with one another. Third, AI diffusion could encounter unanticipated technological bottlenecks, slowing progress. Failure for any of these reasons would force China back to the economic stimulus drawing board.
But there is a more challenging scenario in which China succeeds too well. The nature of its plan means that rapid AI diffusion could unleash social disruptions faster than even an adaptive authoritarian state can manage.
This is not a hypothetical concern. In key departments of the Chinese government, policymakers already recognize the paradox and are increasingly grappling with some of the risks. Groups of technical experts, convened by the standards body TC260, have come together to make sense of those risks and offer potential mitigations. The risks include declines in traditional demand for labor and the rise of emotional dependence on increasingly popular AI companions. Their concerns also extend to more speculative dangers stemming from open-weight models, such as misuse by terrorists and the possibility that models could help novices create bioweapons.
It’s not just China’s technical standards experts who are paying attention. The AI+ Plan itself calls for employment risk assessments. The National Development and Reform Commission, China’s macroeconomic planning agency, is rumored to be commissioning a large study on AI’s impact on the labor market. Some challenges, like those associated with AI companions, may emerge regardless of the state’s techno-optimism. But others, like transformations of the labor market, could be greatly accelerated by China’s ambition to embed AI across society.
AI is already affecting some of the structures that anchor Chinese social life. China’s university entrance exam, the gaokao, has long served as an imperfect, but ultimately equalizing, meritocratic means of creating opportunities for young Chinese people, regardless of their background. Now, however, AI may be forcing it to evolve. Earlier this year, Chinese tech companies were forced to turn off AI functions across the country during the four-day exam to prevent cheating. While a temporary shutoff may mitigate the problem, AI could pose more substantial, enduring threats to China’s educational meritocracy: those entering university at age 18 might find their entire field has transformed by 22, when they complete their undergraduate degree, or by 25, as the Party increasingly encourages the expansion of graduate school enrollment.
In the short term, the more acute threat lies higher up the education chain: for those who perform well on the gaokao, attend a good university, and undertake a prestigious STEM degree. If jobs evaporate for graduates, the Chinese policy apparatus will need to reckon with a new threat to economic and social stability. Forecasting how the youth unemployment story could play out underscores a broader challenge: the faster China diffuses AI, the faster a range of risks could materialize.
These challenging scenarios are not inevitable. AI could ultimately create a wide range of new white-collar jobs across sectors that analysts cannot currently imagine; commoditized intelligence could create new fields to be tested on the gaokao. To China’s credit, it is already planning to reform the mix of majors its universities offer in response to rapid technological change. Whether AI ultimately proves empowering could come down to the policy decisions that follow China’s current suite of diffusion-focused initiatives. The more successful China’s AI strategy proves, the more urgent those policies will become. The gaokao example illustrates the core dilemma: each percentage point of economic growth from AI diffusion could bring corresponding increases in social disruption.
What AI will do, however, is reveal the distinct and sometimes competing priorities within China’s vast bureaucracy, underscore the competitive dynamics that drive local governments, and ultimately test the domestic capacity of a regime whose greatest strength over the past couple of decades has been its ability to shift resources quickly to solve problems.
For Western policymakers, understanding both the possibilities and failure modes of China’s grand AI experiment will be critical. In addition to sizing up the competition, Western governments could learn by treating China as a live, evolving experiment for policies that drive, and in turn respond to, rapid nationwide AI diffusion. China is far from the only country that will need to reckon with labor market changes and broader societal challenges from AI, but it might be the first. Learning the right lessons could determine whether other governments can walk this tightrope themselves without stumbling.
Scott Singer is a fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace, where he works on global AI development and governance with a focus on China.