India’s AI summit is trying to do too much
The AI Impact Summit has an ambitious agenda — but its lack of focus makes concrete results unlikely

Thousands of politicians, AI leaders, and business executives will descend on New Delhi next week for the AI Impact Summit. Don’t expect them to get much done.
The summit is the fourth in a series, first established in 2023 as a venue for politicians and frontier AI CEOs to tackle the potential existential risks of advanced AI development. Bletchley Park’s “AI Safety Summit,” as it was then known, resulted in a declaration from the US, China, and others that acknowledged concerns around loss of control, among other risks, and committed to working together to address them. 2024’s AI Seoul Summit built on the declaration, with a series of concrete safety-testing commitments from frontier AI developers.
The following year’s summit in Paris failed to build on that promising start, with Emmanuel Macron jettisoning the safety focus and turning a blind eye to the risks that rapid AI development entails.
A year on, India is unlikely to be a return to form. While prominent leaders are still engaging — Sam Altman, Demis Hassabis and Dario Amodei will all be present, organizers told Transformer, as will delegations from the US and, reportedly, China — few expect much in the way of tangible progress on AI governance. The sprawling Indian event might, however, at least be a marginal improvement on Paris — and an opportunity to start slowly fixing a summit series that has strayed from its original purpose.
The India Summit is, to put it generously, trying to be everything to everyone. It has three main goals, India AI Mission CEO Abhishek Singh, one of the lead organizers of the event, told Transformer. Countries will discuss how to “leverage AI to empower people [and] promote innovation,” for one. Another, following in France’s footsteps by treating the summit as a trade show, is “projecting India as the service provider for AI for the whole world.” The third goal — tellingly left till last — is “democratizing access to compute, datasets, and algorithms,” particularly for the global south, Singh said.
Under these sit seven working groups, termed “chakras” by the Indian organizers, covering topics including “Resilience, Innovation, and Efficiency,” “Human Capital,” and “AI for Economic Development & Social Good.” Virtually any big issue relating to AI is on the agenda in some shape or form. AI safety, the sole focus of the first two summits, is now just one among many.
Nicolas Miailhe, co-founder of international governance dialogue group AI Safety Connect, told me that he was “a bit worried that this broadening leads to a dilution of what was started at Bletchley … When you have seven chakras, a good consultant would say ‘okay, push one thing, maybe two.’”
Lucia Velasco, a research affiliate at the Oxford Martin School’s AI Governance Initiative, put it more diplomatically: “The India Summit has a very ambitious agenda.”
When I pushed Singh on the lack of focus, he bristled. “I don’t know why people would say that,” he said. “The agenda is quite wide and comprehensive.”
There is, at least, one noticeable improvement from Paris: AI safety will not be sidelined entirely. Whereas last year’s Action Summit largely downplayed or ignored the risks, Singh was keen to emphasize that the safety and security of frontier models “remain on the table.” The Safe and Trusted AI working group, co-chaired by Brazil and Japan, has two deliverables: a “trusted AI commons,” which will aggregate tools for AI evaluations, bias mitigation, and the like, and a “global governance framework for AI,” on which details are scant.
Singh is also hoping to get frontier AI companies to commit to sharing usage data with governments, similar to what Anthropic already publishes. “If usage data is made available to sovereign governments,” he said, “they can align their approach towards skilling and reskilling.” Getting companies to agree to do this, however, is a “work in progress.”
But safety is clearly still not a top priority for Singh and his co-organizers. “The conversations have moved on from Bletchley Park,” he argued. “We do still realize the risks are there,” he said, pointing to Moltbook as evidence of them. But “over the last two years, the worst has not come true.”
Singh is correct that the discourse has shifted away from catastrophic risks in the two and a half years since Bletchley. But by going with the tide, the India Summit risks wasting a potentially crucial moment. Just last week, Anthropic and OpenAI warned that they could no longer rule out serious risks from their latest models. Advances in AI coding tools, meanwhile, are making it increasingly hard to dismiss questions of rapid job displacement.
The way Singh discussed usage data illustrates the disconnect. If AI progress continues at the pace many expect, “reskilling” will be a drop in the bucket. Much more ambitious policy initiatives will be required. But on that front — on any aspect of truly transformative AI progress — this year’s event has little to offer.
If there is a single priority for the India summit, it is to bring the “global south” into conversations about AI. With the first three hosted by the UK, South Korea, and France, India represents a big departure. It is trying to position itself as a mouthpiece for the entire developing world — one that has markedly different priorities when it comes to AI.
On safety in particular, that is much needed. “AI safety as a cause area historically has not done a good job in bringing the Global South into that conversation, which is a flaw,” Niki Iliadis of The Future Society told me. Rachel Adams, the founder of the South Africa-based Global Centre on AI Governance, agreed. “Mainstream conversations around AI safety [have not been] adequately inclusive of global south expertise, global south realities,” she said.
At the summit, Adams and collaborators from the Digital Futures Lab, ITS Rio, and the International Innovation Corp will launch the Global South Research Network on AI Safety, an effort to encourage collaboration and policy research in countries that generally don’t have a large voice in AI discussions. Others hope that the India summit can be an opportunity for the AI safety community to learn from its mistakes, introducing and explaining safety concepts and ideas to a wider audience.
But the effort to expand the conversation may be in tension with actually getting anything done. It would certainly be nice to live in a world where everyone has a say over the future of AI development, but that is not the world we live in. Only two countries — the US and China — are capable of developing frontier AI models. A couple of others play important roles in the AI supply chain. By and large, the rest of the world is an AI taker, at the mercy of American and Chinese decision-makers.
“Being completely frank,” Velasco said, “what would be the incentive for an AI superpower to discuss any national security aspect [of AI] with a country that is in the opposite space of AI development?” At the same time, concerns about catastrophic AI risks are not high on the agenda for most developing countries, which are understandably more worried about access to the technology.
“It wouldn’t make sense to have a conversation about the latest capabilities of the latest model that is being deployed in some specific facility and has shown signs of rogue behavior” with every country, Velasco argued. Instead, she thinks a smaller forum is needed. In a paper last year, Velasco and co-authors proposed a two-track system: a core group of AI-leading countries and companies would focus on the governance and safety of advanced AI, while a second, broader track would explore how these technologies can serve the public interest.
The summits at Bletchley and Seoul resembled this “core” track — but Paris and Delhi mark the series’ transition toward the broader one, despite that ground already being well covered by efforts at the UN and elsewhere. The core focus that was the series’ initial strength has been diluted, and with it the potential for meaningful progress. The relatively small group of countries at Bletchley and Seoul managed to reach concrete commitments. No one expects anything similar from Delhi.
The lack of anything concrete, however, does not necessarily mean the summit will be a completely wasted opportunity. Many of those I spoke to cited the potential to socialize norms at the event — to get people comfortable with discussions about frontier AI risks. People simply talking about the risks is better than nothing, after all. At Davos last month, CEOs’ warnings of imminent AGI and ensuing chaos grabbed headlines. It is easy to imagine something similar happening in India.
One international governance expert told me that the summit’s main value lies in its continued existence: “The key thing for me is that the summit series carries on, and therefore there is another big event after this one.” “Imagine if the series just ended,” he said. “I think come 2029 or whatever, and if something bad happens in the world, and everybody agrees we need to get together and do something — we’ll be very grateful to have a summit series at that point.”
Gratitude for a forum’s mere existence is a depressingly low bar — but given the current state of international AI governance, that might be all we have.