Formative Promise, Incomplete Architecture
AI governance is in its formative stage — continuity is promising, but structural integration remains incomplete.
Climate governance took decades to mature. AI may not give us decades.
Artificial intelligence is no longer a speculative frontier. It is already reshaping economies, public administration, markets, and social discourse. The India AI Impact Summit 2026 reflected this momentum, bringing together policymakers, industry leaders and international organisations to signal ambition and strategic intent. The language was confident. The vision was expansive. Prime Minister Narendra Modi’s articulation of the MANAV principle — AI that is moral, accountable, nationally sovereign, accessible and inclusive — captured the ethical aspiration.
There is value in such gatherings. The mere fact that governments and stakeholders feel compelled to meet repeatedly signals recognition: AI is not merely a market tool; it is a civilisational technology. Vocabulary is stabilising. Norms are beginning to take shape. Continuity matters.
Yet aspiration and architecture are not the same.
The summit also revealed a tension familiar in emerging technological domains: visionary rhetoric often moves faster than implementable frameworks. Announcing principles is one thing; embedding enforceable mechanisms is another. Governance maturity is measured not by ethical vocabulary, but by accountability structures, labour adaptation, institutional capacity and enforceable limits.
India’s ambitions are real. Its STEM (Science, Technology, Engineering, and Mathematics) base is substantial. Investment flows are rising. Engagement with organisations such as UNESCO and the OECD reflects a willingness to participate in broader normative conversations.
But structural gaps remain.
Talent retention continues to challenge domestic capacity, with many highly skilled researchers drawn abroad. High-end computational infrastructure is uneven. Data centres demand large amounts of electricity and water — resources that are not uniformly reliable. These are not criticisms; they are reminders that momentum does not automatically translate into readiness.
The deeper question is not whether ambition exists. It does. The question is whether governance capacity is evolving at the same speed as technological capability.
Labour, Distribution and the Pace Problem
AI promises productivity gains. It also threatens disruption.
Labour markets adapt slowly. Education systems reform slowly. Regulatory systems evolve slowly. AI models, by contrast, scale in months.
This asymmetry is rarely central in summit declarations, yet it may be the most consequential factor. Without systematic reskilling, workforce transition strategies and social cushioning, productivity gains risk concentrating benefits while dispersing costs. Even when AI reduces production expenses, consumer prices do not automatically decline in markets characterised by high capital intensity and platform dominance.
Governance, therefore, is not only about safety and ethics; it is about distribution, access and fairness.
If benefits accrue narrowly while disruption spreads widely, public trust will erode — regardless of visionary language.
An “AI Geneva Convention”?
As discussions intensify globally, some have suggested the need for an AI analogue to the Geneva Conventions — not to halt innovation, but to establish minimum harm thresholds. Such a framework might prohibit fully autonomous lethal systems without human oversight, constrain indiscriminate biometric surveillance, mandate accountability for high-risk deployments, and require transparency and testing for frontier systems.
Like the Paris Agreement, early enforcement might rely more on normative pressure and national implementation than on supranational authority, potentially anchored through multilateral forums such as the United Nations.
This would not be perfection. It would be a floor.
History suggests that norms often precede enforcement strength. But history also shows that norm-building takes time.
And time may be the scarcest resource in the AI era.
The More Immediate Front: Cyber Harm and Social Stability
Beyond battlefield scenarios lies a more urgent domain: AI-enabled cybercrime, deepfakes, reputational sabotage, communal incitement and algorithmically amplified polarisation.
These are not distant possibilities. They are active dynamics.
Synthetic media can erode trust at scale. Automated slander can damage reputations within hours. Coordinated manipulation can inflame communal tensions before institutions respond.
Here AI operates on both sides of the equation. It enables harm — and it can detect it.
A meaningful beginning in AI governance could lie here: enforceable provenance standards for synthetic content, cross-border cooperation on AI-driven cybercrime, liability frameworks for malicious deployment, and mandatory deployment of defensive detection tools by large platforms.
If global stakeholders converged even modestly on such guardrails, it would transform summit dialogue into operational restraint. That would not solve everything. But it would signal seriousness.
Public trust depends less on sweeping declarations than on visible safeguards.
The Surveillance Paradox
Yet urgency carries risk.
The threat of AI-enabled disorder provides states with powerful justification to expand digital oversight. Behavioural monitoring, predictive analytics and biometric tracking can be framed as protective tools. Without strict statutory limits, independent audits and judicial safeguards, such measures risk normalising intrusive governance.
The paradox is clear: in seeking protection from misuse, societies may entrench concentrated digital authority.
Responsible AI governance must therefore operate on two fronts simultaneously: restraining malicious actors while restraining overreach. Defensive capability must be matched by enforceable limits on state deployment.
Without balance, the cure may gradually embed the very asymmetries governance seeks to prevent.
A Compressed Institutional Moment
There is reason for cautious optimism. Global dialogue is widening. Normative vocabulary is stabilising. Early frameworks are emerging. AI governance is no longer peripheral to diplomatic discourse.
But continuity alone does not guarantee adequacy.
Climate diplomacy evolved through decades of negotiation under the United Nations Framework Convention on Climate Change before coalescing into structured agreements. Even today, enforcement remains complex and incremental.
AI development operates on a different temporal scale.
Institutional learning, consensus-building and regulatory calibration have historically required years — often decades. AI capability evolves in compressed cycles measured in months. The governance challenge, therefore, is not only substantive; it is temporal.
AI governance remains in its formative phase. That is not failure. It is a beginning. Yet structural integration — across labour transition, accountability, civil liberties, infrastructure capacity and enforceability — is still incomplete.
Climate governance took decades to mature. AI may not give us decades.
Whether institutions can compress their learning curve may determine whether innovation ultimately stabilises society — or unsettles it.
(The writer is a freelance journalist, retired from the Indian Information Service. He is a former senior editor with DD News, AIR News, and PIB, a consultant with UNICEF Nigeria, and a contributor to various publications.)
Krishan Gopal Sharma