[Vrinda is a Third-Year Law student at the National Forensic Sciences University Gandhinagar. Their areas of interest include Technology and Cyber Policy along with AI Governance.]
- INTRODUCTION
The Ministry of Electronics and Information Technology (MeitY) constituted a drafting committee in July 2025 to develop an AI governance framework for India. The framework is guided by two objectives: first, to harness the transformative potential of artificial intelligence for inclusive development and global competitiveness, and second, to address the risks AI may pose to individuals and society. The framework was officially released in November 2025 and is structured into four parts. Part I outlines seven key principles, or sutras, intended to guide India’s overall approach to AI governance. Part II sets out key recommendations across six governance pillars, followed by Part III, which identifies short-, medium-, and long-term action plans along with their respective timelines. The final part provides practical guidance for industry actors and regulators. One could say that the Indian government has chosen to govern AI rather than strictly regulate it, adopting a “third way” approach through soft law instruments. At this stage, MeitY has decided not to introduce an AI-specific law, opting instead to amend existing laws where necessary. The stated intent behind this approach is to avoid premature regulation and instead adopt a phased response to an emerging and rapidly evolving technology. This sets India’s approach apart from both the European Union’s stringent, risk-based regulatory model and the United States’ largely laissez-faire approach to AI governance.
- JUSTIFICATION BEHIND A SOFT LAW APPROACH
India’s decision to adopt a soft-law approach to AI governance has been framed as a strategic, intentional policy choice rather than a regulatory oversight. The government has reasoned that introducing an AI-specific statute at this stage could result in a premature or “half-baked” law, especially in a domain that is evolving rapidly and remains difficult to define precisely. In the government’s current view, hard regulation risks stifling innovation, discouraging experimentation, and limiting the country’s ability to build globally competitive AI capabilities and position itself as pro-innovation. A light-touch framework, by contrast, is seen as enabling regulatory learning, allowing policymakers to observe the real-world deployment of AI systems before imposing binding legal obligations. This approach indicates that the committee favors adaptability and innovation over early regulatory rigidity.
India has also justified its soft-law approach on the grounds of flexibility and regulatory sufficiency. The government has indicated that existing legal frameworks, including the Information Technology Act, 2000, and the Digital Personal Data Protection Act, 2023, can be amended to address emerging AI-related risks as they arise. Sector-specific regulators such as the Reserve Bank of India, the Securities and Exchange Board of India, and the Telecom Regulatory Authority of India are expected to manage domain-specific AI harms within their respective jurisdictions. In addition, the framework proposes a three-tier institutional model, comprising the AI Governance Group (AIGG), the Technology & Policy Expert Committee (TPEC), and the AI Safety Institute (AISI), to coordinate policy formulation, implementation, and oversight. Overall, this reflects a preference for a phased approach to AI governance over the immediate introduction of an AI-specific statute.
- QUESTIONING THE APPROACH AND SUITABILITY IN LIGHT OF INDIA’S DIGITAL ECOSYSTEM
India’s digital ecosystem is marked by two contrasting realities that complicate its approach to AI governance. On the one hand, the country has witnessed rapid technological adoption, as evidenced by the creation of new governance bodies and global recognition for its digital public infrastructure, including Aadhaar, UPI, and DigiLocker. Technology is, and will remain, deeply embedded in India’s civic, economic, and social life. On the other hand, while the country has benefited greatly from this technological advancement, the scale of digitization has also created a high-impact risk environment. In 2025, Indian organizations reportedly faced over 2,000 cyberattacks per week on average, while AI-enabled harms such as identity theft, deepfake abuse, political manipulation, and online safety threats, especially those targeting women and children, have increased sharply. This duality raises the question of whether a non-binding, guideline-based approach is adequate for such a high-risk digital environment.
There is no single measure of the success of soft law, but its effectiveness relies heavily on strong enforcement mechanisms, a high compliance culture, and institutional capacity for monitoring and redressal. India has, however, struggled with weak enforcement, delayed regulatory action, and low levels of compliance when obligations remain voluntary or non-binding. In such a context, governance through non-binding guidelines risks becoming aspirational rather than effective. The assumption that industry actors will internalize ethical obligations without enforceable consequences may not align with India’s regulatory realities. These gaps raise a critical question: can soft law, in the absence of clear enforcement structures, meaningfully mitigate AI-related harms in India’s digital ecosystem?
The framework’s emphasis on trust and self-regulation amplifies these concerns further. While the principle that “trust is foundational” reflects an intent to promote responsible innovation, it places heavy reliance on corporations to govern themselves. In practice, however, companies are profit-driven actors operating under intense market pressure to deploy systems quickly and at scale. A “trust first, verify later” model risks turning citizens into the ultimate test subjects for high-risk AI deployments. Without clearly defined accountability mechanisms, penalties, or sanctions, self-regulation may remain minimal or symbolic, undermining the very objectives the guidelines seek to achieve. Whether responsible innovation can be operationalized without precautionary safeguards is a question that needs serious reconsideration.
Although the framework incorporates high-level principles such as trust and responsible innovation, the absence of clear operational mechanisms risks reducing these guidelines to aspirational statements rather than enforceable norms. Delays in accountability, coupled with the lack of well-defined liability and sanctions, may dilute responsibility when AI systems cause harm. In a governance environment where verification comes only after deployment, the burden of risk is implicitly shifted onto citizens rather than regulated entities. Without robust monitoring structures and timely enforcement, a trust-based, self-regulatory model risks normalizing harm before corrective action is taken.
- ACCOUNTABILITY AND LIABILITY: OPERATIONAL DEFICIENCIES IN THE GUIDELINES
The guidelines repeatedly emphasize accountability, both as a governing principle and across multiple pillars. They state that AI developers and deployers must remain visible and accountable for their systems, and suggest that accountability should be ensured through a mix of policy, technological, and market-led mechanisms, with firms facing meaningful pressure to comply with their obligations. However, the framework does not explain how accountability is to be assigned and enforced in practice. Instead, it relies heavily on voluntary moral commitments, despite acknowledging that self-regulatory frameworks lack legal enforceability. The proposed alternatives, such as transparency reports, internal policies, peer monitoring, and audits, remain recommendatory in nature and fall short of providing an execution mechanism.
The guidelines address liability by recommending a graded system, while simultaneously cautioning against mechanisms that may “stifle innovation.” This balancing act dilutes liability standards, as no clear thresholds, consequences, or strict liability regimes are provided. In prioritizing innovation protection, the framework appears to overlook the need to impose firm obligations on developers and deployers, even in high-risk contexts. As a result, accountability is framed as an ethical rather than a binding responsibility.
The framework also fails to meaningfully incorporate multi-stakeholder participation, including civil society organizations, marginalized communities, and independent public-interest representatives. Nor does it clarify how sector-specific technical questions and liability disputes will be resolved across domains. In the absence of statutory backing, mandatory oversight powers, or an independent regulator, the proposed institutions risk functioning as advisory bodies rather than enforceable accountability mechanisms. Consequently, the question of liability remains unresolved: who is responsible, when, and with what consequences. This consciously light approach to accountability and liability risks reducing governance to ethical discourse rather than applied regulation. Without statutory force, enforceable tools, and independent oversight bodies, the framework may struggle to move beyond aspirational intent.
- INNOVATION OVER RESTRAINT: AMBIGUITY AND RISKS
One of the sutras of these guidelines is “Innovation over Restraint,” which positions AI-led innovation as a pathway to global competitiveness and national resilience. While the guidelines acknowledge that innovation must be carried out responsibly, this principle fails to define what responsibility actually means in operational terms: how it is to be measured, who determines acceptable levels of risk, and which actors are tasked with making these assessments. In the absence of defined answers, responsibility remains a rhetorical commitment rather than a regulatory requirement.
The problem is not the principle itself, but that executing it amid such uncertainty may implicitly endorse a permissionless innovation model, where systems are deployed first and corrected later. Such an approach shifts governance away from the prevention and anticipation of harm towards ex post responses after damage has already occurred, treating citizens as subjects of experimental AI deployments rather than as rights-bearing individuals entitled to protection. This matters because, in the absence of precautionary safeguards, high-risk systems would be allowed to operate without prior assessment, normalizing harm as a cost of innovation and undermining digital rights by delaying accountability until after irreversible consequences have taken place.
- MISSED OPPORTUNITIES IN OPERATIONALISING SOFT LAW: LESSONS FROM THE UK AND SINGAPORE
In my view, soft laws are not weak laws by default, but their effectiveness depends largely on how they are operationally designed and implemented. India has chosen the right principles, and the framework clearly acknowledges both the benefits and risks associated with AI systems. The intent to secure citizens from harm while simultaneously avoiding barriers to innovation is evident throughout the guidelines. However, this intent has translated into an over-reliance on symbolic and ethical principles rather than enforcement tools, prioritizing the discourse of values over the creation of mechanisms capable of translating those values into practice.
Even within a non-binding framework, the guidelines could have incorporated conditional and enforceable safeguards to better balance innovation with harm prevention. In a digital ecosystem as complex and high-impact as India’s, context-based and continuous risk categorization could have helped operationalize the framework’s stated objectives. Greater clarity on enforceable accountability mechanisms and the introduction of graded penalty structures would have strengthened compliance without resorting to rigid regulation. While concerns about rapidly evolving technology justify caution against rushed AI-specific legislation, high-impact and high-risk use cases could still have been addressed through targeted regulatory interventions. The use of sunset clauses would have allowed such measures to remain adaptive, enabling periodic review as technology and risks evolve.
The United Kingdom offers a useful illustration of how soft law can function alongside empowered regulators and central oversight mechanisms. Rather than relying solely on voluntary compliance, the UK’s approach is backed by regulator-led monitoring, sectoral oversight, and coordinated risk assessment. Regulatory sandboxes and testbeds allow innovation to proceed under supervision, ensuring that risks are identified before large-scale deployment. This design gives soft-law principles institutional backing and practical force, encouraging innovation without abandoning accountability or enforcement.
Singapore’s AI governance framework further demonstrates how ethical principles can be translated into concrete and testable standards. In the Singaporean model, voluntary or self-regulatory approaches do not imply the absence of measurement or scrutiny. Ethical principles are operationalized into checklists and assessment criteria that must be satisfied before deployment. This approach ensures that responsibility is not left to interpretation but evaluated against predefined benchmarks. In contrast, India’s principle-heavy and tool-light framework lacks similar mechanisms.
- LACK OF A RIGHTS-BASED APPROACH AND CONSTITUTIONAL GROUNDING
The Indian AI governance guidelines clearly incline towards a techno-legal approach, favoring governance mechanisms and innovation facilitation over a rights-based framework. While the guidelines acknowledge that AI systems are agentic and probabilistic, and therefore capable of causing harm and posing innumerable risks to people, this recognition does not translate into adequate protection of citizens’ rights. The guidelines do not center the right of individuals to be digitally safe or free from unjust technological harm, positioning citizens instead as subjects of innovation. This imbalance raises questions about whether governance focused primarily on technological management can sufficiently address AI’s social and constitutional implications.
Although the guidelines employ value-laden terms such as responsibility, fairness, non-discrimination, and equity, these remain largely aspirational in the absence of legal and constitutional grounding. AI-related harms are framed as failures of responsible innovation rather than as potential violations of fundamental rights. Notably, the framework does not meaningfully engage with constitutional guarantees under Articles 14, 19, and 21, nor does it anchor AI governance in principles of constitutional morality. This omission, whether intentional or not, weakens the force of the guidelines, as rights protection is reduced to ethical discourse rather than an enforceable obligation. By framing AI-enabled harm merely as a result of irresponsible innovation rather than as a potential infringement of fundamental rights, the framework risks normalizing harm instead of preventing it. In my view, a more balanced approach would give the rights of citizens equal weight alongside the techno-legal approach.
- CONCLUSION
India’s decision to govern AI through soft law reflects a conscious attempt to balance innovation with caution in the face of a rapidly evolving technology, adopting sound principles and recognizing the risks posed by AI. The critique remains that such heavy reliance on aspirational ethics, voluntary compliance, and institutional intent raises questions about real-world effectiveness and enforcement. AI harms are no longer matters of anticipation; governance without clear accountability, enforceability, and constitutional grounding risks remaining symbolic. Soft law, by itself, is not inherently weak; its strength lies in operational design, oversight, and safeguards. However, by prioritizing innovation over restraint without adequate rights-based protection, the guidelines leave critical gaps unaddressed. The way forward is for India to complement principled governance with enforceable mechanisms that place citizen safety and constitutional values at the center of AI regulation. What citizens must watch most closely is whether the AI Safety Institute, along with the other proposed institutions, becomes functional and delivers meaningful guidance within the proposed timelines. The critical worry is that voluntary promises mostly remain unfulfilled: industry will wait to see whether the government is sincere, and the government will wait to see whether industry self-regulates. Ultimately, it is citizens who stand to suffer.