India Unveils Risk-Based AI Governance Framework
The India AI Governance Guidelines, released on 5 November 2025, provide a comprehensive national framework for the safe, responsible and inclusive development of artificial intelligence in the country. The guidelines recognise AI as a key driver of economic growth and social transformation, while also acknowledging risks such as bias, discrimination, exclusion, unfair outcomes and lack of transparency. They adopt a risk-based, evidence-led and proportionate governance approach, and do not permit unrestricted deployment of high-risk AI systems.
Safeguards outlined in the framework are designed to mitigate risks to individuals and society, with sectoral regulators retaining responsibility for enforcement and oversight within their existing legal mandates. The guidelines are principle-based, agile and flexible, and are intended to encourage responsible AI adoption without stifling innovation. They do not introduce new statutory mechanisms such as independent audits, appeal systems or additional oversight bodies, relying instead on existing laws and regulations, including the Information Technology Act, the Digital Personal Data Protection Act and relevant sectoral rules. The government has stated that a new horizontal AI law is not required at this stage.
These details were shared by the Union Minister of State for Electronics and Information Technology, Jitin Prasada, in the Rajya Sabha on 19 December 2025.