India's AI Guidelines Adopt A Softer Approach But With Scope And Limitations India has taken a rather lenient approach with its AI guidelines. Here's what it solves and what it misses out on.
By Kul Bhushan
Opinions expressed by Entrepreneur contributors are their own.
You're reading Entrepreneur India, an international franchise of Entrepreneur Media.
Even as governments around the world scramble to regulate artificial intelligence (AI), India is figuring out how to navigate the disruptive new technology. Interestingly, India is taking a rather careful position on regulating AI: it stresses cooperation, trust, democratisation and innovation, while at the same time acknowledging the technology's flaws and risks.
The AI Governance Guidelines were released by the Ministry of Electronics and Information Technology (MeitY) after a public consultation that saw more than 2,500 submissions from government bodies, academic institutions, think tanks, private sector organisations, and others.
"With the vision of AI for All to integrate scale with inclusion, sustainability and resilience laid down by the Honourable Prime Minister Shri Narendra Modi, AI must serve as an enabler for inclusive development across all strata of society. Our commitment is to harness AI for the common good, ensuring its benefits reach the last citizen by revolutionizing diagnostics in rural healthcare, providing personalized education in local languages, or enhancing climate resilience for our farmers," the IndiaAI Governance Guidelines report says.
"Recognizing both the immense promise and the inherent risks, ranging from the spread of deepfakes, misinformation and algorithmic biases to threats against national security, the India AI Governance Guidelines provides a framework that balances AI innovation with accountability, and progress with safety. It represents a strategic, coordinated, and consensus-driven approach to AI governance," it adds.
The guidelines are anchored in a set of principles: Trust; People First; Innovation over Restraint; Fairness & Equity; Accountability; Understandable by Design; and Safety, Resilience & Sustainability.
What the Guidelines Focus On
As mentioned above, these are guidelines, and the panel has recommended that separate AI laws are not needed at the moment. It suggests instead that AI could be governed through existing laws such as the IT Act and the DPDP Act, with requisite amendments to better target the problems.
Moreover, the panel seeks the establishment of new governance entities. For instance, it proposes an AI Governance Group (AIGG) for policy coordination and an AI Safety Institute (AISI) for technical evaluation and risk assessment. The objective is to make governance more cohesive and to speed things up for stakeholders.
The guidelines also call for an environment of trust and accountability, as well as fairness and equity; other priorities, as mentioned above, are safety, resilience, and sustainability.
Stressing that "trust is the foundation", the panel suggests that innovation should be "carried out responsibly and should aim to maximise overall benefit while reducing potential harm. All other things being equal, responsible innovation should be prioritised over cautionary restraint."
The guidelines also address the risks posed by the use and misuse of AI, noting the challenges thrown up by deepfakes and AI-driven misinformation. The panel suggests existing IT laws could be leveraged to tackle this menace. It is, however, not in favour of regulating the foundational technology, preferring to target the application layer where the misuse occurs. It calls for an India-specific risk assessment framework and encourages compliance through voluntary measures.
Other recommendations include a graded liability system for better grievance redressal mechanisms and transparency reports, as well as broadening access to data and computing resources through the IndiaAI Mission and encouraging the integration of AI with Digital Public Infrastructure (DPI).
A Softer Approach
Even as the guidelines seem to create an idealistic environment for AI to thrive in India, they are also considerably less strict than the AI Act implemented by the European Union. For instance, unlike India's guidelines, the EU's act has legally binding elements with mandatory compliance, including hefty penalties.
The EU act also classifies AI systems into multiple categories based on their risks, ranging from minimal risk to unacceptable risk. It requires AI companies to conduct rigorous pre-market testing, along with detailed documentation and human oversight, among other measures, for systems it defines as "high risk", and it further creates stringent barriers to entry for such systems.
It's worth noting that the EU is reportedly planning to soften some aspects of its AI Act after facing intense pressure from businesses as well as the US government. The Guardian reports that the bloc was planning amendments to meet demands from big tech firms. Moreover, the EU commission responsible for the AI law is mulling over extending a one-year "grace period" for companies to comply with rules on the highest-risk systems.
A softer approach, however, can work in India's favour, given that the country's AI ecosystem is just taking shape and is yet to produce anything concretely indigenous.
"India's approach — light on law, heavy on learning — is smart. It lets innovation breathe while keeping accountability in sight. The EU builds fences; India builds lanes. For developers going global, that's an advantage. They learn agility at home and structure abroad. But to succeed in regulated markets, Indian firms must document everything — what data they used, how models were tested, and where bias was checked. Agility without traceability won't travel. Guardrails should make you sharper, not slower," Krupesh Bhat, Founder & CEO, Melento told Entrepreneur India.
Some possible challenges
As mentioned above, India's guidelines do acknowledge the risks and even offer a few recommendations to mitigate them. But grievance redressal may be an area of concern: companies and individuals will need a reliable system with a fast turnaround time.
Amit Das, founder and CEO of Think360ai, explains that for anyone dealing with AI systems day in and day out, one continuing challenge (as evidenced in the inferencing and reasoning processes of algorithms) is that they are often inconsistent: the slightest change in context can shift responses significantly. In such cases, the audit trail becomes critically important as one tries to identify the right redressal opportunity and the right fixes going forward.
"In the absence of this, and subsequent explainability, the cost of redressals (and occasional penalty) will be a meaningful deterrent against policy success. We need to build trails and explainability all along, deeply embedded in our solutions," he stressed.
Another plausible bottleneck concerns multifaceted AI companies that cater to different sectors. The AIGG is supposed to bring together ministries such as MeitY, the Ministry of Home Affairs, the telecom ministry and more, and is expected to include representatives from top regulatory bodies such as the RBI, SEBI, CCI and TRAI.
"Every sector speaks a different language of compliance. For one AI provider, that's a maze. A credit-scoring tool faces one set of audits; a retail chatbot faces another. This patchwork slows everyone down. The AI Governance Group should fix that — one rulebook, one reference, one rhythm. Think of it as India's AI GPS: guiding every developer, regardless of industry. When rules are predictable, innovation becomes faster and safer. Companies don't need fewer rules; they need one map to follow," Bhat adds.
How effective the proposed "whole-government approach" will be is something we can only find out once it is implemented in letter and spirit.
"The major set of issues will arise from duplicated compliance efforts across regulators, and non-standard interpretations. That being said, the AIGG needs to focus on driving uniformity in interpretation and standards. It also needs to be able to supervise or draw stakeholders from multiple regulatory outfits to drive consensus, in the absence of which solution providers (especially small and medium-sized ones) will struggle to comply," Das further said.
Summing It Up
It is pleasantly surprising that the AI guidelines are light on law-making and focused on learning, pushing for innovation, democratisation and, most importantly, an environment of trust. This also sets them apart from the EU's stringent laws, which, as mentioned above, are likely to be watered down to accommodate demands from big tech companies. Unlike the West's, however, India's ecosystem is fairly nascent. It remains to be seen how these guidelines are implemented, and how they evolve as the technology and its impact change.