Wed, Jan 22, 2025
The dust hasn’t even settled on the draft rules for the country’s data protection law, and the Ministry of Electronics and Information Technology (MeitY) has released a report detailing India's roadmap for a regulatory and innovation-friendly artificial intelligence (AI) ecosystem.
The report stems from recommendations by a subcommittee set up in November 2023 under MeitY. At first read, it feels exploratory, asking questions about copyright, antitrust, data safety, and ethics without any clear answers.
While this is being seen as a precursor to a possible AI regulatory framework, the report isn't meant to directly create laws; it serves as a foundation for further policy development. The draft report is open for public consultation until January 27, 2025.
Copyright Infringement & Data Privacy
The report addresses the pressing issue of AI companies using copyrighted content to train their models. The subcommittee explored two questions under Indian copyright law: first, whether AI models can be trained on copyrighted data without approval, and second, whether AI-generated works qualify for copyright, given that the law usually recognises only humans as authors.
While the report identified the challenges in training AI on copyrighted data and eligibility of AI-generated works for copyright protection, it stopped short of offering actionable solutions.
The report posed questions like: “...while the law protects the rights of the copyright holder, do we have the capabilities to enforce compliance to the existing law? Do we need to identify and agree on steps that the entities training on data need to put in place so as to demonstrate compliance with the law?”
The Secretariat spoke to Nandita Saikia, a lawyer, who explained, “Indian law is reasonably clear that copyrighted works cannot be used to train AI models except in a few limited circumstances. However, the (copyright) law was not drafted with AI in mind, and, although the report recognises the current state of affairs, it refrains from making concrete proposals to define what the path ahead should look like.”
Because India's copyright law doesn't specifically address AI-generated content, it is often unclear who the author of an AI-generated work would be.
The report says, "By proactively creating appropriate guidance, the relevant authorities (Copyright Office, Ministry of Commerce & Industry) can provide certainty and clarity to the users as well as to other government authorities who may otherwise adopt inconsistent practices."
OpenAI, the developer of ChatGPT, is currently facing 13 copyright infringement lawsuits worldwide, including one in India. The Indian case was filed late last year by news agency ANI, which alleges that its news articles were used without permission and is seeking damages.
Data is the lifeblood of AI models, serving as the essential foundation for their development and functionality. These models rely on vast quantities of data—spanning terabytes—to learn, adapt, and generate meaningful outputs. The relationship between AI training and data privacy is crucial, as the data required to train AI models often include personal, sensitive, or proprietary information.
Addressing this, the report says that existing laws and regulations continue to apply to the use of AI. “AI systems should be developed, deployed and used in compliance with applicable data protection laws and in ways that respect users’ privacy. Mechanisms should be in place to (sic) data quality, data integrity, and ‘security-by-design’.”
But here's the rub: the Digital Personal Data Protection Act, 2023, along with the recently released draft rules, leaves a rather glaring loophole. Since the law focuses on personal user data, it often leaves publicly available data vulnerable.
The Report Has Six Recommendations
Inter-Ministerial Committee
The report recommends bringing all the ministerial authorities and institutions that deal with AI onto the same page on AI governance. Why? To ensure a unified approach across sectors like healthcare, finance, and transportation.
Technical Secretariat
The report recommends that MeitY create a Technical Secretariat to advise the above-mentioned inter-ministerial committee and keep it up to date with a systems-level understanding of the Indian AI ecosystem. The Technical Secretariat will do this by bringing in experts from academia and industry.
AI Incident Database
The report recommends that the Technical Secretariat set up a database to serve as a repository of AI-related incidents. The aim is to understand real-world risks, guide future responses, and reduce harm.
It isn’t clear what an ‘AI incident’ may be. The report says it could be a “cyber incident” or a “cyber security incident” under the IT Act, but may be something else too.
Industry Transparency Commitments
The report encourages the AI industry to adopt voluntary transparency measures such as regular reports, model assessments, and security evaluations to complement existing laws.
One interesting point it raises is that a basic set of guidelines or rules may be needed for creating and using AI systems that carry a medium or high level of risk. Establishing accountability with the AI industry from the outset matters because medium-to-high-risk AI systems can have serious consequences if not properly managed. One (failed) example of such an attempt was California's SB 1047.
Malicious Synthetic Data
The report suggests that the Technical Secretariat investigate the use of technology like watermarking and labelling to prevent, detect, and track harmful AI outcomes, including malicious synthetic media like deepfakes.
Thoughtful Intent But Lacking Clear Solutions
While the report reflects MeitY's intent to regulate AI thoughtfully, it falls short in several areas: it lacks concrete plans for implementation, accountability, and privacy protection; overemphasizes consultation without offering actionable solutions; and fails to provide a clear strategy for balancing innovation with rights protection.
“Many of the uncertainties in the law relating to AI can be addressed only once there is a clear policy in place regarding how to prioritise the various implications of deploying AI. Unfortunately, the report has not dealt with the prioritisation of competing interests in any depth,” said Saikia.
Considering the committee has been around for over a year, its vague and narrow approach is unlikely to create a long-lasting impact on its own.
“Developing an AI regulation and governance model first requires us to grapple with the hard issues of how to balance competing interests: proprietary rights, human rights such as free speech and privacy, and public interest. Hopefully, we will actively engage with these issues in the foreseeable future,” added Saikia.
So, what's to come after the government has gathered comments from the public and stakeholders? In general, a committee report such as this would ultimately inform the formulation of policy, the issuance of subordinate legislation, or the enactment of law.
The title of the report suggests that guidelines will likely be issued as subordinate legislation, which would probably be legally binding.