Tue, Jun 03, 2025
In a high-profile case brought against Anthropic by several major music publishers over the alleged misuse of copyrighted material to train its AI chatbot Claude, the US-based AI major admitted before a California federal judge on May 15 this year that its law firm, Latham & Watkins, was responsible for an incorrect citation that had been generated by Anthropic's own chatbot, Claude.
The lawsuit once again highlighted a troubling instance of “AI hallucination” affecting real-life decision-making. And the problem is neither new nor far removed from India.
As recently as March this year, then CJI-designate Justice B R Gavai, while addressing the Kenyan Supreme Court in Nairobi on “Leveraging on Technology within the Judiciary”, warned about the inherent risks of depending on AI in the legal sphere.
Earlier, in December 2024, the Bengaluru bench of India’s Income Tax Appellate Tribunal (ITAT) withdrew its earlier order against the private fund Buckeye Trust, which had held that the trust owed tax. That order had cited several Supreme Court and Madras High Court rulings to support the assessment.
None of which existed.
Large Language Model Hallucinations
In the past, The Secretariat has highlighted this troubling aspect of “hallucination” by large language model (LLM) generative AI, in which the AI fabricates information or produces factually incorrect or nonsensical output, especially when it is unable to find “real” information or citations.
Most of the high-profile instances of AI hallucination reported in the media so far involve wrong legal or media citations. But given the rapid adoption of AI in every sphere of work across India, and the still-evolving nature of the technology, there is every chance of the problem proliferating and affecting policy decisions in both government and corporate sectors, with widespread repercussions.
From time to time, the United Nations Educational, Scientific and Cultural Organisation (UNESCO) has issued AI guidelines for policymakers in individual countries.
GoI’s Moves: Two Steps Forward, One Step Back
In India, Niti Aayog hasn’t shied away from efforts to establish its own set of rules. It has, for example, set up the Centre for Responsible AI (CeRAI) at IIT Madras to create a primary policy document, while also involving Nasscom to bring in industry inputs.
The Ministry of Law and Justice is reportedly drafting comprehensive guidelines for AI use in legal proceedings, expected to be released soon. These guidelines will likely mandate human verification of all AI-generated legal research and prohibit direct citation of AI-sourced material in official documents.
However, the Government appears uncertain about how far, and in which direction, its AI usage restrictions should go. In February 2025, the Ministry of Electronics and Information Technology (MeitY) imposed a comprehensive ban on the use of AI tools in government offices, only to withdraw it through another order on March 15.
"We are frankly still studying the issue and would take more time to formulate a policy on this," said officials on condition of anonymity.
How AI Could Affect Policymaking
Within India’s corporate sector, the most vulnerable appears to be the finance industry, which has been one of the fastest to adopt AI.
While AI raises broad concerns about job losses and hiring biases, the financial sector is terrified that AI-generated false information on investments, market trends or regulatory requirements could lead to financial losses and compliance hassles.
"Although the financial services industry in general, and banks in particular, have been early adopters of AI, hallucinations by AI pose significant risks because of the massive spread of misinformation in the industry. This results in AI models generating outputs that do not correspond to input data, or are not grounded in reality," Indranath Mukherjee, Vice-President, AXA XL India, told The Secretariat.
He continued, "Such outputs, based on fabricated information, can result in financial losses to corporates as well as individuals. It is crucial for organisations to employ stringent human quality control measures, regularly validate AI models, and invest in research to mitigate the risks."
Meanwhile, the Finance Ministry has periodically expressed concerns about privacy, especially regarding financial data, warning that AI tools could be mining sensitive data.
Despite these concerns, the Indian government maintains its commitment to advancing AI development through its 'AI for All' initiative, which aims to harness AI benefits in sectors like healthcare, agriculture, and education.
Rajiv Kumar, who as Vice-Chairman of Niti Aayog helmed the preparation of the 2021 approach document “Responsible AI for All”, wrote in its foreword that the challenge for policymakers lies in framing and implementing AI principles that “balance innovation and governance of potential risks”.
Industry leaders suggest that the solution lies in developing better verification systems and increasing AI literacy among professionals. IT majors across the world and in India are now developing AI verification tools designed to detect hallucinations before they impact decision-making processes.
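As an illustration of what such a verification step might look like, here is a minimal, hypothetical sketch in Python that checks the citations in an AI-generated draft against a trusted index before they reach a filing. The KNOWN_CITATIONS set, the verify_citations function and the fabricated case name are stand-ins invented for this example, not any vendor's actual tool; a production system would query an authoritative source, such as an official court repository, rather than a hard-coded list.

```python
# Hypothetical citation-verification sketch (illustrative only).

# Stand-in for a trusted index; a real tool would query an official
# court repository or legal database instead of a hard-coded set.
KNOWN_CITATIONS = {
    "Kesavananda Bharati v. State of Kerala (1973)",
    "Maneka Gandhi v. Union of India (1978)",
}


def verify_citations(citations: list[str]) -> dict[str, bool]:
    """Mark each citation True if found in the trusted index, else False."""
    return {c: c in KNOWN_CITATIONS for c in citations}


if __name__ == "__main__":
    # A draft mixing one real citation with one fabricated (hallucinated) case.
    draft_citations = [
        "Maneka Gandhi v. Union of India (1978)",
        "Sharma Trust v. Commissioner of Income Tax (2019)",  # invented
    ]
    for citation, verified in verify_citations(draft_citations).items():
        status = "verified" if verified else "UNVERIFIED - route to human review"
        print(f"{citation}: {status}")
```

The design choice such tools share is simple: anything the index cannot confirm is not rejected outright but flagged for human review, preserving the human-verification step that the Law Ministry's reported guidelines would mandate.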
As both government and corporate sectors navigate these challenges, experts hope India's approach to AI governance will become a model for other developing economies seeking to harness AI's benefits while mitigating its risks.