Sun, Jul 06, 2025
At Anthropic's first developer event, held on May 22 in San Francisco, CEO Dario Amodei made several sweeping claims. The most alarming, that AI would destroy most entry-level jobs worldwide within five years, immediately became a global headline.
Less noted in media reports were his remarks on “hallucinations”, i.e., AI generating false information. He acknowledged that AI does hallucinate, but argued that this is not qualitatively different from human falsehoods, and that it happens less frequently than it does with human beings.
Hence, he said, hallucinations won't be a major barrier to achieving the holy grail of artificial general intelligence (AGI). He did admit, though, that AI lies with more “confidence”, misleading users more often.
The claim flew in the face of Anthropic's troubles with its own AI model, Claude. Last November, the company had to withdraw a court filing in California after it emerged that its advocate, misled by Claude, had submitted citations the AI had “cooked up”.
Indian R&D In AI Hallucinations
With the global hoopla over AI slowly giving way to tempered enthusiasm as hallucinations come into focus, a growing number of Indian AI startups have begun pivoting their R&D efforts towards robust tools and methodologies to detect and mitigate these hallucinations, aiming to build a foundation of trust for wider AI adoption.
The problem is not trivial. Studies suggest that large language models (LLMs) can hallucinate anywhere from 3 to 27 per cent of the time. But that's not all: in specific contexts, such as legal information, the rate can soar dramatically higher.
For Indian enterprises adopting AI in banking, healthcare, customer service and other sectors, the consequences of relying on inaccurate AI outputs are severe, ranging from compliance breaches and significant financial losses to damaged reputations and eroded customer trust.
"AI hallucinations are not just errors, but a feature of LLMs," noted Balakrishna D R, Executive Vice-President at Infosys, in a recent TechCircle report, highlighting that while useful creatively, these fabrications are unacceptable where factual accuracy is paramount.
Leading the charge are companies like Tredence, a data science and AI solutions firm with a significant presence in India. Tredence's AI Centre of Excellence, led by Director Ankush Chopra, has been actively researching and publishing methodologies to combat hallucinations.
Techniques Adopted By Startups
Their work explores techniques such as Retrieval-Augmented Generation (RAG), which grounds LLM responses in external, verified knowledge sources, alongside Natural Language Inference (NLI) and integrated gradient methods to evaluate consistency and detect deviations from factual data.
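In essence, RAG retrieves trusted passages first and instructs the model to answer only from them. A minimal sketch of the pattern in Python, with a naive keyword retriever standing in for a real vector store and a placeholder llm_complete client (both assumptions, not Tredence's actual stack), might look like this:

    def retrieve(query: str, index: dict, k: int = 3) -> list:
        """Naive keyword-overlap retrieval; a real system would use a vector store."""
        scored = sorted(
            index.items(),
            key=lambda kv: sum(w in kv[1].lower() for w in query.lower().split()),
            reverse=True,
        )
        return [text for _, text in scored[:k]]

    def answer_with_rag(query: str, index: dict, llm_complete) -> str:
        """Ground the LLM's answer in retrieved, verified documents."""
        context = "\n".join(retrieve(query, index))
        prompt = (
            "Answer ONLY from the context below. If the context is "
            "insufficient, reply 'Not found in sources.'\n\n"
            f"Context:\n{context}\n\nQuestion: {query}"
        )
        return llm_complete(prompt)  # llm_complete is a placeholder LLM client

The instruction to refuse when the context is insufficient is what turns retrieval into an anti-hallucination guardrail, rather than just extra input.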
Tredence emphasises a multi-pronged approach, combining prompt engineering, careful fine-tuning, and advanced validation techniques to enhance the reliability of LLM outputs for enterprise use.
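On the validation side, an NLI model can act as an automated fact-checker, testing whether the source material actually entails each generated claim. A sketch using an off-the-shelf MNLI classifier from the Hugging Face hub (the specific model is an assumption, not necessarily Tredence's choice):

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    MODEL = "microsoft/deberta-large-mnli"  # illustrative; any MNLI model works
    tok = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL)

    def is_grounded(source: str, claim: str, threshold: float = 0.8) -> bool:
        """Treat a generated claim as hallucinated unless the source entails it."""
        inputs = tok(source, claim, return_tensors="pt", truncation=True)
        with torch.no_grad():
            probs = model(**inputs).logits.softmax(dim=-1)[0]
        entailment = probs[model.config.label2id["ENTAILMENT"]].item()
        return entailment >= threshold

Claims that fail the entailment check can be suppressed, flagged for review, or regenerated with stricter grounding.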
However, the path to reliable AI is fraught with challenges, particularly in India. Vijayant Rai, Managing Director at Snowflake India, highlighted in an article that concerns over accuracy are a major hurdle, often restricting generative AI tools to internal use cases.
Key hurdles include ensuring data readiness, establishing robust governance frameworks, implementing effective AI guardrails, and overcoming a shortage of specialised expertise. Furthermore, the cost of infrastructure and the evolving, sometimes uncertain, regulatory landscape add layers of complexity, especially for startups.
The Scope Of The Opportunity
Nevertheless, the focus on tackling hallucinations presents a significant opportunity for the Indian AI ecosystem. Companies are exploring innovative solutions like automated reasoning, a technique championed by AWS that uses formal logic to verify AI outputs against predefined rules.
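AWS's automated reasoning checks are a managed service, but the underlying idea, encoding business rules as formal constraints and asking a solver whether an AI output violates them, can be illustrated with the open-source Z3 solver. The lending rule and figures below are hypothetical:

    from z3 import Solver, Real, sat

    def violates_policy(income: float, loan: float) -> bool:
        """Formally check an AI-suggested loan against a rule: loan <= 4x income."""
        inc, amt = Real("income"), Real("loan")
        s = Solver()
        s.add(inc == income, amt == loan, amt > 4 * inc)  # assert the violation
        return s.check() == sat  # sat: the violation is provable, so flag it

    # An AI reply recommending a 300,000 loan on a 50,000 income gets flagged.
    print(violates_policy(income=50_000, loan=300_000))  # True

Unlike statistical filters, a solver gives a yes-or-no proof against the encoded rule, which is why the approach appeals to regulated sectors.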
Infosys reports growing adoption of such techniques in India's BFSI (Banking, Financial Services, and Insurance) and healthcare sectors. Startups like Gupshup are also contributing, refining prompt engineering and fine-tuning methods, and designing models that explicitly acknowledge uncertainty rather than fabricating answers.
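In practice, the “acknowledge uncertainty” approach often comes down to prompt design plus a fallback route. Gupshup's actual prompts are not public; the wording, the llm_complete client, and the handoff stub below are illustrative assumptions:

    # Hypothetical system prompt; real deployments tune this per domain.
    ABSTAIN_PROMPT = (
        "You are a customer-support assistant. If you are not certain of an "
        "answer, say 'I am not sure' and offer to connect a human agent. "
        "Never invent order numbers, prices, or policy details."
    )

    def escalate_to_human(question: str) -> str:
        """Stand-in for a real handoff to a live-agent queue."""
        return f"Let me connect you to an agent about: {question}"

    def safe_answer(llm_complete, question: str) -> str:
        """Route uncertain replies to a human instead of risking a fabrication."""
        reply = llm_complete(system=ABSTAIN_PROMPT, user=question)
        if "i am not sure" in reply.lower():
            return escalate_to_human(question)
        return reply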
The consensus among experts is that while a universal solution remains elusive, a combination of advanced techniques, rigorous validation, and industry-specific reliability standards is crucial.
The push towards hallucination-free AI is not merely a technical pursuit; it's a strategic imperative for India's burgeoning AI industry. As enterprises increasingly rely on AI for critical decision-making, the ability to deliver trustworthy and reliable AI solutions will be a key differentiator.
By investing in research, developing specialised tools, and fostering expertise in areas like automated reasoning and RAG, Indian AI companies are positioning themselves to build trust and unlock the full potential of AI for businesses, domestically and globally.