Mon, Apr 28, 2025
In December 2023, after more than 150 years, India overhauled the legal framework for its criminal justice system. Parliament replaced the Indian Evidence Act, 1872 (IEA) with the Bharatiya Sakshya Adhiniyam, 2023 (BSA). It also enacted the Bharatiya Nyaya Sanhita, 2023 and the Bharatiya Nagarik Suraksha Sanhita, 2023 to replace the Indian Penal Code, 1860 and the Code of Criminal Procedure, 1973 respectively. This overhaul has implications for India’s transition to a digitally empowered society.
Prime Minister Narendra Modi has described the 2020s as India’s ‘Techade’, a decade to be marked by the increased adoption of emerging technologies, with India leading that adoption. Emerging technologies such as AI hold the potential to boost productivity and efficiency across sectors, including the criminal justice system. AI-generated outputs could also be used as evidence in judicial proceedings or help law enforcement agencies in investigations.
For instance, AI algorithms can enhance the forensic analysis of evidence such as fingerprints and DNA samples, AI-powered emotion-detection technology can assess the emotional state of suspects, and recordings produced by AI-based voice-recording software can be tendered as evidence in legal proceedings.
There are complexities involved in using AI-generated outputs as evidence in a trial. It is important to assess whether the BSA is adequately equipped to address the admissibility of such evidence.
Examining the BSA
Under the BSA, an output generated by AI will most likely be classified as ‘digital’ or ‘electronic evidence’. The BSA defines documents and documentary evidence to include electronic and digital records, and the illustrations to the definition of ‘document’ include electronic records such as e-mails, server logs, and documents stored on computers, laptops, or smartphones.
Notably, the changes brought through the BSA appear largely cosmetic, especially with regard to digital/electronic evidence: rather than making a substantive change, the BSA merely formalises existing practices under the erstwhile IEA.
Documentary evidence, including digital/electronic evidence, may take the form of ‘primary evidence’ or ‘secondary evidence’. For context, primary evidence is considered the best evidence that can be presented in a trial without supporting material, whereas secondary evidence requires separate authentication.
There are problems in treating AI-generated evidence as primary evidence. These are encapsulated in the ‘black box’ problem, where the reasoning behind an AI system’s predictions or decisions is difficult to understand. While evaluating responses generated by ChatGPT, Justice Prathiba Singh of the Delhi High Court noted that the “accuracy and reliability of AI-generated data are still in the grey area.”
This, along with other potential problems such as inaccuracies and hallucinations, makes it difficult to establish the credibility of AI-generated output as primary evidence.
There are complexities in treating AI-generated evidence as secondary evidence too. To be admissible as ‘secondary evidence’, digital evidence must be ‘authenticated’ through a certificate (under section 63) signed by a person ‘in charge of the computer or communication device’ and by an expert.
This presents challenges given the nature of AI systems, which involve multiple contributors performing different tasks: collating and analysing data, training AI models, developing model techniques and algorithms, testing and evaluating models, and so on.
AI systems are also complex and often self-learning, which can make obtaining authentication certificates a cumbersome task.
Moreover, it may be difficult to clearly explain the functioning of AI systems, especially those involving deep learning or other advanced machine-learning techniques.
Crucially, we are only in the early stages of the development of AI systems. Their evolving nature raises concerns about the suitability of section 63, which borrows from section 65B of the older Evidence Act and was perhaps designed with more traditional forms of electronic records in mind (such as optical or magnetic media, as opposed to semiconductor-based flash storage), to effectively address the intricacies of AI-generated evidence.
Do Foreign Approaches Offer Any Guidance?
In both the US and the UK, for evidence to have high probative value, it must be relevant and reliable (authentic). Evidence may become inadmissible if its usefulness is substantially outweighed by drawbacks such as the risk of unfair prejudice, misleading the jury, or wasting the court’s time.
Thus, the legal standard appears broadly similar across the two jurisdictions.
Neither the US nor the UK has, to date, arrived at a conclusive method of authenticating the reliability of AI systems whose outputs are offered as evidence at trial. For instance, the current authentication method in the US is to call the developer or creator of the system to testify about it, which is similar to the Indian requirement under section 63 of the BSA.
The Path Ahead
There needs to be a concerted multi-stakeholder effort, involving technologists, the judiciary, policymakers, civil society, and policy experts, to create a framework that ensures the credibility and reliability of AI-generated evidence.
As AI continues to advance, and for it to be truly useful to the justice system, policymakers may need to arrive at a more adaptive legal framework to address the unique authentication challenges posed by AI-generated evidence. Such a framework will also need to account for the proprietary rights (IPR) of different stakeholders (developers, contributors to training models, etc.).
Upcoming sectoral legislation (such as the Digital India Act) may be a good place to address some of the concerns that accompany AI technologies. For instance, mandating the adoption of responsible AI principles such as safety and reliability, inclusivity and non-discrimination, equality, privacy and security, transparency, accountability, and protection and reinforcement of positive human values (NITI Aayog, 2021) could be a good first step.
Additionally, AI systems that might be relied on for evidentiary purposes could be classified as ‘high-risk’ systems and subjected to heightened transparency and explainability obligations.
Building an enabling legal framework is only part of the solution. Given the role that judges and lawyers play in trials, there needs to be greater sensitisation among both the Bar and the Bench on emerging technologies like AI, especially their limitations and opportunities. Parties to a case also need access to technical experts who can help them evaluate the nature of the technologies involved.
Alternatively, the government may set up an independent expert regulator to authenticate and evaluate AI systems used for evidentiary purposes [this follows a similar suggestion made for the UK by Lord Sales, a Justice of the UK Supreme Court].
(The authors are lawyers with Ikigai Law, a Delhi-based tech-focused law and policy firm. Views expressed are personal)