Wake-Up Call For Law-Making On Deepfakes And Misinformation

Deepfakes are most commonly associated with malicious purposes like defamation, financial fraud, and spreading falsehoods. However, they also have positive uses, as seen in creative and civil applications.

The recent Bombay High Court decision, which struck down the provision enabling the Centre to set up a fact-check unit (FCU), offers insights into how the Government should regulate deepfakes and other misinformation. Deepfakes, or synthetically generated content created using artificial intelligence (AI), often mimic a person’s body, face, or voice. They are most commonly associated with malicious purposes like defamation, financial fraud, and spreading falsehoods.

However, deepfakes also have positive uses, as seen in creative and civil applications. A Brazilian artist, for instance, has used them for political satire. In the United Kingdom, the NGO Malaria No More joined forces with an AI company to create a short film featuring David Beckham speaking out against malaria. Deepfake technology was used to sync Beckham’s lip movements to voices delivering the message in nine languages, widening its reach.

This potential for both good and ill makes the regulation of deepfakes analogous to the challenge the Bombay High Court confronted with the Government’s FCU.

The FCU was meant to identify fake or misleading information online relating to the work of the central government, and to notify digital platforms such as social media companies to take down such content. If platforms failed to comply with such orders, they would be held liable for the content in question.

The rule could potentially have enabled the removal of politically inexpedient speech on the pretext of fighting misinformation. It was therefore challenged on the ground that it threatened to stifle political satire and other forms of free speech.

Regulating Deepfakes

Deepfake regulation is prone to the same perils that surrounded the establishment of the FCU. In some ways, it presents a trickier conundrum. Illustratively, many of the so-called deepfakes taken down during the 2024 Indian election season were not, in fact, synthetic content.

For instance, a report on the 2024 general election found that a majority of the falsified content posted online was not made using AI. One fact-checker even indicated that deepfakes accounted for only about one per cent of misinformation.

If checks and balances are absent, as in the case of the FCU, a future deepfake law could be similarly misused to take down real images or videos. In addition, such a law will almost certainly face a constitutional challenge if it is not mindful of judicial precedent on free speech, and of the Bombay High Court’s decision in particular.

Based on last week’s developments, there are three principles that any future law on deepfakes (and indeed on online misinformation) should consider. First, the right to free speech does not encompass a right to truth.

The corollary here is that the State cannot sit as the arbiter of truth. The conclusion is intuitive. As scientific discovery progresses and our understanding of the world evolves, newer facts can challenge existing thought orthodoxies.

If people were entitled only to the truth, it would rule out the publication of satire and fiction. The absence of a right to truth accommodates our varied belief systems.

Second, restrictions on speech, even if such speech is false, cannot be vague and must fall within the contours of the permissible restrictions on speech under Article 19(2) of the Constitution. Deepfakes cannot be restricted on arbitrary grounds of being false information.

Restrictions on them must relate to the country’s sovereignty, national security, diplomatic relations, public order, morality, decency, contempt of court, defamation, or incitement to an offence.

The perimeter set by Article 19(2) is broad. In Kaushal Kishor v State of Uttar Pradesh, the Supreme Court held that the restrictions under Article 19(2) were comprehensive enough to account for any attack on an individual, groups, society, the judiciary, the State, or the country. And it is not open to the State to add to the list of grounds under Article 19(2). Therefore, content cannot be taken down merely because it is a ‘deepfake’.

Third, any law seeking to restrict deepfakes must be proportional. The test of proportionality ensures that there are sufficient guardrails in place to keep any restriction on a fundamental right from overstepping its bounds. 

The Bombay High Court struck down only part of a wider legal rule tackling misinformation, under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

The remainder of the rule prohibits any intentional communication of misleading or patently false information. While it ostensibly checks deepfakes, the rule could also be put to the same ends at issue in the FCU case, such as the suppression of political parody.

The Bombay High Court’s decision bolsters the case against this provision, which has been challenged in a separate case.

The Court must be lauded, not only for checking the arbitrary exercise of State power but also for enabling others to stand on its shoulders and do the same.

Separate Law On Deepfakes?

An open question is whether the Centre can bring out a separate law on deepfakes, to the exclusion of other online falsehoods. Deepfakes are technically distinct from other online misinformation.

However, a 2021 study by researchers at Harvard shows that they are no more convincing than other forms of fake media. Moreover, they seemingly constitute just a small part of the problem. Is it then justified to single deepfakes out without instituting wider reform on misinformation, both online and offline?

India typically addresses online misinformation through reactive rule-making, which is why such measures so often fall short on constitutionality. Here again, we must thank the Bombay High Court for setting a precedent that will, hopefully, encourage the State to be more thoughtful about measures aimed at restoring trust in our information landscape.

(The author is Director of the Esya Centre, a think-tank focused on emerging technologies and a member of the AI Knowledge Consortium. Views are personal.)
