
Rethinking Digital Governance: Beyond Platform Responsibility to Holistic Online Harm Reduction

Today, governments across the world, including India, are trying to find ways to dilute the safe harbours granted to digital companies so that they take greater responsibility for the harms occurring on their platforms.

On January 31, the CEOs of prominent social media companies Meta, Discord, Snapchat, TikTok and Twitter (now X) were called to testify before the US Senate Judiciary Committee. The hearing was held in response to rising reports of these platforms being used to sexually exploit children, and much of the CEOs' testimony revolved around their companies' ongoing efforts to bolster trust and safety.

The Senators, however, were unmoved, suggesting that platforms were not doing enough and, in some cases, were willfully putting children in harm's way. They lamented that families whose kids had been subjected to sexual exploitation and other harms on these platforms currently have little to no recourse. The principal reason is that Section 230 of America's Communications Decency Act grants these companies immunity from liability for user activity on their platforms.

Since the nineties, technology companies have enjoyed limited immunity from liability for user-generated content on their platforms. These provisions, known as safe harbours, were necessary to ensure the success of the technology platforms that populate our digital economy today.

Had these protections been absent, Twitter/X, Facebook, Snapchat and other similar intermediaries would have shut down before they ever took off. Back home, take the case of Guruji.com, a search engine founded by two IIT-Delhi graduates in 2006 to cater to the Indian context. Its popularity soared on the back of its 'music search' feature, until a copyright infringement case filed by T-Series led to the arrest of Guruji.com's CEO and eventually forced the startup to shut down. Guruji.com enabled users to find music across different websites, but had no way to exclude sites that hosted pirated content.

Today, governments across the world, including India, are trying to find ways to dilute the safe harbours granted to digital companies so that they take greater responsibility for the harms occurring on their platforms. However, these approaches have several shortcomings, not least because they tend to suggest that harms result almost solely from platforms not doing enough to mitigate them.

It is likely true that platforms have not gone as far as they need to in protecting users, particularly more vulnerable groups, from online malfeasance.

However, centring the debate on these entities and their inaction overlooks the complex nature of online harms and how they occur. It also makes the platform the arbiter of right and wrong online, which is problematic because a platform will almost always serve its own interests before the public good.

Emerging legislation diluting safe harbours also focuses on harm pre-emption rather than mitigation. The European Union's Digital Services Act (DSA), the United Kingdom's Online Safety Act (OSA), and the US Kids Online Safety Act (KOSA), the proposed legislation at the centre of the Senate hearing with the big tech CEOs, all fall into this bucket.

Their objective is laudable, as online harms, particularly those affecting children, can result in self-harm and suicide. However, by making prevention a regulatory objective, these laws create further policy problems, such as unchecked surveillance of users and the throttling of free speech.

Illustratively, under the OSA, a regulator can require a messaging service to use accredited technology to prevent individuals from viewing terrorism content. Consequently, the service would have to either break its end-to-end encryption, leaving its network and users vulnerable, or scan messages before they are sent. Since completely preventing certain types of content is impossible, the trade-offs that come with attempting to do so may not be worth it.

Another prominent theme common to these emerging laws is user empowerment: lawmakers want users to control the kind of content they engage with online. The KOSA requires platforms to give minors the ability to opt out of personalised recommendation systems and to limit the categories of suggestions those systems serve. The provision is presumptuous because it assumes that minors are both capable of and willing to tailor their recommendations in a way that serves their best interests, and that they are not drawn to more dangerous content online.

Rights advocates like the Electronic Frontier Foundation (EFF), a prominent US-based non-profit focused on civil liberties, argue that lawmakers should instead enact laws addressing competition and privacy issues rather than dilute safe harbour provisions. The EFF contends that regulation along these lines may force platforms to innovate and give users a wider range of platforms to choose from. However, there is little evidence to indicate that such a scheme will work.

The European Union introduced comprehensive privacy legislation, the General Data Protection Regulation, over six years ago, and scholars argue that it further entrenched the position of large platforms. The EU has now introduced the Digital Markets Act, a competition law that aims to engender greater fairness and contestability in digital markets. It is uncertain, however, what impact this law will have on illegal content in the short and long term.

In India, the government is considering a law to tighten controls on online intermediaries and reduce the level of immunity they currently enjoy. The rising volume of child sexual abuse material (CSAM) is a significant reason for this proposed move. According to an advisory issued by the National Human Rights Commission in 2023, of the 32 million reports received by the US-based National Centre for Missing and Exploited Children, 5 million pertained to CSAM uploaded from India.

However, before it treads down the path of diluting safe harbour provisions, the government must study the impact of measures introduced in other jurisdictions.

Lawmakers must avoid provisions that have failed elsewhere or that lead to other market failures, such as over-censorship by platforms. Moreover, they need to expend greater effort analysing how these harms occur and the extent to which victims are active participants. In such cases, particularly in the context of children, limiting the extent of victim participation may be key.

The KOSA includes a provision that requires platforms to limit the ability of others to contact minors. However, there should be an added restriction on the ability of minors to get in touch with strangers.

There is no perfect way forward on intermediary governance. However, it is certain that a singular focus on making platforms more responsible and increasing their compliance costs is not a recipe for success. Lawmakers must widen their gaze, work backward from incidents of online harm, isolate their root causes, and focus on measures that will limit their occurrence, ensuring a safer digital environment for all.

(Meghna Bal is Head of Research at Esya Centre, a technology-policy focused think-tank. Views expressed are personal)

