[This post has been authored by Sanjana L.B., a 4th year student at Symbiosis Law School, Hyderabad.]
Introduction
In January 2021, India had the highest number of Facebook users at 320 million. This was followed by the United States of America (“USA”), with 190 million users. As of February 2021, about 53.1% of the population of Myanmar were active social media users. These numbers are indicative not only of internet penetration, but also of the audience for user-generated content on platforms like Facebook. This article focuses, first, on the need for content moderation on social media, by examining harmful precedents of inefficient moderation, and second, on the Indian Government’s approach to content moderation through the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Guidelines”) and recent developments surrounding the regulation of social media content in India.
The dark underbelly of social media: the need for content moderation
In 2018, reports began to emerge that Facebook, one of the largest social media platforms in the world, had become a tool for spreading hate speech and inciting genocide in Myanmar. From fake information aimed at inciting rage against minority communities to posts calling for ‘ethnic cleansing’, Myanmar was buckling under the weight of Facebook’s wide reach and its misuse in the country. Social media played a similar role in the violent “Stop the Steal” protests in the USA. Posts related to the label first emerged on Facebook and Twitter and, anticipating their removal, migrated to other platforms – such as MeWe – even before Facebook banned the group. Newer groups also emerged on Facebook using coded communication, bypassing content moderation tools almost too easily. India was not far behind: protests against the Citizenship (Amendment) Act, 2019 took a drastic turn when several social media accounts were created to incite hatred against minority groups. Soon after, several arrests were made, and the Government began voicing its concerns more vociferously and initiating action against misinformation, incitement and criminal activity on social media.
These are prominent examples, but the everyday numbers are no less troubling: in the third quarter of 2020 alone, Facebook took down 22.1 million pieces of hate speech content, 19.2 million posts containing graphic and violent content, and 3.5 million posts detected as bullying and harassment. These numbers are higher than those of the previous quarter, and the one before that: harmful content on social media is only on the rise. In the midst of this, several countries, including India, have stirred awake to the need for content moderation, the regulation of social media, and the task of balancing free speech against harmful content.
How do the Intermediary Guidelines fare in the war against harmful content?
The goal and the weapon
The Intermediary Guidelines aim to give content moderation both legal and timely force: unlawful content is to be taken down within a fixed deadline. Fair action and transparency have also been brought to the forefront. The Government’s approach to harmful content on social media is fueled by the rise in fake news, corporate rivalries, abusive language, defamatory and obscene content, disrespect to religious sentiments, ‘anti-national’ elements, threats to public order, and the like. In light of this, content moderation will take centre stage. Although the Intermediary Guidelines stipulate ‘actual knowledge’ as the trigger for removal – in the form of a court order or a notification by the Government or its agencies through an authorized officer – the principle is clear: the Indian Government is tightening the noose on harmful content on social media. But will the Intermediary Guidelines solve the underlying problems?
- Implementing content moderation tools
In each of the examples above, the underlying problem was not just the lack of regulation, but the lack of proper enforcement tools. Statements and conduct that incite violence, involve child pornography, or constitute other criminal offences have long been pursued by governments and law enforcement agencies; but the growing reach of social media, deeper internet penetration and a lack of sensitisation have made the problem almost impossible to handle (and trace). Social media platforms have also acknowledged their complacency, or at the very least, displayed inadequacy in their efforts to tackle harmful content. Rule 4(4) of the Intermediary Guidelines tries to remedy this by requiring significant social media intermediaries to “endeavour” to deploy technology-based measures and automated tools to detect and remove harmful content. But what about inciting content that does not meet thresholds of illegality? Or content that, as before, has escaped the scrutiny of both law enforcement and the intermediary? Do we then have a shield or merely a band-aid?
- Protection of fundamental freedoms
In April 2021, Twitter received notices from the Central Government to take down posts critical of the Government’s handling of the second wave of COVID-19 infections. The notices were issued under Section 69A of the Information Technology Act, 2000 (“IT Act”). Twitter partially complied with the Government’s orders by geo-blocking these Tweets from being accessed in India. Section 69A allows the Government to order intermediaries to take down information hosted by them on certain specific grounds – the interest of the sovereignty and integrity of India, the defence of India, the security of the State, friendly relations with foreign States or public order, or the prevention of incitement to the commission of any cognizable offence relating to these grounds. The provision has been widely criticised for the imbalance it creates between the power of the Government and the right to freedom of speech and expression.
While the Intermediary Guidelines sought to increase transparency in how social media platforms treat user content, they must be understood against the backdrop of the allegations of bias levelled against social media platforms by several politicians in India and abroad – such as Tejasvi Surya (Indian Member of Parliament), Donald Trump (former President of the USA), and Nicolas Maduro (President of Venezuela). These are examples of politicians calling for greater accountability from social media platforms when they act against user-generated content, and for the prevention of arbitrary censorship.
The use of this unfettered power under Section 69A of the IT Act essentially circumvented the procedural safeguards provided for content moderation under the Intermediary Guidelines, since the Guidelines do not interfere with the Government’s powers under Section 69A. While the Intermediary Guidelines encourage fairness and transparency, these safeguards become qualified when they do not apply to Government action. In such a scenario, when social media platforms moderate content on directions issued by the Government, procedural safeguards are lowered, and the underlying purpose of the Intermediary Guidelines may be lost.
In view of this, the weapon may not fully accomplish its goal.
Furnishing reinforcements to the battleground
Content moderation tools are necessary reinforcements in the constant battle against harmful content on the internet. Social media platforms such as Facebook have long recognized the need for greater investment in both technological and human resources to strengthen content moderation. Facebook and Twitter have also voiced their preference for automated moderation. The automation- and AI-based approach is considered smart, quick and cost-effective compared to hiring large numbers of human moderators. But if history is any indication, the efficacy of the technology is admittedly not quite there yet. It is not that social media platforms do not employ human moderators – they do. But there are not enough of them to keep pace with the number of users (and the volume of posts).
The UK’s Online Harms White Paper recognizes that unless automated tools are highly accurate in their identification, human review cannot be minimized.[i] This position can be rationalized further on the premise that human moderation can recognize contextual elements that AI tools cannot. In Myanmar’s case, Facebook expressly recognized the need to increase the number of content reviewers owing to the volatile local context in which communications were exchanged. For instance, a Facebook post intended to incite violence against Rohingya Muslims in Myanmar claimed that “cockroaches” need to be “trampled”; only a reviewer with an adequate understanding of the socio-political volatility surrounding the minority community in Myanmar would be able to discern the harm the post was capable of causing. Would a mere expression of dislike of an insect be recognized as ‘harmful content’ by an AI system? Not likely. Do we have AI tools capable of discerning contextual elements from individual pieces of content in a local set-up? Not yet.
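To make the limitation concrete, the following is a minimal, purely illustrative sketch in Python of the kind of context-blind filtering that automated moderation can reduce to. The keyword list and posts are hypothetical and do not represent any platform’s actual system; the point is only that a literal-match approach misses coded, dehumanising language while catching overt threats.

```python
# Illustrative sketch only: a context-blind keyword filter.
# The term list below is hypothetical, chosen for demonstration.
EXPLICIT_THREAT_TERMS = {"kill", "attack", "burn down"}

def naive_filter(post: str) -> bool:
    """Flags a post only if it contains an explicit threat term."""
    text = post.lower()
    return any(term in text for term in EXPLICIT_THREAT_TERMS)

# Coded, dehumanising language: no literal threat term appears,
# so the filter lets it through, even though a reviewer familiar
# with the local context would recognise the incitement.
coded_post = "These cockroaches need to be trampled before they spread."
print(naive_filter(coded_post))   # False -> slips through

# An overt threat is caught, because the literal keyword is present.
overt_post = "We should attack them tonight."
print(naive_filter(overt_post))   # True -> flagged
```

The gap between the two outcomes is precisely what a trained human reviewer, aware of how “cockroaches” functions as coded speech in the local discourse, is able to close.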
Unfortunately, problems multiplying on social media do not wait for technology to catch up. It is, therefore, suggested that the law require social media intermediaries to appoint human moderators in proportion to their user base in a jurisdiction or locality. This addresses two problems: the worrying volume of harmful content that a small number of people must review, and the erroneous moderation that results when moderators are unaware of the context in which posts are made. While this may initially increase compliance costs, it is essential in view of the remarkable penetration that social media platforms have achieved across the world. One way in which companies can comply with such a requirement while managing increased costs is by outsourcing moderation. For example, Genpact and Accenture act as third-party contractor firms that employ content moderators for Facebook. Such arrangements can be scaled to increase the number of human moderators in individual jurisdictions as well as to tackle the problem of contextual misinterpretation.
If human moderation is needed on the one hand, the proper training and protection of content moderators is needed on the other. Adequate training is important for two reasons – first, moderators must be trained to steer clear of acting as arbiters of truth and applying personal biases, and second, they must be trained in local languages, slang, and discourse. On the first aspect, the Intermediary Guidelines help, as they aim to enhance transparency in the process of content removal and account suspension by requiring significant social media intermediaries to notify users and give them an opportunity to be heard.[ii] The second aspect is left to social media platforms and their approach to content moderation. Human moderators on social media platforms have expressed concerns over a lack of support, training and assistance in actually reviewing content. There is an urgent need to remedy this, both to increase efficiency in moderation and to protect content moderators. It is also imperative that adequate psychological support and the protections available under a jurisdiction’s labour laws be ensured for moderators, owing to the Herculean (and, most often, traumatic) tasks they are employed to accomplish.
Conclusion
Social media platforms and user-generated content continue to grow. It is critical that approaches to content moderation be revisited to increase their efficacy. In doing so, it is also imperative to ensure transparency and fairness across all sources of online censorship, so as to achieve the ultimate objective: fostering free speech, deterring harmful speech, and ensuring accountability.
[i] Government of UK, Online Harms White Paper: Full Government Response to the consultation, Dec. 2020, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/944310/Online_Harms_White_Paper_Full_Government_Response_to_the_consultation_CP_354_CCS001_CCS1220695430-001__V2.pdf, p. 46.
[ii] Rule 4(8), Intermediary Guidelines.