[This post has been authored by Riya Sharma and Atulit Raj, second-year students at the Institute of Law, Nirma University.]
Introduction
In the modern age, social media has become an essential platform for public discourse. A nation flourishes when the voices of its people are not only heard but listened to. Yet the very voices that would bring about change are being silenced in the status quo: tweets seeking help during the COVID-19 crisis, journalists expressing their opinions on demonetization, and the profiles of activists blocked for posting against the Farm Bills.
It is imperative that social media companies facilitate users' free expression to the greatest possible extent. However, given advertisers' concerns and government policing, content moderation remains the best available solution. An output-opacity-based remedy that presents itself is shadow-banning; this article examines the practice and assesses whether the current legal framework is sufficient to regulate artificial intelligence in relation to content moderation.
Shadow-Banning: The What of Moderation, Not the Why
An important part of content moderation is content visibility, which tends to be governed by user behaviour intertwined with machine-learning optimization algorithms; these systems rank users and content by adjudging their acceptability. Major social media platforms turn to shadow-banning as an alternative remedy because it does not cut off access to content entirely but instead makes it less visible to other users.
Shadow-banning is the practice of hiding a user's posts or activities from other users without officially banning or notifying the user. Shadow-bans differ from other forms of content moderation in that they are never announced, which allows platforms to deny that they were ever instituted. The result is fewer interactions with the account and lower post visibility.
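To make the mechanism concrete, here is a minimal, purely illustrative sketch of how a ranking pipeline might implement a shadow-ban: the content is never removed, only silently down-weighted in the score that orders other users' feeds. All names and values, including the suppression factor, are hypothetical assumptions and do not describe any platform's actual system.

```python
# Hypothetical sketch of a shadow-ban inside a feed-ranking pipeline:
# the post is never deleted; its visibility score is silently suppressed.
from dataclasses import dataclass

@dataclass
class Post:
    author_id: str
    engagement_score: float  # relevance predicted by the ranking model

SHADOW_BANNED: set[str] = {"user_42"}  # flag set by a moderation classifier
SUPPRESSION_FACTOR = 0.05              # assumed down-weighting, not a real value

def visibility_score(post: Post) -> float:
    """Return the score used to order posts in other users' feeds."""
    score = post.engagement_score
    if post.author_id in SHADOW_BANNED:
        # The author still sees their own post normally; only the score
        # determining exposure to *other* users is reduced.
        score *= SUPPRESSION_FACTOR
    return score

feed = [Post("user_42", 0.9), Post("user_7", 0.4)]
for p in sorted(feed, key=visibility_score, reverse=True):
    print(p.author_id, round(visibility_score(p), 3))
```

Because the author's own view is untouched and no notice is sent, the suppression is invisible from the inside, which is precisely what makes shadow-bans deniable.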
Dissecting Algorithmic Governance
Shadow-banning, as a matter of policy, remains undisclosed to the affected users; allegations of shadow-banning therefore tend to be dismissed by social media platforms even though they do employ visibility restrictions. There is a direct link between shadow-banning and machine-learning tools, which play a major role in siphoning off spam content in accordance with those restrictions. Recognizing the bias in algorithms sheds much greater light on how much reliance can be placed on such tools, and that bias can have catastrophic implications. For instance, automated risk assessments adopted by US judges to determine bail and sentencing resulted in longer prison sentences or higher bail for people of colour.
Although social media platforms have stated that non-pornographic nudity is permitted in fine-art posts, an algorithm is likely to remove content depicting nudity, dark-skinned bodies, fat bodies, female bodies, or LGBTQ content. The algorithms are said to be designed to prioritize a user's interests. For instance, Facebook implemented a secret programme known as 'XCheck' that allowed some users (mostly high-profile accounts) to post content that violated the company's policies. While the intentions may be good, the algorithms are evidently flawed, and the time has come to address the issue of AI's control over social media content.
Is The Current Legislation Ready, Or Lagging?
There is no law governing shadow-banning or artificial intelligence in India; the Information Technology Act, 2000 is similarly silent on both. Although it was opined that the Digital Personal Data Protection Bill, 2022 ("the DPDP Bill") would include provisions tackling shadow-banning, the recently released draft fails to provide users any legal recourse against such practices by social media platforms. With no redressal mechanism in India, affected users are left without a remedy. Furthermore, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 touch upon social media regulation, but only insofar as it relates to the removal of offensive content; they make no mention of platforms' arbitrary restrictions on content visibility.
The problem of shadow-banning is not limited to India but is a global concern. The European Union's Digital Services Act ("DSA") contains measures against such unreasonable content concealment by social media intermediaries. Article 14 DSA is a partial solution for shadow-banning: it demands that platforms codify their content moderation rules in "clear and unambiguous language", and that the disclosure "shall include information on any policies, procedures, measures, and tools used for the purpose of content moderation, including algorithmic decision-making and human review". Article 17 DSA demands that each moderation action be accompanied by a 'Statement of Reasons' to the affected user, which should include: "(1) the measure taken; (2) the legal or contractual violation that this measure responds to; (3) the facts and circumstances relied on in taking the decision; (4) information on the role of automated decision-making in this action; (5) whether or not the measure was taken in response to a third-party notice; and (6) the user's possibilities for redress". Article 66 DSA makes platforms liable to give reasons for unjustified restrictions of content visibility.
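To see how Article 17's six requirements might translate into practice, the following is a rough sketch of a machine-readable 'Statement of Reasons' record. The field names and example values are our own illustration and are not prescribed by the DSA.

```python
# Illustrative only: a record capturing the six elements Article 17 DSA
# requires in a 'Statement of Reasons'. All field names are hypothetical.
from dataclasses import dataclass

@dataclass
class StatementOfReasons:
    measure_taken: str            # (1) e.g. a visibility restriction
    violation: str                # (2) legal or contractual ground
    facts_and_circumstances: str  # (3) basis of the decision
    automated_decision: bool      # (4) role of automated decision-making
    third_party_notice: bool      # (5) whether triggered by a notice
    redress_options: list[str]    # (6) user's possibilities for redress

sor = StatementOfReasons(
    measure_taken="visibility restriction (demotion in feed ranking)",
    violation="platform terms of service: spam",
    facts_and_circumstances="repeated identical posts flagged by classifier",
    automated_decision=True,
    third_party_notice=False,
    redress_options=["internal appeal", "out-of-court dispute settlement"],
)
```

The point of such a record is that a visibility restriction can no longer be silent: every demotion would leave a user-facing, auditable trail.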
Section 230 of the United States' Communications Decency Act, which addresses the responsibility of social media platforms for posted content, is the subject of three proposed bills attempting to tackle shadow-bans: the "Ending Support for Internet Censorship Bill", the "Biased Algorithm Deterrence Bill", and the "Algorithmic Accountability Bill". The first two argue that social media corporations should be required to loosen their content moderation policies, while the third contends that they should be given more latitude to moderate further.
In light of these developing laws around the world, India should adopt a similar approach and develop a law against unjustified restrictions on content visibility.
Conclusion, With A Hope For New Beginnings
The issue of shadow-banning necessitates a judicial process of careful legal reasoning, as well as an industrial process that occurs at massive scale via standardized routines and procedures. Firstly, the DPDP Bill holds great potential to address the lacuna of legal recourse for affected users: the Bill lists certain definitions of the term "harm", one of which is "distortion … of identity". Furthermore, the DPDP Bill could incorporate the Joint Parliamentary Committee's recommendations of adding "psychological manipulation that impairs the autonomy of the individual" and "any observation or surveillance not reasonably expected by the data principal" to the list of 'harms'.
Secondly, quoting the Founder-CEO of QuiGig: "AI [currently] has no ability to think out of the box. It only acts based on prior data and would not have an answer [to] new unique circumstances." It is therefore suggested that an AI trained to classify large amounts of data into different categories should handle generic cases and leave the "gray areas" to human moderators, who bring "expertise, empathy, and contextual knowledge to judge" complex cases, thereby reducing the scope for bias. Lastly, moderation can be defined as "the governance mechanisms that structure participation in a community to facilitate cooperation and prevent abuse"; users should therefore be provided a mechanism to give feedback to the AI-powered classification system by suggesting characteristics to include or exclude when deciding whether to shadow-ban content. The suggested solutions are merely stepping stones towards appropriate reliance on vastly potent AI-based technologies for the public good; the sketch below illustrates one possible shape of such a human-in-the-loop pipeline.
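As a rough illustration of this proposed division of labour, the sketch routes high-confidence classifications to the model, defers low-confidence "gray areas" to a human moderator, and logs user feedback for retraining. The threshold, stub functions, and all names are assumptions made for illustration only, not any platform's actual design.

```python
# A minimal sketch of the proposed human-in-the-loop moderation pipeline.
# Assumed threshold and stand-in functions; real systems use trained models.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off; would be tuned in practice

@dataclass
class Decision:
    label: str        # e.g. "allow" or "restrict_visibility"
    confidence: float
    decided_by: str   # "model" or "human"

def classify(text: str) -> tuple[str, float]:
    """Stand-in for a trained content classifier."""
    return ("restrict_visibility", 0.62) if "spam" in text else ("allow", 0.97)

def human_review(text: str) -> str:
    """Stand-in for a human moderator's judgment on a gray-area case."""
    return "allow"

def moderate(text: str) -> Decision:
    label, confidence = classify(text)
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(label, confidence, decided_by="model")
    # Gray area: route to a human who brings expertise and context.
    return Decision(human_review(text), confidence, decided_by="human")

feedback_log: list[dict] = []

def record_user_feedback(post_id: str, suggestion: str) -> None:
    """Let users suggest characteristics to include or exclude;
    the log feeds the classifier's next training round."""
    feedback_log.append({"post_id": post_id, "suggestion": suggestion})

print(moderate("buy cheap spam now"))   # low confidence -> human review
print(moderate("fine-art photograph"))  # high confidence -> model decides
```

The design choice worth noting is that the confidence threshold, not the label itself, decides who decides: the model never takes a consequential visibility action on a case it is unsure about.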