[Nandini Sharma is a fourth-year student at Rajiv Gandhi National University of Law, Punjab. This article argues that India’s legal framework fails to hold political actors accountable for deliberately manipulating algorithms to spread disinformation, instead placing disproportionate liability on platforms. The author proposes a listener-centric approach to freedom of expression that recognizes citizens’ rights to an undistorted informational environment and traces liability back to the human actors behind computational propaganda.]
Introduction
The use of algorithmic manipulation to project distorted political narratives has taken a disastrous turn in the online informational space, especially after the rise of deepfakes, which add a new weapon to the existing arsenal of informational manipulation. The present work highlights instances of disinformation in the political sphere where the actors involved aim to distort the informational environment and steer public behaviour towards their political gains. It explores the question of tracing the liability of actors involved in planned political manipulation under Indian internet regulations and suggests a shift towards demanding their accountability. In this context, the term ‘political actors’ refers to political leaders and individuals involved in their publicity functions. Arguably, the current legal framework lacks a human agency model to determine liability in cases of algorithmic amplification, resulting in platforms being disproportionately held liable while the actual actors involved are exempted. The article makes a case for a listener-centric approach to freedom of expression to ensure that the actual actors involved in the dissemination of political disinformation are held accountable.
The Trend Of Political Disinformation: Deepfakes And Algorithmic Manipulation
The manipulation of public opinion has evolved in phases and gradations alongside technological growth. The notorious Cambridge Analytica scandal exposed the fragility of digital spaces in preventing behavioural profiling and its potential impact on individual choices in social settings. Several other cases, such as TikTok directing search results towards right-wing content, posed intertwined questions about the limits of online expression amidst technological advancements.
The rise of deepfakes makes the informational environment more prone to manipulation owing to their deceptive tendencies, which makes the regulation of political disinformation all the more necessary. The recent circulation of Trump deepfakes for targeted campaigning presents a prime wake-up call for legislators around the globe. Political actors have always attempted to sway public opinion in their favour; however, the technological tools through which such practices are facilitated today present significant challenges to the informational self-determination of individuals. The rise of ‘softfakes’, in which a political actor is portrayed according to the expectations of the targeted demographic, is a prime example. Deepfakes have also given rise to a new portmanteau, ‘slopoganda’, describing political manipulation through generative AI. While these new slang terms capture the increasing popularity of deepfakes, the gravity of the harm these trends pose to an individual’s informational rights, especially during elections, extends far beyond slang. Notably, these forms of speech are often misleading and shared with malicious intention, and can therefore be associated with disinformation.
Most recently, deepfakes have been brought under the regulatory ambit through amendments imposing mandatory requirements for synthetic content. Notably, deepfakes present only one side of the coin, i.e., the ‘what’ of harmful content; the question of ‘how’ such content spreads remains relevant in determining the liability of the actual actors, and it can be addressed only by shifting the regulatory approach to harmful content.
Tools Of Algorithmic Manipulation
Amidst the rise of diverse mechanisms in content management and personal data capitalisation, numerous tools are deployed to strategise content dissemination, such as bots, algorithmic information curation systems (AICSs), profile optimisation, organised trolls, and account buying, among others. Actors have negatively harnessed the inherent tendencies of algorithmic systems to propagate divisiveness and polarisation and to divert individual attention away from crucial public issues.
The profit-maximising nature of social media platforms requires the constant utilisation of recommendation algorithms, which essentially means exposing an individual to content that can produce maximum user engagement. This tendency has given rise to filter bubbles, which create the illusion that the content an individual consumes is sufficiently representative of the available information, thereby strengthening confirmation bias. This practice creates a vicious loop that prevents individuals from trusting diverse sources of information and opposing viewpoints. Filter bubbles have thus become one of the most prominent mechanisms for narrowing the opinions of citizens. Political disinformation travels to a large audience within a few hours because of its emotional charge and leaves an unprecedented impact on the political realities of society.
It must be noted that targeted content delivery is not inherently harmful to the informational environment in a commercial setting. However, the same is not true for the online expression of political narratives: platforms have become an important avenue for the exercise of public rights by both the audience and the speaker, and manipulative political expression therefore gambles with the rights of both. Notably, such manipulative expression undermines the informational accuracy and diversity of viewpoints recommended by the UNESCO Recommendation on the Ethics of AI.
It is contended that citizens’ informational self-determination holds greater value than the political interests of a few individuals. Therefore, algorithmic manipulation should be dealt with by a regulatory hand to safeguard ancillary rights, such as the right to be informed and the right to think freely, which facilitate the public’s right of expression over the internet.
Choosing The Real Culprit: Algorithms Or Humans Behind The Distorted Informational Environment
More often than not, the black-box nature of algorithms, coupled with platforms’ profit-maximising tendencies, has been blamed for the dissemination of harmful content. However, by shifting liability towards fast-growing technological tools that are difficult to decode, policy-makers often ignore the background actors involved in distorting individuals’ informational environment.
Social media algorithms have been designed to maximise user interaction through content engagement; therefore, some scholars have contended that although the role of algorithms in disseminating harmful content is significant, it is often complemented by human manipulation. Studies have shown that religious and social identities also affect exposure to specific forms of harmful content. Political actors exploit the audience’s existing biases by tailoring their content to these identities through personalisation, often with a manipulative intent to create a long-term impact on individuals’ psyches. Such practices have been collectively termed computational propaganda, which describes the use of algorithms, automation, and human curation to intentionally distribute deceptive information over social media networks. The Computational Propaganda project at the Oxford Internet Institute, University of Oxford, presents the disturbing reality of these tools for manipulating public opinion.
Notably, when human intention is significantly involved in such scenarios, it should not be ignored in determining liability and demanding accountability. The role of human agency behind algorithmic functioning was highlighted by the Karnataka High Court, albeit in a different context, where the court undertook a comprehensive constitutional scrutiny of the validity of information-blocking orders under the ancillary provisions of the IT Act and related rules.
Listener-Centric Approach: Towards Finding Liability For Manipulating Minds
The Indian approach is ill-suited to regulating algorithmic manipulation because of two primary issues: the inadequacy of concepts such as curated content, and the limits of due diligence requirements in tracing the liability of the actual actors.
Notably, content curation involves a degree of editorial control in arranging content. The IT Rules recognise only curation that is professionally driven and deployed on an organised scale. The term ‘publisher of online curated content’, as defined under Rule 2(u) of the Intermediary Guidelines, 2021, is primarily meant to cover OTT platforms such as Netflix and Disney+ Hotstar. Viewed legally, in cases of algorithmic manipulation the algorithm exercises a limited editorial function by curating content, while political actors play a background role in influencing that curation through the methods discussed above. The limited concepts under the Indian legal landscape therefore fail to capture this complexity and leave the challenge posed by computational propaganda unaddressed. Against this backdrop, attention to listeners’ rights against disinformation and manipulation provides a theoretical foundation. It requires creating an informational environment that regulates deception and manipulation and advances the purpose of free expression, i.e., effective participation and open discourse.
In India, harmful content in online spaces is primarily regulated by imposing due diligence standards on intermediaries, which essentially requires active moderation from their end. In particular, the legislative framework addressing disinformation remains fragmented and fraught owing to arbitrary standards for determining the factual veracity of content. The due diligence standard focuses on preventing further harm but fails to trace the liability of malicious actors.
The listener-centric interpretation of the freedom of expression emanates from Lamont vs Postmaster General, where the US Supreme Court declared unconstitutional a law that required individuals to obtain the state’s permission to access particular information. Furthermore, in another landmark judgement assessing broadcasting regulations that restricted critical speech, the court preferred listeners’ rights over those of broadcasters.
Interestingly, the listener-centric approach was indirectly applied in Union of India vs Motion Picture Association. The court upheld the mandatory requirements under the Cinematograph Act and allied rules, which required cinema owners to screen short educational and scientific documentaries to encourage informed decision-making by citizens.
To give the listener-centric approach to freedom of expression a substantive formulation, it is essential to draw inferences from the existing understanding of privacy.
Importantly, the Supreme Court in KS Puttaswamy vs Union of India has already recognised the principles of informational self-determination and informational privacy as protecting individual autonomy and dignity against state interference. The findings remain relevant for the internet space, as the court diluted the public-private space dichotomy in recognising that privacy protects personhood and not a specific place. On closer examination, informational privacy protects not only against the initial collection of data but also against its resultant consequences, including a distorted informational environment arising from algorithmic manipulation.
In this respect, two issues require consideration. First, there is a high probability that the informational environment will be over-regulated in the name of protecting listeners from harmful content and manipulation; second, determining manipulation in the informational environment is a subjective rather than a scientific exercise. Even if both the intermediaries and the individuals benefiting from the manipulation are subjected to proportionate liability, regulating manipulative tactics may be treated as a prior restraint on speech and rejected by the constitutional courts.
Conclusion
India’s regulatory approach to harmful content on social media has so far emphasised the intermediary liability model, particularly for algorithmic functioning. However, the current wave of political disinformation and manipulative practices on social media platforms calls for recognising the responsibility of the actors who have intentionally utilised algorithmic curation in their favour, whether through tailored dissemination or the creation of emotionally charged content. The listener-based approach addresses distorted informational environments on social media platforms by demanding higher accountability with respect to political speech. Arguably, it can serve as a theoretical foundation to fill the existing lacunae in the IT Rules, 2021, and to locate the liability of such actors in proportion to their role and intention.