[Ed Note: The following post is part of the TLF Editorial Board Test 2020-21. It has been authored by Harsh Tripathi, a second-year student of NALSAR University of Law.]
Picture this: a software system, running on AI-based algorithms, has been deployed to scrutinize housing applications. However, applications filed by members of a particular community, or by people of a particular sexual identity, are consistently rejected, while most allocations go to members of a different community.
While the above scenario does play out in the Indian context, it aptly projects the real-life ramifications of implementing dicey algorithmic systems. It is undeniable that the State must employ contemporary technological tools to enhance the efficacy of State functions. But ensuring the fair and uniform application of such systems is an equally pertinent aspect of the State's fidelity to its subjects. The implementation of AI-based algorithmic systems carries the possibility of algorithmic bias. This poses compelling concerns which, if left unaddressed, can have alarming implications.
Accountability and Discourse
In India, the algorithmic systems applied by State bodies are highly obscure, with little, if any, public accountability. India's diversity, and the prevalent compartmentalisation on the basis of caste, religion and political views, makes such accountability all the more relevant. For instance, in 2015, the Delhi Police announced the induction of the Crime Mapping, Analytics and Predictive System (CMAPS), a predictive policing system that uses satellite technology to locate and designate crime hotspots. Almost five years later, no reports or statistical data have emerged regarding the efficacy of this system. In fact, the possibility of prevailing Delhi Police biases against minorities being extrapolated to such algorithmic systems raises serious concerns.
Currently, there is no framework to ensure accountability to the public. Law enforcement agencies enjoy substantial exemptions from the Right to Information Act, which makes enforcing accountability a herculean task. The anchoring legislation crafted for this domain, the Data Protection Bill, has gaping holes that signal its gross inefficiency. Adding to that, the amendments to the RTI Act proposed in the Data Protection Bill further restrict a citizen's prerogative and curb the disclosure of vital information. The amendments empower the concerned authorities to deny information by widening the ambit of the term 'harm' in the context of public interest. Unsurprisingly, this garnered massive backlash and protest from transparency activists. This underscores two vital concerns. Firstly, there is no requisite legislative backing to regulate such systems. Secondly, their implementation pays no heed to public discourse.
Public discourse is of seminal importance here. With innumerable differences in the Indian population on the basis of caste, class, religion, world-view, etc., public scrutiny assures the people that such an algorithm is dissociated from any subjective assessment. One might intuitively think that these AI-based algorithms produce impartial outputs since they are machine-driven. But these systems do not just follow pre-programmed instructions. They learn from the "datasets", or, in simple terms, the "examples" that are fed into the system. For instance, in 2018, Amazon had to shut down an AI-based recruitment system because it discriminated against women in the selection procedure. This happened because the "datasets" fed into it, in this case resumes for job applications, were mostly from men, and the data history read by the algorithm showed a greater acceptance of men in the tech industry than women.

Such occurrences are usually followed by banal explanations that machines can produce erroneous outputs, but no clarity on such errors is ever provided. Moreover, this lack of discourse gets shielded under the pretext of what can be termed algorithmic authority. Algorithmic authority comes into play due to the excessive faith people put in machine-driven outputs. It is common for people to think that computers are incapable of producing wrong results, and this paves the way for intentional biases. Victims of algorithmic bias, therefore, do not opt for redressal mechanisms, because in most cases they cannot envisage the plausibility of a computer discriminating against them.
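The mechanism by which a "learning" system inherits bias from its training examples can be demonstrated with a deliberately simplified sketch. The code below is a hypothetical toy model, not a reconstruction of Amazon's or any real system: it "learns" nothing but the historical acceptance rate for each group, so a skewed history produces a skewed rule.

```python
# Toy illustration: a naive model trained only on historical decisions.
# Because the history is skewed against one group, the learned rule
# reproduces that skew for all future applicants.
from collections import defaultdict

def train(examples):
    """Learn each group's historical acceptance rate from (group, accepted) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [accepted, total]
    for group, accepted in examples:
        counts[group][0] += int(accepted)
        counts[group][1] += 1
    return {g: acc / total for g, (acc, total) in counts.items()}

def predict(model, group, threshold=0.5):
    """Accept an applicant if their group was historically accepted often enough."""
    return model.get(group, 0.0) >= threshold

# Skewed history: group A was mostly accepted, group B mostly rejected,
# regardless of any individual merit.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 20 + [("B", False)] * 80)

model = train(history)
print(predict(model, "A"))  # True  - the historical bias is reproduced
print(predict(model, "B"))  # False - identical applicant, different outcome
```

A real system is vastly more complex, but the failure mode is the same: the model never sees an instruction to discriminate, yet faithfully optimises towards the patterns, including the prejudices, embedded in its examples.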
The Ambit of Possibilities
A major concern around the usage of these systems is the sheer breadth of their possible applications. The purpose for which an algorithmic system is created can conveniently change according to the needs of the State. The datasets fed into these systems, and the outputs they can produce, can potentially be put to innumerable uses. Even if the State finds legislative backing for such systems, laws can be amended, and such amendments could legitimize the usage of these systems for delimitation exercises or for creating the National Register of Citizens (NRC). These concerns are not unfounded. Earlier this year, the Telangana State Election Commission employed a facial recognition system as an identification mechanism in local elections, raising significant privacy concerns for voters. India is not alone in employing such systems. When facial recognition algorithms were used for elections in Afghanistan last year, a large number of women were disqualified from participating. Consequently, such algorithms have garnered substantial global backlash.
In fact, in August 2020, two algorithms in use by State authorities in the UK were scrapped within ten days on the grounds of biased functioning. The Home Office discontinued an algorithm employed for processing visa applications, while the Court of Appeal ruled that the police's use of facial recognition algorithms infringed the human rights of its subjects. The Court opined that there existed "fundamental deficiencies" in the existing legal frameworks governing the usage of algorithmic systems by law enforcement agencies.
Reliance on Private Sector
The ambit of possibilities further widens with the involvement of the private sector. This coalescence has greatly contributed to the erosion of transparency. Most States want to project themselves as 'AI Ready', and to accomplish that, they often turn to private firms to develop algorithms. This creates an inherent conflict of interest. A State is responsible to its citizens, while private firms are responsible to their shareholders. The State's intent is to improve welfare and efficiency, while private entities have purely economic interests. This inherent antagonism can potentially threaten an individual's privacy and rights. Large amounts of personal and non-personal data are handed to private firms for building algorithmic systems, yet the ambit of risks and the quantum of accountability are at serious odds. The State's role can be questioned, but private firms have no accountability towards the subjects. Nevertheless, the 'technologically updated and upgraded' State relies heavily on private bodies for building algorithms. Law enforcement agencies are a prime example: predictive policing systems in various states (Punjab, Rajasthan, etc.) have been developed by a Gurgaon-based start-up, 'Staqu'. While there is no blanket presumption that private entities will misuse the data provided, there is still a norm of compliance that citizens abide by with the government, the like of which cannot be mandated of private bodies.
A State's incorporation of technology to improve its functioning is a harbinger of its forward-thinking intent. But it cannot disregard the consequences that such contrivances carry. A State needs to ensure public accountability in the policies and measures it employs. Algorithmic systems are one of the newer, constantly evolving branches of AI, and their usage by law enforcement agencies can have precarious upshots. Algorithmic biases have been globally recognised; major democracies (like the US) have introduced specific legislation to address them. The Indian State, too, needs to project the requisite transparency to address these concerns. It needs to assume a more centralised role in designing these algorithms and prevent the line of State/private demarcation from getting blurry. With the increasing deployment of AI-based systems across State functions, the Indian legislature needs to contemplate formulating legislation to regulate their application and ensure transparency.