
Tech Law Forum @ NALSAR

A student-run group at NALSAR University of Law


Exploring the Feasibility of Pretrial Risk Assessment Tools

Posted on June 14, 2021 (updated December 27, 2024) by Tech Law Forum NALSAR

[This post has been authored by Tanvi Tanu and Sakshi Tulsyan, 2nd year students at the University of Petroleum and Energy Studies, Dehradun.]

The biases and inequalities that pervade the criminal justice system are also baked into its algorithmic tools, pretrial risk assessment instruments being one of them. These tools estimate the risk that a pretrial defendant will fail to appear at trial, and they are lauded as a substitute for the troubling cash-reliant bail system.

Algorithmic risk assessment tools have been touted as a non-discriminatory approach to bail jurisprudence; they seemingly offer a win-win for the defendant, the state, and the country’s economy. In the hope of reducing bias and incarceration rates while maximizing individual liberty and public safety, risk assessment tools are being rolled out in several jurisdictions across the world, with India following closely behind. In May 2017, the 268th Law Commission Report viewed risk assessment as upholding the doctrine of the “presumption of innocence” and recommended its adoption in India’s bail system. This article assesses whether incorporating risk assessment tools in pretrial decision-making would be a good move for India.

Several such risk assessment tools are purpose-specific, addressing, for instance, juvenile justice risk or domestic violence risk. A pretrial risk assessment tool predicts the future behaviour of the defendant based on a predefined set of factors such as the individual’s age, criminal history, background, and familial ties. It categorizes individuals as low-, moderate-, or high-risk based on the resulting risk score, which is calculated from mere correlations among factors rather than any causal relationship. Though these algorithmic tools can lend a veneer of scientific rigour that a judge’s discretion lacks, the criteria by which they function remain troublingly opaque.
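To make the mechanics concrete, the sketch below shows how such a tool typically maps a handful of correlated factors to a weighted score and then to a risk band. The factor names, weights, and cut-offs are invented for illustration and do not correspond to any real instrument, whose formulas remain proprietary.

```python
# A minimal, hypothetical sketch of a pretrial risk scoring tool.
# All weights and thresholds are assumptions made for illustration.

RISK_WEIGHTS = {
    "age_under_25": 3,              # youth counts as risk (correlational)
    "prior_arrest": 2,              # per prior arrest
    "prior_failure_to_appear": 4,   # per prior failure to appear
    "unstable_housing": 1,
}

def risk_score(defendant: dict) -> int:
    """Sum weighted factors; this captures correlation, not causation."""
    score = 0
    if defendant["age"] < 25:
        score += RISK_WEIGHTS["age_under_25"]
    score += RISK_WEIGHTS["prior_arrest"] * defendant["prior_arrests"]
    score += RISK_WEIGHTS["prior_failure_to_appear"] * defendant["failures_to_appear"]
    if defendant["unstable_housing"]:
        score += RISK_WEIGHTS["unstable_housing"]
    return score

def risk_band(score: int) -> str:
    """Map the numeric score to the low/moderate/high label a judge sees."""
    if score <= 3:
        return "low"
    return "moderate" if score <= 7 else "high"

defendant = {"age": 19, "prior_arrests": 1,
             "failures_to_appear": 0, "unstable_housing": False}
print(risk_band(risk_score(defendant)))  # -> "moderate", driven mostly by age
```

Note that nothing in such a pipeline distinguishes why a factor correlates with non-appearance; the score simply aggregates historical associations.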

Risk Assessment Tools: A Black Box

Such AI/ML systems were meant to treat individuals alike and to minimize the risk of judicial bias; instead, they endanger fairness in the justice system. Because these algorithms are trained on large datasets, they reproduce the biases present in the original data, and they may further be tainted by the personal biases of their developers. They have been branded “evidence-based” even as they appear to reproduce systemic inequities and result in detaining individuals before they are ever convicted. The developers flatly refuse to reveal how these tools work. Thus, on one hand, courts and individuals are expected to trust the tool as a source of “scientific and validated” outcomes, while on the other, it remains largely untested and unvalidated by third-party audits.
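The mechanism by which training data transmits bias is simple enough to show in a few lines. In the purely hypothetical sketch below, a “model” that does nothing more than learn historical re-arrest rates per group reproduces whatever policing skew its fabricated dataset contains.

```python
# Minimal illustration of bias propagation: a model that learns base rates
# from historical data inherits any skew in that data. The dataset below
# is fabricated for illustration.

historical_records = [
    # (group, rearrested); group "A" was historically over-policed
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def learned_risk(group: str) -> float:
    """'Training': estimate a re-arrest probability per group from the data."""
    outcomes = [rearrested for g, rearrested in historical_records if g == group]
    return sum(outcomes) / len(outcomes)

print(learned_risk("A"))  # 0.75 -> labelled high-risk
print(learned_risk("B"))  # 0.25 -> labelled low-risk
```

Over-policing inflates group A’s historical arrest counts, the model treats those counts as ground truth, and the resulting “high-risk” labels then justify further detention of group A: the feedback loop the critics describe.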

The undisclosed working formulas of these risk assessment tools have alarmed human rights activists in the US, who, based on the studies conducted, believe that decisions relying on these tools can culminate in racial bias. A 2016 study by the Public Policy Institute of California showed that African-Americans were three times more likely to be arrested than white people. In November 2020, California voters rejected Proposition 25, a referendum on Senate Bill 10, which sought to replace the much-criticized cash bail system with a risk assessment model; the model was dubbed “an ethically dubious risk assessment algorithm with racist potential”.

In their paper, Megan T. Stevenson and Christopher Slobogin discuss how these tools treat factors such as age. Partially reverse-engineering the COMPAS VRSS, a black-box risk assessment tool, they discovered that around 60% of the risk score is driven by the age of the defendant. Their finding can be put simply: if ‘X’ is 18 years old and ‘Y’ is 40, X’s risk score will be roughly twice as high as Y’s. Stevenson and Slobogin identified age as a “double-edged sword”: while courts often view younger defendants as less blameworthy, criminologists have found that younger individuals commit crimes at higher rates than older people. The tool’s approach, however, is one-dimensional; it treats youth purely as a risk factor, pointing towards the culpability of young people. Needless to say, these algorithms appear only to have exacerbated the very bias they were expected to eliminate.
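The arithmetic behind the X-versus-Y example can be reproduced with a toy, age-dominated linear score. The ceiling and weight below are invented for illustration and are not the actual COMPAS formula, which remains proprietary.

```python
# Toy age-dominated risk score (hypothetical parameters, not COMPAS itself).

AGE_CEILING = 60  # assumed age at which the age term reaches zero
AGE_WEIGHT = 1.0  # dominant weight, mimicking the ~60% share Stevenson
                  # and Slobogin attributed to age

def toy_score(age: int, other_factors: float = 0.0) -> float:
    """Younger age alone inflates the score, regardless of conduct."""
    age_term = AGE_WEIGHT * max(AGE_CEILING - age, 0)
    return age_term + other_factors

x = toy_score(age=18)  # 42.0
y = toy_score(age=40)  # 20.0
print(x / y)           # ~2.1: an 18-year-old scores roughly twice a 40-year-old
```

With every other factor held equal, the 18-year-old scores about double the 40-year-old purely because of age, which is exactly the one-dimensionality the authors criticize.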

How Suited Are These Tools to a Justice Delivery System?

These tools fail to draw a line between the risk of flight and the risk of committing a new crime, and negative experiences with the risk assessment tool COMPAS date back to 2016. In Loomis v. Wisconsin, one of the issues before the court was whether the tool violated the defendant’s fundamental right to due process, since it used gender and race as criteria. Because the defendant was unable to prove that the tool perpetuated bias, the court held that his constitutional right had not been violated. It is pertinent to note that gender and racial bias in a proprietary risk assessment tool is difficult to prove before a court of law.

Judicial reasoning is not confined to what is laid down in a statute; judges weigh various aggravating and mitigating factors before arriving at a decision. These tools, in a way, expose the defendant to prejudice. Although the final decision to release a person on bail rests with the judge, a “high-risk” label from a risk assessment tool is very likely to colour the judge’s perception of the defendant’s character. Thus, the defendant’s right to be presumed innocent until proven guilty would be unintentionally hampered.

A Changing Stance

These tools were once considered a panacea for the ills of the criminal justice system, but the outlook on them is steadily shifting. The Maryland-based Pretrial Justice Institute (PJI), once a strong proponent of deploying these tools, reversed its position in February 2020, stating that it no longer regarded pretrial assessment tools as a solution for predicting an individual’s appearance in court. Even China, which has ambitious plans for using AI in the judicial process, remains concerned about the resulting ‘black-box’ decisions and regards the lack of transparency as a major challenge in complex cases. Moreover, in March 2019, Idaho became the first state to enact legislation promoting transparency and accountability in pretrial risk assessment tools.

More recently, in January 2021, Illinois passed the Pretrial Fairness Act, which allows defense attorneys to challenge the validity of such tools in court. In 2007, an actuarial risk assessment tool based on Static-99 was introduced in Japan for assessing adult sexual offenders. Countries like Malaysia, too, have begun using AI in judicial decisions, though its use remains largely restricted to drug possession and rape cases. The bottom line is that algorithms can help reform the criminal justice system, provided they are carefully applied and regularly inspected for efficacy.

Conclusion

Six years ago, in Pennsylvania, the originators of the algorithmic risk assessment model themselves opined that it was no good; in other words, the model has been troubling from its very inception. Notably, in 2018, over 100 organizations came together against these risk assessment tools and outlined six recommendations to lessen the harm they inflict, including a call for an “adversarial hearing” before pretrial detention.

Furthermore, the American Civil Liberties Union (ACLU) argues that the broad categorization of defendants largely precludes individual assessment by a judge at an adversarial system’s pretrial hearing. It is telling that countries in the West have been seeking to acquire the very traits of an adversarial system that India is motivated to shed.

Therefore, if India adopts risk assessment tools, we should treat them as a support system for the Indian legal regime and not as a substitute for a judicial mind. The allocated risk scores are not set in stone, and judges can be trained to better understand how these tools work. What really matters is making liberty the norm and pretrial detention a limited exception, with or without cash bail.
