Criminal Liability of Artificial Intelligence (Part I)

Posted on December 18, 2020 by Tech Law Forum NALSAR

[Shubham Damani is a second-year student at NALSAR University of Law, Hyderabad. This two-part post was the winning entry in the Ab Initio Essay Writing Competition hosted by the NALSAR Student Law Review. Part 2 can be found here.]

“The development of full artificial intelligence could spell the end of the human race.”
Stephen Hawking

The field of artificial intelligence emerged in 1956, when John McCarthy coined the term at the Dartmouth Conference on AI. He also offered a definition that remains widely accepted in the field:

“[M]aking a machine behave in ways that would be called intelligent if a human were so behaving.”

The criterion for qualifying as AI, therefore, is that the system must think, act and behave like an intelligent human being. The features that make AI systems most human-like are their capacity to learn from past experience and their ability to work independently. This means that AI systems are not merely slaves to their code and algorithms; they can choose among the alternatives available to them, and that choice is shaped by what they have learnt from past experience.

The field has witnessed significant developments in the decades since. There are now well-known AI systems that are highly autonomous, unpredictable and, on some accounts, a hundred times more intelligent than humans (Radutniy, p. 133). Popular examples include ROSS, a legal research system built on IBM Watson and first deployed in bankruptcy practice; Deep Blue, the IBM chess-playing machine that defeated world champion Garry Kasparov; and self-driving cars, autonomous systems that make driving decisions by analysing their environment.

These systems are increasingly being deployed to work alongside human beings because they substantially enhance human performance. But that is not the only outcome; their deployment also carries negative externalities and raises doctrinal questions. One such externality is the harm they might cause to human life, and the doctrinal question of who bears liability for that harm needs to be resolved.

One such instance is the incident of March 19, 2018, when a driverless SUV struck and killed a woman on a street in Arizona. This is not the only instance of harm caused by automated machines: statistics suggest that in the U.K. alone, 77 robot-related injuries were registered in 2005, and there have been deaths in which autonomous industrial robots are alleged to have crushed workers' heads or poured molten aluminium on them.

Attributing liability in these cases poses a serious dilemma because the legal jurisprudence around AI crimes is under-developed: there are no specific laws or regulatory frameworks on point. The biggest hurdle in formulating any such framework is a single question: who should be held liable for a crime committed by an artificial intelligence unit? Should it be the code developers, the manufacturer, the user, or the AI itself? The dilemma arises because AI is an autonomous entity capable of acting independently. On one hand, it would be unfair to impose liability on developers, manufacturers and users where the AI acted on its own and there is no fault on the part of these human actors; on the other hand, it seems bizarre to fix liability on, and punish, a non-human entity.

Discussing possible solutions is the aim of this post. It does so by examining the models proposed for attributing criminal liability in such cases, with a view to formulating an effective and efficient framework.

Ascertaining Liability for Crime by an AI

This section discusses a possible theoretical framework for attributing liability for AI-based crimes. It consists of three models that together cover the various situations in which liability may need to be fixed, graded according to the level of autonomy and independence of the AI unit.

  • Non-Autonomous or Principal-Agent Liability Model

This model deals with the situation where the AI unit acts as an innocent agent following the commands of a principal: the AI is treated as an instrument in the hands of the principal, who intentionally directs it to commit a crime. The AI does not act on its accumulated experience; rather, the principal has complete command over its functioning.

In such a case, the perpetrator could be one of two persons (Matilda Claussen, p. 41): the AI developer or the user. The developer could perpetrate the crime by intentionally coding specific instructions into the AI system that result in a criminal act. Similarly, the user could command the AI unit to perform an act that is criminal in nature. In these cases, it is evident that liability would be fixed upon the principal/perpetrator who instructed the AI system to commit the offence (Gabriel Hallevy, p. 179).

  • Semi-Autonomous or Foreseeable Consequences Liability Model

The second model deals with a situation where the crime committed by an AI unit is attributable to a human entity even if that person did not intend the crime (Hallevy, pp. 181-85). It suggests that persons associated with the functioning of the AI owe a ‘duty of care’ to prevent it from committing crimes that are reasonably foreseeable; failure to do so attracts criminal liability.

For example, suppose an AI entity/robot is designed for army defence purposes and is programmed to identify threats to its mission and eliminate them using its in-built capabilities. If the AI entity wrongly identifies a threat and eliminates it, the code developers would be held criminally liable, because an event of this kind was reasonably foreseeable. The developers owed a duty of care, which they breached.

This model indicates that in certain situations liability can be fixed upon a person even where the AI unit acted autonomously. Moreover, the unpredictability of AI entities is no defence (Gless et al., p. 427), because releasing an unpredictable system into the world itself gives rise to a duty of care.

  • Fully-Autonomous or Direct Liability Model

This model deals with a situation where the AI entity commits the crime on its own (Hallevy, p. 186) and the crime was neither foreseeable nor a probable consequence. Such crimes are referred to as ‘Hard AI’ crimes (Ryan Abbott et al., p. 328). An example is the AI entity named ‘Tay’ (Matilda, p. 18), a chatterbot created by Microsoft that had to be withdrawn soon after launch because it began tweeting racist and sexist statements. The chatterbot was developed with good intentions and a high degree of care, yet it acted in an unforeseeable and improbable manner.

The previous two models are relatively straightforward, because in those situations the entity on whom liability falls is human. Here, however, the perpetrator is a non-human entity and the liability is not reducible to any person, be it the developer, the manufacturer or the user.

Gabriel Hallevy, a criminal law professor, has suggested that the AI entity itself should be held criminally liable for crimes that are irreducible to any person, and should be punished accordingly (Hallevy, p. 186). The punishment could take the form of a temporary shutdown or permanent dismantling, equivalent to imprisonment and the death penalty respectively. Other scholars, however, argue that AI cannot and should not be held criminally liable, given the limitations of the legal framework and because punishing AI would serve no purpose (Ryan Abbott et al., p. 344).

This lack of scholarly consensus leaves the acceptability of the direct liability model uncertain. It can be resolved only by closely analysing the limitations on imposing direct liability.
