[Shubham Damani is a second year student at NALSAR University of Law, Hyderabad. This two-part post was the winning entry in the Ab Initio Essay Writing Competition hosted by the NALSAR Student Law Review. Part 1 can be found here.]
In the previous post, the author discussed a framework for imposing liability on Artificial Intelligence in various situations. This post continues the discussion with an examination of the kind of liability to be affixed and what a workable regulatory framework would look like.
Should the Direct Liability Model Be Accepted?
This section examines two broad factors for the purpose of inquiring into the idea of a direct liability model: the eligibility test factor and the punishment test factor.
Eligibility Test Factor
This test checks the validity of holding AI criminally liable by looking at the two basic requirements for establishing any crime: actus reus (the guilty act) and mens rea (the guilty mind). Actus reus can be imputed from the act committed by the AI that resulted in a crime, but mens rea presupposes the ability to form intention, which in turn requires self-consciousness. The first question for the direct liability model, therefore, is whether an AI can be considered a self-conscious non-human entity that may be granted legal personhood for the purpose of imposing direct criminal liability.
Those who defend the direct liability model argue that an AI entity possesses human-like consciousness and should therefore be given an ‘electronic identity’ for the purposes of law. AI robots can sense their environment (Stuart, pg.39) through devices such as cameras, microphones and sonar, and can then consciously take decisions based on their past experiences, much as a human does. They should therefore be treated as capable of forming intention and be given legal personhood. Such personhood could be conferred in the same manner as it is on corporations (Ramesh Subramanian, pg.95): an AI unit, like a corporation, would be given specific rights and obligations. Some scholars have even suggested bringing Hard AI crimes within the purview of strict liability, where the element of mens rea need not be established at all.
Scholars opposed to the direct liability model argue that AI entities are not capable of moral self-determination (Sabine Gless, pg.8), which is a prerequisite for forming intention. An AI entity lacks the cognition to recognise the concepts of right and wrong, and therefore lacks the power to refrain from criminal actions. Such entities cannot be said to be capable of forming an evil intention, or even of recognising normative values such as good and bad.
Immanuel Kant, too, stressed the importance of self-consciousness in holding a person liable for his acts.[1] Wesley J. Smith argues that an AI unit can never have self-consciousness because it is the product of code and algorithms, which can never give it human-like consciousness. It therefore cannot acquire legal personhood in any realistic sense.
Punishment Test Factor
This test analyses the validity and productiveness of punishment if it were imposed on an AI entity. It is assessed on two criteria: the productiveness of punishment and its spill-over effect.
- Productiveness of Punishment:
This criterion asks whether punishing AI units serves the fundamental purposes of punishment itself.
Those who favour the idea of punishing AI units argue (Ryan, pg.346) from the retributive purpose of punishment. They focus on the psychological satisfaction that the victim or the victim’s relatives derive from subjecting the offender to punishment matching the intensity of the harm suffered. On this view, it does not matter whether the offender is a human entity or a non-human one. Scholars like Christina Mulligan argue (Christina, pp.579-80) that “taking revenge against wrongdoing robots, specifically, may be necessary to create psychological satisfaction in those whom robots harm.”
Those who condemn the idea of criminally punishing a non-human entity build their arguments around deterrence as the purpose of punishment. They believe that the prime objective of criminal punishment is to deter people from committing further crimes, and that an AI entity cannot be deterred because it is a non-moral agent (Gless, pg.5) lacking the ability to reflect upon its own choices. Punishing an AI unit would therefore serve no purpose. Moreover, it does not satisfy Hart’s five-point definition of punishment, one element of which is that punishment must be painful and unpleasant for the offender. AI entities, however, know nothing of pain and pleasure. It is therefore practically impossible to deter an AI unit.
- Spill-over Effect:
This criterion is more in the nature of a challenge to imposing criminal liability on AI units. It argues that punishment, if imposed on AI units, would not be confined to them but would spill over (Ryan, pp.362-64) onto innocent actors such as the developer and the user. For example, an expensive AI-enabled car, driving in autonomous mode, hits and kills a pedestrian, and the liability is irreducible and non-attributable to anyone other than the AI itself, i.e., a Hard AI crime. If, in this scenario, the AI unit is punished by being permanently dismantled, the effects spill over onto the innocent car owner, who, through no fault of his own, loses an expensive and significant feature of the car and thereby suffers loss without just cause.
A Workable Regulatory Framework
The preceding discussion makes clear that direct liability cannot be imposed on AI entities indiscriminately. At the same time, a crime cannot simply be overlooked. A framework must therefore be designed to reconcile the two positions and overcome the dilemma of imposing criminal liability on AI units.
Jurisdictions such as the European Union and the UK have attempted to create a legal framework, but they have yet to arrive at a stable one. Drawing on the frameworks already suggested and the discussion above, an endeavour is made here to propose a workable legal framework.
A regulatory framework must define the duties and obligations of every person in the chain of interaction with the AI entity, beginning with those at the foundation of an AI entity’s creation: the code developers. Developers must be duty-bound to follow Asimov’s four laws of robotics while developing an AI unit, and to do so by employing the best knowledge and technological know-how currently considered optimal in the field of AI.
Next comes the duty of the manufacturers or producers. As key stakeholders in launching an AI entity who derive high economic returns from it, their liability for AI units should be correspondingly greater (Gless, pg.15). They must verify all technical standards before launching any AI product, and must monitor the entity’s performance in the market by regularly collecting customer feedback. If any problem persists, they must immediately recall the product and fix it. They must also be duty-bound to arrange an ex-ante financial insurer to create an AI contingency fund, so that reparations can be made for mischief committed by the AI entity; if no insurer can be found, the manufacturer must itself act as the financial insurer.
Next in the process is the mandatory registration of AI entities with the government, in order to assure (Ryan, pg.379) compliance with safety standards. Finally, the user must be obligated to use the AI unit in accordance with the guidelines provided with it.
As for punishment, it remains a futuristic concept, relevant only once AI entities become capable of moral self-determination. For the present, the question of punishment can be treated with less urgency, because punishing AI does not fulfil even the purposive requirements of the concept of punishment itself.
The ultimate chain of inquiry is first to ask whether the crime committed by the AI unit is reducible to a responsible human entity (in light of the heightened duties imposed upon them under the proposed framework); if it is not, an appropriate sum from the AI contingency fund should be paid as compensation.
Conclusion
The discussion suggests that, at present, an AI entity can be endowed with the status of a quasi-person (Asaro, pg.4). However, the development and evolution of AI is far-reaching: Tilden’s laws of robotics clearly indicate that AI may become a major threat to the human race, its sole object being to protect itself and eliminate (Weng, pg.282) whoever comes in its path. AI is already on the way to becoming Artificial Superintelligence (ASI) (Radutniy, pg.136), meaning that AI will have the capacity to develop itself. It is becoming increasingly anthropomorphic over the years and may have the capacity to replace the human race within a century. Its autonomy, independence and unpredictability are the premises on which the legal framework is framed. The law, therefore, has to keep developing in order to cope with the ever-evolving threats posed by ever-evolving AI technology.
[1] Immanuel Kant, Kritik der praktischen Vernunft, in Immanuel Kant, Werke in zehn Bänden, vol. 6, 223 (Wilhelm Weischedel ed., 1975).