Ed. Note: This post by Rithvik Mathar is a part of the TLF Editorial Board Test 2018
One question that arises when considering the personhood of Artificial Intelligence (AI) is whether the technology has a moral status. Moral status is a measure of how much consideration the well-being of an entity deserves. For example, most people believe human life is more valuable than animal life, while utilitarians hold that anything with the capacity to suffer has moral status.
This article will begin by setting out certain definitions, after which it will briefly discuss the various models that describe the moral status of AI. Such moral status may be a precursor to conferring legal personhood upon AI.
Sentience and Sapience
Sentience is the capacity to feel pain, while sapience pertains to certain human characteristics such as thinking and reasoning. AI may be sapient, but it is not sentient.
The debate revolves around whether the reasoning capacity and emotional quotient of AI should entitle it to consideration for its well-being. Animals, for example, have been granted rights and even personhood, and these rights stem from their sentience. Sentience is thus a popular basis for personhood.
Some believe that AI has no moral status at all. Richard Kemp[1] advocates against anthropomorphising AI. He argues that AI is merely a tool and must be treated as such: it should be regulated by first principles, that is, by the law of contract, tort, and copyright.
Nick Bostrom explains the difference between sentience and sapience in his paper, The Ethics of Artificial Intelligence.[2] He suggests a principle of subjective rate of time, arguing that the subjective duration of an event may differ depending on whether the brain experiencing it is in human form or in digital form. Building on this, he argues that AI is not sentient. He also draws a parallel between animal rights and the rights of AI. Tying moral status to cognitive ability in this way is problematic in the context of the mentally ill, as well as infants, who lack such abilities. Bostrom further proposes the idea of Substrate Non-Discrimination: the substrate of an entity must not decide its moral status, provided entities have the same functionality and the same conscious experience. Whether a brain is digital or biological, the two must be treated equally if they function identically and share the same conscious experience. This is an important idea, especially since the general population accords no moral status to AI; that is, no value is placed on the feelings of a robot. Any policy move that contradicts this popular narrative would have to take it into account, as the repercussions are considerable. These arguments propound universal rights for AI.

However, moral status may be better understood as contextual. It resides neither in the subject (the person watching) nor in the object (the thing being watched), but in the relationship between the two; that is to say, moral consideration stems from the relationship between AI and persons, making it context-dependent. Mark Coeckelbergh argues the same, taking a philosophical approach to the question.[3]
Current Conceptualisation of Personhood as Inadequate
The definition of personhood has been severely criticised in the context of abortion debates, for it does not account for foetuses, people in vegetative states, and the mentally ill. Tying personhood to sentience and sapience is problematic for this reason.
In light of the advent of AI technology, there is an urgent need to update the definition of personhood. Relevant factors may include the ability to control money and the practical ability to perform cognitive tasks.
Practicality of Granting Personhood
The European Parliament discusses the practical benefits of granting AI personhood in its deliberations on drafting a Charter on Robotics.
Legal personhood goes hand in hand with liability, which would require mandatory insurance policies, as a person must have some interest to be able to sue a robot. Granting AI personhood also has practical advantages: the European Parliament notes that payments from robots may help secure social welfare systems.
Partial Personhood of AI
This article rejects the notion that being human is a precondition for personhood. Granting AI personhood will nevertheless require significant academic effort to build doctrinal pressure, as well as compelling economic interests.
However, granting AI independent legal personhood would require a level of sophistication not yet achieved. Hence, a dependent legal personhood may be granted instead, borrowing from the legal personhood regimes of companies.
Conclusion
The debate is far from simple, and this article has merely traced out what may be important. Other significant dimensions of the issue, such as the present legal regime, constitutional definitions, economic and cultural considerations, and matters of the technology itself, have not been discussed. The advent of AI has forced us to reinvent archaic ideas of personhood and moral status. There are many parallels that the question of AI personhood may draw upon, such as corporate legal personhood and animal rights. With technology developing steadily, the question only gains in importance and urgency.
[1] http://www.kempitlaw.com/legal-aspects-of-artificial-intelligence-2/
[2] https://nickbostrom.com/ethics/artificial-intelligence.pdf
[3] https://link.springer.com/article/10.1007/s10676-010-9235-5