Ed. Note: This post by Benjamin Vanlalvena is a part of the TLF Editorial Board Test 2016.
[Image source: xkcd]
Liability in law attaches to persons who are considered rational and have control over their actions. Technology is advancing at a rapid pace; machines have taken over many jobs requiring manual labour. Some argue that this is beneficial, as it frees humans as a race to focus on other activities or to specialise. However, at the rate things are developing, one wonders what kind of activity would be left for humans. We already have a ‘robot lawyer’ hired by a law firm and a robot that helps people with their traffic tickets and has already successfully challenged 160,000 of them; there are also robots writing stories for news agencies, one that wrote a movie, and another that drew art. Robots have already defeated us at chess and Go. Though they might not yet be completely ‘intelligent’, there is little doubt that they could someday catch up to us.
However, does such a fear of robots ‘taking over our jobs’ make us Luddites? As robots become more advanced and autonomous, the chain of causality becomes more complex. This brings us to the question of who becomes liable when a robot commits a crime, or, more crucially, whether a robot can commit a crime at all, or is merely following orders, or is simply malfunctioning. Companies are considered non-human legal entities that can be made liable for their offences through fines or the revocation of licences. Could we take a similar approach with robots?
Ethics in and of itself is a widely debated philosophical subject, as are the concepts of personhood and consciousness. Bringing in a third factor, whether robots, as ‘beings’, have the potential to possess ‘ethics’, or whether ‘artificial intelligence’ could be termed consciousness, is a legal quagmire. When a robot’s action causes an accident or the death of a person, the question arises: who should be liable, the manufacturer, the owner, or the user?
As mentioned earlier, liability for an action arises from the fact that the actor is considered autonomous. For self-driving cars, therefore, the trolley problem becomes relevant, and the question of who is liable when a driverless car crashes is pertinent.
Ethics, however, is not limited to driving, and robots are not limited to such a function. There is a plethora of situations we must consider. If a robot is to be truly autonomous and yet follow Asimov’s laws, what happens when it receives contradictory orders? How should a robot react if its owner, who is in great pain and has no prospect of survival, requests that it kill her? If a General fighting in a war knows that, were he captured, he would be tortured and forced to spill secrets, and requests a robot to kill him, should it? Who would decide what is ethical for robots used in war or war-like situations?
The question therefore arises: when our ideas of what is ‘ethical’ or ‘moral’ differ even among people, can we enforce such an idea on robots? Before we ask whether we can trust robots to make moral decisions, can we trust humankind to make the same decisions?
If we make robots liable for their actions, do they deserve any rights? It would not be a first to give rights to non-humans; animals, for example, have a number of people advocating for their rights. Questions abound: in a trolley problem, if one had to choose between a human being and five robots whose research could cure cancer or some other illness, which should be destroyed? What if the human being were the President of a country?
As time passes, AI will only develop further, and we will eventually have autonomous robots that have learnt to say no. The question of who should teach them when to say no also arises. What is morality but programming oneself, or being programmed, subconsciously or otherwise, to behave in a particular manner in a particular circumstance? How different, then, is teaching a child what is moral from programming a robot to act in a particular manner in a particular circumstance? Is that not ‘right’ for it?
As we build more and more humanoid robots, and start to treat them like humans and have relationships with them, questions of how they can be used will eventually arise. How would we view a relationship between a robot and a human? (The movie ‘Her’ comes to mind.) What about robots used for sex? What if said robot looks like a child? An animal? Does it matter only if they have a ‘conscience’?
The ethics of robotics is a difficult subject to address, and we must confront these concerns before we are overwhelmed by the advancement of technology.
For further information:
http://www.economist.com/node/21556234, Morals and the machine, The Economist.
https://www.youtube.com/watch?v=7Pq-S557XQU, Humans need not apply.
https://www.youtube.com/watch?v=Umk7nQiaqkA, Should we give robots rights?
http://www.bartneck.de/publications/2015/anthropomorphismOpportunitiesChallenges/, Anthropomorphism: Opportunities and Challenges in Human-Robot Interaction.
http://www.androidscience.com/Ro-Man2006/1Kahn2006Ro-ManWhatIsAHuman.pdf, What is a Human? – Toward Psychological Benchmarks in the Field of Human-Robot Interaction.