(Image Source: http://sites.psu.edu/periodicpostulations/2012/09/12/little-lost-robot/)
One of the most interesting news items to come through the interwebs recently was the ‘seizure’ of a certain ‘art experiment’ in Switzerland. The bot, sadly unimaginatively named Random Darknet Shopper, lived up to its name by buying items randomly from Darknet marketplaces (with Bitcoins, interestingly) and shipping them to a gallery in Switzerland. The bot came under the scanner of the police after it bought some ecstasy pills and a counterfeit passport.
While the cops in this case have, in good humour, not filed any charges, this does raise some interesting questions. Specifically, as computers and other devices get smarter by the day and artificial intelligence starts to look like a real possibility (and concern), when a bot commits a crime, like buying an illegal object, who is liable – the bot or the programmer? To take this a step further, if a bot creates some content, who owns the copyright to that content – the bot, or its programmer?
For instance, one of the most fascinating stories to come up recently is that of David Slater, an award-winning British photographer who left his camera lying around, only for a macaque monkey to pick it up and take a photo of itself. In this case, the general consensus would seem to be that the copyright belongs to no one, as you kinda-sorta really need to be a human being to own a copyright – at least, as of right now.
But changing that story a little to fit our context: what if, rather than simply leaving the camera lying around, the photographer had set it up exactly the way he wanted and pre-programmed it to take photographs in certain situations?
More realistic, and contentious, examples of this are Twitter bots. These are programs created to tweet certain text when certain requirements are met; both the text of the tweet and the triggering requirements are set by the programmer. In some cases, the text and the requirements can be extremely broad, resulting in content that perhaps even the programmer didn’t expect, which is published directly as a tweet (a minimal sketch of how this can happen follows below). In this case, who owns the copyright to the content? And if the content violates Twitter’s policies or even national legislation, who would be liable?
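To see how a bot can produce sentences its programmer never wrote, here is a toy sketch in Python. All templates and word lists here are hypothetical, and actually posting the result would go through Twitter’s API, which is omitted:

```python
import random

# Hypothetical templates and word lists -- the programmer writes
# these rules, but not the tweets themselves.
TEMPLATES = [
    "Breaking: {subject} will {verb} {object}!",
    "Why does {subject} always {verb} {object}?",
]

WORDS = {
    "subject": ["the government", "my neighbour", "a robot"],
    "verb": ["regulate", "ignore", "celebrate"],
    "object": ["the internet", "copyright law", "the weather"],
}

def compose_tweet() -> str:
    """Fill a randomly chosen template with randomly chosen words --
    the resulting sentence is generated, not written by any person."""
    template = random.choice(TEMPLATES)
    return template.format(**{slot: random.choice(options)
                              for slot, options in WORDS.items()})

if __name__ == "__main__":
    # A real bot would post this via Twitter's API; here we just print.
    print(compose_tweet())
```

Even in this tiny example, the programmer has authored two templates and nine words, but the bot can emit combinations the programmer never contemplated – which is exactly where the copyright and liability questions begin.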
While these may seem like fantastical concerns, they are actually extremely relevant for the immediate future. Soon enough, we will have self-driving cars on the road, and Microsoft’s campus has been guarded by Knightscope’s K5 robots since last year! And if Russia has its way, we will soon have a situation where a country’s army is hugely supplemented by autonomous fighting robots.
While all of these issues are individual issues within separate fields of law, I will be addressing a few of them within the Indian context here. For instance, let’s try and see how Indian criminal law or tort law would function in this scenario. To put this situation in context, let’s imagine a scenario where the ‘bot’ is one of the robots currently patrolling the Microsoft campus, the K5s. While these K5s are currently tooled only to surveil, assess and report suspicious activities, they might soon be able to use Tasers.
In a situation where the programming of a bot results in unintended consequences, even the creator/programmer cannot be said to have the intent of committing the crime in question. The question I am considering here is who, if anyone, would be liable in such a scenario, and to what extent.
The first point that should be mentioned here is that under the Indian Penal Code (‘IPC’), the ‘person’ accused of a crime must be a human being, with an exception being made for any Company, Association or body of persons under Section 11. Thus, unless a further exception is made for bots, they cannot be covered under the IPC. Furthermore, the two main requirements for most offences are Actus Reus, the act, and Mens Rea, the intent. Since we are yet to have a functional AI capable of clearing the Turing Test, no bot will be able to meet the Mens Rea requirement, least of all the Microsoft bots. The exception here is, of course, the category of offences that do not require Mens Rea.
So then, there seems to be no recourse in criminal law for bots on a crime spree. But how about tort law?
Before that question can be answered, the standard of liability for harm done by a bot would need to be settled in law. Under an absolute or strict liability regime, the creator/programmer would necessarily end up being liable for the damage caused by the bot. But things get murkier when we consider questions of negligence.
For negligence, we have the standard of the most popular man in law – the reasonable man. The creator will be liable under the tort of negligence for the damage caused by his bot if the actions that led to the damage were reasonably foreseeable and if reasonable precautions were not taken to prevent them – both of which are heavily fact-based determinations.
Going back to the example of the Microsoft robots, the bots are currently not allowed to do anything but watch and report. But if (or when) they are given the ability to use Taser guns (or even, let’s say, pepper spray or tear gas), and they end up harming the wrong person, would Microsoft or Knightscope be liable?
The determination of the above would be based on a test of the ‘foreseeability’ of the bots’ actions. That would, necessarily, involve a thorough examination of whether the bot was functioning within its set parameters. If yes, the question would be what exactly these parameters were and how reasonable they were, and if not, whether such a malfunction was reasonably foreseeable.
If the answer to the first test is yes, the issue would come down to whether the parameters of the bot’s functioning satisfy the judge(s) in question. If they do, then it could very well be argued that the bot’s actions were not reasonably foreseeable and that reasonable precautions were taken, and the creator/programmer would not be liable. If they don’t, then the creator would necessarily be liable.
If the question is answered with a no, then liability would depend on whether the malfunction was foreseeable. While this is a heavily fact-based question, in my opinion this would perhaps be an easier defence to make out – the functioning, and malfunctioning, of programs is notoriously unpredictable. But a consequence of such defences being taken would quite probably be that a higher standard of liability would be imposed on bots in such cases, which would be quite problematic for the bot industry.
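To make the branching of this analysis explicit, here is a toy sketch of the test described above. It is a simplification, of course – each boolean input stands in for a heavily fact-based finding that a court would have to make:

```python
def creator_liable(within_parameters: bool,
                   parameters_reasonable: bool,
                   malfunction_foreseeable: bool) -> bool:
    """Toy model of the negligence analysis sketched above.

    within_parameters: was the bot acting inside its set parameters?
    parameters_reasonable: would the judge(s) accept those parameters
        as reasonable precautions?
    malfunction_foreseeable: if the bot malfunctioned, was the
        malfunction reasonably foreseeable?
    """
    if within_parameters:
        # The bot did exactly what it was programmed to do: liability
        # turns on whether the parameters themselves were reasonable.
        return not parameters_reasonable
    # The bot went outside its parameters: liability turns on whether
    # the malfunction was reasonably foreseeable.
    return malfunction_foreseeable
```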
Such a higher standard may perhaps avert the advent of Skynet though, so that’s a good thing!
The autonomous photographer point for this post was inspired by the discussion in the CopyrightX class.