[Ed Note: The following post is part of the TLF Editorial Board Test 2019-20. It has been authored by Vedaarth Uberoi, a third year student of NALSAR University of Law.]
From Terminator to RoboCop, WALL-E to HAL 9000, artificial intelligence and fully autonomous machines have long been a part of popular culture and science fiction, and with recent technological advances they are coming ever closer to reality. This raises the question of what the legal consequences of such a development would be. In practical terms, recent controversies over accidents involving autonomous cars are a useful avenue for exploring this legal conundrum, which is sure to become more prevalent in the future.
The very texture of urban life is completely altered when a person cannot look a driver in the eye to judge their intentions, or when a two-ton truck is run by an array of sensors and computers, whose decisions are foreign to human reasoning.
Those in favour of fully autonomous cars point to the overall improvements in road safety that would come with the advent of self-driving cars. Ninety-four percent of car crashes in America are caused by driver error (speeding and drunk driving, among other examples), and both fully and even partially autonomous cars could help reduce that number substantially. Even so, crashes, injuries, and fatalities would not end entirely even if self-driving cars were ubiquitous. Eventually, however, those figures are still expected to be far lower than the number of people killed in car crashes today.
The problem is that Rome wasn’t built in a day, and the introduction of autonomous self-driving cars would not bring about complete change instantly, but rather in gradual stages, as autonomous technology slowly propagated through the market and society’s consciousness. During that period, which could last decades, the social and legal status of robocar safety would be judged and questioned against existing standards, practices, and sentiments that are inadequate and unsuitable for the task.
Who is to blame?
In 2018, University of Brighton researcher John Kingston analyzed three legal theories of criminal liability that could apply to an entity controlled by artificial intelligence.
Perpetrator via another – the programmer or the user could be held liable for directly instructing the AI entity to commit the crime.
Natural and probable consequence – the programmer or the user could be held liable for causing the AI entity to commit a crime as a consequence of its natural operation. For example, if a human obstructs the work of a factory robot and the AI decides that squashing the human is the easiest way to clear the obstruction and continue working, then, provided this outcome was likely and the programmer knew or should have known it, the programmer could be held criminally liable.
Direct liability – the AI system has demonstrated, of its own independent volition, the necessary elements of liability in criminal law. Legally, courts may be capable of assigning criminal liability to the AI system of an existing self-driving car for speeding; however, it is not clear that this would be a useful thing for a court to do.
If one directs that question of liability specifically to car accidents and asks “whom do I sue,” a plaintiff in a traditional car crash would assign blame to the driver or the car manufacturer, depending on the cause of the crash. In a crash involving an autonomous car, a plaintiff has four options to pursue.
Operator of the vehicle: The viability of a claim against the operator will depend on the level of autonomy. For instance, if the autonomous technology allows the passenger to cede full control to the vehicle, then the passenger will likely not be found at fault for a crash caused by the technology.
However, in any situation where the human and the computer algorithms are expected to share control of the car (as is the prevalent form of self-driving system today), handing that control back and forth is very tricky. It should be noted that Waymo, the Alphabet subsidiary pursuing driverless technology, has consistently argued against systems in which control of a vehicle is handed back and forth between the driver and the algorithms. The company has instead pushed for perfected automation technology that eliminates the role of a human driver entirely.
Car manufacturer: A plaintiff will need to determine whether a manufacturer such as GM had a part in installing the autonomous technology into the vehicle.
Company that created the finished autonomous car: Volvo is an example of a manufacturer that has pledged to take full responsibility for accidents caused by its self-driving technology.
In 2015, Volvo issued a press release stating that it would accept full liability whenever its cars are in autonomous mode, and announced that it will pay for any injuries or damage caused by its fully autonomous software, which it expected to start selling in 2020. Håkan Samuelsson, President and Chief Executive of Volvo Cars, went further, urging “regulators to work closely with car makers to solve controversial outstanding issues such as questions over legal liability in the event that a self-driving car is involved in a crash or hacked by a criminal third party.”
Company that created the autonomous car technology: This category covers companies such as Google that develop the software behind the autonomous car, as well as those manufacturing the sensor systems that allow a vehicle to detect its surroundings.
Overall, there exists broad consensus that self-driving cars implicate the manufacturer of the vehicle more than its operator. That has different implications for a company like GM, which manufactures and sells cars, than for Google, which has indicated that it does not plan to make cars, only the technology that runs them.
Still, since the law develops through precedents set by legal action, other interpretations of self-driving car liability are possible. A different interpretation might compare operating autonomous test cars to taking dangerous or experimental equipment onto city roads. There is an argument to be made that a pedestrian death at the hands of an autonomous car, even one that would have been unavoidable, is no different from a death caused by a human-driven car whose new, experimental combustion engine malfunctions and blows up on a city road or interstate.
Product Liability v. Personal Liability
Liability for incidents involving self-driving cars is a developing area of law and policy that will determine who is liable when a car causes physical damage to persons or property.
As autonomous cars shift the responsibility of driving from humans to autonomous car technology, existing liability laws need to evolve in order to fairly identify the appropriate remedies for damage and injury. As higher levels of autonomy are commercially introduced, the insurance industry stands to see commercial and product liability lines grow as a proportion of its business, while personal automobile insurance shrinks.
In a white paper titled “Marketplace of Change: Automobile Insurance in the Era of Autonomous Vehicles,” KPMG estimated that personal auto accounted for 87% of auto insurance losses in the United States in 2013, while commercial auto accounted for 13%. By 2040, personal auto is projected to fall to 58%, while commercial auto rises to 28% and products liability gains 14%. This reflects the view that personal liability will fall as the responsibility of driving shifts to the vehicle, and that the overall pie representing losses covered by liability policies will shrink as autonomous cars cause fewer accidents.
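A rough back-of-the-envelope sketch can make the interaction of these two effects concrete. In the Python snippet below, the 2013 and 2040 market shares are KPMG’s figures quoted above, while the total-loss amounts are hypothetical placeholders assumed purely for illustration:

```python
# Illustrative arithmetic only. The shares come from the KPMG white paper
# cited above; the total-loss figures are hypothetical placeholders chosen
# to show shares shifting while the overall pie shrinks.
total_2013 = 100.0  # assumed index of total insured auto losses in 2013
total_2040 = 60.0   # assumed smaller pie as autonomous cars cut crash rates

shares_2013 = {"personal": 0.87, "commercial": 0.13, "products": 0.00}
shares_2040 = {"personal": 0.58, "commercial": 0.28, "products": 0.14}

for line in shares_2013:
    before = total_2013 * shares_2013[line]
    after = total_2040 * shares_2040[line]
    print(f"{line:>10}: {before:5.1f} -> {after:5.1f}")
```

On these assumed numbers, products liability grows in absolute terms even though the overall pie shrinks, while personal auto falls on both measures, which is exactly the shift the KPMG projection describes.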
Availability of Crash Data and Fixing Liability
University of South Carolina law professor Bryant Walker Smith has noted that with automated systems, considerably more data will typically be available than with human-driver crashes, allowing more reliable and detailed assessment of liability. He has also predicted that comparisons between how an automated system responded and how a human would have or should have responded will be used to help determine fault.
The challenge in this new ecosystem with regard to fixing liability is that some of the potentially liable parties may also have disproportionate control over the sensor data. There is a risk that one of these parties may alter the data, using the wireless and USB interfaces that vehicles have, to steer the liability decision in its favour.
That means we must record not only tamper-free sensor data but also any interactions with the vehicle, perhaps through mechanisms such as blockchain technology.
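To make the idea concrete, here is a minimal Python sketch of one such mechanism: a hash-chained, append-only event log in the spirit of a blockchain, where each record commits to the one before it. The field names and sample events are hypothetical, chosen only for illustration:

```python
import hashlib
import json
import time

def append_event(log, event):
    """Append an event to a hash-chained, append-only log.

    Each entry stores the SHA-256 hash of the previous entry, so
    altering any earlier record invalidates every hash after it.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64  # genesis marker
    entry = {
        "timestamp": time.time(),
        "event": event,  # e.g. a sensor reading or a USB/wireless access
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def verify(log):
    """Recompute every hash; a tampered or reordered entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(
            {k: entry[k] for k in ("timestamp", "event", "prev_hash")},
            sort_keys=True,
        ).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_event(log, {"source": "lidar", "speed_kmh": 52.3})   # hypothetical data
append_event(log, {"source": "usb", "action": "diagnostic_connect"})
print(verify(log))                    # True: chain is intact
log[0]["event"]["speed_kmh"] = 30.0   # a party quietly edits an old record
print(verify(log))                    # False: tampering is detected
```

Because each entry stores the hash of its predecessor, silently editing an old sensor reading or access record breaks every hash that follows, so tampering is detectable even by a party that did not collect the data. A production system would additionally need to distribute or anchor the log so that no single party could simply rewrite the whole chain.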
Dystopian End to Public Roads for Private Citizens?
There is an argument that autonomous cars could erode citizens’ rights to the public streets. Given sufficient economic incentive to pursue public-private partnerships between municipalities and technology companies, cities, counties, and states might choose to adopt industry-friendly regulatory policy in exchange for changes to the urban environment.
Eventually, should autonomous cars become widespread, it might become more expedient just to close certain roads to pedestrians, bicyclists, and human drivers so that computer cars can operate at maximum efficiency. It’s happened before: Jaywalking laws were essentially invented to transform streets into places for cars.
Uber, Google, and other influential companies with substantial interests in the field of autonomous driving might see this vulnerability as a sign that it is time to get more serious about the legal protection of their interests, and this might be to the detriment of the private rights and interests of citizens with regard to their roads.