The “employee” at JP Morgan called COIN, “recruited” in June 2017, is highly efficient, to say the least: it does in a matter of seconds work that earlier took 360,000 hours. Meanwhile, in a few developed countries, ghost cars programmed to “drive themselves”, that is, driverless cars, are hitting the roads. On the military front, among other machines, Russia has created a semi-autonomous robot soldier called Ivan that can accurately copy the movements of a human, and attempts are being made to make Ivan fully autonomous. If Russia can create one Ivan, in time it can also create an army of Ivans. The USA also has similar “soldiers”.
The above examples are just a few of the more glamorous applications of what is called “artificial intelligence” (AI). AI is the simulation of human intelligence processes by computer systems; these processes include learning, computing, reasoning and the like. AI-powered machines are programmed using mathematical algorithms that discover patterns in, and generate insights from, the data they are exposed to. These algorithms enable the machines to perform the tasks that have been mathematically “fed” into their “brain”, thereby dictating their working. The global AI market is estimated to grow at 36.1% annually from 2016 to 2024, reaching a valuation of USD 3,061 billion. The staggering potential of AI technology thus cannot be overstated, as will also be seen from the scope and impact of its applications.
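To make the idea of “discovering patterns from data” slightly more concrete, the following is a minimal sketch, in Python, of a toy machine-learning workflow. The dataset, feature names and approval labels are invented purely for illustration and have nothing to do with COIN or any real system.

# A toy illustration of an algorithm "learning" a pattern from examples.
# The loan data, feature names and labels are hypothetical, used only to illustrate the idea.
from sklearn.tree import DecisionTreeClassifier

# Each example: [loan amount (in thousands), borrower credit score]
X = [[10, 750], [50, 620], [5, 800], [80, 580], [20, 700], [60, 640]]
y = [1, 0, 1, 0, 1, 0]  # 1 = approve, 0 = flag for manual review

model = DecisionTreeClassifier(max_depth=2)
model.fit(X, y)                    # the "learning" step: a pattern is inferred from the examples
print(model.predict([[30, 710]]))  # the learned pattern is applied to a new, unseen case

The point of the sketch is simply that the decision rule is not written by hand; it is induced from the data, which is what distinguishes such systems from conventional, explicitly programmed software.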
There are three main categories of AI: Narrow AI, General AI and Super AI, in increasing order of complexity and automation capacity. Narrow AI, that is, machines that fulfil a particular purpose only, is surprisingly pervasive in our daily lives, right from the “Cortana” on Windows operating systems to IBM’s “Watson”. General AI, largely not achieved yet, refers to machines that can perform several automated tasks at human capacity but are still guided by human control; an example is Tony Stark’s “Jarvis” in the Iron Man movies. Super AI, also not yet achieved, refers to fully automated robots with capacity beyond human ability, in other words, superhumans, like Arnold Schwarzenegger in “Terminator”.
With high potential for use invariably comes high potential for misuse, as has been illustrated by the ground-breaking Cambridge University report co-authored by 26 renowned experts on technology. Apart from technological errors, AI-powered machines are at constant risk of being hacked and used maliciously. For instance, the program of a driverless car can be tampered with to turn it into a killing machine. AI can also be used to create highly realistic fake audio and video to induce certain consequences, such as the alleged use of pro-Trump bots during the previous US elections and their impact. Further, AI machines make our day-to-day lives more vulnerable to invasion of privacy, as it becomes that much easier to secretly keep a watch on us all the time; an example is AI-powered machines being used for state-sponsored underhand surveillance, as is allegedly happening in China. To cut a long story short, the actual and potential ways to misuse AI are endless. This is where the possibility of regulation arises.
It is important to note that there already exist laws which regulate the use of AI, such as those on privacy, data protection and cybersecurity. For example, even under the current regulatory framework, using an AI-powered machine to snoop into people’s houses will attract punishment under the relevant laws. The presence of existing regulation thus narrows the regulatory scope left to cover for AI. In this light, the requirement for regulation now arises either to cope better with the current situation or to cope with a completely new situation unforeseen by the existing framework.
Given the present stage of AI’s development, the need of the hour is specific rules to cope better with the current situation, just as there are safety guidelines for hazardous industries or permissible lead levels in foodstuffs. Similar industry-centric rules, such as technical guidelines and disclosure requirements based on the nature of particular AIs, will go a long way in ensuring accountability today. Such targeted regulation has several advantages, as has been seen with the USA’s driverless-car regulatory policy. Firstly, specific regulation is an explicit statement by the law that it has an eye on AI, which will deter potential misusers. Secondly, regulation will help achieve quality control; for example, it will ensure quality driverless cars that actually help reduce accidents by eliminating the human errors associated with driving. Thirdly, and most importantly, regulation will increase people’s faith in AI. A person is more likely to sit in a driverless car which he knows must legally meet certain safety standards than in one which does not, since the former gives him a better assurance of safety. Further, if people become more secure about the AI machines they are presently using, the popularity of AI will only increase, leading to a win-win situation. Thus, specific regulation will actually boost the AI industry rather than pull it down.
Having established the requirement of specific rules to strengthen the current framework, the question now is whether, at this point in time, additional rules are required in order to be prepared for a change in the current situation, that is, the development of General and Super AI: machines which operate at or beyond human capacity. In this regard, there are fears that superhuman robots might go out of human control and prove to be an existential threat to humanity, much like the way Dr. Banner’s experiments in the “Hulk” movie turned out. Proponents of this “existential threat” theory have compared AI experiments to “children playing with a bomb”. As Elon Musk puts it, if AI goes out of human control, it would be an “immortal dictator from which we can never escape.” What Musk and several others, including Stephen Hawking, are trying to say is that because of the destructive potential of uncontrolled AI, its regulation should be proactive rather than reactive, to be on the safe side. It is thus argued that the mere possibility of out-of-control robots (whenever they might develop) is enough reason to regulate now, the risk of unpreparedness being too great.
On the other hand, opponents of the “existential threat” theory, like Mark Zuckerberg and Bill Gates, argue that, firstly, such a situation is much too far-fetched and uncertain. Secondly, even if we decide to regulate, there is simply nothing specific to regulate against; such regulation is like shooting arrows in the dark. Legislation requires some kind of a base, and that is absent here, since there is no concrete idea of when such robots will develop and what their characteristics will be. We cannot overreact and make laws, especially when regulation comes at the cost of curbing innovation. Any law regulating AI will have the effect of limiting the freedom to experiment and curtailing technological growth, just as imposing sanctions on news channels has a chilling effect on the freedom of speech and expression. Further diminishing the need to regulate is the fact that the wide but uncertain scope of AI induces an inbuilt chilling effect already: to a scientist, irrespective of the legal framework, the fear that he might create armed cyborgs is a chilling effect in itself. Moreover, each country will try to outdo the other in AI development, creating a prisoner’s dilemma that makes domestic regulation all the more difficult. Thus, the “existential threat” theory does not prompt regulation today.
The above arguments present a regulatory dilemma. While on the one hand it is imperative to be prepared before it is too late, in case the “existential threat” theory comes true, there is on the other hand no concrete way to know what exactly to prepare against and when. In other words, it is too early to regulate AI now, but there is no saying when it will become too late to regulate it.
The answer to this dilemma is to take the middle ground, that is, to develop broad general principles (like, say, the mandatory presence of a “kill switch” in every AI machine) and wait for the right time to come up with specifications, this “right time” being when technological progress clearly points towards the development of advanced forms of AI, with some clue as to what their specific characteristics will be. Existing principles along these lines include Asimov’s evergreen “Three Laws of Robotics” and the recently formulated Asilomar AI Principles, a set of 23 guidelines to curb potential AI-related harm signed by about 1,200 AI researchers and over 2,300 others. Greater, more inclusive collaboration at the international level is essential to formulate more such principles that will ensure preparedness to cope with the future.
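For concreteness, the following is a minimal, hypothetical sketch in Python of what a “kill switch” obligation might amount to in software: an autonomous control loop that must consult a human-operated stop flag before every action. The class and function names are invented for illustration; actual implementations would of course be far more involved.

# A hypothetical sketch of a "kill switch": an autonomous loop that checks a
# human-controlled stop flag before every action. Names here are illustrative only.
import threading

class KillSwitch:
    def __init__(self):
        self._stop = threading.Event()

    def engage(self):            # operated by a human supervisor
        self._stop.set()

    def engaged(self):
        return self._stop.is_set()

def autonomous_loop(switch, plan_next_action, execute):
    # The machine refuses to act the moment the switch is engaged.
    while not switch.engaged():
        action = plan_next_action()
        execute(action)

A general principle of this kind says nothing about what the machine does; it only guarantees that a human can always halt it, which is precisely the sort of technology-neutral requirement that can be laid down today without knowing what future AI will look like.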
To conclude, AI should presently be regulated in two ways: through specific industry-centric rules and through universal general principles. While the former will help tighten the existing framework and ensure greater accountability and safety for the public, the latter will keep mankind prepared for future developments in the AI field, considering that the consequences of neglecting possible AI development are too severe. Such a balanced regulatory framework will thus help cope with both the realities and the implications of AI progress.
Other supporting links:
https://analyticsindiamag.com/russia-prepares-future-wars-array-ai-based-arsenal/
https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html
http://issues.org/33-4/perspective-should-artificial-intelligence-be-regulated/