[Ed Note: The following post is part of the TLF Editorial Board Test 2019-20. It has been authored by Siddharth Kothari, a second year student of NALSAR University of Law.]
In an era of unprecedented technological advancement across fields, Artificial Intelligence (AI) is poised to transform our lives. AI refers to “a class of computer programming designed to solve problems requiring inferential reasoning, decision-making based on incomplete or uncertain information, classification, optimisation and perception.” Initially imagined as a technology that could mimic human intelligence, AI is set to go far beyond its original conception.
An AI spring is underway, and the key driver behind it is the flood of data. IDC predicts that the global datasphere will grow from 33 zettabytes in 2018 to 175 ZB by 2025, a compound annual growth rate of 61 per cent. As Barry Smyth, Professor of Computer Science at University College Dublin, says: “Data is to AI what food is to humans.” Combined with the exponentially falling cost of data storage, Artificial Intelligence is being constantly fed and is therefore growing enormously.
To get a better understanding, AI programs encompass a wide range of computer algorithms that have autonomy, aptitude and a dynamic ability to solve the problems at hand. Machine learning, a term coined by Arthur Samuel in 1959, refers to “the ability to learn without being explicitly programmed”. Instead of having every rule programmed in, machine learning algorithms learn from data. So, for example, a chess program that dynamically finds patterns, uses those patterns to make moves and comes up with its own scoring formula is a machine-learning AI. Deep learning, in turn, is a technique for implementing machine learning, inspired by the neural functions of the brain. Artificial Neural Networks (ANNs) are algorithms modelled on the biological structure of the brain. In ANNs, there are ‘neurons’ arranged in discrete layers, with connections to other ‘neurons’. Each layer picks out a specific feature to learn. It is this layering that gives deep learning its name: depth is created by using multiple layers rather than a single one. Looking at the pace of such developments in AI, who knows how far we are from a time when a fictional extraordinary intelligence like Jarvis from Marvel’s Iron Man, or Winston, the top-notch AI assistant from Dan Brown’s book ‘Origin’, actually comes into play.
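The layered structure described above can be sketched in a few lines of Python. This is a minimal illustration, not a trained model: the weights, biases and inputs below are arbitrary numbers chosen purely to show how each layer’s output feeds the next.

```python
import math

def dense_layer(inputs, weights, biases):
    """One fully connected layer of 'neurons': each neuron computes a
    weighted sum of all inputs, adds a bias, and squashes the result
    through a sigmoid activation into the range (0, 1)."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

# A toy two-layer network: the hidden layer's outputs become the
# inputs to the output layer -- this stacking is the 'depth' in deep learning.
hidden = dense_layer([0.5, 0.8], [[0.1, 0.4], [0.7, 0.2]], [0.0, 0.0])
output = dense_layer(hidden, [[0.6, 0.9]], [0.0])
print(output)  # a single value between 0 and 1
```

In a real system, the weights are not hand-picked as here but are adjusted automatically from data, which is what makes the behaviour learned rather than explicitly programmed.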
The rapid evolution of AI technology has key implications for economies and societies. A study by EY and NASSCOM found that by 2022, around 46% of the workforce will be engaged in entirely new jobs that do not exist today, or will be deployed in jobs with radically changed skill sets. In 2019, Fortune 1000 companies are expected to increase their Artificial Intelligence investments by a remarkable 91.6 per cent, according to a survey by NewVantage Partners, a data and business consultancy.
Therefore, there is a crucial need, and a heated debate, over the regulation of Artificial Intelligence. Current legal enforcement systems are built around human conduct and may not function when applied to AI, because the traditional legal link of intent and causation cannot be established for machine-learning algorithms. It is often impossible to articulate how an AI system internalised a colossal mass of data to reach its decisions. AI relies on machine-learning algorithms paired with ‘deep neural networks’ and can be as difficult to understand as a human brain, or more so. The difference is that humans leave evidence and trails, whereas an AI program that is a so-called ‘black box’ reaches conclusions without being able to communicate its reasons for doing so. This calls the ethical premises of AI into question. A survey of 1,400 US executives conducted by Deloitte last year found ethical concerns to be among the top risks of Artificial Intelligence.
Thus, there is a well-established need to look at digital governance and an ethical and regulatory framework for AI. In the Indian setting particularly, where AI has penetrated healthcare, agriculture, education, infrastructure and transportation, the government in June 2018 released a discussion paper setting out a National Strategy for Artificial Intelligence, largely to discuss a regulatory framework addressing the privacy issues surrounding it.
AI governance rests on some core guiding values in its ethical framework: fairness with respect to fundamental human rights, and continued vigilance as to potential effects and consequences. It also requires that AI be transparent, so that its workings are intelligible enough for efficient application, and free from bias that would result in discriminatory use of data.
Hence, legal issues, ethics and model frameworks for handling AI are being discussed. One of the most important issues is whether responsibility for damage caused by AI can be vicariously attributed to someone, or whether AI should be given a ‘separate legal identity’.
For example, the courts in the UK have held that a machine-learning system cannot currently be regarded as an agent, because in their view only a person with a mind can be an agent. In contrast, governing bodies in the US and Canada are setting conditions under which software can enter into binding contracts on behalf of a person. The European Parliament has recommended that, in the longer run, autonomous Artificial Intelligence coupled with robotics technology be given the status of ‘electronic persons’.
In January 2019, Singapore released its model AI governance framework for discussion and adoption, as part of an effort to incorporate governance and provide guidance to the private sector when deploying machine-learning solutions. The framework rests on two principles: first, organisations using AI in core decision-making should ensure the process is transparent, explainable and fair; and second, AI solutions should be human-centric.
Numerous groups across countries have released guidelines for the ethical design and implementation of AI. For example, the Massachusetts Institute of Technology Media Lab and the Berkman Klein Center for Internet & Society at Harvard University embarked in January 2017 on a USD 27 million initiative to “bridge the gap between the humanities, the social sciences, and computing by addressing the global challenges of AI from a multidisciplinary perspective.”
Subsequently, rapid developments in this digital ecosystem have sparked another debate on the repercussions of these regulations for data protection and privacy. The European Union, for example, has enacted a comprehensive legal framework for the protection of data, the General Data Protection Regulation (GDPR). It sets out the rights and obligations of all stakeholders and a comprehensive plan of action in case of a breach.
As modern AI grows, governments across the globe are developing, or have developed, data privacy and security regulations tailored to AI. In the Indian context, therefore, the rapid developments in this area urgently require stakeholders to recognise the challenges and risks of modern AI and to acknowledge its intersection with law, policy and ethics. Without explicit guidelines and legislation, AI algorithms continue to roam scot-free, without any premise of ‘culpability’. Moreover, machine-learning systems are in many cases black boxes to humans, threatening the law’s fundamental reliance on intent and causation. The need of the hour, therefore, is better oversight and legislative regulation of Artificial Intelligence algorithms, as well as their protection.
Toshinori Munakata, Fundamentals of the New Artificial Intelligence 1–2 (2d ed. 2008).
Barry Smyth, “Making AI Meaningful Again”.
Medium.com, “The Difference Between Artificial Intelligence, Machine Learning, and Deep Learning”.
Kapil Chaudhary, “Why We Need an AI Code of Ethics”, https://www.vantageasia.com/need-ai-code-ethics/
Yavar Bathaee, “The Artificial Intelligence Black Box and the Failure of Intent and Causation”, 31(2) Harvard Journal of Law & Technology (Spring 2018), https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf
Supra note 5.
NITI Aayog, “National Strategy for Artificial Intelligence”, https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf?utm_source=hrintelligencer