[This post has been authored by Suvam Kumar, a 3rd year student at National Law University, Jodhpur.]
“In the long term, artificial intelligence and automation are going to be taking over so much of what gives humans a feeling of purpose.” – Matt Bellamy
Artificial intelligence is a computer-based system that performs tasks which typically require human intelligence. In this process, computers use rules to analyze data, study patterns and gather insights from the data. AI companies persistently develop technology that can handle arduous tasks in various sectors with greater speed and accuracy. Artificial intelligence has transformed nearly every professional sector, including the legal sector. It is finding its way into the legal profession, and a plethora of software solutions is now available that can substitute for the humdrum and tedious work done by lawyers. In the legal profession, the changes are diverse: software solutions have displaced much of the paperwork, documentation and data management.
This blog analyzes the use of AI in the legal industry. It describes various AI tools used in the legal sector and gives an insight into the use of AI in the Indian judicial system to reduce the pendency of cases. Finally, we discuss the challenges in the implementation of AI in the legal field.
In the legal field, artificial intelligence can serve as a digital counsel in the areas of due diligence, prediction technology, legal analytics, document automation, intellectual property and electronic billing. One such AI-based tool is Ross Intelligence. This software has natural language search capabilities that enable lawyers to ask questions and receive information such as related case laws, recommended readings and secondary sources. Prediction technology is software that forecasts a litigation’s probable outcome. In 2004, a group of professors from Washington University examined their algorithm’s accuracy in predicting Supreme Court judgments in 628 cases from 2002. The algorithm’s results were compared to the findings of a team of experts: it proved the more accurate predictor, correctly predicting 75 percent of the outcomes compared to the experts’ 59 percent accuracy. In 2016, JP Morgan developed an in-house legal technology tool named COIN (Contract Intelligence). It extracts 150 attributes from 12,000 commercial credit agreements and contracts within a few seconds. According to the organization, this equates to 36,000 hours of legal work by its lawyers.
In an interview with the UK law firm Slaughter and May, a review was taken of Luminance, the AI tool currently being used by the firm. This tool is designed to assist with contract reviews, especially due diligence exercises during mergers and acquisitions. It was found that the tool has had a positive impact on the firm’s lawyers, who can now spend more time on higher-value work. It was also found that the tool fits well into the firm’s existing workflows for M&A due diligence. The documents the tool helps to review are already stored in a virtual data room; the only additional step required is to load those documents into the tool itself.
India is also adopting artificial intelligence in the legal field. One of India’s leading law firms, Cyril Amarchand Mangaldas, is incorporating artificial intelligence into its processes for contract analysis and review, in collaboration with the Canadian AI assistant Kira Systems. The software will analyze contracts and flag risky provisions. It will improve the effectiveness and accuracy, and scale up the speed, of the firm’s delivery model for legal services and research.
In the Indian judicial system, where a plethora of cases is pending, artificial intelligence can play a significant role in reducing the burden. A deadweight of almost 7.3 lakh cases is added to the pendency every year. Advocates must conduct a large amount of legal research to argue their cases, and the use of AI can accelerate that research and enhance the judicial process. In this regard, a young advocate named Karan Kalia developed and presented to the Supreme Court’s E-Committee, led by Justice Madan B. Lokur, a comprehensive software program for the speedy disposal of trial court cases. This software instantly provides a trial judge with appropriate case laws, while also indicating their reliability.
AI enables lawyers to gain nonpareil insight into the legal realm and to complete legal research within seconds. AI can contain the expenditure required for legal research by bringing uniformity to its quality. AI tools help to review only those documents which are relevant to the case, rather than requiring humans to review every document. By analyzing data, AI can make quality predictions about the outcome of legal proceedings in a competent manner, and in certain cases better than humans. Lawyers and law firms can turn their attention to clients rather than spending time on legal research, making optimum use of constrained human resources. They can present arguments and evidence digitally, have them processed, and submit them faster.
AI does face some challenges, though these can be overcome with time. The major concern surrounding AI is data protection. AI is currently used without any legal framework, which creates risks for information assurance and security. A stringent framework is needed to regulate AI, safeguard individuals’ private data and set safety standards. A few technical barriers will also limit the implementation of AI technologies: it is difficult to construct algorithms that capture the law in a useful way, and the lack of digitized data is a further constraint. The complexity of legal reasoning is a potential barrier to building effective legal technologies. However, these obstacles will eventually be overcome with continued usage and time.
The introduction of AI in the legal sector will not substitute lawyers. In reality, technology will increase the efficiency and productivity of lawyers, not replace them. The roles of lawyers will shift, rather than decline, and become more interactive with technological applications in their field. None of these AI tools aims to replace a lawyer; rather, they increase the authenticity and accuracy of research and enable lawyers to give more result-oriented advice to clients. As McAfee and Brynjolfsson have pointed out, “Even in those areas where digital machines have far outstripped humans, people still have vital roles to play.”
The use of AI will be a new broom that sweeps clean, i.e., it will bring about far-reaching changes in the legal field. Over the next decade, the use of AI-based software is likely to increase manifold. This will lead to advancement and development in the functionality of present lawyering technologies such as decision engines, collaboration and communication tools, document automation, e-discovery and research tools, and legal expert systems. Trending industry concepts like big data and unstructured databases will allow vendors to provide more robust performance. There will also be an influx of non-lawyer service providers entering the legal industry, some of whom will be wholly consumer-based, some lawyer-focused, and others selling their wares to both consumers and lawyers. The future for manual labour in law looks bleak, for the legal world is gearing up to function in tandem with AI.
Web crawling is a process by which programs, which are colloquially known as ‘web spiders’ or ‘web robots’, browse the World Wide Web in a methodical and automated manner in order to index information found on every web page they come across. Many legitimate service providers, including search engines, employ web spiders to provide up-to-date information and data to their users.
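The methodical, automated browsing described above can be illustrated with a minimal sketch. The pages, URLs and link structure below are invented purely for illustration: a real web spider would fetch pages over HTTP, respect robots.txt, and handle errors, none of which is shown here.

```python
from collections import deque

# A toy in-memory "web": URL -> (page text, outgoing links).
# All URLs and content are hypothetical, for illustration only.
PAGES = {
    "http://example.com/":  ("home page about web spiders",
                             ["http://example.com/a", "http://example.com/b"]),
    "http://example.com/a": ("notes on indexing and archiving", ["http://example.com/"]),
    "http://example.com/b": ("caching and copyright", []),
}

def crawl(start):
    """Breadth-first traversal: visit each reachable page exactly once."""
    seen = {start}
    queue = deque([start])
    visited = []
    while queue:
        url = queue.popleft()
        visited.append(url)
        _text, links = PAGES[url]
        for link in links:
            if link not in seen:  # avoid re-visiting pages already queued
                seen.add(link)
                queue.append(link)
    return visited
```

The `seen` set is what makes the traversal "methodical": each page is visited once, however many links point to it.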
Web crawling results in the creation of an index of web pages, which allows users to send queries through a search engine and receive links to the webpages that match those queries. The index is a list of entries, consisting of keywords, titles, headings, metadata etc. noted by the web crawler, together with the addresses of the webpages on which they were found. Web crawling also enables the archiving of webpages, which involves storing and cataloguing large sets of webpages on servers connected to the internet and updating them periodically.
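The entry-to-address structure of such an index can be sketched as a simple inverted index. Again, the URLs and page text below are invented for illustration; the point of the sketch is that the index records only the *locations* of pages for each keyword, not the pages themselves.

```python
# A toy corpus: URL -> page text (both hypothetical, for illustration).
DOCS = {
    "http://example.com/":  "home page about web spiders",
    "http://example.com/a": "notes on indexing and archiving web pages",
}

def build_index(docs):
    """Inverted index: each keyword maps to the set of URLs containing it.
    The index stores addresses of webpages, not their content."""
    index = {}
    for url, text in docs.items():
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(url)
    return index

def search(index, query):
    """Return the URLs whose pages contain every word of the query."""
    hits = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*hits) if hits else set()
```

A query is answered by intersecting the URL sets of its words, so the search engine can point the user to matching pages without reproducing them.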
Thus, any potential contravention of the Copyright Act, 1957 (‘Copyright Act’) must be evaluated against the aforementioned uses and the nature of information indexed, stored or cached in the process of web crawling. Under Section 14(1)(a) of the Copyright Act, ‘copyright’ is defined as an exclusive right subject to the provisions of the Copyright Act, to do or authorise any of the stipulated acts in respect of a work or any substantial part thereof. Under Section 51(a)(i), a copyright is deemed to be infringed when any person, without a license granted by the owner of the copyright or the Registrar of Copyrights under this Act does anything that is an exclusive right conferred upon the owner of the copyright.
Firstly, it would be pertinent to discuss the copyrightability of information or data which is getting stored, cached or catalogued through web crawling. Courts in India have placed a heavy reliance on US copyright jurisprudence, to hold that copyright does not subsist in raw facts, data, ideas, information etc. Feist Publications Inc. v. Rural Telephone Service Co. Inc, cited with approval in Eastern Book Company v. D.B. Modak, held that facts are not copyrightable since the sine qua non for copyright is originality. “Original”, as a term used in copyright, means that the work is created by the author independently and that it possesses at least some degree of creativity. In R.G. Anand v. Delux Films, the Supreme Court propounded that a mere idea cannot be the subject matter of copyright.
Therefore, a contravention of the Copyright Act would firstly depend upon the material collected by the web crawler. A web crawling action which simply results in the collection of bare facts or raw data, such as historical information, data captured by sensors, machine inputs, information pertaining to unclassified commercial transactions etc., involves no copyrightable subject-matter. Hence, indexation, storage or usage of such data or information in any other form will not constitute a contravention of the Copyright Act. However, if the crawler caches or uses copyrighted works hosted on webpages, then an infringement may well arise, since Section 13(1) of the Copyright Act provides that copyright shall subsist in original literary, dramatic, musical and artistic works, cinematograph films and sound recordings.
Secondly, a contravention of the Copyright Act would largely depend on the nature of the web crawling being carried out by a company. If the scope of the web crawling activities is limited to the creation of an index used to provide users with the location of webpages containing the information they require, then it should not result in a contravention of the Copyright Act. A large index created through web crawling can contain billions of webpages and run to well over 100,000,000 gigabytes in size. Such an index is similar to the index at the back of a book, i.e. with an entry for every word seen on every web page indexed. When a web page is indexed, it is added to the entries for all of the words it contains. Thus, by indexing web pages, the web crawler performs the limited role of directing users to webpages of their choice by making the URLs of such pages available to them.
The key question which needs to be then asked at this juncture is how the work is being made ‘available’ to the public. Under Section 2(ff) of the Copyright Act, “communication to the public” means making any work available for being viewed by the public by means of display or diffusion, without issuing copies of the work, whether or not any member of the public actually views the work. Copyright is deemed to be infringed if any person, who is not the owner of the copyrighted work indulges in communication to the public of any work.
Although there are no precedents in India, in my opinion the judgment in Perfect 10 v. Amazon.com would be pertinent. There, the US Court of Appeals for the Ninth Circuit held that merely providing HTML instructions for the location of copyrighted subject-matter would not by itself cause the copyrighted subject-matter to appear on the user’s computer screen. The HTML merely gives the address of the copyrighted subject-matter to the user’s browser; the user’s browser then interacts with the computer that stores the copyrighted subject-matter, and it is this interaction that causes the subject-matter to appear on the user’s screen. Essentially, the web crawler will only display to the public the location and address of the webpages hosting the copyrighted work, rather than the work itself. This would not amount to communication of the work to the public under Section 2(ff) read with Section 51(a)(i) of the Copyright Act, as a web crawler does not host the actual work and thereby does not make it available to be seen, heard or enjoyed by users directly or by means of display or diffusion.
Having said that, there are other scenarios in which web crawling may amount to a contravention of the Copyright Act. If a web spider or bot, in the course of crawling, stores or caches web pages or even entire websites on servers connected to the internet, it will constitute a direct contravention of the Copyright Act under Section 51(a)(i). Such an action would amount to making copies of, and storing, subject-matter in which copyright subsists. The Copyright Act equates the storage of any work in any medium by electronic or other means with the reproduction of the work in any material form.
Hence, a potential contravention of the Copyright Act would largely depend on the kind of content hosted by the websites being crawled and on the nature of the web crawling itself. Any web crawling action concerned with the indexation and storage of bare facts or raw data is legitimate. For works which are original and presuppose creativity, infringement would depend on the nature of the web crawling action. If web crawling is limited to providing the location of webpages after matching them with customers’ queries, then it should not constitute a contravention of the Copyright Act under Section 51(a)(i). However, the storage or creation of copies of web pages hosting copyrighted works would invariably contravene the Copyright Act.
Liability in law attaches to persons who are considered rational and have control over their actions. Technology is advancing at a rapid pace; machines have taken over many jobs requiring manual labour. Some argue that this is beneficial, as it means humans as a race will be able to focus on other activities and specialize. However, at the rate at which things are developing, one wonders what kind of activity will be left for humans. We already have a ‘robot lawyer’ hired by a law firm, and a robot which helped people with their traffic tickets and has already successfully challenged 160,000 of them; there are also robots writing stories for news agencies, one wrote a movie, and another drew art. Robots have already defeated us at chess and Go. Though they might not be completely ‘intelligent’, there is no doubt that someday they could catch up to us.
However, does such a fear of robots ‘taking over our jobs’ make us Luddites? As robots become more advanced and autonomous, the chain of causality becomes complex. This brings us to the question of who becomes liable when a robot commits a crime, or, more crucially, whether a robot can commit a crime at all, or whether it is merely following orders or its action is simply a malfunction. Companies are considered non-human legal entities which can be made liable for their offences through fines or revocation of licences. Could we take an action in a similar direction for robots?
Ethics in and of itself is a widely debated philosophical subject, as are the concepts of personhood and consciousness. Bringing in a third factor, whether robots, as ‘beings’, have the potential to possess ‘ethics’, or whether ‘artificial intelligence’ could be termed consciousness, is a legal quagmire. When the action of a robot causes the death of a person or an accident, the question arises: who should be liable, the manufacturer, the owner or the user?
As earlier mentioned, the idea of being liable for an action arises from the fact that the actor is considered to be autonomous. For self-driving cars, therefore, the trolley problem becomes relevant and the question of liability in cases of driverless cars crashing is pertinent.
Ethics, however, are not limited to drivers, and robots are not limited to such a function. There is a plethora of situations we must consider. If a robot is to be truly autonomous and yet follow Asimov’s laws, what happens when it receives multiple contradictory orders? How should a robot react if its owner, who is in great pain with no prospect of survival, requests the robot to kill her? If a general fighting in a war knows that, were he captured, he would be tortured and forced to spill secrets, and requests a robot to kill him, should it? Who would decide what is ethical for robots used in war or war-like situations?
The question therefore arises, when our idea of what is ‘ethical’ or ‘moral’ itself differs among people, can we enforce such an idea on robots? Before we ask if we can trust robots with making moral decisions, can we trust humankind to make the same decisions?
If we make robots liable for their actions, do they deserve any rights? It would not be a first to give rights to non-humans; animals, for example, have many people advocating for their rights. Questions are aplenty: in a trolley problem, if one had to choose between a human being and five robots whose research could cure cancer or some other illness, who should be destroyed? What if the human being were the President of a country?
As time passes, AI will only develop further, until we have autonomous robots that have learnt to say no. The question then arises of who should teach them when to say no. What is morality but programming oneself, or being programmed, subconsciously or otherwise, to behave in a particular manner in a particular circumstance? How different, then, is teaching a child what is moral from programming a robot to act in a particular manner in a particular circumstance? Is that not ‘right’ for it?
As we build more and more humanoid robots, and start to treat them like humans and have relationships with them, questions of how they can be used will eventually arise. How would we view a relationship between a robot and a human? (The movie ‘Her’ comes to mind.) What about robots used for sex? What if said robot looks like a child? An animal? Does it matter only if they have a ‘conscience’?
The ethics of robotics is difficult to address, and before we are overwhelmed by the advancement of technology, we must confront these concerns.
For further information:
http://www.economist.com/node/21556234, Morals and the machine, The Economist.
https://www.youtube.com/watch?v=7Pq-S557XQU, Humans need not apply.
https://www.youtube.com/watch?v=Umk7nQiaqkA, Should we give robots rights?
http://www.bartneck.de/publications/2015/anthropomorphismOpportunitiesChallenges/, Anthropomorphism: Opportunities and Challenges in Human-Robot Interaction.
http://www.androidscience.com/Ro-Man2006/1Kahn2006Ro-ManWhatIsAHuman.pdf, What is a Human? – Toward Psychological Benchmarks in the Field of Human-Robot Interaction.
One of the most interesting news items to come through the interwebs recently was the ‘seizure’ of a certain ‘art experiment’ in Switzerland. The bot, sadly unimaginatively named Random Darknet Shopper, lived up to its name by buying items at random from Darknet marketplaces (with Bitcoins, interestingly) and shipping them to a gallery in Switzerland. The bot came under the scanner of the police after it bought some ecstasy pills and a counterfeit passport.