[This post has been authored by Samarth Srivastava, a 3rd year student at the West Bengal National University of Juridical Sciences.]
[This post has been authored by Suvam Kumar, a 3rd year student at National Law University, Jodhpur.]
[This post has been authored by Lavanya Jha (West Bengal University of Juridical Sciences, Kolkata) & Shreya Jha (Amity Law School, Delhi).]
The term Artificial Intelligence (AI) was coined by the American computer scientist John McCarthy at the Dartmouth Conference on Artificial Intelligence in 1956. It was understood as a system for solving complex problems through reasoning, knowledge representation, planning, navigation, natural language processing, and perception. The shared conception of AI has been that it is a method of data processing in which the absence of human involvement allows the system to have an independent “mind”. On this view, a processing device such as a digital computer, through which AI-related tasks are accomplished, can be seen as a fundamentally detached, objective observer, while intelligent behavior can be viewed as a determinate set of independent elements. AI’s primary features can be characterized as unpredictability, rationality, independence, efficiency and accuracy, which together allow it to create “patentable” inventions.
This post has been authored by Unmekh Padmabhushan, a final year student of National Law University, Jodhpur.
Machine learning is the process by which a piece of software is able to expand upon its capabilities and knowledge in a self-driven manner, without any significant human input. This technology has been used, for example, in disaster warning systems and in driverless cars. Another scholarly use of such technology allows machines to derive patterns and significant correlations from enormous databases of texts in a manner impossible for human beings. This has led to an explosion in the ability of those working in the humanities to analyse data as their natural sciences counterparts have done for years.
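The idea of deriving patterns and correlations from large bodies of text can be sketched in a few lines of plain Python. This is an illustrative toy, not any real system: the three-sentence "corpus" and the co-occurrence counting are invented for demonstration.

```python
from collections import Counter
from itertools import combinations

# Toy corpus standing in for the "enormous databases of texts" described above.
corpus = [
    "the court held the contract void",
    "the court held the clause valid",
    "the tribunal held the contract valid",
]

# Count how often each pair of words appears together in the same sentence.
pair_counts = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for pair in combinations(sorted(words), 2):
        pair_counts[pair] += 1

# The most frequent pair is a (crude) "pattern" mined from the texts.
top_pair, top_count = pair_counts.most_common(1)[0]
print(top_pair, top_count)
```

Real text-mining systems apply the same basic idea, counting and correlating, at a scale no human reader could match.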
This brief introduction to regulation of autonomous vehicles has been authored by Khushi Sharma and Aarushi Kapoor, second year students of Hidayatullah National Law University (HNLU), Raipur.
[Ed. Note: This article was written before the 2019 Personal Data Protection Bill had been made public. Click here for the new Bill.]
“In the long term, artificial intelligence and automation are going to be taking over so much of what gives humans a feeling of purpose.” – Matt Bellamy
Artificial intelligence is a computer-based system that performs tasks which typically require human intelligence. In this process, computers use rules to analyze data, study patterns and gather insights from the data. AI companies persistently find ways of evolving technology that can manage arduous tasks in various sectors with enhanced speed and accuracy. Artificial intelligence has transformed nearly every professional sector, including the legal sector. It is finding its way into the legal profession, and there is a plethora of software solutions available that can substitute for the humdrum, tedious work done by lawyers. The changes in the legal profession are diverse, with software solutions displacing paperwork, documentation and manual data management.
This blog analyzes the use of AI in the legal industry. It describes various AI tools which are used in the legal sector, and gives an insight into the use of AI in the Indian Judiciary system to reduce pendency of cases. Finally, we discuss the challenges in the implementation of AI in the legal field.
In the legal field, artificial intelligence can serve as digital counsel in areas such as due diligence, prediction technology, legal analytics, document automation, intellectual property and electronic billing. One such tool is Ross Intelligence. This software has natural language search capabilities that enable lawyers to ask questions and receive information such as related case laws, recommended readings and secondary sources. Prediction technology is software that forecasts a litigation’s probable outcome. In 2004, a group of professors from Washington University tested their algorithm’s accuracy in predicting the Supreme Court’s judgments in the 628 cases decided in 2002, comparing its results to the findings of a team of experts. The algorithm proved the more accurate predictor, correctly predicting 75 percent of the outcomes against the experts’ 59 percent. In 2016, JP Morgan developed an in-house legal technology tool named COIN (Contract Intelligence), which extracts 150 attributes from 12,000 commercial credit agreements and contracts within a few seconds. According to the organization, this equals 36,000 hours of legal work by its lawyers.
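One simple way such prediction software can work is to classify a new case by the outcome of the most similar past case. The sketch below is purely illustrative: the binary "features" and the past cases are invented, and real systems like the Washington University model use far richer variables and statistical methods.

```python
# Hypothetical past cases: (feature vector, outcome). Features might encode
# things like lower-court direction or issue area; here they are invented.
past_cases = [
    ([1, 0, 1], "affirm"),
    ([0, 1, 1], "reverse"),
    ([1, 1, 0], "reverse"),
]

def predict(features):
    """Predict the outcome of a new case from its nearest past case."""
    def distance(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    _, outcome = min(past_cases, key=lambda case: distance(case[0], features))
    return outcome

print(predict([1, 0, 0]))
```

The appeal of this approach is that, given enough past cases, it encodes patterns in judicial behaviour that no individual expert could tabulate by hand.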
In an interview, the UK law firm Slaughter and May reviewed Luminance, the AI tool it currently uses. The tool is designed to assist with contract review, especially due diligence exercises during mergers and acquisitions. The firm found that the tool freed its lawyers to spend more time on higher-value work, and that it fits well into the firm’s existing M&A due diligence workflows. The documents the tool helps review are already stored in a virtual data room; the only additional step is to load those documents into the solution itself.
India is also adopting artificial intelligence in the legal field. One of India’s leading law firms, Cyril Amarchand Mangaldas, is incorporating AI into its contract analysis and review processes in partnership with the Canadian AI assistant Kira Systems. The software analyzes contracts and flags risky provisions, improving the effectiveness, accuracy and speed of the firm’s delivery model for legal services and research.
In the Indian judicial system, where a plethora of cases is pending, artificial intelligence can play a significant role in reducing the burden. Almost 7.3 lakh cases are left pending every year, and advocates require a large amount of legal research to argue their cases. AI can accelerate legal research and thereby speed up the judicial process. In this regard, a young advocate named Karan Kalia presented a comprehensive software program for the speedy disposal of trial court cases to the Supreme Court’s E-Committee led by Justice Madan B. Lokur. The software instantly offers a trial judge appropriate case laws, while also indicating their reliability.
AI gives lawyers nonpareil insight into the legal realm and completes legal research within a few seconds. It can balance the expenditure required for legal research by bringing uniformity to the quality of research. AI tools help review only those documents which are relevant to the case, rather than requiring humans to review every document. By analyzing data, AI can make quality predictions about the outcome of legal proceedings, in certain cases better than humans can. Lawyers and law firms can turn their attention to clients rather than spending time on legal research, making optimum use of constrained human resources. They can present arguments and evidence digitally, have them processed, and submit them faster.
Although AI faces some challenges, these can be overcome with time. The major concern surrounding AI is data protection. AI is currently used without any legal framework, which creates risks for information assurance and security. A stringent framework is needed to regulate AI, safeguard individuals’ private data and set safety standards. A few technical barriers also limit the implementation of AI technologies: it is difficult to construct algorithms that capture the law in a useful way, the lack of digitization of data is a constraint, and the complexity of legal reasoning is a potential barrier to building effective legal technologies. These problems, however, will eventually be rectified with continuous usage and time.
The introduction of AI in the legal sector will not substitute for lawyers. In reality, technology will increase lawyers’ efficiency and productivity, not replace them. The roles of lawyers will shift, rather than decline, and become more interactive with technological applications in the field. None of these AI tools aims to replace a lawyer; rather, they increase the authenticity and accuracy of research and enable lawyers to give clients more result-oriented advice. As McAfee and Brynjolfsson have pointed out, “Even in those areas where digital machines have far outstripped humans, people still have vital roles to play.”
The use of AI will prove a new broom that sweeps clean, i.e., it will bring far-reaching changes to the legal field. Over the next decade, the use of AI-based software is likely to increase manifold. This will advance the functionality of present lawyering technologies such as decision engines, collaboration and communication tools, document automation, e-discovery and research tools, and legal expert systems. Trending industry concepts like big data and unstructured databases will allow vendors to deliver more robust performance. There will also be an influx of non-lawyer service providers into the legal industry: some wholly consumer-based, some lawyer-focused, and others selling their wares to both consumers and lawyers. The future of manual labor in law looks bleak, for the legal world is gearing up to function in tandem with AI.
Freedom of speech and expression is the bellwether of the European Union (“EU”) Member States; so much so that its censorship would be the death of this most coveted human right. Europe possesses the strongest and most institutionally developed structure for freedom of expression through the European Convention on Human Rights (“ECHR”). In 1976, the European Court of Human Rights observed in Handyside v. United Kingdom that a “democratic society” could not exist without pluralism, tolerance and broadmindedness. However, the recently adopted EU Copyright Directive in the Digital Single Market (“Copyright Directive”) seeks to alter this fundamental postulate of European society by bringing Article 13 to the fore. Through this post, I intend to deal with the contentious aspects of Article 13 of the Copyright Directive, limited to its chilling impact on the freedom of expression. Subsequently, I shall elaborate on how the Copyright Directive has the ability to affect censorship globally.
The adoption of Article 13 of the Copyright Directive hints at the EU’s implementation of a collateral censorship model. Collateral censorship occurs when a state holds one private party, “A”, liable for the speech of another private party, “B”. The problem with such a model is that it vests the power to censor content primarily in a private party, namely “A”. The implementation of this model is known to have an adverse effect on the freedom of speech, and the adoption of the Copyright Directive has contributed towards producing such an effect.
The Copyright Directive envisages a new concept of online content sharing service providers (“service providers”), which refers to a “provider… whose main purpose is to store and give access to the public to significant amount of protected subject-matter uploaded by its users…” Article 13(1) of the Copyright Directive states that such service providers shall perform an act of “communication to the public” as per the provisions of the Infosoc Directive. Further, Article 13(2a) provides that service providers shall ensure that “unauthorized protected works” shall not be made available. However, this Article also places service providers under an obligation to provide access to “non-infringing works” or “other protected subject matter”, including those covered by exceptions or limitations to copyright. The Copyright Directive’s scheme of collateral censorship is evident from the functions entrusted to the service providers, wherein they are expected to purge their networks and websites of unauthorized content transmitted or uploaded by third parties. A failure to do so would expose service providers to liability for infringement of the content owner’s right to communication to the public, as provided in the Infosoc Directive.
The implementation of a collateral censorship model will serve as a conduit for a crackdown on the freedom of expression. The reason emanates from the existence of certain content which necessarily falls within the grey area between legality and illegality. Stellar examples of such content are memes and parodies, and it is primarily in respect of such content that problems of censorship arise. To bolster this argument, consider Facebook, the social media website which boasts 1.49 billion daily active users. As per an official report in 2013, users were uploading 350 million photos a day, and the number has risen exponentially since. When intermediaries like Facebook are faced with implementing the Copyright Directive, the sheer volume of data being uploaded or transmitted will necessarily require them to employ automated mechanisms for flagging or detecting infringing material. The accuracy of such software in detecting infringing content has been the major point of contention against its implementation. Even though content like memes and parodies may be flagged as infringing by such software, automated blocking of content is prohibited under Article 13(3) of the Copyright Directive. This brings up the question of human review of such purportedly infringing content. In this regard, first, it is impossible for any human agency to review large tracts of data even after filtration by an automatic system. Second, even where such content is somehow reviewed, a human agent may not be able to correctly decide its legality.
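The filtering problem described above can be made concrete with a deliberately crude sketch. Real systems (such as YouTube's Content ID) use fuzzy perceptual matching rather than exact hashes, but even this toy version shows the core difficulty: matching tells you that protected material is present, not whether the use is a lawful parody. Everything below, including the "protected work", is invented for illustration.

```python
import hashlib

# A pretend database of protected works, reduced to their fingerprints.
protected_works = {"original artwork bytes"}
blocklist = {hashlib.sha256(w.encode()).hexdigest() for w in protected_works}

def is_flagged(upload: str) -> bool:
    """Flag an upload if its fingerprint matches a protected work exactly."""
    return hashlib.sha256(upload.encode()).hexdigest() in blocklist

print(is_flagged("original artwork bytes"))                    # exact copy
print(is_flagged("original artwork bytes + parody caption"))   # altered copy
```

Note the dilemma: an exact matcher misses the altered parody entirely, while a fuzzier matcher would catch it but could not tell a lawful parody from infringement. That judgment call is exactly what Article 13 pushes onto service providers.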
This scenario shall compel service providers to resort to taking down any content, memes and parodies above all, that may even remotely expose them to liability. Such actions by service providers will certainly censor freedom of expression. Another problem arising from this framework is its adverse effect on net neutrality: entrusting service providers with blocking access to content may lead to indiscriminate blocking of certain types of content.
Though the Copyright Directive provides certain safeguards in this regard, they are belated and ineffective. Consider, for example, the “complaints and redress mechanism” provided by Article 13(2b) of the Copyright Directive, which offers recourse only after content has actually been taken down or blocked. This is problematic because users are either unaware that such mechanisms exist, lack the time and resources to prove the legality of their content, or are simply fed up with repeated takedowns. An easy way to understand these concerns is through YouTube’s ongoing unjustified takedowns of content, which put content owners under the same burdens. Regardless of the reason for content owners’ inaction, censorship is the effect.
John Perry Barlow stated in his Declaration of the Independence of Cyberspace that “Cyberspace does not lie within your borders”. This statement is true to a large extent: cyberspace and the internet do not lie within any country’s borders; rather, their existence is cross-border. Does this mean that the law in the EU affects the content we view in India? It certainly does!
The General Data Protection Regulation (“GDPR”) applies to countries beyond the EU. The global effect of the Copyright Directive is similar, as service providers do not distinguish their European services from those offered to the rest of the world. It makes sense for websites in this situation to adopt a mechanism that applies to every user regardless of his or her location; this is the same reasoning that led service providers to revise their user and privacy policies in every country when the GDPR was introduced. Thus, the adoption of these stringent norms by service providers in all countries alike, owing to the omnipresence of internet-based applications, may lead to global censorship driven by European norms.
The UN Special Rapporteur had envisaged that Article 13 would have a chilling effect on the freedom of expression globally. Subsequent to the Directive’s adoption, the Polish government challenged its applicability before the CJEU on the ground that it would lead to unwarranted censorship. Similar action is likely to follow from other dissenters of the Copyright Directive, namely Italy, Finland, Luxembourg and the Netherlands. In light of this fierce united front, hope hinges on these countries to prevent the implementation of censoring laws across the world.
On 28th June 2019, the National Crime Records Bureau (NCRB) released a Request for Proposal for an Automated Facial Recognition System (AFRS), to be used by police officers to identify potential criminals and suspects across the country.
The AFRS has potential uses in areas like modernising the police force, information gathering, personal verification, and the identification of criminals, suspects and missing persons.
In 2018, the Ministry of Civil Aviation launched “DigiYatra”, a facial recognition system for airport entry. The AFRS is built on similar lines but with much wider coverage and a different purpose. Indian states have also taken steps to introduce facial recognition systems to detect potential criminals, with Telangana launching its system in August 2018.
The Automated Facial Recognition System (AFRS) will be a mobile and web application which will be hosted and managed by the National Crime Records Bureau (NCRB) data centre but will be used by all police stations across the country.
The AFRS works by comparing the image of an unidentified person, captured through CCTV footage, with the images stored at the NCRB data centre. This allows the centre to match images and identify potential criminals and suspects.
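The matching step can be pictured with a simplified sketch. Modern facial recognition systems typically convert each face image into a fixed-length numerical "embedding" and compare embeddings by similarity; the AFRS's actual pipeline is not public, so the vectors, names, and threshold below are all invented for illustration.

```python
import math

# Hypothetical face embeddings for records held at a data centre.
database = {
    "suspect_A": [0.9, 0.1, 0.3],
    "suspect_B": [0.2, 0.8, 0.5],
}
cctv_capture = [0.88, 0.12, 0.31]  # embedding of a face from CCTV footage

def cosine_similarity(a, b):
    """Similarity of two embeddings: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Compare the capture against every stored record; report a match only
# if the best score clears a confidence threshold.
THRESHOLD = 0.95
best_id, best_score = max(
    ((pid, cosine_similarity(cctv_capture, emb)) for pid, emb in database.items()),
    key=lambda pair: pair[1],
)
match = best_id if best_score >= THRESHOLD else None
print(match, round(best_score, 3))
```

The threshold choice matters for the accuracy concerns raised later in this piece: set it too low and innocent people are flagged; set it too high and genuine matches are missed.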
The system can reportedly match facial images despite changes in facial expression, angle, lighting, direction, beard, hairstyle, glasses, scars, tattoos and other marks.
The NCRB has proposed to integrate the AFRS with multiple existing databases. These include the Crime and Criminal Tracking Network & Systems (CCTNS), introduced in 2009 after the Mumbai attacks as a nationwide integrated database of criminal incidents connecting FIR registrations, investigations and chargesheets across police stations and higher offices; the Interoperable Criminal Justice System (ICJS), a computer network that enables judicial practitioners and agencies to electronically access and share information; and the Khoya Paya portal, which is used to trace missing children.
In August 2017, the Supreme Court in its historic judgment in K.S. Puttaswamy v. Union of India declared the right to privacy a fundamental right under Article 21 of the Indian Constitution. The Court asserted that the government must carefully balance individual privacy against the legitimate concerns of the state, even where national security is at stake. It also held that any invasion of privacy must satisfy the triple test, i.e., need (a legitimate state concern), proportionality (the least invasive means) and legality (backed by law), to ensure a fair and reasonable procedure free of selective targeting and profiling.
Privacy infringement without legal sanction, through executive action alone, would violate the fundamental right to privacy and disregard the Supreme Court’s directive. Cyber experts view such a system as a potential tool of government abuse that risks citizens’ privacy; since the country lacks a data protection law, citizens would be vulnerable to privacy abuse.
Moreover, investigating agencies in the United States, such as the FBI, operate what is probably the largest facial recognition system in the world, and cyber experts and international institutions have criticised the Chinese government for using surveillance and facial recognition to monitor the Uighur community in China. Meanwhile, the system deployed in London has reportedly achieved an accuracy of barely 2%, making it unreliable, and the city faces calls to discontinue it in order to safeguard the privacy of its citizens.
Finally, such a tracking system impinges upon human dignity by treating every person as a potential criminal or suspect. There are no clear guidelines on where such cameras are to be placed; they will put every individual, including the innocent, under surveillance. Such surveillance would create fear amongst citizens, with long-term implications.
A rise in the crime rate poses a daunting challenge to investigating agencies, and robust measures must be undertaken to counter it. However, such measures should be ably backed by law and should not impinge upon the dignity and the right to privacy of citizens.
The Data Protection Law drafted by the Justice Srikrishna Committee should be enacted by the Parliament to give legal sanction to such surveillance. Furthermore, the AFRS should be used cautiously to prevent any violation of the fundamental right to privacy.
The AFRS has the potential to bring a paradigm shift in the criminal justice system, provided its use is well-intentioned and stays within a democratic framework that ensures the right to privacy and limits state surveillance.
Integration of advanced technology into society has become increasingly normal in the past five years as innovative minds constantly push the boundaries of what is achievable and what can be realistically implemented. A large part of this resurgence of complex applied sciences is due to the proliferation of artificial intelligence (AI) into various aspects of everyday life during the 21st century. Ever since its introduction, it has become a staple of the modern world and is being used in a wide variety of ways, from performing day-to-day human tasks to performing functions too precise or nuanced for humans, including more creative purposes such as self-driving cars, news anchoring, writing movie scripts and making music that is virtually indistinguishable from music composed by a human. In light of these advancements and the pace at which they have come, it is no exaggeration to say that AI is here to stay for the foreseeable future.
One area that has received considerable attention in the recent past is the creation of art by artificial intelligence programs. Though there was initially some apprehension about whether any serious progress could be made in the field, the technology has grown remarkably in the past four years. As late as October of this year, Christie’s in New York sold, for $432,500, a painting created by an algorithm (named ‘Obvious’) developed by three artists. The program was made to observe thousands of portraits as ‘sample pieces’ in order to learn how to create a similar piece; using characteristics picked up from these images, it then generates a brand new image. This marked the first time an art piece made by an AI was treated as traditional art, that is, something to be sold and admired just like any ordinary artwork. One of the artists behind the AI stated that the target audience was the traditional art market rather than those involved in technology. This marked a new, bold direction for artificial intelligence, with the certainty that such AI-centric methods can be used to produce works that contribute to the field of art, not simply to demonstrate new applications of technology.
While this may seem like a step too far into the future, it might surprise you to learn that the application of artificial intelligence to art is nothing new: it has been around since at least 2014, when the concept of GANs (Generative Adversarial Networks) was introduced. GANs were modeled after the human brain and designed to produce completely original images, different from the samples fed to them. ‘Obvious’ works on this logic, having been trained to produce pieces that are notably distinguishable from the organically created paintings fed to it, while retaining their style. The nature of the program used to create the portrait is strikingly similar to the one used to create paintings in the style of the late Dutch painter Rembrandt Harmenszoon van Rijn. That program ‘learnt’ the style of the painter and subsequently created an independent work that was completely its own, but in the style of Rembrandt.
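The adversarial idea behind GANs can be caricatured in a few lines of plain Python: a toy "generator" with a single parameter nudges itself in whichever direction a "discriminator" scores as more realistic. This is an illustrative sketch under heavy simplification; real GANs train two neural networks against each other by gradient descent, and the fixed discriminator below stands in for one that would itself be trained.

```python
import random

random.seed(0)

# "Real" data the generator should learn to imitate: samples around 5.0.
real_samples = [random.gauss(5.0, 1.0) for _ in range(1000)]
real_mean = sum(real_samples) / len(real_samples)

g_mean = 0.0  # the toy generator's single parameter

def generator():
    return random.gauss(g_mean, 1.0)

def discriminator(x):
    # Scores how "real" a sample looks. Kept fixed here for simplicity;
    # in a real GAN the discriminator is trained in alternation.
    return -abs(x - real_mean)

for step in range(2000):
    fake = generator()
    # Adversarial nudge: move the generator's parameter toward whichever
    # direction the discriminator finds more convincing.
    if discriminator(fake + 0.1) > discriminator(fake - 0.1):
        g_mean += 0.01
    else:
        g_mean -= 0.01

print(round(g_mean, 2))
```

With the fixed seed, the generator's parameter drifts from 0 toward roughly 5, the mean of the "real" data, without ever being told that target directly; it learns it only through the discriminator's feedback, which is the essence of the adversarial setup.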
This is somewhat similar to another example of AI’s entry into the realm of art: Google DeepDream. Introduced in 2015, DeepDream is an image recognition program that uses artificial neural networks, which emulate how the brain receives and processes information, to produce new images by repeating the patterns it perceives in existing images. The factor that separates DeepDream’s neural network from the system behind ‘Obvious’, in terms of application, is that the latter produces entirely new results that have nothing to do with the inputs it receives; they are simply used as reference points. Google DeepDream, by contrast, produces an amalgamation of the repetition it observes in the original pieces fed to it.
The challenge is that questions of ownership become difficult to decide when AI generates original, creative works independently of the humans who created the AI system in the first place. Who is entitled to the licensing rights to the product and to the royalties, and who bears responsibility for copyright infringement and for protecting those rights against future infringement? Take the Rembrandt project, for example. The amount of work that went into it is staggering: roughly 350 paintings were analyzed, and over 150 gigabytes of digitally rendered graphics were collected and used to instruct the AI so that it could effectively recreate a painting in Rembrandt’s style. It is no wonder the question of ownership arises, simply from the sheer number of people who worked on the project: who would bear responsibility and accountability for the artwork generated by the system, and what legal rights could he or she assert?
Despite the increasingly rapid advancement in technology, however, there is no law that recognizes ownership by an artificial intelligence. In the United Kingdom, the work of an author has to be ‘original’ for it to be protected by the Copyright, Designs and Patents Act, as was held in Infopaq International A/S v Danske Dagblades Forening. Section 9(3) of the same Act states that the author of a computer-generated work is the person ‘by whom the arrangements necessary for the creation of the work are undertaken’. United States law takes a similar stance: in 1979, the Commission on New Technological Uses of Copyrighted Works (“CONTU”) stated that any work created with the use of a computer should be afforded copyright protection if it is an original work falling within the purview of the 1976 Act, which treats the author as the one who conceives of the work and fixes it in a tangible medium of expression. India’s Copyright Act, 1957 gives the author rights over his creation for his lifetime plus another sixty years; the reference to a definite lifetime makes it obvious that there is no recognition of AI ownership of artwork. This begs the question: should AI be given legal ownership of its creations?
In exploring possible answers to this question, a key area requiring deeper exploration is the capability of an AI to create purely original work. The programs that have worked thus far, such as Google’s DeepDream and ‘Obvious’, all rely on some preliminary input in order to generate competent results. This means that, regardless of the absence of a human hand in the actual creation of the new art piece, the AI had to base its creations on already existing works. Essentially, the AI used to create these art pieces at the moment simply ‘functions’: it does not ‘think’. It has no knowledge of what it is being used for, let alone why it was built; it only serves to create the art piece. This remains the biggest roadblock to recognizing AIs as capable owners of their works in the eyes of the law, as they cannot own intellectual property without taking cognizance of the creative process they employ.
If a situation arises in which an artificial intelligence can form an intention derived from an intrinsic desire or belief, and can subsequently act upon it, there seems to be a clear-cut case for viewing such a program as a legal person, statutory definitions aside. At the moment, however, the artificial intelligences built to create inventions that can be classified as intellectual property do not build them autonomously: there is always some element of human programming that causes the invention to be created, and the intelligence itself does not have the intent of building it. In his article “Artificial Consciousness: Utopia or Real Possibility”, Giorgio Buttazzo argues that despite current technology’s ability to simulate autonomy, it is not possible for computers on their own to exhibit creativity, emotions, or free will. Consciousness involves several facets, one of which is intent. To say that an artificial intelligence is conscious, it must be shown that its actions result from cognitive thinking, which requires foresight and cognizance of the possible consequences of one’s actions.
It would naturally follow that only a superintelligent AI possessing such consciousness, and able to make artistic pieces on its own, could be granted ownership of its original and creative works. As of today, no superintelligent AI has been developed. When such technology is readily available and used in the public domain, however, the relevant laws could perhaps be altered to allow such systems to own their creations. This would certainly affect the creative processes behind future work, as questions of efficiency and reliability will almost surely be raised. Can a program create a better painting than a human? Could AI entities purchase artistic pieces of their own? Laws relating to sentient AI would have to be carefully constructed, for the line between mere ownership of creative work and these entities becoming a permanent force in the market can easily blur, with potentially chaotic results.
 Andrew J. Hawkins, Uber seeks permission to resume self-driving car testing on public roads, The Verge, https://www.theverge.com/2018/11/2/18056622/uber-self-driving-car-safety-report-testing-pennsylvania
 Lucy Handley, The World’s First A.I. News Anchor has gone live in China, CNBC, https://www.cnbc.com/2018/11/09/the-worlds-first-ai-news-anchor-has-gone-live-in-china.html
 Annalee Newitz, Movie written by algorithm turns out to be hilarious and intense, ArsTechnica, https://arstechnica.com/gaming/2016/06/an-ai-wrote-this-movie-and-its-strangely-moving/
 Brad Merill, It’s Happening: Robots May Be Creative Artists of the Future, MakeUseOf, https://www.makeuseof.com/tag/happening-robots-may-creative-artists-future/
 Ahmed Elgammal, With AI Art, Process is more Important than Product, The Smithsonian, https://www.smithsonianmag.com/innovation/with-ai-art-process-is-more-important-than-product-180970559/
 Hugo Caselles-Dupré, Obvious, explained, Medium, https://email@example.com/ai-the-rise-of-a-new-art-movement-f6efe0a51f2e
 Naomi Rea, Is the Art Market Ready to Embrace Work Made by Artificial Intelligence? Christie’s Will Test the Waters This Fall, Artnet News, https://news.artnet.com/market/artificial-intelligence-christies-1335170
 Tim Schneider and Naomi Rea, Has Artificial Intelligence Given Us the Next Great Art Movement? Experts Say Slow Down, the ‘Field Is in Its Infancy’, Artnet News, https://news.artnet.com/art-world/ai-art-comes-to-market-is-it-worth-the-hype-1352011
 Shlomit Yanisky Ravid & Luis Antonio Velez-Hernandez, Copyrightability of Artworks Produced By Creative Robots and the Originality Requirement: The Formality – Objective Model, 19 MINN. J.L. SCI. & TECH.
 James Temperton, Create your own DeepDream nightmares in seconds, Wired, https://www.wired.co.uk/article/google-deepdream-dreamscope
 Yanisky-Ravid, Shlomit and Moorhead, Samuel, Generating Rembrandt: Artificial Intelligence, Accountability and Copyright – The Human-Like Workers Are Already Here – A New Model, Michigan State Law Review 2017.
 Can Copyright subsist in an AI generated work?, Clifford Chance: Talking Tech, https://talkingtech.cliffordchance.com/en/ip/copyright/ai-and-ip–copyright-in-ai-generated-works–uk-law-.html
  ECDR 16.
 Copyright, Designs and Patent Act 1988, Section 9(3).
 Ryan Abbott, “I Think, Therefore I Invent: Creative Computers and the Future of Patent Law”, B.C.L. Rev. 57(4), 1079 (28 September 2016).
 Arya Matthew, Protection of Intellectual Property Rights under Indian and International Law, Altacit Global, https://www.altacit.com/publication/protection-of-intellectual-property-rights-under-the-indian-and-international-laws/
 David Calverley, Imagining a Non-Biological Machine as a Legal Person, 22 AI & Society (2008).
 Giorgio Buttazzo, Artificial Consciousness: Utopia or Real Possibility, IEEE, July 2001, available at https://pdfs.semanticscholar.org/c505/98f38ae1d10546513166f564e115b06df83e.pdf
 Ben Allgrove, “Legal Personality for Artificial Intellects: Pragmatic Solution or Science Fiction?” (June 2004) (Master of Philosophy thesis, University of Oxford), available at https://ssrn.com/abstract=926015