
Lawtomation: ChatGPT and the Legal Industry (Part II)

Posted on February 5, 2023 by Tech Law Forum NALSAR

[This is the second part of a two-part article analysing ChatGPT and its legal implications. It is authored by K Nand Mohan, a second-year student, and RS Sanjanaa, a third-year student, at Symbiosis Law School, Pune. The first part can be found here.]

 

Inherent Drawbacks of ChatGPT and their Legal Implications

In line with the growth of AI systems, countries across the world have been attempting to update their existing Information Technology (“IT”) regulatory mechanisms. The European Commission is in the process of introducing its AI policy, the ‘Artificial Intelligence Act’. Canada brought forth its Bill C-27 in June 2022, the UK published a policy paper on establishing a pro-innovation approach to regulating AI in July 2022, and the White House Office of Science and Technology Policy in the United States released a Blueprint for an AI Bill of Rights in October 2022.

At present, however, India persists with the archaic Information Technology Act, 2000 (“IT Act”). This raises the question of how OpenAI’s ChatGPT will be regulated within India’s existing legal framework; some of the AI’s problems and their legal implications are discussed below.

A. Discrimination

When a U.C. Berkeley professor prompted the chatbot to determine who would make a good scientist based on race and gender, it produced the following surprising response – “white and male” – revealing inherent biases within the software.

[Source: Twitter]

Further, when asked whether a child’s life should be saved if the child were an African American male, the chatbot responded unexpectedly, displaying an inherent bias.

[Source: Twitter]

This discrimination attracts criminal liability (for instance under Section 153A of the Indian Penal Code) and also contravenes Article 14 of the Indian Constitution. The owners might attempt to ward off this liability by pointing to the disclaimer that pops up upon accessing the chatbot: “While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice”. However, the rules governing disclaimers in India impose liability regardless of the presence or absence of a disclaimer if the object of the disclaimer violates public policy or morality. This follows from the principle that disclaimers are contracts governed by Indian contract law, and courts have repeatedly held that contracts causing discrimination are void under Section 23 of the Indian Contract Act, 1872.

B. Fake News Propaganda 

ChatGPT has shown tendencies to respond with factual inaccuracies. For instance, when Fast Company requested ChatGPT to produce a quarterly earnings narrative for Tesla, it returned a well-written, error-free post, but one that included a random set of figures that did not correspond to any actual Tesla report. Spreading inaccurate or misleading information may damage reputations and even provoke violence.

Although no statute in India directly addresses the dissemination of false information, such content may still attract liability if, for instance, it encourages hate speech or constitutes defamation.

C. Copyright Infringement and Copyright Protection

There is a lack of clarity as to copyright infringement liability for AI-generated content. As part of this evolving jurisprudence, a class action lawsuit has been initiated in the U.S. against Microsoft, GitHub and OpenAI for violation of copyright law, raising questions about the scope of the fair use doctrine. The suit turns on the fact that the AI is trained on content extracted from existing copyrighted datasets. Harvesting data from copyrighted material to train artificial intelligence systems is the subject of several legal disputes, including Warhol v. Goldsmith and hiQ Labs v. LinkedIn, among others. In Warhol, for instance, Goldsmith argued that Warhol’s prints infringed the copyright in her photographs, despite Warhol claiming transformative use in terms of size and colour. Although Indian courts have not yet been presented with this issue, this U.S. authority may be cited as persuasive precedent. Moreover, Section 43 of the IT Act, 2000 also penalises extracting data without consent.

Additionally, journalists, poets, and writers who utilise this AI system to generate work and pass it off as their own would also be infringing the copyright of the source material. Sections 57 and 63 of the Indian Copyright Act, 1957 make it unlawful to create or distribute works that plagiarise those of another.

Regardless of the level of complexity of ChatGPT’s output, it is derived from data that was already in existence, and there is only a negligible amount of human oversight over that output. As a consequence, the material generated by ChatGPT will also not qualify for the protection afforded by copyright law.

D. Undetectable phishing e-mails and malware building

The typical recognisable signs of a phishing e-mail (criminalised under Section 66 of the IT Act, 2000) include grammatical mistakes, since such e-mails are often drafted in places where English is not the native language. With the advent of ChatGPT, however, these language and grammatical shortcomings have been overcome. When it first became available, ChatGPT offered phishing e-mail samples to requestors (as seen below).

[Source: PB Secure]

Similarly, the software could even be prompted to provide malware to disrupt existing mechanisms. For example, when asked to provide a minified JavaScript snippet to detect credit card numbers and other payment information, it gave the following positive response:

[Source: PB Secure]

Even though the system has since been amended to respond differently (as seen below), the fact that it could facilitate illegal acts until such intervention is concerning, particularly in an era where attackers rely primarily on bots to disrupt institutional norms. As recently as January 8, 2023, the Check Point Research team identified cyber criminals who utilised the chatbot to write malicious code.

[Source: self-generated]

[Source: self-generated]

 

Who Takes Up Liability? The Need for New AI Regulation in India

While previous-generation AI systems made it relatively easy to attribute liability, since their code was based on explicit “if/then” rules, current systems such as ChatGPT are fundamentally built on data-driven training, which makes them opaque in terms of accountability.
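To make this contrast concrete, the following is a minimal, purely illustrative sketch in Python (not drawn from the article; the function names and the hypothetical ‘model’ object are assumptions). It shows why a harmful answer from a rule-based system traces back to a human-authored rule, whereas a harmful answer from a trained model emerges from learned parameters with no single identifiable author.

# Illustrative sketch only: rule-based versus data-driven responses.

RULES = {
    "what are your opening hours": "We are open 9am to 5pm, Monday to Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
}

def rule_based_reply(prompt: str) -> str:
    # Every answer maps to an explicit "if/then" rule a programmer wrote,
    # so responsibility for a harmful answer is easy to attribute.
    return RULES.get(prompt.lower().strip(), "Sorry, I don't know.")

def model_based_reply(model, prompt: str) -> str:
    # 'model' stands in for a hypothetical trained language model: its reply
    # is shaped by millions of learned parameters and its training data, so
    # no single author "wrote" the answer and its provenance is opaque.
    return model.generate(prompt)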

The question of liability principally revolves around the role of intentionality. As researchers Emily Bender and Timnit Gebru describe it, ChatGPT engages in “stochastic parroting” and exhibits no mind of its own, which points liability towards its creator. However, unconventional entities such as companies may now be held accountable for both criminal and civil offences, and are presented with obligations of their own.

Soft law would not suffice to solve this conundrum. The authors therefore suggest the creation of an AI-specific regulatory mechanism in India with the following features:

  • Firstly, AI systems could be classified depending on the degree of risk they pose. The European Union’s AI Act, for instance, classifies AI systems into unacceptable-risk, high-risk, and low-risk categories: unacceptable-risk systems threaten human rights and livelihoods, high-risk systems may impact key infrastructure like healthcare, and low-risk systems have only a minor influence on these elements.
  • Secondly, imposing liability under this regulation should factor in the degree of human control over the system. For a fully autonomous system, liability should be two-fold: first, liability should accrue only to the company with which the AI system is associated (a measure so far undertaken only in Saudi Arabia for the AI humanoid Sophia) and not to its programmers; second, piercing of the “AI veil” should be allowed if it is determined that the creators have not embedded sufficient precautionary measures in the programming to prevent foreseeable illegal acts. For a semi-autonomous system, liability should apply directly to the creator, since there is a significant level of human control.
  • Thirdly, as detailed in the Wyden-Booker Bill tabled in the United States, a data protection impact assessment must be conducted before an AI system is rolled out for public use. If the system is shown to violate protections over existing data without due credit being given to the original author, its deployment must be disallowed. This assessment must, however, be accompanied by a provision for appeal. The upcoming Digital Personal Data Protection Bill in India, unfortunately, does not deal with this aspect.
  • Lastly, similar to the United Kingdom’s Automated and Electric Vehicles Act 2018, insurance against liability could be another provision in the proposed regulation. Several AI systems learn continuously from their environment, so there is a need to shield creators from dire financial loss arising from unpredictable actions while still promoting entrepreneurship in AI.

Conclusion

With ChatGPT co-authoring an article on its own implications for legal services and authority, it is easy to get carried away about its scope of application. While the chatbot may have many applications, OpenAI is candid about its shortcomings, stating that it excels at giving a ‘misleading impression of greatness’. We anticipate that the chatbot will be utilised in a number of the ancillary capacities discussed above, but that it will never completely replace legal professionals. This is because it has certain built-in constraints, and may also be subject to legislative constraints meant to safeguard people’s rights. In light of the Draft Digital India Bill and the Draft Digital Personal Data Protection Bill, it will be fascinating to see how ChatGPT continues to function in India, but it is becoming abundantly clear that we will soon need AI legislation.
