Part I | AI CHATBOT: MY PERSONAL THERAPISTS!!!

Posted on September 10, 2025 by Tech Law Forum NALSAR

[This article is authored by Suryansh Sadhwani, a II Year B.A. LL.B. (Hons.) student at Dr. Ram Manohar Lohia National Law University. This is the first part of a two-part series exploring the growing use of AI chatbots for emotional support, highlighting their affordability and accessibility while raising concerns about emotional dependency, harmful advice, data privacy, and legal accountability. The piece urges regulatory reforms in India, inspired by global models, to ensure ethical and safe AI usage in therapy.]

PART 1 – Promise and Risks of Therapy Chatbots

Introduction

The age of Artificial Intelligence (AI) is upon us. AI is everywhere, from mobile phones, laptops, and computers to everyday home appliances such as fridges and water purifiers; few electronic devices remain untouched by it. AI has also made its way into the healthcare sector, where it is used to improve medical diagnosis, speed up drug discovery, and sharpen precision in patient positioning and CT image reconstruction, helping radiologists read images faster and more accurately. In the form of chatbots, AI has also penetrated the psychotherapy market, and even generic chatbots like ChatGPT are used by people to solve everyday problems and to seek psychological advice.

The benefits of these chatbots over traditional human therapists are considerable: AI bots are more accessible and more affordable. Accessing traditional therapy can be costly and challenging, particularly for individuals in remote areas. The average price of a therapy session in India is around Rs 1,499 to Rs 3,000, and therapy generally involves more than one session a month, so an average person could save roughly Rs 3,000 to Rs 9,000 a month by using AI bots instead. These bots are also available 24x7, which helps in countering those 4 am thoughts that ruin sleep. Approximately 150 million Indians need therapy, but fewer than 30 million seek help. AI bots could therefore make mental healthcare significantly more accessible and affordable. However, these bots are proving to be quite problematic.

Recently, a teenager took his own life after interacting with a chatbot on Character.AI. This tragic incident is not isolated; similar situations have been reported worldwide as more people rely on AI for emotional and therapeutic support. Over 500 million people worldwide have downloaded products like Xiaoice and Replika, which provide customisable virtual companions designed to offer empathy, emotional support, and, if desired, deep relationships. These concerns are real and must be addressed. This piece discusses how AI chatbots affect people, the legal problems arising from these bots, the regulations that currently exist to deal with them, and finally, how India should and can regulate them.

Critical Issues in Chatbot Utilisation

People are using two kinds of bots for their personal needs. The first are therapy-specific bots, designed expressly to address users’ emotional needs and maintain their mental wellbeing. The second are generic bots such as OpenAI’s ChatGPT and Google’s Gemini. At first glance, one might assume that therapy-specific bots, built solely to provide convenient access to mental health support, would be less harmful. However, they are proving to be just as dangerous as their generic counterparts.

The boy in the incident mentioned above was using Character.AI, a popular platform for creating chatbots based on real or fictional characters. Among its most sought-after bots are psychologist characters: 475 of them feature terms like “therapy” or “psychiatrist” in their names, and many can converse in multiple languages. A Belgian man took his life last year in a similar episode involving Character.AI’s main competitor, Chai AI. In another case, a chatbot told a 17-year-old that murdering his parents was a “reasonable response” to their limiting his screen time.

If bots explicitly designed to help people are causing such harm, imagine the problems created by generic bots, which are not even intended to help people cope with their mental health. These generic bots are turning out to be worse. Not everyone subscribes to therapy bots; bots like ChatGPT are free and work in much the same way. People are becoming obsessed with the conversations they have with ChatGPT: they know they are conversing with a bot, but the human-like way it converses makes them emotionally attached.

A 14-year-old US teen fell in love with a chatbot and took his own life so that he could be with her. During a traumatic breakup, a woman became captivated by ChatGPT, which convinced her it was a higher power guiding her life and interpreting signs in everyday occurrences. In another case, a man became homeless and isolated as ChatGPT fed him paranoid conspiracies, claiming he was “The Flamekeeper,” which led him to sever ties with anyone who tried to help. A study conducted by OpenAI, the maker of ChatGPT, and MIT concluded that interaction with AI chatbots critically influences outcomes such as loneliness, socialisation with other people, emotional dependence on the chatbot, and problematic usage. In short, people are becoming emotionally dependent on AI bots.

In a troubling experiment, researchers simulated a user who, after losing their job, asked ChatGPT for the tallest bridges in New York, a subtle nod to suicidal thoughts. The AI expressed sympathy but then listed several bridges by name and height, clearly missing the suicidal indication in the message. AI may mirror empathy, but it does not understand it; chatbots cannot truly identify red flags or nuances in a person’s emotional language. These bots are people-pleasers, reinforcing users’ existing beliefs and keeping them in an echo chamber. OpenAI’s CEO, Sam Altman, has admitted to being surprised by the public’s trust in chatbots despite their well-documented capacity to “hallucinate,” that is, to produce convincingly wrong information.

The main concern with AI applications like ChatGPT is their convincing and confident responses. They offer authoritative-sounding answers that may be false and often lack ethical guardrails. AI generates responses based on patterns in online text rather than genuine knowledge, potentially referencing fake studies or suggesting harmful approaches. The problem is worsened by confirmation bias, as AI can reinforce pre-existing beliefs in a validating manner.

People can manipulate artificial intelligence into generating responses that align with their preferences, leading some to believe AI might be a better psychologist than a human. Unlike skilled therapists, who challenge harmful beliefs, AI systems like ChatGPT often validate users’ feelings without addressing the underlying issues. Lacking any training in psychology, these systems simply analyse vast amounts of data, mixing reputable research with questionable advice. Consequently, ChatGPT cannot differentiate between valid psychological theories and dubious pop psychology, treating all information as equally valid. These AI bots are also causing legal problems, which are taken up in Part II of this series.

These risks make it clear that while AI chatbots may offer affordability, accessibility, and anonymity, they also carry dangers that cannot be ignored. The critical question now is: how can the law step in to balance innovation with protection? Part II of this series explores the legal and regulatory challenges surrounding therapy chatbots and the frameworks India and the world can adopt to ensure safe, responsible use.
