[This article is authored by Suryansh Sadhwani, II Year B.A. LL.B. (Hons.) student at Dr. Ram Manohar Lohia National Law University. This is the second part of a two-part series on AI chatbots as personal therapists. While the first part explored their promise and risks in making therapy more affordable and accessible, this part examines the legal and regulatory landscape, addressing issues of accountability, data privacy, and applicable laws in India and the European Union, while offering recommendations for a safer and more balanced framework.]
PART 2 – Regulation, Accountability, and the Way Forward
In the first part of this series, we explored how AI chatbots are transforming mental healthcare by making therapy more affordable and accessible. However, we also highlighted significant risks, including emotional dependence, misguided advice, and potential harmful outcomes. In this second part, we will focus on the legal and regulatory landscape. How should India and the world respond to the challenges posed by therapy chatbots? We will discuss issues of accountability, data privacy, existing laws in India and the European Union, as well as provide recommendations for a safer and more balanced approach moving forward.
Regulatory and Legal Challenges in Chatbot Usage
AI chatbots present a range of legal and regulatory challenges. The law tends to adapt slowly to rapid technological change, and new technologies raising urgent questions about their legal status emerge constantly. The pace of AI development and the widespread use of AI chatbots make it difficult to frame appropriate regulations for these tools. While the regulatory challenges are many, this section focuses on accountability, data privacy, and the distinction between professional and AI-generated opinions.
1) Accountability
Who can be held liable for harm caused by a response the AI generates for a user's query? Can the developer of the AI be held accountable for the damage, or is the AI itself liable? The question echoes the familiar debate over who can claim copyright over AI-generated text.
Placing liability solely on AI while freeing companies from accountability leaves users without necessary remedies or compensation for harm. The risks of chatbot misuse, including links to self-harm and suicide, call for a re-evaluation of accountability structures.
Determining liability is complicated because some bots are designed for therapeutic purposes, while others are general tools. Platforms like ChatGPT and Meta host various bots using large language models, further complicating the issue of who is responsible.
2) Data Privacy
Traditionally, human therapists are required to maintain the confidentiality of their patients and can only disclose information in rare circumstances. In contrast, AI bots often store data in the cloud or on the company’s servers, which may be located within the country or even in foreign jurisdictions. When data is stored abroad, it may be subject to investigation under different legal frameworks.
Several incidents highlight the risks of data leakage. In 2023, BetterHelp was fined by the FTC for sharing users’ mental health data with advertisers. In March 2023, a bug in OpenAI’s ChatGPT exposed the titles of some users’ conversation histories to other users, and Replika has faced criticism for failing to establish adequate safeguards for sensitive emotional exchanges. These cases demonstrate that sensitive user data in AI therapy contexts is highly vulnerable to breaches and misuse.
Applicable Laws and Regulations Addressing Chatbot Risks
This section will analyse the existing regulations in India and the European Union (EU) and how they can be used to regulate these bots when used for therapy. The EU is included because its General Data Protection Regulation (GDPR) is widely regarded as the global benchmark for data protection and has influenced privacy frameworks across jurisdictions.
1) India
To provide a structured analysis, this section is organised into three sub-sections: (i) therapy bots that are classified as medical devices and their regulation; (ii) generic AI chatbots and the accountability gaps they create; and (iii) privacy and data protection obligations relevant to both categories of bots.
i) Therapy Bots as Medical Devices and Their Regulation
India, at present, does not have legislation that deals explicitly with AI. However, therapy bots that claim to be medical can be regulated as medical devices. A medical device may be defined as a device, including software or an accessory, intended by its manufacturer to be used specially for human beings for one or more specific purposes such as the diagnosis, prevention, monitoring, treatment, or alleviation of any disease or disorder. Therapy bots can be classified as medical devices since they aim to help individuals improve their mental health and provide diagnoses when needed.
Section 3(b)(iv) of the Drugs and Cosmetics Act, 1940 allows the Central Government to declare, by notification in the Official Gazette, devices used in the diagnosis, treatment, mitigation or prevention of disease or disorder in human beings to be medical devices. Recognising therapy bots as medical devices would bring them under the regulatory authority of the Central Drugs Standard Control Organisation (CDSCO) and subject them to the Medical Devices Rules, 2017 and the Drugs and Cosmetics Act, 1940.
If therapy bots are recognised as medical devices, consumers could hold their developers liable under the product liability regime primarily governed by the Consumer Protection Act, 2019 (CPA). Product liability means the responsibility of a product manufacturer or product seller, of any product or service, to compensate for harm caused to a consumer by a defective product manufactured or sold, or by a deficiency in services. Section 2(22) of the CPA defines harm in product liability cases to include personal injury, illness or death, and mental agony or emotional distress attendant to personal injury or illness. This partially addresses the problem of accountability for AI therapy bots.
ii) Generic AI Chatbots and the Accountability Gap
Generic AI chatbots, like ChatGPT or those hosted by Meta, are not marketed as medical tools, unlike specialised therapy bots. However, they are still commonly utilised for therapeutic or emotional support, leading to a grey area in accountability.
Section 87 of the CPA provides that a product liability action cannot be brought against a product seller if the product was misused, altered, or modified at the time the harm occurred. Using generic bots for therapy would likely fall within this category of misuse. As noted earlier, ChatGPT and Meta also host other bots that run on their LLMs, and people turn to general-purpose bots like ChatGPT more often than to purpose-built therapy bots for their emotional needs, since they are free and often perform better.
AI tools such as ChatGPT function at the crossroads of general utility and specialised influence. Developers who create systems that simulate human empathy and communication, particularly in response to mental health or emotional inquiries, need to take responsibility for any harm that may result. If they know that a significant number of users ask questions related to emotional or mental health, failing to adequately warn, block, or redirect those users could amount to negligence. In this way, the developers of bots like ChatGPT can be held accountable. Another approach is to treat these bots as agents of the companies that develop them, so that the companies can be held responsible under the principal-agent relationship.
iii) Privacy and Data Protection Safeguards
Whether therapy-specific or general-purpose, all AI bots that handle sensitive mental health disclosures must comply with the Digital Personal Data Protection Act, 2023 (DPDP Act). Under the DPDP Act, an AI bot may collect and process a user’s data only if the user specifically consents to such processing, and the user retains the right to withdraw that consent. Sections 11 and 12 of the DPDP Act allow users to obtain information about their data and to seek correction and erasure of personal data. People with severe mental health conditions may struggle with decision-making; the Mental Healthcare Act, 2017 (MHCA) allows them to appoint a nominated representative to assist them temporarily. The rights and procedures for informed consent under the DPDP Act align with the MHCA, emphasising a rights-based approach.
The DPDP Act is a significant step in creating a regulatory framework for personal data protection in India’s digital health ecosystem, particularly for mental health services. It grants individuals rights like informed consent and data access, promoting autonomy and transparency. However, the Act has challenges, including broad exemptions for the State and limited protection for sensitive data, which raises concerns about misuse. To ensure that the digital mental health infrastructure supports care while protecting individual rights, it’s essential to strengthen the DPDP framework and enhance safeguards for sensitive data.
2) European Union
To ensure clarity, the discussion of the EU’s framework is also divided into two sub-sections. The first examines the regulatory obligations under the Artificial Intelligence Act (AIA), which classifies AI therapy bots as high-risk systems. The second addresses the data privacy and protection requirements under the General Data Protection Regulation (GDPR), with a focus on sensitive mental health data.
i) Regulation of AI Therapy Bots under the AI Act
The EU has made significant progress in regulating AI and protecting users’ privacy. It recently passed the Artificial Intelligence Act (AIA), which contains provisions covering chatbots used in healthcare. The AIA categorises AI systems into four risk levels: unacceptable, high, limited, and minimal (or no) risk. AI therapy bots fall into the high-risk category, which faces the most stringent regulation in the EU market. This category covers AI systems that are safety components of regulated products as well as stand-alone AI systems that could harm people’s health, safety, fundamental rights, or the environment if they fail or are misused.
Chatbots classified as medical devices in the EU must comply with the Act’s requirements, which include a risk management system, data governance practices, technical documentation, transparency measures for users, human oversight, accuracy, robustness and cybersecurity, a quality management system, conformity assessments, and automatic log generation. These measures are crucial for the safe and effective use of chatbots in healthcare.
ii) Data Privacy and Protection under the GDPR
The EU’s data privacy regime is governed by the General Data Protection Regulation (GDPR). Under the GDPR, chatbot applications that collect mental health data, such as information about mood, burnout, or disorders, process special category personal data under Article 9, necessitating enhanced protection. This data is identifiable via unique user identifiers and may be stored on servers or cloud services outside the EU, raising concerns. As data controllers, developers must implement strict safeguards under Article 32 and provide clear privacy disclosures before interactions; however, many apps fail to do this effectively. Informed and explicit consent is required for each processing purpose, including the reuse of data for AI training. Additionally, chatbots using profiling and automated decision-making fall under Article 22, which gives users the right to object and seek human review while requiring transparency about decision-making processes. Even if no human reads the chats, staff may access stored data, necessitating thorough data protection training and compliance, as policies alone are insufficient.
Recommendations
India should classify therapy bots as medical devices when they provide mental health support and bring them under the oversight of the Central Drugs Standard Control Organisation (CDSCO). Stronger data protection measures, in line with the Digital Personal Data Protection (DPDP) Act of 2023, must be enforced to require explicit and informed consent, along with safeguards for sensitive health data. Developers should be held accountable for any harm caused, particularly when bots like ChatGPT are misused for therapeutic purposes, by extending product liability and ensuring human oversight in high-risk scenarios.
Additionally, ethical guidelines, emergency escalation mechanisms, age restrictions, and public awareness campaigns are crucial to prevent misuse and emotional dependency. Finally, India should establish a comprehensive AI regulation framework, inspired by the European Union’s AI Act, to ensure the safe and responsible use of therapy bots.
Conclusion
AI-powered chatbots hold great potential for making mental healthcare more accessible by providing affordability, anonymity, and 24/7 availability—benefits that traditional therapy often lacks. However, as this blog has demonstrated, there are significant risks involved, such as emotional overdependence, the possibility of receiving misguided advice, and the mishandling of sensitive personal data. These challenges serve as a reminder that while technology can simulate empathy, it cannot replace the depth of understanding that human therapists provide.
India is currently at a crucial crossroads. By recognising therapy bots as medical devices, implementing strong data protection measures, and ensuring transparency and accountability among developers, we can create a safer environment for those seeking help. Learning from global initiatives like the EU’s AI Act, India can find a balance between innovation and protection.
Ultimately, AI chatbots should be viewed not as substitutes for human therapists but as supportive tools that can assist where access to care is limited, provided there are adequate safeguards in place. With a cautious and compassionate approach, these technologies can become valuable allies in the pursuit of better mental health.