[This post has been authored by Suvam Kumar, a 3rd year student at National Law University, Jodhpur.]
The COVID-19 pandemic has exposed the frailty of mankind’s societies and systems. In spite of the tremendous progress humans have made in several fields of life, we are rendered helpless by the rapid and uncontrolled spread of the coronavirus. In these crucial times, the role of Artificial Intelligence (“AI”) becomes very important, and countries like China, the USA, Canada, Australia, and India have leaned on AI to fight the pandemic. The use of AI has also been approved by the World Economic Forum (“WEF”), which has emphasized the role of AI as a panacea in the fight against this pandemic. However, the widespread use of AI is not without its own challenges and risks. There are serious concerns regarding the application of AI in the health sector, especially during a pandemic like COVID-19; however, they can be mitigated by a legal regime that regulates AI effectively and conscientiously.
AI and COVID-19 – Mitigating the risks
AI and machine learning are playing a crucial role in the fight against COVID-19. An AI system developed in Canada has proved to be a major success, as it delivered results faster than the devices approved by the WHO. AI-enabled devices are also helping countries diagnose patients and assisting in the development of effective vaccines. Additionally, countries are using AI-based chat-bots to provide people with information and awareness regarding the coronavirus.
However, the use of AI comes with its own risks, some of which are as follows:
- Privacy: AI-enabled devices like chat-bots, robots etc. collect immense amounts of personal data when interacting with users and providing information about COVID-19. Basic information like name, gender, email address, phone number, and national identification details is a prerequisite for many AI devices. Moreover, governments are often not transparent about their policies, which has led to leakage of user data in the past. Hence, breach of privacy is one of the biggest concerns, and a pandemic only exacerbates these concerns unless data collection is strictly regulated.
- Cyber-crimes: With cities placed under lock-down all over the world, people are forced to rely on online services, which opens the door to cyber-crime. It is reported that many health organizations, hospitals, and even the WHO have witnessed instances of cyber-crime, like malware infections and ransomware attacks, which involve the use of AI.
- Potential for biased results: According to research conducted at the Stanford University School of Medicine, the algorithms underlying AI-enabled devices can produce biased outcomes in the medical recommendations they generate. In particular, raw data about a patient’s health might be used without regard to actual clinical experience.
Though these risks are severe, they can be mitigated by creating an effective legal regime to oversee the functioning of AI. The primary elements of such a regime would be the following:
- Strict data protection laws: In order to avoid a clash between the right to privacy and public health, it is important to have clear and strict regulations that protect the privacy of persons regardless of the prevailing circumstances. The European Union’s General Data Protection Regulation (“GDPR”) is one example wherein effective regulation has secured the personal data of users and reduced the threat to privacy. Therefore, AI should be deployed amid COVID-19 only after ensuring compliance with such laws.
- Consent-based data processing: All processing of personal information should be carried out only after obtaining the necessary consent from users. Moreover, when AI-enabled services are offered to children, a parental consent system must be put in place, requiring a parent’s consent before a child’s data is processed.
- Clear and unambiguous terms and conditions: Users should be fully aware of the terms and conditions of using any AI-based service. This is especially important in the case of senior citizens and children. Additionally, the use of AI chat-bots must be preceded by a notice stating the terms of engagement and giving users an option to opt out of the data collection program.
- Data processing agreements: Data controllers must also sign a data processing agreement with any parties that act as data processors on their behalf. As per Article 28 of the GDPR, a data processing agreement is a legally binding contract which lays down the rights and obligations of each party regarding the protection of the personal data of their users. This is intended to make the procedure transparent and accountable.
- Data protection impact assessments: Companies offering AI solutions should conduct a data protection impact assessment on such devices, testing them across a variety of scenarios before launching them. This helps prevent the threat of rogue chat-bots.
With increasing dependence on machine learning and artificial intelligence in the health sector, there are serious issues with respect to the privacy and data security of individuals. AI-enabled medical services essentially introduce a third party into the fiduciary relationship between patients and doctors, which leads to thorny legal and ethical questions that merit serious discussion and debate. Since AI-based clinical services purport to represent the future of the pharmaceutical sector, it is important to address issues relating to data protection and privacy before such initiatives are launched. The benefits of AI in the health sector, therefore, cannot be realized without a careful examination of the legal and ethical risks and the planning of an effective response to the same.