Tech Law Forum @ NALSAR

A student-run group at NALSAR University of Law


Policy Lessons for India from Europe’s Artificial Intelligence Act

Posted on July 2, 2022 (updated December 27, 2024) by Tech Law Forum NALSAR

[Ed Note: The following post is part of the TLF Editorial Board Test 2021-22. It has been authored by Mehreen Mander, a fourth-year student of NALSAR University of Law.]

Of late, the Government of India has been prioritizing the development, adoption and promotion of Artificial Intelligence. In 2018, substantial funding was allocated to the national programme for artificial intelligence. Many Union Ministries are also making great strides in the field of AI. In 2017, the Union Ministry of Commerce and Industry set up an AI Task Force which, in its report, hailed the potential of AI as a solution to many socio-economic problems across ten identified sectors. Furthermore, the Union Ministry of Electronics and Information Technology set up four committees to prepare a roadmap for the National Artificial Intelligence Programme.

However, India does not have any regulatory framework governing AI. NITI Aayog’s 2021 approach document “Responsible AI #AIFORALL” acknowledges that the existing frameworks that come closest to addressing AI — the Information Technology Act, 2000 and the proposed Personal Data Protection Bill, 2019 — fall far short of addressing the specific risks posed by artificial intelligence.

A regulatory framework is necessitated by the very design of AI systems. AI systems are complex and opaque (‘the black box problem’), and they may evolve and change their behavior over time, giving rise to new risks. Given that many AI systems are autonomous and do not need human intervention to perform a task, in certain sectors this may lead to disastrous results for public safety. Furthermore, AI is completely driven by data, which means that the kind of data input provided may lead to errors, reinforce systemic biases and produce discriminatory and undesirable consequences.

Thus, AI systems by their very design can put public safety, privacy and individuals’ fundamental rights at risk. In cases of breaches, competent authorities require a procedural framework under which to proceed. On an ethical level, AI should be developed in a manner that minimizes the risk of such breaches. From the perspective of businesses, the lack of a regulatory mechanism creates an atmosphere of legal uncertainty and dissuades investment. Mistrust of technology also hampers its progress and development. Thus, in light of the commercial and policy impetus being given to AI, it is high time that Indian policymakers woke up to the need for a regulatory framework. This article seeks to derive useful policy lessons from Europe’s proposed Artificial Intelligence Act, 2021.

Europe’s AI Act

The European Commission’s proposed Artificial Intelligence Act (“AI Act”) is the first AI regulation of its kind. With a view to putting forward a uniform “European approach” to the human and ethical implications of AI, the AI Act addresses the twin objectives put forward by the White Paper on AI – A European approach to excellence and trust: promoting the uptake of AI and addressing the risks associated with it.

The scope of the regulation extends largely to providers, i.e., entities that develop an AI system and place it on the market or put it into service for their own use, and users, i.e., entities that use an AI system under their authority, excluding personal, non-professional activities. Jurisdictionally, the AI Act extends to all users and providers located inside the EU, and to users and providers located outside the EU when the output of their AI systems is used within the EU.

The most notable feature of the AI Act remains its ‘risk-based approach’, in place of a blanket regulation for all AI systems. It classifies AI systems into three tiers based on the degree of risk posed: unacceptable risk, high risk, and limited risk.

AI systems posing unacceptable risks are prohibited altogether. These include social scoring and real-time remote biometric identification for law enforcement, among others. Those posing high risk are subject to strict obligations, including but not limited to ex-ante conformity assessments and ex-post monitoring assessments. Those posing only limited risk carry limited transparency obligations; this category is also encouraged to self-regulate by implementing internal codes of conduct. High risk AI systems include those used in law enforcement, healthcare, education and employment, and in dispatching emergency services, although not every AI system operating in a high risk sector is necessarily a high risk AI system.

The AI Act also provides for enforcement mechanisms, among other things, which are particular to the EU and thus fall outside the narrow scope of this article.

Policy Lessons from the AI Act

Any regulation of AI must co-exist with sector-specific regulations. It must therefore intervene well before the stage of application, at the stages of the input of training data and of the models or decision frameworks that emerge from that data.

The AI Act emphasizes the need for high-quality data sets. A high-quality data set would address the problem of bias to a large extent; however, it would be key to ensure that data sets are collected in a manner that does not repeat systemic and historical patterns of discrimination. The concern for future-proofing is writ large across the AI Act: the idea is to have a definition of AI systems broad enough to include unforeseeable future innovations, but still precise and clear. Further, the geographical jurisdiction of the regulation ensures that liability is affixed on the person best equipped to deal with the risk and cannot be denied on the basis of location, effectively creating accountability.

A risk-based approach ensures that the regulation is focused and efficient, and that no unnecessary obstacles are placed on the market. For high risk AI systems, there are ex-ante conformity assessments, after which providers are required to register with the competent authority, as well as ex-post monitoring checks. The Act also recognizes the problem of opacity in automation and mandates human oversight of AI systems.

All policy-making surrounding AI in India so far has looked at AI as an economic opportunity, and little attention has been paid to the legal and ethical implications, which could be crucial considerations even at the stage of innovation. The AI initiatives thus far have not sought any engagement from civil society, limiting the interests represented in these forums to those of government officials and industry specialists. Furthermore, the limitations of data-driven decision-making have not been addressed at all. There can be no presumption of fairness, accuracy or appropriateness when there is no assurance that the data sets account for social differences and have been checked for bias.

Conclusion

The proposed AI Act has been criticized by commentators, lawyers, and members of civil society, among others, for leaving certain gaps. It has been pointed out that while the accompanying recitals raise the concern of algorithmic bias, the regulation itself does not address it. The regulation also does not impose any obligations upon Big Tech in terms of addressing advertisement data, social media algorithms and the like, leaving these to future regulators.

However, the proposed AI Act, the first of its kind, holds valuable lessons in policymaking that Indian law-makers can learn from. Especially in light of the impetus being given to the development of AI across sectors, it would be prudent to put a robust regulatory mechanism in place sooner rather than later, and to ensure that regulation can happen at the stage of development rather than only at application. While considering the European approach, needless to say, the particularities of the Indian context must be accounted for.
