[This post has been authored by Ishita Mundhra, a second-year student at the West Bengal National University of Juridical Sciences]
Introduction
[This post has been authored by Angeline Priety and Nisha Nahata, fourth year law students at Gujarat National Law University, Gandhinagar. Part II can be found here.]
In recent times, data-driven sectors have been going above and beyond to harness technology to consolidate, utilise and analyse data from various sources for efficient functioning. In the insurance industry, these efforts have led to the creation of the insurtech sector. In Part I of this essay, the authors elucidate the emerging models of insurtech and the Indian legal framework governing it. In Part II, we highlight the challenges that the insurtech industry faces and propose recommendations to navigate them.
[This post has been authored by Sanjana L.B., a 4th year student at Symbiosis Law School, Hyderabad.]
In January 2021, India had the highest number of Facebook users at 320 million. This was followed by the United States of America (“USA”), with 190 million users. As of February 2021, about 53.1% of the population of Myanmar were active social media users. These numbers are not only indicative of internet penetration, but also of the audience for user-generated content on platforms like Facebook. This article focuses, firstly, on the need for content moderation on social media by looking at harmful precedents of inefficient moderation, and secondly, on the Indian Government’s approach to content moderation through the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Guidelines”) and recent developments surrounding the regulation of social media content in India.
[This post has been authored by Shamik Datta and Shikhar Sharma, first year students at NALSAR University of Law and National Law School India University respectively.]
End-to-end encryption ensures that intermediaries or third parties do not have access to the content of a message or the identity of the communicating parties. However, Rule 4(2) of the new Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 specifies that all ‘significant social media intermediaries’ must enable the traceability of the first originator of a message. The collected information may be used if and when required by a court of competent jurisdiction or a competent authority under Section 69A of the Information Technology Act, 2000. The information derived via the breaking of end-to-end encryption may be used to investigate offences abetted or caused by the spread of fake news. This includes open-ended offences like disturbing ‘public order’, which are broad in their scope and thus leave wide room for blatant misuse and arbitrary interpretation. The proviso to Rule 4(2) states that intermediaries are not required to reveal the content of the message or any other related information. However, under Rule 4 of the IT (Procedure and Safeguards for Interception, Monitoring and Decryption) Rules, 2009, the government possesses the power to demand the revelation of the content of electronic messages. The government could, upon identifying the user under the 2021 Rules, ask the intermediary to decrypt the content of other messages of the same user under the 2009 IT Rules citing “public order” (for example, citing the user’s history as a spreader of fake news). This would render the proviso to Rule 4(2) of the 2021 Rules meaningless. Therefore, when information about the first originator is gathered by enabling traceability and the powers to disclose the content of the message are exercised, end-to-end encryption is broken. This destroys the very purpose of the cryptographic keys and encryption protocols developed over the years to encode messages and safeguard the identity of their senders.
[This post has been authored by Noyanika Batta, a Senior Associate at Lakshmikumaran & Sridharan Attorneys. She is a 2018 graduate from Gujarat National Law University.]
There exist dichotomous views on the usefulness of surveillance and its relationship with public health. The disease control strategies adopted by states often necessitate extensive surveillance practices that have an overbearing and intrusive effect on the daily lives of their citizens. The debate thus lies in striking the right balance between public health and the need to strengthen public health infrastructure vis-à-vis privacy protection for individual citizens. With the rapid spread of COVID-19 debilitating economies and causing health systems across the globe to crumble, it became imperative for governments and organizations to take immediate action to protect their people. This in turn saw a fierce boom in surveillance technologies dedicated to monitoring whole populations, with governments trying to chart the virus’ trajectory from broad swathes of personal data. This article seeks to examine the disproportionate risks to data privacy caused by the use of invasive and pervasive technologies such as contact tracing across the world.
[Varsha Singh is a fifth-year law student and contributing editor at robos of Tech Law and Policy, a platform for marginalized genders in the technology law and policy field. This essay is part of an ongoing collaboration between r – TLP and the NALSAR Tech Law Forum Blog and is the third post in the series. Previous entries can be found here.]
We live increasingly online everyday lives. Today, internet platforms are at the helm of conversations, dominating interactions and shaping relationships between social actors. These platforms’ power and control play a role in furthering fundamental values such as the right to communication and access to knowledge and information. Policies that govern this control, both at the self-regulatory and state levels, should ensure the protection of such rights and freedoms while ensuring that users can reap these platforms’ benefits. The Ministry of Electronics and Information Technology recently published the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to regulate intermediaries. While these guidelines adversely affect users’ rights and freedoms in general, the adverse effect is amplified manifold for marginalised genders, especially in light of India’s socio-political and cultural contexts.
[This post is authored by Shikhar Aggarwal, a third year student at National Law University, Delhi.]
This article covers the need for, and the rationale behind, the concept of principled Artificial Intelligence (“AI”). It explores the broad contours of the ethical principle of AI responsibility and accountability, analysing how it may be adopted in India. While tort law and product liability may hold the human element behind AI liable for harms caused, these existing frameworks are insufficient for redressing harms caused by autonomous technologies.
[Ed Note: The following post is part of the TLF Editorial Board Test 2020-21. It has been authored by Yashashwini Santuka, a second year student of NALSAR University of Law.]
Advanced systems of healthcare are imperative to the growth of countries, their economies and the well-being of their people. However, developing countries like India are still adapting to emerging technology in public healthcare owing to their resource-constrained settings. Against this backdrop, the use of Artificial Intelligence (AI) in public health is spreading rapidly. Yet effective deployment, and adaptation to AI’s unique features so as to transform public health completely, might take longer given the systemic disparities observed in the country. While AI holds promise for health systems, its uniform implementation may pose risks to traditional patient care systems, patients’ safety, the security of their private medical records, and affordability. Such a situation requires regulators to take a systemic view of the healthcare industry and, where possible, pre-empt the potential impact of the use and regulation of AI. This article explores the contextual limitations of the healthcare industry in India concerning the regulation of technology and AI.
[This post has been authored by Vaibhav Parikh, Legal Counsel at ICICI Bank. Views are personal]
The value of online/mobile banking rose from INR 69.47 billion in 2016-17 to INR 21,317 billion in 2019-20. The provision of data access to third-party firms by banks and other financial institutions has proved to be one of the important drivers of this rapid growth in online/mobile banking, since it has allowed for the introduction of innovative financial services and products to customers (Basel Committee Report on Open Banking, Page 8), such as seamless payment transfers between accounts at different banks, instant payments using the Unified Payments Interface (“UPI”), and the aggregation of all financial accounts onto one dashboard. Gradually, the delivery of financial services and products is also being taken up by non-banking third parties, such as fintech firms. These developments are aspects of open banking and are continuously evolving in nature.
[This is the first part of a two-part post authored by Abhilash Roy and Hrishikesh Bhise, fourth-year students at the National Law Institute University, Bhopal. Click here for Part II ]
The purposes and functions of the internet, as we know it today, have grown manifold since its inception over thirty years ago. Its importance and use have only grown during the ongoing pandemic, with internet traffic estimated to have risen by 50 to 70%.