
Tech Law Forum @ NALSAR

A student-run group at NALSAR University of Law


Search Results for: data protection

Examining Artificial Intelligence and Privacy in the light of COVID-19

Posted on May 13, 2020 (updated November 1, 2020) by Tech Law Forum @ NALSAR

[This post has been authored by Suvam Kumar, a 3rd year student at National Law University, Jodhpur.]

The COVID-19 pandemic has exposed the frailty of mankind’s societies and systems. In spite of the tremendous progress made by humans in several fields of life, we have been rendered helpless by the rapid and uncontrolled spread of the coronavirus. In these crucial times, the role of Artificial Intelligence (“AI”) becomes very important, and countries like China, the USA, Canada, Australia, and India have leaned on AI to fight the pandemic. The use of AI has also been endorsed by the World Economic Forum (“WEF”), which has emphasized the role of AI as a panacea for fighting this pandemic. However, the widespread use of AI is not without its own challenges and risks. There are serious concerns regarding the application of AI in the health sector, especially during a pandemic like COVID-19; however, these can be mitigated by a legal regime that regulates AI effectively and conscientiously.

Read more

Blockchain in the paradigm of GDPR (Part II)

Posted on April 9, 2020 (updated April 29, 2020) by Tech Law Forum @ NALSAR

[This is the second part of a two-part article by Muskan Agarwal (National Law Institute University, Bhopal) and Arpita Pandey (National Law Institute University, Bhopal). Part 1 can be found here.]

Previously, the authors looked at the contradictions between blockchain and GDPR with regard to the principal obligations enlisted in GDPR. In this post, the authors will carry out a feasibility assessment of the solutions proposed.

Read more

Blockchain in the Paradigm of GDPR (Part I)

Posted on April 9, 2020 (updated April 29, 2020) by Tech Law Forum @ NALSAR

[This is the first part of a two-part article by Muskan Agarwal (National Law Institute University, Bhopal) and Arpita Pandey (National Law Institute University, Bhopal).]

This is the first part of a two-part post that undertakes an analysis of the points of friction present between the fundamentals of blockchain technology and GDPR and of the various solutions that have been proposed to address the inconsistencies.

Read more

Welcoming The Era of Technology Friendly Laws in India

Posted on January 2, 2020 (updated November 1, 2020) by Tech Law Forum @ NALSAR

This brief introduction to regulation of autonomous vehicles has been authored by Khushi Sharma and Aarushi Kapoor, second year students of Hidayatullah National Law University (HNLU), Raipur. [Ed. Note: This article was written before the 2019 Personal Data Protection Bill had been made public. Click here for the new Bill.]

India was the 7th largest manufacturer of commercial vehicles in 2017-18, and its automobile sector is the 4th largest in the world; both are key factors driving India’s economic growth. Technological advancement is both a cause and an effect of this growth. Innovative minds rule India and the world today to such an extent that the very idea of a car running by itself, with one just sitting back and relaxing, now seems a reality. Driverless vehicles made their Indian debut at Defexpo 2016 in New Delhi, where Novus Drive, a driverless shuttle, was introduced. However, a recurring question is: ‘Are we really ready yet?’

Read more

E-Pharmacy and Tech Law: An Interface (Part II)

Posted on October 2, 2019 by Tech Law Forum NALSAR

This is the second part of a 2-part post authored by Anubhuti Garg, 4th year, and Gourav Kathuria, 2nd year, of NALSAR University of Law. Part I can be found here.

The previous post analysed the laws applicable to e-pharmacies in India. The present post looks at the draft e-pharmacy rules and their implications, and suggests ways to ensure the smooth application of the law in India.

Draft E-Pharmacy Rules

On August 28, 2018, the government came out with the Sale of Drugs by E-Pharmacy (Draft Rules) for regulating the sale of drugs through e-pharmacies. These Rules aim to put in place an extensive regulatory regime for e-pharmacies and are important in light of the concerns that e-pharmacies pose. Given below are the salient features of the Rules:

  1. According to the Rules the definition of e-pharmacy includes within its ambit sales made through websites as well as through mobile phone apps termed ‘e-pharmacy portals’.
  2. Mandatory registration is prescribed for all e-pharmacies and sales have to be routed through specified portals. A registration application must be reviewed within 30 days.
  3. Mandatory uploading of a prescription by the customer is prescribed; the prescription must specify the drugs and the quantity thereof. This does not apply to over-the-counter drugs.
  4. All generated data must be kept confidential and localized.
  5. An e-pharmacy cannot sell drugs covered by the Narcotic Drugs and Psychotropic Substances Act, 1985, and the restriction extends to those listed under Schedule X of the Drugs and Cosmetics Rules.
  6. An e-pharmacy has to comply with the provisions of the Information Technology Act, 2000 and the associated Rules.

Implications of the Policy

Firstly, the Rules will fill the regulatory gap that currently exists and put in place a robust framework to deal with e-pharmacies. Existing laws are inadequate when it comes to addressing the requirements of e-pharmacies; the Rules will resolve this and help prevent the misuse of medicines and data.

Secondly, sales at conventional brick-and-mortar outlets will be adversely affected by the competitive pricing offered by e-pharmacies. Conventional stores may fail to compete with online pharmacies that offer substantial discounts, and will suffer a loss of business as a result.

Thirdly, the question of jurisdictional conflict remains unaddressed: it is yet to be seen which law holds the field in case of legal inconsistencies. Several such inconsistencies can be spotted in the Draft Rules and need to be resolved if this issue is to be settled.

Impact on the Right to Privacy

Privacy is an important concern for consumers. There need to be adequate safeguards for how the data given by a customer is protected, and this warrants heavy regulatory compliance requirements in addition to strict penalties for violations. The recent Aadhaar judgment also brought to light numerous privacy concerns which need to be kept in mind when implementing a regulatory framework for e-pharmacies.

The Draft Rules prescribe that e-pharmacies keep data confidential and localized; however, state and central governments can secure access to the data for “public health purposes”. No criterion is prescribed for what constitutes such a purpose, and the Rules also fail to mention which authority can compel e-pharmacies to share health information. Such ambiguities pose a threat of misuse of data by the government.

Further, the Draft Rules come in direct conflict with the draft of the Personal Data Protection Bill, 2018, which allows for the transfer of data outside India where the patient has expressed his/her consent or where the transfer is necessary for prompt action. The conflict between the two needs to be resolved before the Draft Rules can be implemented.

Conclusion

In conclusion, it can be said that the e-pharmacy regime is changing slowly but steadily. The government has taken cognizance of the fact that there are many health concerns surrounding the sale of medicines online and has accordingly formulated a policy which addresses these concerns. India is taking a step forward in drafting a full-fledged policy exclusively for e-pharmacies; this is sure to make the lives of many citizens easier.

There is no doubt that the proposed Rules are progressive in nature. By making regulations that conform to global best practices, the government is providing impetus to the continued growth of the e-pharmacy industry. However, there are issues that need to be resolved sooner rather than later, such as the scope for misuse of data by the government and the conflict between the Rules’ provisions and those of the IT Act, 2000.

India has a long way to go in governing e-pharmacies, and there are many loopholes that need to be plugged. Currently, there is no law specifically governing the actions of drug companies, and as a result they operate with little regard for the consequences of their actions. The Rules need to be brought into force as quickly as possible; despite the government’s promise to implement them within 100 days of the elections, it is yet to act in the matter.

It is hoped that the government addresses concerns about consumer privacy more stringently and puts provisions in place that strictly prohibit the misuse of customer data. The government should also address loopholes in the policy, examine how its provisions conflict with existing rules, and amend them to resolve such contentious issues.

Read more

Explainer on Account Aggregators

Posted on August 15, 2019 (updated December 4, 2020) by Tech Law Forum @ NALSAR

This post has been authored by Vishal Rakhecha, currently in his 4th year at NALSAR University of Law, Hyderabad, and serves as an introduction for TLF’s upcoming blog series on Account Aggregators. 

A few days back, Nandan Nilekani unveiled an ‘industry-body’ for Account Aggregators (AAs), by the name of ‘Sahamati.’ He claimed that AAs would revolutionise the field of fintech, and would give users more control over their financial data, while also making the transfer of financial information (FI) a seamless process. But what exactly are AAs, and how do they make transfer of FI seamless?

Read more

Article 13 of the EU Copyright Directive: A license to gag freedom of expression globally?

Posted on August 9, 2019 by Tech Law Forum @ NALSAR

The following post has been authored by Bhavik Shukla, a fifth year student at National Law Institute University (NLIU) Bhopal. He is deeply interested in Intellectual Property Rights (IPR) law and Technology law. In this post, he examines the potential chilling effect of the EU Copyright Directive.

 

Freedom of speech and expression is the bellwether of the European Union (“EU”) Member States, so much so that its censorship would be the death of one of the most coveted human rights. Europe possesses the strongest and most institutionally developed framework for freedom of expression through the European Convention on Human Rights (“ECHR”). In 1976, the European Court of Human Rights observed in Handyside v. United Kingdom that a “democratic society” could not exist without pluralism, tolerance and broadmindedness. However, the recently adopted EU Directive on Copyright in the Digital Single Market (“Copyright Directive”) seeks to alter this fundamental postulate of European society by bringing Article 13 to the fore. Through this post, I intend to deal with the contentious aspects of Article 13 of the Copyright Directive, limited to its chilling impact on the freedom of expression. Subsequently, I shall elaborate on how the Copyright Directive has the ability to affect censorship globally.

Collateral censorship: Panacea for internet-related issues in the EU

The adoption of Article 13 of the Copyright Directive hints at the EU’s implementation of a collateral censorship-based model. Collateral censorship occurs when a state holds one private party, “A”, liable for the speech of another private party, “B”. The problem with such a model is that it vests the power to censor content primarily in a private party, namely “A” in this case. The implementation of this model is known to have an adverse effect on the freedom of speech, and the adoption of the Copyright Directive has contributed to producing such an effect.

The Copyright Directive envisages a new concept of online content sharing service providers (“service providers”), which refers to a “provider… whose main purpose is to store and give access to the public to significant amount of protected subject-matter uploaded by its users…” Article 13(1) of the Copyright Directive states that such service providers shall perform an act of “communication to the public” as per the provisions of the Infosoc Directive. Further, Article 13(2a) provides that service providers shall ensure that “unauthorized protected works” shall not be made available. However, this Article also places service providers under an obligation to provide access to “non-infringing works” or “other protected subject matter”, including those covered by exceptions or limitations to copyright. The Copyright Directive’s scheme of collateral censorship is evident from the functions entrusted to the service providers, wherein they are expected to purge their networks and websites of unauthorized content transmitted or uploaded by third parties. A failure to do so would expose service providers to liability for infringement of the content owner’s right to communication to the public, as provided in the Infosoc Directive.

The implementation of a collateral censorship model will serve as a conduit for a crackdown on the freedom of expression. The reason emanates from the existence of certain content which necessarily falls within the grey area between legality and illegality; stellar examples are memes and parodies. It is primarily in respect of such content that problems related to censorship may arise. To bolster this argument, consider Facebook, the social media website which boasts 1.49 billion daily active users. As per an official report in 2013, users were uploading 350 million photos a day, and the number has risen exponentially since. When intermediaries like Facebook are faced with implementing the Copyright Directive, the sheer volume of data being uploaded or transmitted will necessarily require them to employ automated mechanisms for flagging or detecting infringing material. The accuracy of such software in detecting infringing content has been a major point of contention. Even though content like memes and parodies may be flagged as infringing by such software, automated blocking of content is prohibited under Article 13(3) of the Copyright Directive. This brings up the question of human review of such purportedly infringing content. In this regard, first, it is impossible for any human agency to review large tracts of data even after filtration by an automated system. Second, even where such content is reviewed, a human agent may not be able to correctly decide on its legality.
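To make the mechanics concrete, below is a deliberately naive sketch, not drawn from the Directive or from any real provider’s system, of what “flag rather than block” could look like: uploads are fingerprinted against a hypothetical rights-holder database and matches are queued for human review. The database contents, function names and the use of exact hashes are illustrative assumptions; real platforms rely on perceptual fingerprinting at vastly larger scale, which is precisely where the accuracy concerns discussed above arise (an exact hash, for instance, would miss a modified meme entirely).

```python
import hashlib

# Naive illustration only: fingerprint each upload and look it up in a
# hypothetical rights-holder reference database. Matches are flagged for
# human review rather than blocked automatically, mirroring the Article 13(3)
# point discussed above.

REFERENCE_DB = {
    hashlib.sha256(b"<bytes of a protected work>").hexdigest(): "Example Rights Holder",
}

human_review_queue = []  # flagged uploads awaiting a human decision

def handle_upload(upload_id: str, content: bytes) -> str:
    """Publish an upload, or hold it for human review if it matches the database."""
    fingerprint = hashlib.sha256(content).hexdigest()
    owner = REFERENCE_DB.get(fingerprint)
    if owner is None:
        return "published"
    human_review_queue.append({"upload": upload_id, "claimed_by": owner})
    return "held for human review"

print(handle_upload("u1", b"an original meme"))             # -> published
print(handle_upload("u2", b"<bytes of a protected work>"))  # -> held for human review
```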

This scenario will compel service providers to take down memes and parodies, the scapegoats of this framework, whenever such content may even remotely expose them to liability. Such actions by service providers will certainly censor freedom of expression. Another problem arising from this framework is its adverse effect on net neutrality: entrusting service providers with blocking access to content may lead to indiscriminate blocking of certain types of content.

Though the Copyright Directive provides certain safeguards in this regard, they are latent and ineffective. For example, consider the “complaints and redress mechanism” provided by Article 13(2b) of the Copyright Directive. This mechanism offers recourse only after the actual takedown or blocking of access to content. This is problematic because users are either unaware that such mechanisms are in place, do not have the requisite time and resources to prove the legality of their content, or are simply worn down by repeated takedowns. An easy way to understand these concerns is through YouTube’s unjustified takedowns of content, which put content owners under the same burdens. Regardless of the reason for inaction by content owners, censorship is the effect.

The EU Copyright Directive’s tryst with the world

John Perry Barlow stated in his Declaration of the Independence of Cyberspace that “Cyberspace does not lie within your borders”. This statement is true to a large extent: cyberspace and the internet do not lie within any country’s borders; rather, their existence is cross-border. Does this mean that the law in the EU affects the content we view in India? It certainly does!

The General Data Protection Regulation (“GDPR”) applies to entities beyond the EU. The global effect of the Copyright Directive is similar, as service providers do not distinguish European services from those of the rest of the world. It only makes sense for websites in this situation to adopt a mechanism which applies uniformly to every user regardless of his or her location. This is the same reasoning that led service providers to revise their user and privacy policies in every country upon the introduction of the GDPR. Thus, the adoption of these stringent norms by service providers in all countries alike, owing to the omnipresence of internet-based applications, may lead to global censorship motivated by European norms.

The UN Special Rapporteur had warned that Article 13 would have a chilling effect on the freedom of expression globally. Subsequent to the Directive’s adoption, the Polish government challenged it before the CJEU on the ground that it would lead to unwarranted censorship. Similar action may follow from other dissenters of the Copyright Directive, namely Italy, Finland, Luxembourg and the Netherlands. In light of this united front, hope hinges on these countries to prevent the implementation of censoring laws across the world.

Read more

Automated Facial Recognition System and The Right To Privacy: A Potential Mismatch

Posted on August 3, 2019 (updated August 4, 2019) by Tech Law Forum @ NALSAR

This post has been authored by Ritwik Sharma, a graduate of Amity Law School, Delhi and a practicing Advocate. In a quick read, he brings out the threat to privacy posed by the proposed Automated Facial Recognition System.

 

On 28 June 2019, the National Crime Records Bureau (NCRB) released a Request for Proposal for an Automated Facial Recognition System (AFRS), which is to be used by police officers to detect potential criminals and suspects across the country.

The AFRS has potential uses in areas like modernising the police force, information gathering, identification of criminals, suspects and missing persons, and personal verification.

In 2018, the Ministry of Civil Aviation launched a facial recognition system to be used for airport entry called “DigiYatra”. The AFRS system is built on similar lines but has a much wider coverage and different purpose. States in India have taken steps to introduce Facial Recognition Systems to detect potential criminals, with Telangana launching its system in August 2018.

What is Automated Facial Recognition System and how does it work?

The Automated Facial Recognition System (AFRS) will be a mobile and web application which will be hosted and managed by the National Crime Records Bureau (NCRB) data centre but will be used by all police stations across the country.

The AFRS works by comparing the image of an unidentified person captured through CCTV footage with the images stored at the NCRB data centre. This allows the data centre to match the images and identify potential criminals and suspects.

The system is intended to match facial images despite changes in facial expression, angle, lighting, direction, beard, hairstyle, glasses, scars, tattoos and marks.
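For illustration only, the matching step described above can be sketched as comparing numerical “embeddings” of face images by cosine similarity against a stored gallery. Nothing below reflects the actual AFRS design: the embed() helper is a hypothetical placeholder (real systems use trained deep-learning models), and the gallery, threshold and record names are invented for the example.

```python
import numpy as np

# Purely illustrative sketch of gallery matching: a probe face is reduced to a
# numeric embedding and compared against stored embeddings by cosine similarity.

def embed(image: np.ndarray) -> np.ndarray:
    """Hypothetical face-embedding function (placeholder: flatten + normalise)."""
    v = image.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def best_match(probe: np.ndarray, gallery: dict, threshold: float = 0.9):
    """Return the gallery identity most similar to the probe, if above threshold."""
    p = embed(probe)
    scores = {name: float(embed(img) @ p) for name, img in gallery.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return (name, score) if score >= threshold else (None, score)

# Toy usage with random arrays standing in for CCTV frames and database photos.
rng = np.random.default_rng(0)
gallery = {"record_001": rng.random((32, 32)), "record_002": rng.random((32, 32))}
probe = gallery["record_001"] + 0.01 * rng.random((32, 32))  # noisy copy of record_001
print(best_match(probe, gallery))  # expected to match "record_001" with high similarity
```

The accuracy concerns raised later in the post arise exactly in this comparison step: the choice of embedding and threshold determines how often the system falsely matches innocent people.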

The NCRB has proposed to integrate the AFRS with multiple existing databases. These include the Crime and Criminal Tracking Network & Systems (CCTNS), introduced in 2009 after the Mumbai attacks as a nationwide integrated database of criminal incidents connecting FIR registrations, investigations and chargesheets across police stations and higher offices; the Interoperable Criminal Justice System (ICJS), a computer network that enables judicial practitioners and agencies to electronically access and share information; and the Khoya Paya portal, which is used to trace missing children.

State Surveillance vs. Right to Privacy

In August 2017, the Supreme Court, in the historic judgment of K.S. Puttaswamy v. Union of India, declared the right to privacy a fundamental right under Article 21 of the Indian Constitution. The Supreme Court asserted that the government must carefully balance individual privacy against the legitimate concerns of the state, even where national security is at stake. The Court also held that any invasion of privacy must satisfy the triple test, i.e. legality (backed by law), need (legitimate state concern) and proportionality (least invasive manner), to ensure that a fair and reasonable procedure is followed without any selective targeting and profiling.

Privacy infringement without legal sanction, effected through executive action alone, would violate the fundamental right to privacy and disregard the Supreme Court’s directive. Cyber experts are of the view that such a system could be used as a tool of government abuse and put citizens’ privacy at risk; since the country lacks a data protection law, citizens would be especially vulnerable to privacy abuse.

Investigating agencies in the United States such as the FBI operate what is probably the largest facial recognition system in the world, and cyber experts and international institutions have criticised the Chinese government for using surveillance and facial recognition to keep an eye on the Uighur community in China. Meanwhile, facial recognition systems trialled in cities like London have reportedly achieved an accuracy of barely 2%, making them unreliable, and there are calls to discontinue their use in order to safeguard citizens’ privacy.

Finally, such a tracking system impinges upon human dignity by treating every person as a potential criminal or suspect. There are no clear guidelines on where such cameras are to be placed. The cameras will put every individual under surveillance, and even innocent people will be tracked. Such surveillance creates fear among citizens, with long-term implications.

Conclusion

A rising crime rate poses a daunting challenge for investigating agencies, and robust measures must be undertaken to counter it. However, such measures should be backed by law and should not impinge upon the dignity and the right to privacy of citizens.

The Data Protection Law drafted by the Justice Srikrishna Committee should be enacted by the Parliament to give legal sanction to such surveillance. Furthermore, the AFRS should be used cautiously to prevent any violation of the fundamental right to privacy.

The AFRS has the potential to bring about a paradigm shift in the criminal justice system if its use is well-intentioned and remains within a democratic framework that ensures the right to privacy and limits state surveillance.

Read more

The Issue of Artificial Intelligence and its Regulation

Posted on July 8, 2019 by Tech Law Forum @ NALSAR

[Ed Note: The following post is part of the TLF Editorial Board Test 2019-20. It has been authored by Siddharth Kothari, a second year student of NALSAR University of Law.]

In an era of unprecedented technological advancement across different fields, Artificial Intelligence (AI) is poised to shake up our lives. AI refers to “a class of computer programming designed to solve problems requiring inferential reasoning, decision-making based on incomplete or uncertain information, classification optimisation and perception.”[1] Initially imagined as a technology that could mimic human smartness, AI has moved far beyond its original conception.

An AI spring is upon us, and the key driver is the flood of data. IDC predicts that the global data sum will grow from 33 zettabytes in 2018 to 175 ZB by 2025, for a compounded annual growth rate of 61 per cent.[2] As Barry Smyth, Professor of Computer Science at University College Dublin, says: “Data is to AI what food is to humans.”[3] Coupled with the exponential fall in the cost of data storage, AI is being constantly fed and is therefore growing enormously.

To get a better understanding: AI programs comprise a wide band of computer algorithms that have self-sufficiency, aptitude and a dynamic ability to solve the issues at hand. Machine learning, a term coined by Arthur Samuel in 1959, refers to “the ability to learn without being explicitly programmed”. Instead of being programmed for every situation, machine learning algorithms learn from data. For example, a chess program that dynamically finds patterns, uses those patterns to make moves, and comes up with its own scoring formula is a form of machine learning AI. Deep learning, in turn, is a technique for implementing machine learning inspired by the neural functions of the brain. Artificial Neural Networks (ANNs) are algorithms modelled on the biological structure of the brain. In ANNs, there are ‘neurons’ arranged in discrete layers with connections to other ‘neurons’, and each layer picks out a specific feature to learn. It is this layering that gives deep learning its name: depth is created by using multiple layers rather than a single one.[4] Looking at the pace of such developments in AI, who knows how far we are from a time when the fictional extraordinary intelligence operator Jarvis from Marvel’s Iron Man, or Winston, the top-notch AI assistant from Dan Brown’s ‘Origin’, actually comes into play.
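As a purely illustrative sketch of the “learning from data” idea (not taken from the post or any cited source), the snippet below trains a tiny artificial neural network, one hidden layer of ‘neurons’, on the XOR problem using plain NumPy. The rule is never programmed in; the weights are adjusted from examples alone. The layer sizes, learning rate and number of training steps are arbitrary choices made for the demonstration.

```python
import numpy as np

# Tiny two-layer neural network that learns XOR from example data alone,
# illustrating "the ability to learn without being explicitly programmed".
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # target outputs (XOR)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 "neurons" and a single output neuron.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

lr = 1.0
for _ in range(10_000):
    # Forward pass: each layer extracts features from the previous one.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge the weights to reduce the prediction error.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0, keepdims=True)

# Predictions should approach [[0], [1], [1], [0]] -- a pattern learned from data.
print(np.round(out, 2))
```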

The express evolution of AI technology has key implications for economies and societies. A study by EY and NASSCOM found that by 2022, around 46% of the workforce will be engaged in entirely new jobs that do not exist today, or will be deployed in jobs with radically changed skillsets.[5] In 2019, Fortune 1000 companies were expected to increase their Artificial Intelligence investments by a tremendous 91.6 per cent, according to a survey by NewVantage Partners, a data and business consultancy.

Therefore, there is a crucial need, and a heated debate, around the regulation of Artificial Intelligence. Current legal enforcement systems are built around human conduct and may not function when applied to AI, because the traditional link of intent and causation applied in the legal sphere cannot be established for machine-learning algorithms. It is often impossible to articulate how an AI system internalised a colossal mass of data to reach its decisions. AI relies on machine-learning algorithms paired with ‘deep neural networks’ and can be as difficult to understand as a human brain, or more so. The difference, however, is that humans leave evidence and trails, whereas an AI program that is a ‘black box’[6] will reach conclusions without being able to communicate its reasons for doing so, calling the ethical basis of AI into question. A survey of 1,400 US executives conducted by Deloitte last year found ethical concerns to be among the top risks of Artificial Intelligence.[7]

Thus, there is a well-established need to look at digital governance and an ethical and regulatory framework for AI. In the Indian setting particularly, where AI has penetrated healthcare, agriculture, education, infrastructure and transportation, the government put out a discussion paper in June 2018 setting out a National Strategy for Artificial Intelligence,[8] largely to discuss a regulatory framework for addressing the privacy issues surrounding it.

AI governance rests on a few core guiding values: fairness with respect to fundamental human rights, continued vigilance as to potential effects and consequences, transparency and intelligibility so that AI can be applied efficiently, and freedom from bias that would result in discriminatory use of data.

Accordingly, legal issues, ethics and model frameworks for handling AI are being discussed. One of the most important issues is whether responsibility for damage caused by AI can be vicariously attributed to someone, or whether AI should be treated as a separate legal identity.

For example, courts in the UK have held that a machine-learning system cannot currently be regarded as an agent because, in their view, only a person with a mind can be an agent. By contrast, governing bodies in the US and Canada are setting out conditions under which software can enter into binding contracts on behalf of a person. The European Parliament has recommended that, in the longer run, autonomous Artificial Intelligence coupled with robotics technology should be attributed the status of ‘electronic persons’.

In January 2019, Singapore came up with a model AI governance framework for discussion and adoption, intended to provide guidance to the private sector when deploying machine-learning solutions. The framework rests on two principles: bodies using AI in core decision-making should ensure the process is transparent, explainable and fair; and AI solutions should be human-centric.[9]

Numerous groups have released guidelines for the ethical design and implementation of AI. For example, in January 2017 the Massachusetts Institute of Technology Media Lab and the Berkman Klein Center for Internet & Society at Harvard University embarked upon a $27 million initiative to bridge the gap between the humanities, the social sciences and computing by addressing the challenges of AI from a multidisciplinary perspective.

Subsequently, rapid developments in this digital ecosystem have started another debate on the repercussions of these regulations for data protection and privacy. The European Union, for example, has released a comprehensive legal framework for data protection, the General Data Protection Regulation (GDPR).[10] The Regulation sets out the rights and obligations of all stakeholders and a comprehensive plan of action in case of a breach.

As modern AI grows, governments across the globe are developing, or have developed, data privacy and security regulations with AI in mind. In the Indian context, therefore, the hasty developments in this area urgently require stakeholders to recognise the challenges and risks of modern AI and to acknowledge its intersection with law, policy and ethics. Without explicit guidelines and legislation, AI algorithms continue to operate scot-free, without any premise of ‘culpability’. Moreover, machine-learning systems are in many cases black boxes to humans, which threatens the law’s fundamental reliance on intent and causation. The need of the hour, therefore, is better oversight and legislative regulation of Artificial Intelligence algorithms.

[1] Toshinori Munakata, Fundamentals of the New Artificial Intelligence 1–2 (2d ed. 2008).

[2] https://www.networkworld.com/article/3325397/idc-expect-175-zettabytes-of-data-worldwide-by-2025.html

[3] Barry Smyth, “Making AI meaningful again”.

[4] Medium.com, “The Difference Between Artificial Intelligence, Machine Learning, and Deep Learning”.

[5] Kapil Chaudhary, “Why we need an AI code of Ethics”, https://www.vantageasia.com/need-ai-code-ethics/

[6] Yavar Bathaee, “The Artificial Intelligence Black Box and the Failure of Intent and Causation”, 31(2) Harvard Journal of Law & Technology (2018), https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf

[7] Supra note 5.

[8] NITI Aayog, “National Strategy for Artificial Intelligence” (Discussion Paper, June 2018), https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf?utm_source=hrintelligencer

[9] Ibid.

[10] https://digitalguardian.com/blog/what-gdpr-general-data-protection-regulation-understanding-and-complying-gdpr-data-protection

Read more

Conundrum of Right to Be Forgotten: An Analysis of The Slippery Slope: To Forgive, Forget or Re-Write History

Posted on May 5, 2019 by Tech Law Forum @ NALSAR

[Ed. Note: In a slightly longer read, Pranay Bhattacharya, a second year student of Maharashtra National Law University (MNLU) Aurangabad, talks about the origins and development of the “Right to be Forgotten”, using this as a background to critically analyse the right as present in India’s Draft Personal Data Protection Bill, 2018.]

“Blessed are the forgetful, for they get the better even of their blunders.”

Friedrich Nietzsche

Read more


