Tech Law Forum @ NALSAR

A student-run group at NALSAR University of Law


Category: Internet Freedoms

A Surveillance Story

Posted on January 16, 2021 by Tech Law Forum NALSAR

[This post has been authored by Ada Shaharbanu and Reuel Davis Wilson.]

Our familiarity with surveillance generally brings to mind the methods adopted in the 20th century. Common among these are the tapping of telephone lines, stakeouts and the interception of postal services. However, it has become difficult to keep track of the multiplicity of ways in which surveillance is presently conducted. Advanced technology has barely allowed us to familiarize ourselves with one method before the next comes along.


Metadata by TLF: Issue 18

Posted on November 18, 2020 by Tech Law Forum NALSAR

Welcome to our fortnightly newsletter, where our reporters Harsh Jain and Harshita Lilani put together handpicked stories from the world of tech law! You can find other issues here.

Streaming platforms and online news portals brought under the purview of the I&B Ministry

The Cabinet Secretariat issued a notification on November 11, 2020 granting the Ministry of Information and Broadcasting authority over streaming platforms and online news portals. Simply put, this means that platforms such as Netflix, Hotstar, Amazon Prime, etc. will now be under the jurisdiction of the I&B Ministry. While the I&B Ministry cannot regulate these platforms without specific laws being passed towards that end, the notification signals the intent of the government to bring out a regulatory code in the near future. Such a move was expected after Amit Khare, the Secretary of the I&B Ministry, expressed the Ministry’s intent to bring content streamed over OTT platforms under its purview. The online content sector, unlike radio, cinema and television, has till now remained free of censorship. In August 2020, more than a dozen OTT platforms operating in India, such as Netflix, Zee5, Voot, Jio, SonyLiv, etc., had signed a self-regulation code aimed at empowering consumers with tools to make informed viewing choices for themselves and their families, but the I&B Ministry had refused to support it.


Metadata by TLF: Issue 11

Posted on May 14, 2020 by Tech Law Forum @ NALSAR

Welcome to our fortnightly newsletter, where our reporters Kruttika Lokesh and Dhananjay Dhonchak put together handpicked stories from the world of tech law! You can find other issues here.

Private firm blocked from buying “.org” domain


Right to access Internet: An end to oppressive Internet shutdowns?

Posted on April 7, 2020 by Tech Law Forum @ NALSAR

[This post has been authored by Mohd Rameez Raza (Faculty of Law, Integral University, Lucknow) and Raj Shekhar (NUSRL, Ranchi).]

The Internet is one of the most powerful instruments of the 21st century for increasing transparency in day-to-day functioning, improving access to information and, most importantly, facilitating active citizen participation in building strong democratic societies. Relying on this belief, the Kerala High Court, in a monumental decision, has held the ‘Right to Internet Access’ to be a fundamental right, making access to the Internet part of the ‘Right to Education’ as well as the ‘Right to Privacy’ under Article 21 of the Constitution of India.


Delhi HC’s order in Swami Ramdev v. Facebook: A hasty attempt to win the ‘Hare and Tortoise’ Race

Posted on January 6, 2020 by Tech Law Forum @ NALSAR

This post has been authored by Aryan Babele, a final year student at Rajiv Gandhi National University of Law (RGNUL), Punjab and a Research Assistant at Medianama.

On 23rd October 2019, the Delhi HC delivered a judgment authorizing Indian courts to issue “global take down” orders to Internet intermediary platforms like Facebook, Google and Twitter for illegal content uploaded, published and shared by users. The Delhi HC delivered the judgment on a plea filed by Baba Ramdev and Patanjali Ayurved Ltd. requesting the global takedown of certain videos alleged to be defamatory in nature.


Metadata by TLF: Issue 7

Posted on November 14, 2019 by Tech Law Forum @ NALSAR

Welcome to our fortnightly newsletter, where our Editors put together handpicked stories from the world of tech law! You can find other issues here.

Israel spyware ‘Pegasus’ used to snoop on Indian activists, journalists, lawyers

In a startling revelation, Facebook-owned messaging app WhatsApp disclosed that a spyware known as ‘Pegasus’ has been used to target and surveil Indian activists and journalists. The revelation came to light after WhatsApp filed a lawsuit against the Israeli NSO Group, accusing it of using servers located in the US and elsewhere to send malware to approximately 1,400 mobile phones and devices. For its part, the NSO Group has consistently claimed that it sells its software only to government agencies, and that it is not used to target particular individuals. The Indian government sought a detailed reply from WhatsApp but has expressed dissatisfaction with the response received, with the Ministry of Electronics and Information Technology stating that the reply has “certain gaps” which need to be further investigated.

Further reading:

  1. Sukanya Shantha, Indian Activists, Lawyers Were ‘Targeted’ Using Israeli Spyware Pegasus, The Wire (31 October 2019).
  2. Seema Chishti, WhatsApp confirms: Israeli spyware was used to snoop on Indian journalists, activists, The Indian Express (1 November 2019).
  3. Aditi Agrawal, Home Ministry gives no information to RTI asking if it bought Pegasus spyware, Medianama (1 November 2019).
  4. Shruti Dhapola, Explained: What is Israeli spyware Pegasus, which carried out surveillance via WhatsApp?, The Indian Express (2 November 2019).
  5. Akshita Saxena, Pegasus Surveillance: All You Want To Know About The Whatsapp Suit In US Against Israeli Spy Firm [Read Complaint], LiveLaw (12 November 2019).

RBI raises concerns over WhatsApp Pay

Adding to WhatsApp’s woes in India, soon after the Pegasus hacking revelations, the RBI asked the National Payments Corporation of India (NPCI) not to permit WhatsApp to go ahead with the full rollout of its payment service WhatsApp Pay. The central bank has expressed concerns over WhatsApp’s non-compliance with data localisation regulations, which permit data processing outside India only on the condition that the data is returned to servers located in the country without copies being left on foreign servers.

Further Reading:

  1. Karan Choudhury & Neha Alawadhi, WhatsApp Pay clearance: RBI raises data localisation concerns with NPCI, Business Standard (7 November 2019).
  2. Aditi Agarwal, ‘No payment services on WhatsApp without data localisation’, RBI to SC, Medianama (9 October 2019).
  3. Sujata Sangwan, WhatsApp can’t start payments business in India, YOURSTORY (9 November, 2019).
  4. Yatti Soni, WhatsApp Payments India Launch May Get Delayed Over Data Localisation Concerns, Inc42 (9 October 2019).
  5. Priyanka Pani, Bleak future for messaging app WhatsApp’s payment future in India, IBS Intelligence (9 November 2019).

Kenya passes new Data Protection Law

The Kenyan President, Uhuru Kenyatta, recently approved a new data protection law in conformity with the standards set by the European Union. The new bill was legislated after it was found that existing data protection laws were not at par with the growing investments from foreign firms such as Safaricom and Amazon. There was growing concern that tech giants such as Facebook and Google would be able to collect and utilise data across the African continent without any restrictions and consequently violate the privacy of citizens. The new law places specific restrictions on the manner in which personally identifiable data can be handled by the government, companies and individuals, and violations can attract penalties of up to three million shillings or prison sentences.

Further reading:

  1. Duncan Miriri, Kenya Passes Data Protection Law Crucial for Tech Investments, Reuters (8 November 2019).
  2. Yomi Kazeem, Kenya’s Stepping Up Its Citizens’ Digital Security with a New EU-Inspired Data Protection Law, Quartz Africa (12 November 2019).
  3. Kenn Abuya, The Data Protection Bill 2019 is Now Law. Here is What that Means for Kenyans, Techweez (8 November 2019).
  4. Kenya Adds New Data Regulations to Encourage Foreign Tech Entrants, Pymnts (10 November 2019).

Google gains access to healthcare data of millions through ‘Project Nightingale’

Google has been found to have gained access to the healthcare data of millions through its partnership with healthcare firm Ascension. The venture, named ‘Project Nightingale’, allows Google to access health records, names and addresses without informing patients, in addition to other sensitive data such as lab results, diagnoses and records of hospitalisation. Neither doctors nor patients need to be told that Google can access the information, though the company has defended itself by stating that the deal amounts to “standard practice”. The firm has also stated that it does not link patient data with its own data repositories; however, this has not stopped individuals and rights groups from raising privacy concerns.

Further reading:

  1. Trisha Jalan, Google’s Project Nightingale collects millions of Americans’ health records, Medianama (12 November 2019).
  2. Ed Pilkington, Google’s secret cache of medical data includes names and full details of millions – whistleblower, The Guardian (12 November 2019).
  3. James Vincent, The problem with Google’s health care ambitions is that no one knows where they end, The Verge (12 November 2019).
  4. Rob Copeland & Sarah E. Needleman, Google’s ‘Project Nightingale’ Triggers Federal Inquiry, Wall Street Journal (12 November 2019).

Law professor files first ever lawsuit against facial recognition in China

Law professor Guo Bing sued the Hangzhou Safari Park after it suddenly made facial recognition registration a mandatory requirement for visitor entrance. The park had previously used fingerprint recognition to allow entry; however, it switched to facial recognition as part of the Chinese government’s aggressive rollout of the technology, meant to boost security and enhance consumer convenience. While it has been speculated that the lawsuit might be dismissed if pursued, it has stirred conversations among citizens over privacy and surveillance, which it is hoped will result in reform of the nation’s existing internet laws.

Further reading:

  1. Xue Yujie, Chinese Professor Files Landmark Suit Against Facial Recognition, Sixth Tone (4 November 2019).
  2. Michael Standaert, China wildlife park sued for forcing visitors to submit to facial recognition scan, The Guardian (4 November 2019).
  3. Kerry Allen, China facial recognition: Law professor sues wildlife park, BBC (8 November 2019).
  4. Rita Liao, China Roundup: facial recognition lawsuit and cashless payments for foreigners, TechCrunch (10 November 2019).

Twitter to ban all political advertising

Twitter has decided to ban all political advertising, in a move that increases pressure on Facebook over its controversial stance of allowing politicians to advertise false statements. The policy was announced via CEO Jack Dorsey’s account on Wednesday, and will apply to all ads relating to elections and associated political issues. However, the move may prove to have only symbolic impact, as political ads on Twitter are just a fraction of those on Facebook in terms of reach and impact.

Further reading:

  1. Julie Wong, Twitter to ban all political advertising, raising pressure on Facebook, The Guardian (30 October 2019).
  2. Makena Kelly, Twitter will ban all political advertising starting in November, The Verge (30 October 2019).
  3. Amol Rajan, Twitter to ban all political advertising, BBC (31 October 2019).
  4. Alex Kantrowitz, Twitter Is Banning Political Ads. But It Will Allow Those That Don’t Mention Candidates Or Bills., BuzzFeed News (11 November 2019).


Emergence of OTT Market in India: Regulatory and Censorship Issues

Posted on September 27, 2019 by Tech Law Forum NALSAR

This post has been authored by Gaurav Kumar, a 3rd year student at Dr. Ram Manohar Lohiya National Law University (RMLNLU), Lucknow. He is also a Contributing Editor at the RMLNLU Arbitration Law Blog.

The media industry is witnessing a revolution when it comes to censorship of streaming content. Compared to theatres, it has become much easier for the web industry to dodge moral scrutiny when releasing its work. While the release of the Narendra Modi biopic during the 2019 Lok Sabha Elections caused significant controversy, a web series on the same subject was allowed to air without any issues, though it was later removed by the Election Commission for having violated the Model Code of Conduct.

There have been many instances where the content of a web series has been objected to for promoting vulgarity or violence, or for attacking political and religious sentiments. The Delhi HC recently heard a PIL filed by an NGO called Justice for Rights Foundation seeking the framing of guidelines to regulate the functioning of online media streaming platforms such as Netflix and Amazon, alleging that they show unregulated, uncertified and inappropriate content. However, the current situation indicates that content produced by such platforms continues to fall outside the purview of censorship laws, thereby requiring a regulatory mechanism that balances the conflicting views of the government, which attempts to play a watchdog role, and the advocates of creative and artistic freedom.

What are OTT platforms?

“Over-the-top (OTT)” is the buzz-word for services carried over networks that deliver value to customers without the involvement of a carrier service provider in the planning, selling, provisioning and servicing aspects. Essentially, the term refers to providing content over the internet unlike traditional media such as radio and cable TV.

The entertainment industry has gradually moved towards releasing content on streaming platforms such as Netflix and Amazon Prime. This reflects consumer preferences as expressed in a survey report by Mint and YouGov, which reveals millennials’ preference for online streaming over cable TV. Another study by Velocity MR expects this audience shift to reach 80% following the implementation of TRAI’s new tariff regime for pay-television, and the positive response to series like Sacred Games and Mirzapur from critics and audiences shows that quality of content is the key factor driving the move to streaming services.

Considering its increasing popularity, it becomes important to understand OTT from an Indian perspective. In 2015, amid the burning debates over net neutrality, TRAI floated a Consultation Paper on Regulatory Framework for Over-the-top (OTT) services to “analyze the implications of the growth of OTTs”. In this paper it defined the term “OTT provider” as a “service provider which offers Information and Communication Technology (ICT) services but does not operate a network or lease capacity from a network operator.” Instead, such providers rely on the global internet and access network speeds to reach the user, thereby going “over-the-top” of a service provider’s network. Based on the kind of service they provide, there are three types of OTT apps:

  • Messaging and voice services;
  • Application ecosystems, linked to social networks, e-commerce; and
  • Video/audio content.

In November 2018, TRAI came out with another consultation paper, citing a “significant increase in adoption and usage” since its last paper. In order to bring clarity to the understanding of OTT, chapter 2 of this Consultation Paper on Regulatory Framework for Over-The-Top (OTT) Communication Services discussed the definitions adopted for OTT in various jurisdictions. However, it failed to formulate a definition due to the lack of consensus at the global level. Moreover, the earlier definition from the 2015 Consultation Paper, reiterated in 2018, also appears to lose context because it was oriented towards telecom service providers.

TRAI’s approach while discussing OTT services has been to restrict itself to the telecom industry, so as to address complaints regarding interference by OTT services in the domain traditionally reserved for telecom service providers. Even though it includes “video content” as a third category, the lack of clarity in defining web series within the ambit of OTT in India is evident, which explains the absence of a regulatory mechanism for the same.

Differences between OTT platforms and conventional media

Conventional media vests the broadcaster with the discretion to air particular content. The audience in this case spans all age groups and classes and has no control over the content being broadcast, as a result of which governmental authorities are in charge of determining whether particular content is suitable for public viewing. However, the emergence of streaming has enabled a switch to a more personalized platform that caters to individual consumers, enabling them to decide for themselves what they wish to watch, which completely removes the role of government discretion and intervention.

Although rules and restrictions exist to regulate pay-television operators, they fail to place any checks and balances on the newly emerged online streaming platforms, owing to the significant differences in their structure and technology. The individualized viewing experience offered by OTT media channels has clearly reduced the amount of oversight any existing regulatory body can exercise over these platforms.

Can OTT platforms be regulated using existing laws?

The censorship of films in India is governed by the Cinematograph Act of 1952, which lays down certain categories in order to certify films for exhibition. Cable broadcast is governed by the Cable Television Networks (Regulation) Act, 1995 and the Cable Television Networks Rules, 1994. The Cable TV Rules explicitly lay down the programme and advertising codes that need to be followed in every broadcast.

Although it can be argued that online streaming of content can be treated like cable broadcast, this would fail the legal test when it comes to applying the statute to streaming platforms. Certification for cable television does not require a separate mechanism but is done by the Central Board of Film Certification itself, and the Cable TV Rules – specifically Rule 6(n) – restrict any programme from being carried over cable if it contravenes the provisions of the Cinematograph Act.

The problem here arises when defining the category within which web series fall under the existing laws. Under the Cable TV Act, cable service means “the transmission by cables of programs including re-transmission by cables of any broadcast television signals.”[1] A cable television network is defined as “any system consisting of a set of closed transmission paths and associated signal generation, control and distribution equipment, designed to provide cable service for reception by multiple subscribers.”[2] However, the mode of transmission for OTT platforms is substantially different insofar as the content travels through Internet service providers, which are difficult to regulate given their expanding nature. This makes the existing broadcasting laws inapplicable to OTT services.

The future of the OTT market

Censorship has always prevailed in the Indian television and cinema industry. Despite accusations of moral policing, the CBFC has continued to censor movies to bring them in line with its understanding of public morality. This raises issues of free speech and expression, and the courts have accordingly been drawn into these matters, adjudicating upon directions issued by the CBFC in various instances.

TRAI is presently conducting a consultation process to construct a framework to regulate online video streaming platforms like Netflix, Amazon Prime, Hotstar, etc., on requests made by some stakeholders of the film industry. Some major players in the industry, such as Netflix, Hotstar, Jio, Voot, Zee5, Arre, SonyLIV, ALT Balaji and Eros Now, signed a self-censorship code that prohibits over-the-top (OTT) online video platforms from showing certain kinds of content and sets up a redressal mechanism for customer complaints. However, Amazon declined to sign this code, along with Facebook and Google, stating that the current rules are adequate.

Considering the fact that the OTT media industry is growing rapidly, sooner or later it will require a regulatory body. Portals like Netflix are not even India-run, which increases the socio-political pressure on the government to scrutinize western content. Moreover, the spread of this industry to vulnerable groups will always remain a concern. Another problem that might come up with time could be that of regulating the prices of these services, as seen recently with cable TV. This may, in fact, lead to conflicts between the emerging online streaming industry and the pre-existing cable TV industry. The courts are already being approached against the violent and obscene content of some series, indicating the need for immediate attention from the legislature. The OTT boom in the Indian entertainment market has certainly revolutionized the viewing experience, but it has posed many questions and exposed loopholes that need to be addressed in the near future.

[1] Section 2(b), Cable Television Networks (Regulation) Act, 1995.

[2] Section 2(c), Cable Television Networks (Regulation) Act, 1995.


Metadata by TLF: Issue 4

Posted on September 10, 2019 by Tech Law Forum @ NALSAR

Welcome to our fortnightly newsletter, where our Editors put together handpicked stories from the world of tech law! You can find other issues here.

Facebook approaches SC in ‘Social Media-Aadhaar linking case’

In 2018, Anthony Clement Rubin and Janani Krishnamurthy filed PILs before the Madras High Court, seeking a writ of Mandamus to “declare the linking of Aadhaar or any one of the Government authorized identity proof as mandatory for the purpose of authentication while obtaining any email or user account.” The main concern of the petitioners was the traceability of social media users, which would be facilitated by linking their social media accounts with a government identity proof; this in turn could help combat cybercrime. The case was heard by a division bench of the Madras HC, and its scope was expanded to include the curbing of cybercrime with the help of online intermediaries. In June 2019, the Internet Freedom Foundation became an intervener in the case to provide expertise in the areas of technology, policy, law and privacy. Notably, the Madras HC dismissed the prayer asking for the linkage of social media and Aadhaar, stating that it violated the SC judgement on Aadhaar, which held that Aadhaar is to be used only for social welfare schemes.

Facebook later filed a petition to transfer the case to the Supreme Court. The hearing before the SC has currently been deferred to 13 September 2019, and proceedings at the Madras HC will continue. Multiple news sources reported that the TN government, represented by the Attorney General of India K.K. Venugopal, argued for linking social media accounts and Aadhaar before the SC. However, Medianama has reported that the same is not being considered at the moment and that the Madras HC has categorically denied it.

Further Reading:

  1. Aditi Agrawal, SC on Facebook transfer petition: Madras HC hearing to go on, next hearing on September 13, Medianama (21 August 2019).
  2. Nikhil Pahwa, Against Facebook-Aadhaar Linking, Medianama (23 August 2019).
  3. Aditi Agrawal, Madras HC: Internet Freedom Foundation to act as an intervener in Whatsapp traceability case, Medianama (28 June 2019).
  4. Aditi Agrawal, Kamakoti’s proposals will erode user privacy, says IIT Bombay expert in IFF submission, Medianama (27 August 2019).
  5. Prabhati Nayak Mishra, TN Government Bats for Aadhaar-Social Media Linking; SC Issues Notice in Facebook Transfer Petition, LiveLaw (20 August 2019).
  6. Asheeta Regidi, Aadhaar-social media account linking could result in creation of a surveillance state, deprive fundamental right to privacy, Firstpost (21 August 2019).

Bangladesh bans Mobile Phones in Rohingya camps

Adding to the chaos and despair of the Rohingyas, the Bangladeshi government banned the use of mobile phones in the refugee camps and restricted mobile phone companies from providing service in the region. The companies have been given a week to comply with these new rules. The reason cited for the ban was that refugees were misusing their cell phones for criminal activities. The situation in the region has worsened over the past two years, and according to UN officials the extreme violation of human rights is reaching the point of genocide. The ban on mobile phones would further worsen the situation by increasing the refugees’ detachment from the rest of the world, making their lives in the camps even more arduous.

Further Reading:

  1. Nishta Vishwakarma, Bangladesh bans mobile phones services in Rohingya camps, Medianama (4 September 2019).
  2. Karen McVeigh, Bangladesh imposes mobile phone blackout in Rohingya refugee camp, The Guardian (5 September 2019).
  3. News agencies, Bangladesh bans mobile phone access in Rohingya camps, Aljazeera (3 September 2019).
  4. Ivy Kaplan, How Smartphones and Social Media have Revolutionised Refugee Migration, The Globe Post (19 October 2018).
  5. Abdul Aziz, What is behind the rising chaos in Rohingya camps, Dhaka Tribune (24 March 2019).

YouTube to pay $170 million penalty for collecting children’s data without consent

Alphabet Inc.’s Google and YouTube will pay a $170 million penalty to the Federal Trade Commission to settle allegations that YouTube collected the personal information of children by tracking their cookies and earned millions through targeted advertisements without parental consent. The FTC Chairman, Joe Simons, condemned the company for publicizing its popularity with children to potential advertisers while blatantly violating the Children’s Online Privacy Protection Act. The company had claimed to advertisers that it did not need to comply with child privacy laws since it did not have any users under the age of 13. Additionally, the settlement mandates that YouTube create policies to identify content aimed at children and notify creators and channel owners of their obligation to collect consent from parents. YouTube has also announced that it will soon launch YouTube Kids, which will carry only child-friendly content and no targeted advertising. Several prominent Democrats at the FTC have criticized the settlement, despite it being the largest fine in a child privacy case so far, since the penalty is seen as a pittance in contrast to Google’s overall revenue.

Further Reading:

  1. Avie Schneider, Google, YouTube To Pay $170 Million Penalty Over Collecting Kids’ Personal Info, NPR (4 September 2019).
  2. Diane Bartz, Google’s YouTube To Pay $170 Million Penalty for Collecting Data on Kids, Reuters (4 September 2019).
  3. Natasha Singer and Kate Conger, Google Is Fined $170 Million for Violating Children’s Privacy on YouTube, New York Times (4 September 2019).
  4. Peter Kafka, The US Government Isn’t Ready to Regulate The Internet. Today’s Google Fine Shows Why, Vox (4 September 2019).

Facebook Data Leak of Over 419 Million Users

Recently, researcher Sanyam Jain located unsecured online servers that contained phone numbers of over 419 million Facebook users, including users from the US, UK and Vietnam. In some cases, the records also revealed the user’s real name, gender and country. The database was completely unsecured and could be accessed by anybody. The leak increases the possibility of SIM-swapping or spam call attacks against the users whose data has been exposed. The leak happened despite Facebook’s statement in April that it would be more dedicated to the privacy of its users and would restrict access to data to prevent data scraping. Facebook has attempted to downplay the effects of the leak by claiming that only 210 million records are affected, since the leaked data contains multiple duplicates; however, Zack Whittaker, Security Editor at TechCrunch, has highlighted that there is little evidence of such duplication. The data appears to be old, since the company has since changed its policy so that users can no longer search for phone numbers. Facebook has claimed that there is no actual evidence of a serious breach of user privacy.

Further Reading:

  1. Zack Whittaker, A huge database of Facebook users’ phone numbers found online, TechCrunch (5 September 2019).
  2. Davey Winder, Unsecured Facebook Server Leaks Data Of 419 Million Users, Forbes (5 September 2019).
  3. Napier Lopez, Facebook leak contained phone numbers for 419 million users, The Next Web (5 September 2019).
  4. Kris Holt, Facebook’s latest leak includes data on millions of users, Engadget (5 September 2019).

Mozilla Firefox 69 is here to protect your data

Addressing growing data protection concerns, Mozilla Firefox will now block third-party tracking cookies and cryptominers through its Enhanced Tracking Protection feature. To avail of this feature, users will have to update to Firefox 69, which enforces stronger security and privacy options by default. The browser’s Enhanced Tracking Protection will now remain turned on by default as part of the standard setting; however, users will have the option to turn the feature off for particular websites. Mozilla claims that this update will not only restrict companies from building user profiles by tracking browsing behaviour but will also enhance the performance, user interface and battery life of systems running Windows 10 and macOS.

Further Reading:

  1. Jessica Davies, What Firefox’s anti-tracking update signals about wider pivot to privacy trend, Digiday (5 September 2019).
  2. Jim Salter, Firefox is stepping up its blocking game, ArsTechnica (9 June 2019).
  3. Ankush Das, Great News! Firefox 69 Blocks Third Party Cookies, Autoplay Videos & Cryptominers by Default, It’s Foss (5 September 2019).
  4. Sean Hollister, Firefox’s latest version blocks third-party trackers by default for everyone, The Verge (3 September 2019).
  5. Shreya Ganguly, Firefox will now block third-party tracking cookies and cryptomining by default for all users, Medianama (4 September 2019).

Delhi Airport T3 terminal to use ‘Facial Recognition’ technology on a trial basis

Delhi airport will be starting a three-month trial of a facial recognition system at its T3 terminal. The system is called the Biometric Enabled Seamless Travel experience (BEST). With this technology, a passenger’s entry is automatically registered at various points such as check-in and security. The Portuguese company Vision-Box has provided the technical and software support for the technology. If the trial run is successful, the system will be officially incorporated. Even though the system is voluntary during the trial run, the pertinent question of whether it will remain voluntary after official incorporation is still to be answered.

Further Reading:

  1. Soumyarendra Barik, Facial Recognition tech to debut at Delhi airport’s T3 terminal; on ‘trial basis’ for next three months, Medianama (6 September 2019).
  2. PTI, Delhi airport to start trial run of facial recognition system at T3 from Friday, Livemint (5 September 2019).
  3. Times Travel Editor, Delhi International Airport installs facial recognition system for a 3 month trial, Times Travel (6 September 2019).
  4. Renée Lynn Midrack, What is Facial Recognition, Lifewire (10 July 2019).
  5. Geoffrey A. Fowler, Don’t smile for surveillance: Why airport face scans are a privacy trap, The Washington Post (10 June 2019).

UK Court approves use of facial recognition systems by South Wales Police

In one of the first cases of its kind, a British court ruled that police use of live facial recognition systems is lawful and does not violate privacy or human rights. The case was brought by Cardiff resident Ed Bridges, who alleged that his right to privacy had been violated by the system, which he claimed had recorded him at least twice without permission. The court arrived at its decision after finding that “sufficient legal controls” were in place to prevent improper use of the technology, including the deletion of data unless it concerned a person identified from the watch list.

Further Reading:

  1. Adam Satariano, Police Use of Facial Recognition Is Accepted by British Court, New York Times (4 September 2019).
  2. Owen Bowcott, Police use of facial recognition is legal, Cardiff high court rules, The Guardian (4 September 2019).
  3. Lizzie Dearden, Police used facial recognition technology lawfully, High Court rules in landmark challenge, The Independent (4 September 2019).
  4. Donna Lu, UK court backs police use of face recognition, but fight isn’t over, New Scientist (4 September 2019).

Read more

Article 13 of the EU Copyright Directive: A license to gag freedom of expression globally?

Posted on August 9, 2019 by Tech Law Forum @ NALSAR

The following post has been authored by Bhavik Shukla, a fifth-year student at National Law Institute University (NLIU), Bhopal. He is deeply interested in Intellectual Property Rights (IPR) law and Technology law. In this post, he examines the potential chilling effect of the EU Copyright Directive.

 

Freedom of speech and expression is the bellwether of the European Union (“EU”) Member States; so much so that its censorship would be the death of the most coveted human right. Europe possesses the strongest and most institutionally developed structure for freedom of expression through the European Convention on Human Rights (“ECHR”). In 1976, the European Court of Human Rights observed in Handyside v. United Kingdom that a “democratic society” could not exist without pluralism, tolerance and broadmindedness. However, the recently adopted EU Directive on Copyright in the Digital Single Market (“Copyright Directive”) seeks to alter this fundamental postulate of European society by bringing Article 13 to the fore. Through this post, I intend to deal with the contentious aspects of Article 13 of the Copyright Directive, limited to its chilling impact on the freedom of expression. Subsequently, I shall elaborate on how the Copyright Directive has the ability to drive censorship globally.

Collateral censorship: Panacea for internet-related issues in the EU

The adoption of Article 13 of the Copyright Directive hints at the EU’s implementation of a collateral censorship-based model. Collateral censorship occurs when a state holds one private party, “A”, liable for the speech of another private party, “B”. The problem with such a model is that it vests the power to censor content primarily in a private party, namely “A” in this case. The implementation of this model is known to have an adverse effect on the freedom of speech, and the adoption of the Copyright Directive has contributed to producing such an effect.

The Copyright Directive envisages a new concept of online content sharing service providers (“service providers”), which refers to a “provider… whose main purpose is to store and give access to the public to significant amount of protected subject-matter uploaded by its users…” Article 13(1) of the Copyright Directive states that such service providers shall perform an act of “communication to the public” as per the provisions of the Infosoc Directive. Further, Article 13(2a) provides that service providers shall ensure that “unauthorized protected works” shall not be made available. However, this Article also places service providers under an obligation to provide access to “non-infringing works” or “other protected subject matter”, including those covered by exceptions or limitations to copyright. The Copyright Directive’s scheme of collateral censorship is evident from the functions entrusted to the service providers, wherein they are expected to purge their networks and websites of unauthorized content transmitted or uploaded by third parties. A failure to do so would expose service providers to liability for infringement of the content owner’s right to communication to the public, as provided in the Infosoc Directive.

The implementation of a collateral censorship model will serve as a conduit for a crackdown on the freedom of expression. The reason for this emanates from the existence of certain content which necessarily falls within the grey area between legality and illegality. Stellar examples of such content are memes and parodies, and it is primarily in respect of such content that problems of censorship may arise. To bolster this argument, consider Facebook, the social media website which boasts 1.49 billion daily active users. As per an official report in 2013, users were uploading 350 million photos a day, a number that has risen exponentially since. When intermediaries like Facebook are faced with implementing the Copyright Directive, the sheer volume of data being uploaded or transmitted will necessarily require them to employ automated mechanisms for detecting and flagging infringing material. The accuracy of such software in detecting infringing content has been the major point of contention against its implementation. Even though content like memes and parodies may be flagged as infringing by such software, automated blocking of content is prohibited under Article 13(3) of the Copyright Directive. This brings up the question of human review of such purportedly infringing content. In this regard, first, it is impossible for any human agency to review such large tracts of data even after filtration by an automated system. Second, even if such content is somehow reviewed, a human agent may not be able to correctly decide on its legality.
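Automated detection of the kind described above typically works by comparing overlapping "fingerprints" of an upload (word sequences, audio frames, image hashes) against a database of protected works. A deliberately naive Python sketch (the texts and threshold are invented for illustration) shows why a parody that reuses most of an original's phrasing gets flagged despite being transformative:

```python
def fingerprints(text, k=4):
    """Return the set of overlapping k-word sequences ("shingles") in text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def flags_as_infringing(upload, protected_works, overlap=0.5):
    """Flag the upload if enough of its fingerprints match any protected work."""
    up = fingerprints(upload)
    if not up:
        return False
    return any(len(up & fingerprints(w)) / len(up) >= overlap
               for w in protected_works)

original = "never gonna give you up never gonna let you down"
parody   = "never gonna give you up never gonna let copyright down"

print(flags_as_infringing(parody, [original]))  # True — the parody is flagged
```

The filter cannot see intent or context, only textual overlap; that is precisely the grey area the paragraph above describes, and why Article 13(3)'s prohibition on purely automated blocking pushes the problem onto human reviewers.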

This scenario will compel service providers to take down any content that may even remotely expose them to liability, with memes and parodies serving as the scapegoats. Such actions by service providers will certainly censor freedom of expression. Another problem arising from this framework is its adverse effect on net neutrality: entrusting service providers with blocking access to content may lead to indiscriminate blocking of certain types of content.

Though the Copyright Directive provides certain safeguards in this regard, they are belated and ineffective. For example, consider the “complaints and redress mechanism” provided by Article 13(2b) of the Copyright Directive. This mechanism offers recourse only after the actual takedown or blocking of access to content. This is problematic because users are either unaware of such mechanisms, do not have the requisite time and resources to prove the legality of their content, or are simply fed up with repeated takedowns. An easy way to understand these concerns is through YouTube’s unjustified takedowns of content, which put content owners under the same burdens as expressed above. Regardless of the reason for inaction by content owners, censorship is the effect.

The EU Copyright Directive’s tryst with the world

John Perry Barlow stated in his Declaration of the Independence of Cyberspace that “Cyberspace does not lie within your borders”. This statement is true to a large extent: cyberspace and the internet do not lie within any country’s borders; rather, their existence is cross-border. Does this mean that the law in the EU affects the content we view in India? It certainly does!

The General Data Protection Regulation (“GDPR”) applies to countries beyond the EU. The global effect of the Copyright Directive is similar, as service providers do not distinguish European users from those in the rest of the world. It only makes sense for websites in this situation to adopt a mechanism which applies uniformly to each user regardless of his/her location. This is the same line of reasoning that led service providers to revise their user and privacy policies in every country upon the introduction of the GDPR. Thus, the adoption of these stringent norms by service providers in all countries alike, owing to the omnipresence of internet-based applications, may lead to global censorship motivated by European norms.

The UN Special Rapporteur had envisaged that Article 13 would have a chilling effect on the freedom of expression globally. Subsequent to the Directive’s adoption, the Polish government challenged its applicability before the CJEU on the ground that it would lead to unwarranted censorship. Similar action is likely from the other dissenters of the Copyright Directive, namely Italy, Finland, Luxembourg and the Netherlands. In light of this united front, hope hinges on these countries to prevent the implementation of censoring laws across the world.

Read more

Mackinnon’s “Consent of The Networked” Deconstruction (Part II)

Posted on July 7, 2019 by Prateek Surisetti

SERIES INTRODUCTION

Rebecca MacKinnon’s “Consent of the Networked: The Worldwide Struggle for Internet Freedom” is an interesting read on free speech on the internet, in the context of a world where corporations are challenging the sovereignty of governments. Having read the book, I will be familiarizing readers with some of the themes and ideas discussed in MacKinnon’s work.

In Part I, we discussed censorship in the context of authoritarian governments.

In Part II, we will be dealing with the practices of democratic governments vis-à-vis online speech.

In Part III, we shall discuss the influence of corporations on online speech.

Essentially, the discussion will revolve around the interactions between three stakeholders: netizens, corporations providing internet-based products, and governments (both autocratic and democratic). Each of the stakeholders has varied interests or agendas and works with or against the others depending on the situation.

Governments wish to control corporations’ online platforms to pursue political agendas; corporations wish to attract users and generate profits, while also having to acquiesce to government demands in order to access markets. The ensuing interactions between corporations and governments affect netizens’ online civil liberties across the world.

DEMOCRATIC GOVERNMENTS

In this section, we will be dealing with the actions of democratic governments and their effects on online speech.

MacKinnon notes that apart from authoritarian governments, even democratic institutions, albeit to a lesser degree, indulge in activities that are detrimental to free speech online. For instance, after the U.S. learnt that the Chinese government had access to a “kill switch” that would allow it to terminate all access to the internet in its territory, the U.S. legislature attempted to pass legislation that would provide the U.S. government with a similar capability. Though the legislation wasn’t passed, it shows that there exist voices within democratic set-ups that seek governmental power over cyberspace.

Further, corporations in the U.S. might be asked to comply with warrantless demands for information or surveillance, with no recourse in law available to them. These corporations might even be asked to comply with specific “requests” from the government. For instance, Amazon was initially hosting WikiLeaks but, allegedly under U.S. pressure, backed out. It is pertinent to note that such pressure from the government, exerted in an opaque manner, is problematic, as such actions skirt Due Process.

The Panopticon Effect has consequences in democratic countries too. If government actions are opaque, citizens will be unaware of the breadth of surveillance and will consequently alter their behaviour, believing that they are being watched at all times.

Anonymity, Corporate Policing and Legitimization of Authoritarian Censorship

In addition to such opaque measures, democratic institutions also engage in legal censorship, which MacKinnon refers to as “Democratic Censorship”. The essential concern democratic countries face while dealing with censorship is balancing the curtailment of online crime and problematic speech (e.g. child pornography) against safeguarding the civil liberties of other netizens. Issues relevant to this balancing include anonymity, corporate policing of platforms and the legitimization of authoritarian censorship.

The issue of anonymity features prominently in discussions on balancing online privacy with online safety. While requiring netizens to identify themselves online would make them more accountable for their online transgressions, netizens involved in political activities, fearing social sanctions (e.g. the judgment concerning anti-abortion speech), might refrain from posting. Without the option of anonymity, cyberspace would cease to serve as a platform for unpopular speech. Further, a government, generally influenced by majoritarian views, cannot be expected to regulate without bias. Hence, any requirement of non-anonymity can serve as a potential tool for censorship even in democratic set-ups.

To protect netizens from problematic speech (e.g. child pornography), governments task the private sector with policing its own platforms. For instance, Google is expected to screen its video-sharing platform YouTube for problematic speech. As this instance shows, legislating “Intermediary Liability” is one possible method of ensuring that corporations police their platforms, as intermediary liability laws make a corporation liable for problematic speech found on its platform. In Italy, Google executives were sentenced to prison for failing to prevent the uploading of a video of an autistic child, thereby violating the child’s privacy.

What are the consequences of requiring corporations to police their platform?

First, issues of legitimacy arise. Should an entity that isn’t accountable to the public at all be given the authority to act as a gatekeeper for content? Customer accounts are intruded into and regulated by those who aren’t accountable to the public. We will revisit this argument in Part III.

Consider the case of the Internet Watch Foundation (IWF), an organization that maintains an updated list of websites it considers objectionable. U.K.-based Internet Service Providers use the list, of their own volition, to block access to the listed websites. It isn’t MacKinnon’s contention that the IWF is a fraud, but the example showcases the immense power that private entities can exercise over online speech, and the vacuum of accountability measures.

Second, ascribing liability to corporations for failing to remove problematic speech would push them towards being extremely cautious in screening content. In other words, corporations, in their zeal to avoid any liability whatsoever, would be inclined to block all content that seems problematic but might not actually be. Hence, content that shouldn’t be blocked might be. There would be “collateral filtering”, or blockage of content that isn’t actually intended to be blocked by the regulator. For instance, if the word “sex” were flagged for blocking to weed out pornographic websites, even content relating to health and marriage that uses the word would get blocked.
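The “sex” example can be reduced to a few lines of code. A naive blocklist filter (entirely hypothetical, for illustration) has no way to distinguish the regulator's intended target from health or education content that happens to use the same word:

```python
# A hypothetical one-word blocklist of the kind described above.
BLOCKLIST = {"sex"}

def is_blocked(text):
    """Block the text if any of its words appears on the blocklist."""
    return any(word in BLOCKLIST for word in text.lower().split())

print(is_blocked("explicit sex content"))         # True  (intended target)
print(is_blocked("sex education for teenagers"))  # True  (collateral filtering)
print(is_blocked("marriage counselling advice"))  # False
```

The second result is the collateral filtering MacKinnon describes: the filter is syntactic, while the regulator's intent is semantic.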

Third, the intermediary liability model pushes corporations to block potentially problematic content at the outset and review the blocking, if necessary, much later. Such a practice runs contrary to one of the foundational principles of Due Process, i.e. “innocent until proven guilty”. Additionally, it is especially detrimental to the efficacy of political speech, which often loses its impact with the passage of time. For instance, if a journalist writes a scathing article on the government for fuelling communal riots, the article will have its maximum impact when published during or shortly after the riots, while the issue is fresh in the minds of readers. Therefore, even if corporations republish content upon review, the content may have lost its potency.

Hence, there exist various problems with requiring corporations to police their platforms.

Moving on, MacKinnon argues that “Democratic Censorship” also leads to legitimization of censorship policies of authoritarian governments.

In this regard, the U.S. government’s actions in the realm of intellectual property are especially problematic. In its zeal to protect copyrights, the U.S. and other countries have overlooked Due Process. For instance, WikiLeaks revealed that the U.S. and 34 other countries were negotiating an international treaty called the Anti-Counterfeiting Trade Agreement, which required intermediaries to police their platforms and remove content without violation having to be proven.

When democratic governments eschew Due Process in this manner, they legitimize the actions of authoritarian governments, allowing such governments to claim that their internet policies accord with international standards. When the U.S. legislature, pushed by lobbyists, sacrificed civil liberties to protect intellectual property rights, it gave the Chinese and Russian governments cover for suppressing dissent. For example, the Russian authorities clamped down on dissenters by taking them to task for violating Microsoft’s copyright. As an aside, it is heart-warming to note that Microsoft changed its policy after the event.

Conclusion

In Part II, we have attempted to (a) understand the pressures that democratic governments place on corporations, (b) understand “democratic censorship” and the attempts of democracies to balance measures against problematic speech with the protection of netizens’ civil liberties, (c) understand “intermediary liability” and “collateral filtering”, and (d) understand the dilution of Due Process in democracies and its effect of legitimizing the censorship policies of authoritarian regimes.

In Part III, we will analyse the influence of corporations on online speech.

 

Image taken from here.

Read more