Welcome to our fortnightly newsletter, where our reporters Kruttika Lokesh and Dhananjay Dhonchak put together handpicked stories from the world of tech law! You can find other issues here.
Private firm blocked from buying “.org” domain
[This post has been authored by Mohd Rameez Raza (Faculty of Law, Integral University, Lucknow) and Raj Shekhar (NUSRL, Ranchi).]
The Internet is one of the most powerful instruments of the 21st century for increasing transparency in day-to-day functioning, improving access to information, and, most importantly, facilitating active citizen participation in building strong democratic societies. Relying on this belief, the Kerala High Court, in a monumental decision, has held the ‘Right to Internet Access’ to be a fundamental right, making access to the Internet part of the ‘Right to Education’ as well as the ‘Right to Privacy’ under Article 21 of the Constitution of India.
This post has been authored by Aryan Babele, a final year student at Rajiv Gandhi National University of Law (RGNUL), Punjab and a Research Assistant at Medianama.
On 23rd October 2019, the Delhi HC delivered a judgment authorizing Indian courts to issue “global take down” orders to Internet intermediary platforms like Facebook, Google and Twitter for illegal content uploaded, published and shared by users. The judgment came on a plea filed by Baba Ramdev and Patanjali Ayurved Ltd. requesting the global takedown of certain videos alleged to be defamatory.
Israeli spyware ‘Pegasus’ used to snoop on Indian activists, journalists, lawyers
In a startling revelation, Facebook-owned messaging app WhatsApp disclosed that a spyware known as ‘Pegasus’ has been used to target and surveil Indian activists and journalists. The revelation came to light after WhatsApp filed a lawsuit against the Israeli NSO Group, accusing it of using servers located in the US and elsewhere to send malware to approximately 1,400 mobile phones and devices. For its part, the NSO Group has consistently claimed that it sells its software only to government agencies and that it does not itself target particular subjects. The Indian government sought a detailed reply from WhatsApp but has expressed dissatisfaction with the response received, with the Ministry of Electronics and Information Technology stating that the reply has “certain gaps” which need to be investigated further.
RBI raises concerns over WhatsApp Pay
Adding to WhatsApp’s woes in India just after the Pegasus hacking incident, the RBI has asked the National Payments Corporation of India (NPCI) not to permit WhatsApp to go ahead with the full rollout of its payment service, WhatsApp Pay. The central bank has expressed concerns over WhatsApp’s non-compliance with data processing regulations, which permit payment data to be processed outside India only on the condition that the data is brought back to servers located in the country, with no copies left on foreign servers.
Kenya passes new Data Protection Law
The Kenyan President, Uhuru Kenyatta, recently approved a new data protection law in conformity with the standards set by the European Union. The new law was passed after it was found that existing data protection rules were not keeping pace with growing investments from firms such as Safaricom and Amazon. There was growing concern that tech giants such as Facebook and Google would be able to collect and utilise data across the continent without any restrictions and consequently violate the privacy of citizens. The new law places specific restrictions on the manner in which personally identifiable data can be handled by the government, companies and individuals, and violations can attract penalties of up to three million shillings or prison sentences.
Google gains access to healthcare data of millions through ‘Project Nightingale’
Google has been found to have gained access to the healthcare data of millions through its partnership with healthcare firm Ascension. The venture, named ‘Project Nightingale’, allows Google to access health records, names and addresses without informing patients, in addition to other sensitive data such as lab results, diagnoses and records of hospitalisation. Neither doctors nor patients need to be told that Google can access the information, though the company has defended itself by stating that the deal amounts to “standard practice”. The firm has also stated that it does not link patient data with its own data repositories; however, this has not stopped individuals and rights groups from raising privacy concerns.
Law professor files first ever lawsuit against facial recognition in China
Law professor Guo Bing sued the Hangzhou Safari Park after it suddenly made facial recognition registration a mandatory requirement for visitor entry. The park had previously used fingerprint recognition to allow entry, but switched to facial recognition as part of the Chinese government’s aggressive rollout of the technology, meant to boost security and enhance consumer convenience. While it has been speculated that the lawsuit might be dismissed if pursued, it has stirred conversations among citizens over privacy and surveillance, which it is hoped will result in reform of the nation’s existing internet laws.
Twitter to ban all political advertising
Twitter has taken the decision to ban all political advertising, in a move that increases pressure on Facebook over its controversial stance of allowing politicians to advertise false statements. The policy was announced via CEO Jack Dorsey’s account on Wednesday, and will apply to all ads relating to elections and associated political issues. However, the move may prove to be largely symbolic, as political ads on Twitter are just a fraction of those on Facebook in terms of reach and impact.
The media industry is currently witnessing a revolution when it comes to censorship of streaming content. Compared to theatrical releases, it has become much easier for the web industry to dodge moral scrutiny when releasing its work. While the release of the Narendra Modi biopic during the 2019 Lok Sabha Elections caused significant controversy, a web series on the same subject was allowed to air without any issues, though it was later removed by the Election Commission for violating the Model Code of Conduct.
There have been many instances where the content of a web series has been objected to for promoting vulgarity or violence, or for attacking political and religious sentiments. A PIL was recently filed before the Delhi HC by an NGO, the Justice for Rights Foundation, seeking the framing of guidelines to regulate online media streaming platforms such as Netflix and Amazon, alleging that they show unregulated, uncertified and inappropriate content. However, content produced by such platforms currently remains outside the purview of censorship laws, creating the need for a regulatory mechanism that balances the conflicting positions of the government, which seeks to play a watchdog role, and the advocates of creative and artistic freedom.
“Over-the-top (OTT)” is the buzzword for services carried over networks that deliver value to customers without a carrier service provider being involved in the planning, selling, provisioning or servicing of those services. Essentially, the term refers to providing content over the internet, unlike traditional media such as radio and cable TV.
The entertainment industry has in recent times gradually moved towards releasing content on streaming platforms such as Netflix and Amazon Prime. This reflects consumer preferences: a survey report by Mint and YouGov reveals millennials’ preference for online streaming over cable TV, while Velocity MR expects audience migration to reach 80% following the implementation of TRAI’s new tariff regime for pay-television. The positive responses of critics and audiences to series like Sacred Games and Mirzapur show that quality of content is the key factor driving the move to streaming services.
Considering its increasing popularity, it becomes important to understand OTT from an Indian perspective. In 2015, amid the burning debates over net neutrality, TRAI floated a Consultation Paper on Regulatory Framework for Over-the-top (OTT) services to “analyze the implications of the growth of OTTs”. In this paper it defined an “OTT provider” as a “service provider which offers Information and Communication Technology (ICT) services but does not operate a network or lease capacity from a network operator”. Instead, such providers rely on the global internet and access networks to reach the user, thereby going “over-the-top” of a service provider’s network. Based on the kind of service they provide, OTT apps are classified into three types.
In November 2018, TRAI came out with another consultation paper, citing a “significant increase in adoption and usage” since its last paper. In order to bring clarity to the understanding of OTT, chapter 2 of this Consultation Paper on Regulatory Framework for Over-The-Top (OTT) Communication Services discussed the definitions adopted for OTT in various jurisdictions. However, it failed to formulate a definition due to the lack of consensus at the global level. Moreover, the earlier definition from the 2015 consultation paper, reiterated in 2018, also appears to lose relevance because it was oriented towards telecom service providers.
TRAI’s approach while discussing OTT services has been to restrict itself to the telecom industry, so as to address complaints regarding interference by OTT services in the domain traditionally reserved for telecom service providers. Even though it includes “video content” as its third category, there is an evident lack of clarity in defining web series within the ambit of OTT in India, which explains the absence of a regulatory mechanism for them.
Conventional media vests the broadcaster with the discretion to air particular content. The viewership in this case spans all age groups and classes, who have no control over the content being broadcast; as a result, governmental authorities are in charge of determining whether particular content is suitable for public exhibition. However, the emergence of streaming has enabled a switch to a more personalized platform that caters to individual consumers, enabling them to decide for themselves what they wish to watch, which completely removes the role of government discretion and intervention.
Although rules and restrictions exist to regulate pay-television operators, they fail to place any checks and balances on the newly emerged online streaming platforms, owing to the significant differences in their structure and technology. The individualized viewing experience that OTT media channels offer has clearly reduced the oversight that existing regulatory bodies can exercise over these platforms.
The censorship of films in India is governed by the Cinematograph Act of 1952, which lays down certain categories for certifying films that are to be exhibited. Cable broadcast is governed by the Cable Television Networks (Regulation) Act, 1995 and the Cable Television Networks Rules, 1994. The Cable TV Rules explicitly lay down the program and advertising codes that must be followed in every broadcast.
Although it can be argued that online streaming of content can be treated like cable broadcast, this argument fails the legal test when it comes to applying the statute to streaming platforms. Certification for cable television does not require a separate mechanism but is done by the Central Board of Film Certification itself, and the Cable TV Rules – specifically Rule 6(n) – restrict any program from being carried over cable if it is in contravention of the provisions of the Cinematograph Act.
The problem arises when determining the category under which web series fall within the existing laws. Under the Cable TV Act, cable service means “the transmission by cables of programs including re-transmission by cables of any broadcast television signals.” A cable television network is defined as “any system consisting of a set of closed transmission paths and associated signal generation, control and distribution equipment, designed to provide cable service for reception by multiple subscribers.” However, the mode of transmission for OTT platforms is substantially different, insofar as the content travels through Internet service providers, which are difficult to regulate given their expanding nature. This makes the existing broadcasting laws inapplicable to OTT services.
Censorship has always prevailed in the Indian television and cinema industry. Despite accusations of moral policing, the CBFC has continued to censor movies to bring them in line with its understanding of public morality. This raises issues of free speech and expression, which has seen the courts get involved, adjudicating upon directions issued by the CBFC in various instances.
TRAI is presently conducting a consultation process to construct a framework to regulate online video streaming platforms like Netflix, Amazon Prime and Hotstar, at the request of some stakeholders in the film industry. Major players in the industry, including Netflix, Hotstar, Jio, Voot, Zee5, Arre, SonyLIV, ALT Balaji and Eros Now, have signed a self-censorship code that prohibits over-the-top (OTT) online video platforms from showing certain kinds of content and sets up a redressal mechanism for customer complaints. However, Amazon declined to sign this code, along with Facebook and Google, stating that the current rules are adequate.
Considering that the OTT media industry is growing rapidly, sooner or later it will require a regulatory body. Portals like Netflix are not even India-run, which adds socio-political pressure on the government to scrutinize western content. Moreover, the reach of this industry to vulnerable groups will always remain a concern. Another problem that might arise with time is the regulation of service prices, as seen recently with cable TV; this may, in fact, lead to conflicts between the emerging online streaming industry and the pre-existing cable TV industry. The courts are already being approached over the violent and obscene content of some series, indicating the need for immediate attention from the legislature. The OTT boom in the Indian entertainment market has certainly revolutionized the viewing experience, but it has also posed many questions and exposed loopholes that need to be addressed in the near future.
Section 2(b), Cable Television Networks (Regulation) Act, 1995.
Section 2(c), Cable Television Networks (Regulation) Act, 1995.
In 2018, Anthony Clement Rubin and Janani Krishnamurthy filed PILs before the Madras High Court, seeking a writ of Mandamus to “declare the linking of Aadhaar of any one of the Government authorized identity proof as mandatory for the purpose of authentication while obtaining any email or user account.” The main concern of the petitioners was traceability of social media users, which would be facilitated by linking their social media accounts with a government identity proof; this in turn could help combat cybercrime. The case was heard by a division bench of the Madras HC, and the scope was expanded to include curbing of cybercrime with the help of online intermediaries. In June 2019, the Internet Freedom Foundation became an intervener in the case to provide expertise in the areas of technology, policy, law and privacy. Notably, Madras HC dismissed the prayer asking for linkage of social media and Aadhaar, stating that it violated the SC judgement on Aadhaar which held that Aadhaar is to be used only for social welfare schemes.
Facebook later filed a petition asking the SC to transfer the case to itself. Currently, the hearing before the SC has been deferred to 13 September 2019, and the proceedings at the Madras HC will continue. Multiple news sources reported that the TN government, represented by Attorney General K.K. Venugopal, argued before the SC for linking social media accounts and Aadhaar. However, Medianama has reported that this is not being considered at the moment and that the Madras HC has categorically denied it.
Adding to the chaos and despair for the Rohingya, the Bangladeshi government banned the use of mobile phones in the region and restricted mobile phone companies from providing service there, giving the companies a week to comply with the new rules. The reason cited for the ban was that refugees were misusing their cell phones for criminal activities. The situation in the region has worsened over the past two years, with UN officials describing the extreme human rights violations as approaching genocide. The ban on mobile phones would worsen the situation further by deepening the refugees’ detachment from the rest of the world, making their lives in the camps even more arduous.
Alphabet Inc.’s Google and YouTube will be paying a $170 million penalty to the Federal Trade Commission to settle allegations that YouTube collected the personal information of children by tracking their cookies, earning millions through targeted advertisements without parental consent. The FTC Chairman, Joe Simons, condemned the company for publicizing its popularity with children to potential advertisers while blatantly violating the Children’s Online Privacy Protection Act. The company had claimed to advertisers that it did not need to comply with any child privacy laws since it had no users under the age of 13. The settlement additionally mandates that YouTube create policies to identify content aimed at children and notify creators and channel owners of their obligations to collect parental consent. YouTube has also announced that it will soon be launching YouTube Kids, which will carry only child-friendly content and no targeted advertising. Several prominent Democrats at the FTC have criticized the settlement, despite it being the largest fine in a child privacy case so far, since the penalty is seen as a pittance in contrast to Google’s overall revenue.
Recently, researcher Sanyam Jain located unsecured online servers that contained phone numbers of over 419 million Facebook users, including users from the US, UK and Vietnam. In some cases, the records also revealed the user’s real name, gender and country. The database was completely unsecured and could be accessed by anybody, increasing the possibility of SIM-swapping or spam-call attacks against the users whose data was exposed. The leak happened despite Facebook’s statement in April that it would be more dedicated to the privacy of its users and would restrict access to data to prevent scraping. Facebook has attempted to downplay the leak by claiming that it actually covers only 210 million users, since the data contains multiple duplicates; however, Zack Whittaker, security editor at TechCrunch, has highlighted that there is little evidence of such duplication. The data appears to be old, since the company has since changed its policy so that users can no longer be searched for by phone number. Facebook has claimed that there is no actual evidence of a serious breach of user privacy.
Addressing growing data protection concerns, Mozilla Firefox will now block third-party tracking cookies and cryptominers through its Enhanced Tracking Protection feature. To avail of this feature, users will have to update to Firefox 69, which enforces stronger security and privacy options by default. Enhanced Tracking Protection will now remain turned on by default as part of the standard setting, though users will have the option to turn the feature off for particular websites. Mozilla claims that this update will not only restrict companies from building user profiles by tracking browsing behaviour, but will also improve performance, the user interface and battery life on systems running Windows 10 or macOS.
Delhi airport will be starting a three-month trial of a facial recognition system at its T3 terminal, called the Biometric Enabled Seamless Travel experience (BEST). With this technology, passengers’ entry would be automatically registered at various points such as check-in and security. The Portuguese company Vision-Box has provided the technical and software support for the system. Even though the system is voluntary during the trial run, the pertinent question of whether it will remain voluntary after official incorporation is still to be answered; if the trial run is successful, the system will be officially adopted.
In one of the first cases of its kind, a British court ruled that police use of live facial recognition systems is legal and does not violate privacy and human rights. The case was brought by Cardiff resident Ed Bridges, who alleged that the system had violated his right to privacy by recording him at least twice without permission, and who sought a declaration that the use of the system violated human rights, including the right to privacy. The court arrived at its decision after finding that “sufficient legal controls” were in place to prevent improper use of the technology, including the deletion of data unless it concerned a person identified from the watch list.
Freedom of speech and expression is the bedrock of the European Union (“EU”) Member States; so much so that its censorship would be the death of this most coveted human right. Europe possesses the strongest and most institutionally developed structure for freedom of expression through the European Convention on Human Rights (“ECHR”). In 1976, the European Court of Human Rights observed in Handyside v. United Kingdom that a “democratic society” could not exist without pluralism, tolerance and broadmindedness. However, the recently adopted EU Copyright Directive in the Digital Single Market (“Copyright Directive”) seeks to alter this fundamental postulate of European society by bringing Article 13 to the fore. Through this post, I intend to deal with the contentious aspects of Article 13 of the Copyright Directive, limited to its chilling effect on the freedom of expression. Subsequently, I shall elaborate on how the Copyright Directive possesses the ability to affect censorship globally.
The adoption of Article 13 of the Copyright Directive hints at the EU’s implementation of a collateral-censorship-based model. Collateral censorship occurs when a state holds one private party, “A”, liable for the speech of another private party, “B”. The problem with such a model is that it vests the power to censor content primarily in a private party, namely “A” in this case. The implementation of this model is known to have an adverse effect on the freedom of speech, and the adoption of the Copyright Directive has contributed towards producing such an effect.
The Copyright Directive envisages a new concept of online content sharing service providers (“service providers”), which refers to a “provider… whose main purpose is to store and give access to the public to significant amount of protected subject-matter uploaded by its users…” Article 13(1) of the Copyright Directive states that such service providers shall perform an act of “communication to the public” as per the provisions of the Infosoc Directive. Further, Article 13(2a) provides that service providers shall ensure that “unauthorized protected works” shall not be made available. However, this Article also places service providers under an obligation to provide access to “non-infringing works” or “other protected subject matter”, including those covered by exceptions or limitations to copyright. The Copyright Directive’s scheme of collateral censorship is evident from the functions entrusted to the service providers, wherein they are expected to purge their networks and websites of unauthorized content transmitted or uploaded by third parties. A failure to do so would expose service providers to liability for infringement of the content owner’s right to communication to the public, as provided in the Infosoc Directive.
The implementation of a collateral censorship model will serve as a conduit for a crackdown on the freedom of expression. The reason emanates from the existence of certain content which necessarily falls within the grey area between legality and illegality; prime examples are memes and parodies. It is primarily in respect of such content that the problems related to censorship may arise. To bolster this argument, consider Facebook, the social media website which boasts 1.49 billion daily active users. As per an official report in 2013, users were uploading 350 million photos a day, and the number has risen substantially since. When intermediaries like Facebook are faced with implementing the Copyright Directive, the sheer volume of data being uploaded or transmitted will necessarily require them to employ automated mechanisms for flagging or detecting infringing material. The accuracy of such software in detecting infringing content has been the major point of contention against its implementation. Even though content like memes and parodies may be flagged as infringing by such software, automated blocking of content is prohibited under Article 13(3) of the Copyright Directive. This brings up the question of human review of such purportedly infringing content. In this regard, first, it is impossible for any human agency to review such large tracts of data even after filtration by an automatic system. Second, even where such content is somehow reviewed, a human agent may not be able to correctly decide its legality.
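The accuracy dilemma described above can be illustrated with a deliberately simplified sketch. This is purely an illustrative assumption, not a description of any real filtering system: actual platforms use proprietary perceptual or audiovisual fingerprinting, not the naive byte-level comparison below. The point it makes is structural: exact matching misses even trivially modified copies (under-blocking), so platforms are pushed towards similarity-based matching, under which a meme or parody that reuses most of an original scores as "infringing" (over-blocking).

```python
import hashlib

def exact_fingerprint(data: bytes) -> str:
    # Exact matching: any single-byte change produces a completely
    # different hash, so modified copies slip through.
    return hashlib.sha256(data).hexdigest()

def similarity(a: bytes, b: bytes) -> float:
    # Crude byte-level similarity as a stand-in for fuzzy matching
    # (hypothetical; real systems compare perceptual features instead).
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

original = b"copyrighted image bytes " * 100   # stand-in for a protected work
meme = original.replace(b"image", b"imagE", 1)  # a tiny edit, e.g. a caption overlay

# Exact fingerprints no longer match, so an exact filter under-blocks...
print(exact_fingerprint(original) == exact_fingerprint(meme))  # False

# ...while a similarity threshold flags the transformative work as infringing.
print(similarity(original, meme) > 0.99)  # True
```

Either way the filter errs, which is why the Directive's ban on automated blocking merely shifts the problem to human reviewers facing the same volume.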
This scenario shall compel the service providers to take down memes and parodies – the scapegoats among content – whenever they may even remotely expose providers to liability. Such actions by the service providers will certainly censor freedom of expression. Another problem arising from this framework is its adverse effect on net neutrality: entrusting service providers with blocking access to content may lead to indiscriminate blocking of certain types of content.
Though the Copyright Directive provides certain safeguards in this regard, they are latent and ineffective. Consider, for example, the “complaints and redress mechanism” provided by Article 13(2b) of the Copyright Directive. This mechanism offers recourse only after the actual takedown or blocking of access to content. It is problematic because users are either unaware that such mechanisms exist, lack the time and resources to prove the legality of their content, or are simply fed up with repeated takedowns. An easy way to understand these concerns is through YouTube’s ongoing unjustified takedowns of content, which place content owners under the same burdens described above. Regardless of the reason for content owners’ inaction, censorship is the effect.
John Perry Barlow stated in his Declaration of the Independence of Cyberspace that “Cyberspace does not lie within your borders”. This statement is true to a large extent: cyberspace and the internet do not lie within any country’s borders; rather, their existence is cross-border. Does this mean that the law in the EU affects the content we view in India? It certainly does!
The General Data Protection Regulation (“GDPR”) applies to countries beyond the EU. The global effect of the Copyright Directive is similar, as service providers do not distinguish European services from those of the rest of the world. It only makes sense for the websites in this situation to adopt a mechanism which applies unconditionally to each user regardless of his/ her location. This is the same line of reasoning which was adopted by service providers in order to review user and privacy policies in every country on the introduction of the GDPR. Thus, the adoption of these stringent norms by service providers in all countries alike due to the omnipresence of internet-based applications may lead to a global censorship motivated by European norms.
The UN Special Rapporteur had envisaged that Article 13 would have a chilling effect on the freedom of expression globally. Subsequent to the Directive’s adoption, the Polish government protested against its applicability before the CJEU on the ground that it would lead to unwarranted censorship. Such action is likely to be followed by dissenters of the Copyright Directive, namely Italy, Finland, Luxembourg and the Netherlands. In light of this fierce united front, hope hinges on these countries to prevent the implementation of censoring laws across the world.
Rebecca MacKinnon’s “Consent of the Networked: The Worldwide Struggle for Internet Freedom” is an interesting read on free speech, on the internet, in the context of a world where corporations are challenging the sovereignty of governments. Having read the book, I will be familiarizing readers with some of the themes and ideas discussed in MacKinnon’s work.
In Part I, we discussed censorship in the context of authoritarian governments.
In Part II, we will be dealing with the practices of democratic governments vis-à-vis online speech.
In Part III, we shall discuss the influence of corporations on online speech.
Essentially, the discussion will revolve around the interactions between three stakeholders: netizens, corporations providing internet-based products, and governments (both autocratic and democratic). Each of the stakeholders has varied interests and agendas, and they work with or against each other depending on the situation.
Governments wish to control corporations’ online platforms to pursue political agendas, while corporations wish to attract users and generate profits, even as they must acquiesce to government demands in order to access markets. The ensuing interactions between corporations and governments affect netizens’ online civil liberties across the world.
In this section, we will be dealing with the actions of democratic governments and their effects on online speech.
MacKinnon notes that apart from authoritarian governments, even democratic institutions, albeit to a lesser degree, indulge in activities that are detrimental to free speech online. For instance, after the U.S. learnt that the Chinese government had a “kill switch” allowing it to terminate all internet access in its territory, the U.S. legislature attempted to pass legislation that would provide the U.S. government with a similar capability. Though the legislation wasn’t passed, it shows that there exist voices within democratic set-ups that seek governmental power over cyberspace.
Further, corporations in the U.S. might be asked to comply with warrantless demands for information or surveillance, with no legal recourse available to them. These corporations might even be asked to comply with specific “requests” from the government. For instance, Amazon was initially hosting WikiLeaks but, allegedly under U.S. government pressure, backed out. It is pertinent to note that such pressure, exerted in an opaque manner, is problematic, as such actions skirt due process.
The Panopticon Effect has consequences in democratic countries too. If government actions are opaque, citizens will be unaware of the breadth of surveillance and consequently, will alter their behaviour as a result of believing that they are being watched at all times.
Anonymity, Corporate Policing and Legitimization of Authoritarian Censorship
In addition to such opaque measures, democratic institutions also engage in legal censorship, which MacKinnon terms “Democratic Censorship”. The essential challenge democratic countries face is to balance curtailing online crime and problematic speech (e.g. child pornography) against safeguarding the civil liberties of other netizens. Issues relevant to this balancing include anonymity, corporate policing of platforms and the legitimization of authoritarian censorship.
The issue of anonymity features prominently in discussions balancing online privacy with online safety. While requiring netizens to identify themselves online would make them more accountable for their transgressions, netizens involved in political activities might refrain from posting for fear of social sanctions (e.g. the judgment relating to anti-abortion speech). Without the option of anonymity, cyberspace would cease to serve as a platform for unpopular speech. Further, a government, generally influenced by majoritarian views, cannot be expected to regulate without bias. Hence, any requirement of non-anonymity can serve as a potential tool for censorship even in democratic setups.
To protect netizens from problematic speech (e.g. child pornography), governments task the private sector with policing its platforms. For instance, Google is expected to screen its video-sharing platform YouTube for problematic speech. Legislating “Intermediary Liability” is one method of ensuring that corporations police their platforms, since intermediary liability laws make a corporation liable for problematic speech found on its platform. In Italy, Google executives were sentenced to prison for failing to prevent the uploading of a video of an autistic child, thereby violating the child’s privacy.
What are the consequences of requiring corporations to police their platform?
First, issues of legitimacy arise. Should entities that aren’t accountable to the public at all be given the authority to act as gatekeepers for content? Customer accounts are intruded into and regulated by those who answer to no public constituency. We will revisit this argument in Part III.
Consider the case of the Internet Watch Foundation (IWF), an organization that maintains an updated list of websites it considers objectionable. U.K.-based Internet Service Providers use the list, of their own volition, to block access to the listed websites. It isn’t MacKinnon’s contention that the IWF is a fraud, but the example showcases the immense power that private entities can exercise over online speech and the vacuum of accountability measures.
Second, ascribing liability to corporations for failing to remove problematic speech pushes them towards extreme caution in screening content. In their zeal to avoid any liability whatsoever, corporations are inclined to block all content that seems problematic, even where it isn’t actually so. The result is “collateral filtering”: the blockage of content that the regulator never intended to block. For instance, if the word “sex” were flagged for blocking to weed out pornographic websites, even content relating to health and marriage that uses the word would get blocked.
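The over-blocking dynamic can be illustrated with a toy keyword filter (a hypothetical sketch for illustration only, not any real moderation system or the filtering software discussed in the book):

```python
# Toy illustration of "collateral filtering": a naive keyword-based
# blocker cannot distinguish its intended target from innocent content
# that happens to use the same word.
BLOCKED_KEYWORDS = {"sex"}  # hypothetical blocklist entry


def is_blocked(page_text: str) -> bool:
    """Return True if any blocked keyword appears in the text."""
    text = page_text.lower()
    return any(keyword in text for keyword in BLOCKED_KEYWORDS)


# The intended target is caught...
assert is_blocked("explicit sex videos")
# ...but so is unrelated health and relationship content (collateral).
assert is_blocked("safe sex advice for married couples")
# Content without the keyword passes through.
assert not is_blocked("gardening tips")
```

A liability-averse intermediary has every incentive to keep such a filter broad: the cost of under-blocking (legal liability) falls on the corporation, while the cost of over-blocking falls on the silenced speaker.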
Third, the intermediary liability model pushes corporations to block potentially problematic content at the outset and review the blocking, if necessary, much later. Such a practice runs contrary to one of the foundational principles of Due Process, i.e. “innocent until proven guilty”. It is also especially detrimental to the efficacy of political speech, which often loses its impact with the passage of time. For instance, if a journalist writes a scathing article on the government for fuelling communal riots, the article will have its maximum impact when published during or shortly after the riots, while the issue is fresh in readers’ minds. Even if corporations republish content upon review, it may have lost its potency.
Hence, there exist various problems with requiring corporations to police their platforms.
Moving on, MacKinnon argues that “Democratic Censorship” also leads to legitimization of censorship policies of authoritarian governments.
In this regard, the U.S. government’s actions in the realm of intellectual property are especially problematic. In their zeal to protect copyrights, the U.S. and other countries have overlooked Due Process. For instance, WikiLeaks revealed that the U.S. and 34 other countries were negotiating an international treaty, the Anti-Counterfeiting Trade Agreement, which required intermediaries to police their platforms and remove content without having to prove any violation.
When democratic governments eschew Due Process in this manner, they legitimize the actions of authoritarian governments, allowing those governments to claim that their internet policy is in accordance with international standards. When the U.S. legislature, pushed by lobbyists, sacrificed civil liberties to protect intellectual property rights, it gave the Chinese and the Russians cover for suppressing dissent. For example, the Russians clamped down on dissenters by taking them to task for violating Microsoft’s copyright. As an aside, it is heartening to note that Microsoft changed its policy after the event.
In Part II, we have attempted to (a) understand the pressures that democratic governments place on corporations, (b) understand “democratic censorship” and the attempts of democracies to balance measures against problematic speech with the protection of netizens’ civil liberties, (c) understand “intermediary liability” and “collateral filtering”, and (d) understand the dilution of Due Process in democracies and its effect of legitimizing the censorship policies of authoritarian regimes.
In Part III, we will analyse the influence of corporations on online speech.
Rebecca MacKinnon’s “Consent of the Networked: The Worldwide Struggle for Internet Freedom” (2012) is an interesting read on online speech. Having read the book, I will be familiarizing readers with some of the themes discussed in it.
In Part I, we will discuss censorship in the context of authoritarian governments.
In Part II, we will be dealing with the practices of democratic governments vis-à-vis online speech.
In Part III, we shall discuss the influence of corporations on online speech.
Essentially, the discussion will revolve around the interactions between three stakeholders: netizens, corporations providing internet-based products, and governments (both autocratic and democratic). Each stakeholder has varied interests or agendas and works with or against the others depending on the situation.
Governments wish to control corporations’ online platforms to pursue political agendas and corporations wish to attract users and generate profits, while also having to acquiesce to government demands to access markets. The ensuing interactions, involving corporations and governments, affect netizens’ online civil liberties across the world.
PART I: AUTHORITARIAN GOVERNMENTS (THE CHINESE MODEL)
“Networked Authoritarianism” is the exercise of authoritarianism, by a government, through the control over the network used by the citizens. MacKinnon explains the phenomenon through an explanation of the Chinese government’s exercise of control over the Chinese networks.
Interestingly, the Chinese citizenry is largely unaware of the infamous Tiananmen Square protests. The government, with compliant corporations (which comply in order to access Chinese markets), works in an opaque manner to manipulate the information reaching the people. The people aren’t even aware that the manipulation is taking place!
The government does allow discussion, but within the limits prescribed by it. This is the concept of “Authoritarian Deliberation”. Considerable discussion occurs on the “e-parliament” (a website where the Chinese public is allowed to make suggestions on issues of policy) and the Chinese government has stated that it cares about public opinion, but any discussion that could potentially lead to unrest is screened out. In other words, the government is engendering a false sense of freedom amongst its populace.
Now, let us have a look at the modus operandi of such Chinese censorship.
Firstly, the Chinese networks are connected to the global networks through eight gateways. Each gateway contains data filters that block websites containing specific restricted keywords. As a slight aside, it is pertinent to note that western corporations, such as Forcepoint and Narus, also provide software that assists authoritarian governments in censorship and surveillance.
Now, Chinese netizens can access global networks through certain technical means, but there is little incentive to do so, as the Chinese have their own government-compliant versions of Twitter, Facebook and Google (Weibo; RenRen and Kaixin001; and Baidu, respectively) with which people are content. Given the size of the Chinese market, investors abound and consequently there is no dearth of such products.
Secondly, as mentioned earlier, the Chinese government forces corporations to manage their platforms in compliance with the government’s standards. Content from the offshore servers of non-compliant corporations is blocked by the data filters, so any corporation that intends to work in China must self-regulate and ensure that its platforms comply with the censorship policy.
Thirdly, in addition to censorship, the Chinese government also manipulates discussions through “Astroturfing”. Originally a marketing term, it refers to the practice of paying people to propagate views beneficial to the payer. The “50 Cent Army” (named after the fee paid per post) is a common term for those paid by the Chinese government.
Apart from Astroturfing, there are also people who voluntarily spread propaganda on the internet. While the Chinese government can disavow knowledge of their activities, it gives them special treatment to carry out their agendas.
Through the approach described above, the Chinese government has manipulated its populace with remarkable success. The example teaches us that mere access to the internet doesn’t ensure political reform; much depends on the authoritarian government’s ability to manipulate the networks. There are other examples of countries successfully preventing unrest through manipulation of speech on their networks.
Censorship in Other Countries
Iran, too, has successfully manipulated networks. The Iranian government was able to restrict communications and debilitate the Green Movement, an uprising against the then-president. Even if the government isn’t actually monitoring communications, if enough people believe it is doing so, the government will have achieved its purpose.
The Russian government, instead of using online tools to restrict content, restricts speech through offline methods in the form of defamation laws and threat of physical consequences. Even the Chinese take offline retaliatory measures. We will discuss one such example (Shi Tao) in Part III.
Now, let us look at a few of the approaches or policies that democratic countries have adopted to tackle censorship in repressive regimes.
Approaches to Tackling Authoritarian Censorship
Initially, policies attempted to ensure that netizens were able to access an uncensored internet. Access to an uncensored internet was expected to create political consciousness and, consequently, revolution against repressive regimes. Hence, government funding was directed towards circumvention technology that would help netizens access uncensored cyberspace. Ironically though, while the public treasury is being used to fund circumvention technology, American corporations are aiding censorship by providing censorship technology to authoritarian regimes.
But there exist other approaches as well. Certain policy experts, who believe that free speech precedes democracy, favour encouraging citizens under repressive regimes to host and develop their own content. Advocates argue that this approach would be more effective at building communities of dissent than attempting to provide access to offshore content. Further, since the content is generated by the citizens of the repressive state itself, the approach doesn’t portray the U.S. as an enemy of the authoritarian state, leading to fewer complications.
Lastly, some experts have suggested that democratic countries should set their own houses in order instead of interfering with other regimes. Laws in even the most democratic of countries can be draconian. For instance, the U.K. was set to allow disconnection of a user’s internet access upon a third copyright violation. Such laws serve as a justification for authoritarian regimes to censor.
Here, using Chinese censorship as an example, we have attempted to understand (a) the concepts of “networked authoritarianism” and “authoritarian deliberation”, (b) the online and offline methods of censorship employed by authoritarian governments (gateway regulation, corporate compliance, “astroturfing”, et cetera) and (c) approaches adopted by democracies to tackle censorship by repressive regimes.
In Part II, we will discuss the effects of actions by democratic governments on online speech.
The “Existence” of a Non-Existent Law and the Broader Issues it Raises
The Information Technology Act 2000 (hereinafter referred to as the “IT Act”), India’s nodal law on regulation of information technology, was significantly amended in 2008 in order to plug certain loopholes in the original Act as well as accommodate further technological development within its legal framework. Among other things, this 2008 amendment to the Act introduced Section 66A, which essentially made sharing of “grossly offensive”, “insulting” or “menacing” information (Read: criticism of political parties) through electronic media a criminal offence.
In its landmark 2015 judgment of Shreya Singhal v. The Union of India, the Supreme Court struck down Section 66A on the ground that it imposed an unreasonable restriction on the freedom of speech and expression guaranteed under Art 19(1)(a) of our Constitution, a fundamental right closely tied to the democratic ideal of constructive criticism of public authorities. In the Court’s own words, “(Section 66A) takes within its sweep protected speech and speech that is innocent in nature and is liable therefore to be used in such a way as to have a chilling effect on free speech and would, therefore, have to be struck down on the ground of overbreadth.” (paragraph 90, emphasis added). Needless to say, this judgment was widely celebrated as a victory for free speech in general and online free speech in particular.
Shockingly, however, Section 66A is still in use. Most recently, Priyanka Sharma, the BJP Youth Wing convenor from Howrah, was booked under the Section for circulating a meme ridiculing Mamata Banerjee (surprisingly, even her bail order makes no mention of the fact that one of the Sections she was booked under is unconstitutional). In another instance, a man from Guntur was arrested under Section 66A for duping people on a dating app through impersonation. Further, in March last year, Lucknow citizen Rahat Khan was one of five people booked under the Section for allegedly making “offensive” comments against UP Chief Minister Yogi Adityanath (interestingly, he was later offered the position of social media in-charge for AIMIM). The same story holds true for a Gujarat-based lawyer-activist who made allegedly “offensive” religious statements against a particular group, and for a teenager in Tamil Nadu who targeted Prime Minister Narendra Modi in a private Facebook chat. Appallingly, a Telangana man was even convicted under Section 66A by a local court for making derogatory comments on social media. The list is sadly unending. Thus, to cut a long story short, Section 66A is being used rampantly even today.
A recent Hindustan Times report in fact indicates that more than 3,000 people have been booked under Section 66A after its declaration as unconstitutional. Ironically, this is about 500 more than the number booked under the Section in 2014, when there was no judgment pronouncing upon its unconstitutionality. In substantiation, another independent study found several Section 66A cases listed on portals like Indian Kanoon and SCC Online (neither of which is exhaustive) post-2015, some having even culminated in convictions. Further, National Crime Records Bureau (NCRB) data for 2015-16 also shows continued arrests under Section 66A. Thus, it is abundantly clear that Section 66A is enjoying a healthy life even four years after its judicial death. Such an extension of its lifespan is nothing short of a mockery of the Supreme Court, our country’s highest judicial body.
This mockery of the Supreme Court can be traced back either to the intentional use of the Section or to its inadvertent use by the police and the judiciary. However, these reasons are not mutually exclusive. As also argued by Abhinav Sekhri and Apar Gupta, it is the combination of knowing misuse and inadvertent use of Section 66A that is proving deadly for free speech in India.
In substantiation of the first limb of this combination, there is ample evidence that the Government has done little to stop the continued use of Section 66A despite having notice of the same. For instance, when the NCRB data referred to above was used to point out the Government’s failure to contain the Section’s use post Shreya Singhal, the NCRB (a government agency) amusingly issued a “corrigendum” stating that the data was incorrect. Even more amusingly, it stopped publishing data on Section 66A altogether!
In another instance, it was observed that since Section 66A was declared unconstitutional, there has been an increase in cases under Sections 66 and 67 of the IT Act. As several reports have also argued, this shows that where the police realise that Section 66A is unconstitutional, they cover up their mistake by merely changing the section numbers. As a result, citizens are arrested and fitted into Sections which are prima facie inapplicable to their cases. This not only harasses them and wastes public machinery, but also causes a chilling effect – the very thing Shreya Singhal intended to avoid.
In a third example, the Government took no action in response to a stern notice issued by the Supreme Court over a 2019 PIL filed by the PUCL highlighting the continued use of Section 66A. Notably, the official Ministry of Electronics and Information Technology page containing the IT Act makes no mention of Section 66A’s unconstitutionality till date.
Additionally, day in and day out, there are numerous press reports about the use of Section 66A, and it is no coincidence that most of these reports concern statements against political leaders and their parties, including the ruling party. Thus, it is a stretch to believe that our leaders do not know about this unconstitutional use (news regarding Section 66A literally concerns them). Despite such knowledge (read: because of this knowledge), none of our legislators or executive officials has brought this issue before either Parliament or the Executive. Parliament could easily pass an enabling amendment to the Act scrapping the Section; alternatively, the Executive could very easily issue a notification to that effect. Sadly, it comes as no surprise that neither of these things has been done till date. Thus, it is clear that the Government is knowingly turning a blind eye to the unconstitutional use of Section 66A.
However (in substantiation of the second limb of the combination), it is important to note that not all use of Section 66A post Shreya Singhal is politically motivated or intentionally malicious. Many instances show that the police are simply not aware that the Section has been pronounced unconstitutional. Alternatively, there is no clarity as to the exact effect of Shreya Singhal, considering that Section 66A is still present in the bare text of the statute. In one documented instance, a Police Inspector expressed complete ignorance of the unconstitutionality of Section 66A, and when it was pointed out to him, he said that “it was one particular case only” and that he had aptly booked the accused. Even an officer as senior as the Inspector General of Police and Commissioner of Police, Jalandhar was quoted as saying that nothing can be done until the “Government has issued a notification.” Thus, Section 66A is also being used inadvertently.
The effect of such knowing and unknowing use is that the onus of enforcing the verdict in Shreya Singhal is put on defendants, who cannot reasonably be expected to know which statutory provisions are unconstitutional. Additionally, as mentioned above, such use is a direct affront to the judiciary and renders its judgment effectively meaningless. Needless to add, every unconstitutional use of Section 66A curtails the constitutional right to freedom of speech and expression, the touchstone against which a democracy is judged. One can only imagine the chilling effect such curtailment has had in present times, against the backdrop of the Lok Sabha elections in the country.
Thus, it is imperative to ensure that the Court’s verdict in Shreya Singhal is enforced at the earliest through any and all possible means such as amendment, notification and widespread promulgation to enforcement agencies and the general public.
On a broader note, the Section 66A story is a case in point: it points towards wider problems with the promulgation of judicial decisions and their enforcement. There is no written or institutionalized mechanism for making the judgments of our courts (including the Supreme Court) known even to the police and the judiciary, let alone the general public. As seen through the case of Section 66A illustrated above, the lack of such a mechanism calls into question the very relevance of our judiciary on a daily basis. Against this backdrop, it is imperative that we deliberate on better promulgation strategies in the near future. While this might not be practically possible for every case, we can at least make a beginning by ensuring that there is a practice of promulgating law-changing decisions of the Supreme Court to government institutions like the police and the judiciary. By doing so, we can at the very least attempt to ensure that a 66A-like situation does not arise again.