Tech Law Forum @ NALSAR

A student-run group at NALSAR University of Law


Category: Internet Governance

Metadata by TLF: Issue 4

Posted on September 10, 2019 (updated December 20, 2020) by Tech Law Forum @ NALSAR

Welcome to our fortnightly newsletter, where our Editors put together handpicked stories from the world of tech law! You can find other issues here.

Facebook approaches SC in ‘Social Media-Aadhaar linking case’

In 2018, Anthony Clement Rubin and Janani Krishnamurthy filed PILs before the Madras High Court, seeking a writ of Mandamus to “declare the linking of Aadhaar of any one of the Government authorized identity proof as mandatory for the purpose of authentication while obtaining any email or user account.” The main concern of the petitioners was traceability of social media users, which would be facilitated by linking their social media accounts with a government identity proof; this in turn could help combat cybercrime. The case was heard by a division bench of the Madras HC, and the scope was expanded to include curbing of cybercrime with the help of online intermediaries. In June 2019, the Internet Freedom Foundation became an intervener in the case to provide expertise in the areas of technology, policy, law and privacy. Notably, Madras HC dismissed the prayer asking for linkage of social media and Aadhaar, stating that it violated the SC judgement on Aadhaar which held that Aadhaar is to be used only for social welfare schemes. 


Sahamati: Self Regulatory Organisation for Financial Data Sharing Ecosystem

Posted on September 6, 2019 (updated December 4, 2020) by Tech Law Forum @ NALSAR

This post, authored by Mr. Srikanth Lakshmanan, is part of TLF’s blog series on Account Aggregators. Other posts can be found here. 

Mr. Srikanth Lakshmanan is the founder of CashlessConsumer, a consumer collective working on digital payments that seeks to increase awareness, understand the technology, and represent consumers in the digital payments ecosystem, voicing their perspectives and concerns with the goal of moving towards a fair cashless society with equitable rights.


Article 13 of the EU Copyright Directive: A license to gag freedom of expression globally?

Posted on August 9, 2019 (updated August 4, 2019) by Tech Law Forum @ NALSAR

The following post has been authored by Bhavik Shukla, a fifth year student at National Law Institute University (NLIU) Bhopal. He is deeply interested in Intellectual Property Rights (IPR) law and Technology law. In this post, he examines the potential chilling effect of the EU Copyright Directive.

 

Freedom of speech and expression is the bellwether of the European Union (“EU”) Member States; so much so that its censorship would be the death of the most coveted human right. Europe possesses the strongest and most institutionally developed structure for freedom of expression in the European Convention on Human Rights (“ECHR”). In 1976, the European Court of Human Rights observed in Handyside v. United Kingdom that a “democratic society” could not exist without pluralism, tolerance and broadmindedness. However, the recently adopted EU Directive on Copyright in the Digital Single Market (“Copyright Directive”) seeks to alter this fundamental postulate of European society by bringing Article 13 to the fore. Through this post, I intend to deal with the contentious aspects of Article 13 of the Copyright Directive, limited to its chilling impact on the freedom of expression. Subsequently, I shall elaborate on how the Copyright Directive has the ability to affect censorship globally.

Collateral censorship: Panacea for internet-related issues in the EU

The adoption of Article 13 of the Copyright Directive hints at the EU’s implementation of a collateral censorship-based model. Collateral censorship occurs when a state holds one private party, “A”, liable for the speech of another private party, “B”. The problem with such a model is that it vests the power to censor content primarily in a private party, namely “A” in this case. The implementation of this model is known to have an adverse effect on the freedom of speech, and the adoption of the Copyright Directive has contributed towards producing such an effect.

The Copyright Directive envisages a new concept of online content sharing service providers (“service providers”), which refers to a “provider… whose main purpose is to store and give access to the public to significant amount of protected subject-matter uploaded by its users…” Article 13(1) of the Copyright Directive states that such service providers shall perform an act of “communication to the public” as per the provisions of the Infosoc Directive. Further, Article 13(2a) provides that service providers shall ensure that “unauthorized protected works” shall not be made available. However, this Article also places service providers under an obligation to provide access to “non-infringing works” or “other protected subject matter”, including those covered by exceptions or limitations to copyright. The Copyright Directive’s scheme of collateral censorship is evident from the functions entrusted to the service providers, wherein they are expected to purge their networks and websites of unauthorized content transmitted or uploaded by third parties. A failure to do so would expose service providers to liability for infringement of the content owner’s right to communication to the public, as provided in the Infosoc Directive.

The implementation of a collateral censorship model will serve as a conduit for a crackdown on the freedom of expression. The reason emanates from the existence of certain content which necessarily falls within the grey area between legality and illegality. Stellar examples of such content are memes and parodies. It is primarily in respect of such content that problems related to censorship may arise. To bolster this argument, consider Facebook, the social media website which boasts 1.49 billion daily active users. As per an official report in 2013, users were uploading 350 million photos a day; that number has only risen since. When intermediaries like Facebook are faced with implementing the Copyright Directive, the sheer volume of data being uploaded or transmitted will necessarily require them to employ automated mechanisms for flagging or detecting infringing material. The accuracy of such software in detecting infringing content has been the major point of contention against its implementation. Even though content like memes and parodies may be flagged as infringing by such software, automated blocking of content is prohibited under Article 13(3) of the Copyright Directive. This brings up the question of human review of such purportedly infringing content. In this regard, first, it is impossible for any human agency to review such large tracts of data even after filtration by an automated system. Second, even if such content is somehow reviewed, a human agent may not be able to correctly decide the nature of such content with respect to its legality.
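To see why automated detection over-blocks lawful reuse, consider a minimal sketch of the kind of fingerprint-matching filter platforms would be pushed towards. This is an illustration, not any platform's actual system: the "reference database" and exact SHA-256 matching are assumptions made for brevity (real systems use fuzzy perceptual hashes, which is precisely where parodies get misclassified).

```python
import hashlib

# Hypothetical database of known protected works, stored as fingerprints.
REFERENCE_DB = {
    hashlib.sha256(b"protected-film-frame").hexdigest(),
    hashlib.sha256(b"protected-song-sample").hexdigest(),
}

def is_flagged(upload: bytes) -> bool:
    """Flag an upload whose fingerprint matches a known protected work."""
    return hashlib.sha256(upload).hexdigest() in REFERENCE_DB
```

An exact copy is flagged, but a meme that alters even one byte of the work slips through an exact matcher; fuzzy matching closes that gap at the cost of also catching parody and quotation, which is the over-blocking problem the post describes.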

This scenario will compel service providers to take down the scapegoats of content, memes and parodies, whenever they may even remotely expose them to liability. Such actions by the service providers will certainly censor freedom of expression. Another problem arising from this framework is that it adversely affects net neutrality: entrusting service providers with blocking access to content may lead to indiscriminate blocking of certain types of content.

Though the Copyright Directive provides certain safeguards in this regard, they are belated and ineffective. For example, consider access to a “complaints and redress mechanism” provided by Article 13(2b) of the Copyright Directive. This mechanism offers recourse only after the actual takedown or blocking of access to content. This is problematic because users are either unaware that such mechanisms exist, do not have the requisite time and resources to prove the legality of their content, or are simply fed up with repeated takedowns. An easy way to understand these concerns is through YouTube’s ongoing unjustified takedowns of content, which place content owners under the same burdens as expressed above. Regardless of the reason for inaction by the content owners, censorship is the effect.

The EU Copyright Directive’s tryst with the world

John Perry Barlow stated in his Declaration of the Independence of Cyberspace that “Cyberspace does not lie within your borders”. This statement is true to a large extent: cyberspace and the internet do not lie within any country’s borders; rather, their existence is cross-border. Does this mean that the law in the EU affects the content we view in India? It certainly does!

The General Data Protection Regulation (“GDPR”) applies beyond the EU. The global effect of the Copyright Directive is similar, as service providers do not distinguish their European services from those offered in the rest of the world. It only makes sense for websites in this situation to adopt a mechanism which applies uniformly to every user regardless of his/her location. This is the same line of reasoning that service providers adopted when they revised their user and privacy policies in every country upon the introduction of the GDPR. Thus, the adoption of these stringent norms by service providers in all countries alike, owing to the omnipresence of internet-based applications, may lead to global censorship driven by European norms.

The UN Special Rapporteur warned that Article 13 would have a chilling effect on the freedom of expression globally. Subsequent to the Directive’s adoption, the Polish government challenged its applicability before the CJEU on the ground that it would lead to unwarranted censorship. Similar action is likely from the other dissenters of the Copyright Directive, namely Italy, Finland, Luxembourg and the Netherlands. In light of this united front, hope hinges on these countries to prevent the implementation of censoring laws across the world.


MacKinnon’s “Consent of the Networked”: A Deconstruction (Part I)

Posted on July 7, 2019 (updated November 12, 2019) by Prateek Surisetti

SERIES INTRODUCTION

Rebecca MacKinnon’s “Consent of the Networked: The Worldwide Struggle for Internet Freedom” (2012) is an interesting read on online speech. Having read the book, I will be familiarizing readers with some of the themes discussed in it.

In Part I, we will discuss censorship in the context of authoritarian governments.

In Part II, we will be dealing with the practices of democratic governments vis-à-vis online speech.

In Part III, we shall discuss the influence of corporations on online speech.

Essentially, the discussion will revolve around the interactions between three stakeholders: netizens, corporations providing internet-based products, and governments (both autocratic and democratic). Each of the stakeholders has varied interests and agendas, and works with or against the others depending on the situation.

Governments wish to control corporations’ online platforms to pursue political agendas and corporations wish to attract users and generate profits, while also having to acquiesce to government demands to access markets. The ensuing interactions, involving corporations and governments, affect netizens’ online civil liberties across the world.

PART I: AUTHORITARIAN GOVERNMENTS (THE CHINESE MODEL)

“Networked Authoritarianism” is the exercise of authoritarianism, by a government, through the control over the network used by the citizens. MacKinnon explains the phenomenon through an explanation of the Chinese government’s exercise of control over the Chinese networks.

Interestingly, much of the Chinese citizenry is unaware of the infamous Tiananmen Square protests. The government, with compliant corporates (corporations comply in order to access Chinese markets), works in an opaque manner to manipulate the information reaching the people. The people aren’t even aware of the fact of manipulation!

The government does allow discussion, but within the limits prescribed by it. This is the concept of “Authoritarian Deliberation”. Considerable discussion occurs on the “e-parliament” (a website where the Chinese public is allowed to make suggestions on issues of policy) and the Chinese government has stated that it cares about public opinion, but any discussion that could potentially lead to unrest is screened out. In other words, the government is engendering a false sense of freedom amongst its populace.

Now, let us have a look at the modus operandi of such Chinese censorship.

Modus Operandi

Firstly, the Chinese networks are connected to the global networks through 8 gateways. Each of the gateways contains data filters that block websites containing specific restricted keywords. As a slight aside, it is pertinent to note that western corporations, such as Forcepoint and Narus, also provide software that assists authoritarian governments in censorship and surveillance.
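The gateway filtering described above is, at its core, string matching against a blocklist. A minimal sketch, with an invented keyword list and function name purely for illustration:

```python
# Illustrative keyword blocklist; real gateways use far larger, secret lists
# and also inspect DNS queries and packet contents.
RESTRICTED_KEYWORDS = {"restricted-term-a", "restricted-term-b"}

def gateway_allows(url: str, page_text: str) -> bool:
    """Allow traffic only if neither the URL nor the content mentions a restricted term."""
    content = (url + " " + page_text).lower()
    return not any(keyword in content for keyword in RESTRICTED_KEYWORDS)
```

Even this toy version shows why keyword filtering over-blocks: any page that merely discusses a restricted term, in any context, is cut off along with the targeted material.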

Chinese netizens can access global networks through certain technical means, but there is little incentive to do so, as the Chinese have their own government-compliant versions of Twitter, Facebook and Google (Weibo, RenRen/Kaixin001 and Baidu respectively), with which people are content. Given the size of the Chinese market, investors abound and consequently there is no dearth of products.

Secondly, as mentioned earlier, the Chinese government forces corporations to manage their platforms in compliance with the government’s standards. Content from the offshore servers of non-compliant corporations is blocked by the data filters. If a corporation intends to work in China, it has to self-regulate and ensure that its platforms comply with the censorship policy.

Thirdly, in addition to censorship, the Chinese government also manipulates discussions through “Astroturfing”. Originally a marketing term, it refers to the practice of paying people a fee to propagate views beneficial to the payer. The “50 Cent Army” (named after the alleged fee per post) is a common term for those paid by the Chinese government.

Apart from Astroturfing, there also exist people who voluntarily spread propaganda on the internet. While the Chinese government can disavow knowledge of their activities, they are given special treatment by the government to carry out their agendas.

Through this approach, the Chinese government has manipulated its populace with remarkable success. From this example we learn that mere access to the internet doesn’t ensure political reform; much depends on the authoritarian government’s ability to manipulate the networks. Other countries, too, have successfully prevented unrest by manipulating speech on their networks.

Censorship in Other Countries

Iran, too, has successfully manipulated networks. The Iranian government was able to restrict communications and debilitate the Green Movement, an uprising against the then-president. Even if the government isn’t actually monitoring communications, if enough people believe it is doing so, the government will have achieved its purpose.

The Russian government, instead of using online tools to restrict content, restricts speech through offline methods in the form of defamation laws and threat of physical consequences. Even the Chinese take offline retaliatory measures. We will discuss one such example (Shi Tao) in Part III.

Now, let us look at a few of the approaches or policies that democratic countries have adopted to tackle censorship in repressive regimes.

Approaches to Tackling Authoritarian Censorship

Initially, policies attempted to ensure that netizens were able to access an uncensored internet. Access to an uncensored internet was expected to create political consciousness and, consequently, revolution against repressive regimes. Hence, government funding was aimed at circumvention technology that would help netizens access uncensored cyberspace. Ironically, though, while the public treasury is being used to fund circumvention technology, American corporations are aiding censorship by providing censorship technology to authoritarian regimes.

But there exist other approaches as well. Certain policy experts, believing that free speech precedes democracy, favour encouraging citizens under repressive regimes to host and develop content. Advocates argue that this would be more effective at building communities of dissent than attempting to provide access to offshore content. Further, since the content is generated by citizens of the repressive state itself, such an approach doesn’t portray the U.S. as an enemy of the authoritarian state, leading to fewer complications.

Lastly, some experts have suggested that democratic countries should make efforts to set their own house in order instead of interfering with other regimes. Laws in even the most democratic of countries can be draconian; for instance, the U.K. was set to allow disconnection of a user’s internet access if he or she violated copyright thrice. Such laws serve as a justification for authoritarian regimes to censor.

Conclusion

Here, using Chinese censorship as an example, we have attempted to understand (a) the concepts of “networked authoritarianism” and “authoritarian deliberation”, (b) the online and offline methods of censorship employed by authoritarian governments (gateway regulation, corporate compliance, “astroturfing”, et cetera) and (c) approaches adopted by democracies to tackle censorship by repressive regimes.

In Part II, we will discuss the effects of actions by democratic governments on online speech.

 



The Dark Web: To Regulate or Not to Regulate, That Is the Question

Posted on December 29, 2018 by Shweta Rao

[Ed Note: In an interesting read, Shweta Rao of NALSAR University of Law brings us up to speed on the debate regarding regulation of the mysterious “dark web” and provides us with a possible way to proceed as far as this hidden part of the web is concerned.]

Human Traffickers, Whistleblowers, Pedophiles, Journalists and Lonely-Hearts Chat-room participants all find a home on the Dark Web, the underbelly of the World Wide Web that is inaccessible to the ordinary netizen.  The Dark Web is a small fraction of the Deep Web, a term it is often confused with, but the distinction between the two is important.

The Dark Web, unlike the Deep Web, is only accessible through anonymising software, as distinguished from the non-anonymous surface web, which is accessed through ordinary browsers and indexed by search engines like Google and Bing. One such tool is The Onion Router (Tor), one of the most popular means of accessing the Dark Web, which derives its name from the similarity of the platform’s multilayered encryption to the layers of an onion. Dark Web sites also require users to enter a unique Tor address, sometimes with the additional security layer of a password. These access restrictions are what distinguish the Dark Web from the Deep Web, which may be reached through Surface Web applications. Further, the Deep Web may, due to its discreet nature, seem to occupy only a fraction of the World Wide Web, when in actuality it is estimated to be 4000-5000 times larger than the Surface Web and to host around 90% of the internet’s web traffic. The Dark Web, in contrast, occupies a minuscule amount of space, with fewer than 45,000 Dark Web sites recorded in 2015. Thus, the difference between the Deep and Dark Web lies not in their respective content, but in the requirements and means of access to these two spaces, along with the quantity of web traffic they attract.
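The "onion" metaphor can be made concrete with a toy sketch of layered wrapping and peeling. This is emphatically not real cryptography (XOR with a short key stands in for Tor's actual per-hop encryption, and the relay names and keys are invented), but it shows the core idea: the sender wraps one layer per relay, and each relay peels exactly one layer, so no single relay sees both the sender and the plaintext.

```python
def xor_layer(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for one layer of encryption (NOT real cryptography)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Invented relays and keys, purely for illustration.
relay_keys = {"entry": b"key-1", "middle": b"key-2", "exit": b"key-3"}
route = ["entry", "middle", "exit"]

message = b"request for a hidden service"

# Sender: wrap one layer per relay, innermost layer for the last relay.
onion = message
for relay in reversed(route):
    onion = xor_layer(onion, relay_keys[relay])

# Each relay peels its own layer in turn; only after the exit relay's
# layer is removed does the original message reappear.
for relay in route:
    onion = xor_layer(onion, relay_keys[relay])
```

In real Tor, each layer also carries routing information, so a relay learns only the previous and next hop, never the full path.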

The Dark Web has existed nearly as long as the Internet, having begun as a parallel project to the US Department of Defense’s (USDD’s) 1960s ARPANET project. The USDD allowed the Dark Web to be accessible to the public via Tor so that it could mask its own communications. Essentially, if the Dark Web were used only for USDD communications, there would be no anonymity, as anyone who made their way into the system would know that all communications were those of the USDD. By allowing the public to access it via Tor, the USDD could hide its communications under the stampede of information passing through Tor.

While the Internet became a household name by the late ’90s, the Dark Web remained obscure until 2013, when it gained infamy with the arrest of Ross William Ulbricht (aka the Dread Pirate Roberts), the operator of Silk Road, a marketplace for illegal goods and services.

While fully regulating a structure such as the Dark Web is a near impossible feat, this arrest has indeed pushed the previously obscure Dark Web into the spotlight, putting prosecutors and law enforcement agencies across the world on the alert. This new-found attention into the workings of the Dark Web is the junction at which the debate for regulation policies emerges.

The debate on the surveillance of the Dark Web broadly has two branches. The first branch, which has emerged with more force after the exposure of Silk Road, advocates more frequent and stricter probes into the activities of the Dark Web. In contrast, the second branch weighs increased regulation against the breach of privacy it entails, privacy being one of the main reasons behind the use of tools such as Tor.

In order to understand the reasoning behind either branch’s stance, it is essential to look at the breakup of the Dark Web and its various uses, each finding its place at a different point along a spectrum with legality and illegality as its extremes.

The Dark Web, as mentioned previously, occupies a fraction of the space on the Deep Web, with fewer than 50,000 websites currently functioning, and the number of fully active sites is even lower. Legal activities take up about 55.9% of the total space on the Dark Web, whilst the rest contains illegal activities such as counterfeiting, child pornography, illegal arms dealing and drug peddling, amongst others. Activities such as whistleblowing and hacking, being contextual and scenario-based, do not allow themselves to be placed in one category or the other, and fall into a “grey area” of sorts.

With such a large share of the activity on the Dark Web being illegal, the call for increased regulation seems reasonable. However, the regular residents of this fraction of the internet often differ, and their objection hinges, as mentioned earlier, on the issue of privacy.

Privacy has become a buzzword across the globe in the recent past, with various nations having to reevaluate the rights attached to their citizens’ information amidst the boom of the data wars. From the General Data Protection Regulation (GDPR) in the EU to the Puttaswamy case in India, the Right to Privacy has been thrown into the spotlight across the globe. Its relevance only grows as corporations both large and small mine information from users across platforms. Privacy has thus become the need of the hour, and the privacy that the Dark Web provides has been one of its biggest USPs. It has harbored anyone requiring the shield of privacy, including political whistleblowers who have in the past released vital information on violations against citizens in tyrannical regimes and democracies alike. Edward Snowden, whose claim to infamy indeed concerned privacy and surveillance, has used and continues to use the Dark Web to shield his communications from his location in Moscow.

In the age of #FakeNews targeting the journalism community, the need to protect the private Tor gateways that many journalists use to protect their sensitive information seems to be of paramount importance. But despite what the creators of Tor would like to believe, the bulk of the active traffic (which differs from the actual number of sites present) in the aforementioned “illegal” branch of the Dark Web is that of child pornography distribution.

However, this need not spell the end of the sphere of privacy created by Tor within the Dark Web. Using the same logic as the USDD, given that the increased activity of child pornography and abuse sites is a known factor, it becomes easier for authorities to single out threads of heightened activity within the Dark Web without compromising its integral cloak of privacy. This tactic was successfully used by the American FBI in the Playpen case, where it singled out the thread of rapid activity created by a website called Playpen, which had over 200,000 active accounts participating in the creation and viewing of child pornography. The FBI singled out the traffic for this site due to its dynamic activity and, once the source of the activity was precisely determined, in an unprecedented move extracted the Playpen website from the Dark Web onto a federal server, through which it was able to access the IP addresses of over 1,000 users, who were then prosecuted, with the creator of the site receiving a 30-year sentence. All this was done without breaching the privacy of other Tor users.

Thus, whilst Hamlet’s existential question may not have a middle ground to settle on, the status of regulation of the Dark Web could be established by building on past precedent and by using better non-invasive surveillance methods along with international cooperation, in order to respect its true intended purpose.


TechLaw Symposium at NALSAR University of Law, Hyderabad – Press Note

Posted on October 4, 2018 (updated December 4, 2020) by Tech Law Forum @ NALSAR

[Ed Note: The following press note has been authored by Shweta Rao and Arvind Pennathur from NALSAR University of Law. Do watch this space for more details on the symposium!]

On the 9th of September, NALSAR University of Law’s Tech Law Forum conducted its first-ever symposium, with packed panels discussing a variety of issues under the broad theme of the Right to Privacy. The symposium took place against the backdrop of the draft Data Protection Bill and Report recently released by the Srikrishna Committee.


Consent to Cookie: Analysis of European ePrivacy Regulations

Posted on February 24, 2017 by Vishal Rakhecha

This article is an analysis of the European Union’s newly proposed ‘Regulation on Privacy and Electronic Communications’.

A huge part of our daily life now revolves around websites and communication media like Facebook, WhatsApp, Skype, etc. The suddenness with which these services became popular left law-making authorities with little opportunity to give directions to these companies and regulate their actions. For the most part, these services worked on the basis of self-regulation and of the terms and conditions which consumers accepted. These services gave people access to their machinery for free, in return for personal data about the consumer. This information is then sold to advertisers, who send ‘personalised’ advertisements to the consumer on the basis of the information received.

With growing consciousness about the large-scale misuse that can take place if the data falls into wrong hands, citizens have started to seek accountability on part of these websites. With increasing usage of online services in our daily lives and growing awareness about the importance of privacy, the pressure on governments to make stricter privacy laws is increasing.

The nature of the data that these services collect from the consumer can be extremely personal, and with no checks on what can be collected, there is a possibility of abuse: data can be sold with no accountability in its handling. Regulations on data collection, data retention, data sharing and advertising are required, and for the most part have been lacking in almost all countries. The European Union, however, has been in a constant tussle over regulation with internet giants like Google, Facebook and Amazon, as these companies, though they have operations in Europe, are not incorporated under its jurisdiction; indeed, they are not under the jurisdiction of any country except the one they are based in. On 10 January 2017, the EU released a proposal on the privacy of individuals using electronic communications, intended to come into force in May 2018.

The objective of the ‘Regulation on Privacy and Electronic Communications’ is to strengthen the data protection framework in the EU. The key highlights of the data protection laws are as follows:

  • Unified set of rules across the EU – These rules will be valid and enforceable across the European Union and will provide a standard compliance framework for companies operating in the Union.
  • Newer players – Over-the-top (OTT) services are services used in place of traditional ones such as SMS and calls. The law seeks to regulate OTT services such as WhatsApp, Gmail, Viber, Skype, etc., as well as communication between Internet-of-Things devices, both of which have so far been outside the legal framework because the existing laws are not wide enough in scope to cover the technology used.
  • Cookies – A cookie stores information about the user’s activity on a website, such as the contents of the user’s shopping cart. The new regulations make it easier for end-users to give consent to cookies through their web browsers, putting users more in control of the kind of data being shared.
  • Protection against spam – The proposal bans unsolicited electronic communication through media like email, phone calls, SMS, etc. It essentially restricts spam: the mass sending of mails or messages with advertisements without the end-user’s consent to receive them.
  • Emphasis on consent – The regulation lays strict emphasis on user consent for any data being used for any purpose that is not strictly necessary to provide the service. Consent in this case should be ‘freely given, specific, informed, active and unambiguous consent expressed by a statement or clear affirmative action’.
  • Limited power to use metadata – Unless the data is necessary for a legal purpose, the service provider must either erase the metadata or anonymise it. Metadata is data about data – it is used by Internet Service Providers, websites and governments to summarise the available data, derive patterns of generalised behaviour, and locate specific data easily.
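The erase-or-anonymise obligation in the last bullet can be sketched in a few lines. This is a hedged illustration, not the Regulation's prescribed technique: the field names, record shape, and salted-hash pseudonymisation are all assumptions chosen for brevity (and truncated hashes are a weak pseudonym, not full anonymisation).

```python
import hashlib

SALT = b"illustrative-salt"  # in practice kept secret and rotated

def anonymise(record: dict) -> dict:
    """Replace identifying metadata fields with truncated salted hashes,
    keeping only the fields needed for aggregate measurement."""
    out = dict(record)
    for field in ("caller", "callee", "ip_address"):  # hypothetical fields
        if field in out:
            digest = hashlib.sha256(SALT + str(out[field]).encode()).hexdigest()
            out[field] = digest[:12]  # pseudonym, not reversible in practice
    return out

call_record = {"caller": "+31-20-555-0100", "callee": "+31-20-555-0199",
               "duration_sec": 42}
anonymised = anonymise(call_record)
```

The point of the sketch is the split the Regulation demands: aggregate-useful fields (like duration) survive untouched, while anything identifying a person is stripped of its original value.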

The Regulation has far-reaching effects, bringing into its fold businesses which were earlier not covered and extending to any technology company which provides electronic communications services in the Union. Businesses would have to bear the costs of redesigning their communication systems and of ensuring that future software updates are designed such that users’ consent is taken.

The main argument raised by the proposal in favour of the new Regulation is that an increasing number of users want control over their data and want to know where their data goes and who accesses it. This stems from growing consciousness of the far-reaching effects of providing huge quantities of personal information to private entities with little or no check on its use.

The biggest relief for both users and service providers is the change in the cookie policy. The previous regulation made it mandatory for a website to take consent before any cookie was placed on the user’s computer, which would have led to users being bombarded with requests. The new regulation lets the user choose cookie settings from a range of high-to-low privacy while installing the browser, with a notification every six months reminding them that they can change the setting.

There is, however, the question of how websites will know that a user has opted out of receiving targeted advertisements. One possibility is Do-Not-Track (DNT) – a browser setting which, when turned on, signals to websites that the user does not wish to be tracked. The mechanism was used in the past, but given the lack of industry consensus on how to honour it, and the fact that a large number of websites simply ignored DNT signals, it lost its utility. The Regulation could give the system the push it needs: if a user chooses not to be tracked, websites would have to respect that choice.
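The DNT mechanism itself is technically simple: the browser attaches a `DNT: 1` HTTP header to each request, and a compliant site suppresses tracking when it sees it. A framework-agnostic sketch of the server-side check, using a plain headers dictionary:

```python
def tracking_permitted(headers: dict) -> bool:
    """Return False when the visitor's browser sent the Do-Not-Track signal."""
    # "DNT: 1" means the user has opted out; absence means no preference expressed.
    return headers.get("DNT") != "1"

print(tracking_permitted({"DNT": "1"}))  # False – visitor opted out
print(tracking_permitted({}))            # True – no preference expressed
```

The historical problem the post identifies was never the signal, which is one header, but the absence of any obligation to act on it – which is precisely what the Regulation would add.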

The Regulation also makes consent the central feature of the communications system. Earlier, consent was implied: the mere use of an operator’s service was treated as consent to the operator collecting information about the end-user. The change could have a huge effect on how these entities earn revenue, since in some cases advertising is the sole source. Technology companies spend huge amounts running their servers and paying the staff who maintain the website and research newer technology to improve their services. Companies dependent on advertising could lose a large share of their revenue if many users opt out of providing information and receiving targeted advertisements.

Several critics from the industry argue that the new framework will make compliance extremely difficult because operators do not necessarily classify data: the multiple layers of data and information collected are simply labelled ‘analytics’, and websites do not always know the purpose for which data will be used until after it has been collected. This makes it hard for an operator to decide what falls under the law. In addition, operators depend on third parties to collect information for them. The Regulation makes it abundantly clear that the information collected should be the bare minimum required to provide the service, along with data needed for web audience measurement. Third parties are also protected under the law if the information collected is necessary to provide those services or if the user has already given consent. A more transparent system would make operators accountable, as it would give a factual basis for assessing whether they comply with reasonable ethical standards.

Users also have the option under the law not to receive unsolicited calls, messages and mails. Such direct marketing is a huge nuisance, and the companies behind it currently face little liability; among EU countries, only the UK has strict laws and hefty fines for this kind of advertising. The new system would require the user’s prior consent both when obtaining the information and before sending advertisements, and would oblige the sender to explain the nature of the marketing and how consent can be withdrawn. Even where consent has been given, the law mandates that the opt-out procedure be communicated to the user in clear terms, and operators must use a prefix for all marketing calls. This is similar to India, where the TRAI-initiated Do-Not-Disturb system lets the user block different kinds of unsolicited and automated advertisements through calls and messages.

The Regulation can form a benchmark for other countries. With the privacy and consent of the user as its central focus, it demands transparency and accountability from operators – a necessary condition for running any organisation providing such services. The changes may seem radical in terms of the costs the industry will incur, but given the sensitive nature of the information involved, such regulations will, and should, become the norm for all players in the market and for any new entrants.

Read more

TRAI’s Consultation Paper on Net Neutrality and the Regulatory Approach to Net Neutrality in India

Posted on February 14, 2017 by Ashwin Murthy

An explanation of the regulatory and policy approaches analysed by the TRAI in their Consultation Paper

Read more

The Internet Finds Itself in a Web – What the U.S Withdrawal from ICANN and its Transition Signify

Posted on October 13, 2014 by Tech Law Forum @ NALSAR

The following post is by Madhulika Srikumar, a fourth year student at GNLU, Gandhinagar. She has an avid interest in the debate on ownership of the Internet, Internet security and freedoms, and has worked earlier on issues relating to ICANN and Internet jurisdiction. She brings us an interesting commentary on the US withdrawal from ICANN, and how it may affect Internet governance as it currently exists.

The Internet finds itself in a “web” these days, a web of polarizing powers and conflicting interests; a web that could possibly result in changing the Internet as we know it. Attempting to untangle this web is no mean feat.

The Internet is best defined by the values that formed it. These values are of “open” code or software that govern the Internet, whose source is available to all and can be taken, modified and improved. It is these ideals that many still hope to preserve in today’s Internet governance.

Read more

ICANN and a Changing Internet

Posted on September 21, 2014 by Kartik Chawla

(Image Source: https://flic.kr/p/5kN3ek)

(This post was earlier published on SpicyIP)

Read more

