
Tech Law Forum @ NALSAR

A student-run group at NALSAR University of Law


Search Results for: data protection

The Conundrum of Compelled Decryption Vis-À-Vis Self-Incrimination

Posted on July 20, 2020 (updated November 1, 2020) by Tech Law Forum @ NALSAR

[This post has been authored by Shivang Tandon, a fourth year student at Faculty of Law, Banaras Hindu University.]

The ‘self-incrimination’ doctrine is an indispensable part of the criminal law jurisprudence of a civilized nation. Article 20(3) of the Indian Constitution and the Fifth Amendment of the Constitution of the United States provide protection against self-incrimination.


Delhi HC’s order in Swami Ramdev v. Facebook: A hasty attempt to win the ‘Hare and Tortoise’ Race

Posted on January 6, 2020 by Tech Law Forum @ NALSAR

This post has been authored by Aryan Babele, a final year student at Rajiv Gandhi National University of Law (RGNUL), Punjab and a Research Assistant at Medianama.

On 23rd October 2019, the Delhi HC delivered a judgment authorizing Indian courts to issue “global take down” orders to Internet intermediary platforms like Facebook, Google and Twitter for illegal content as uploaded, published and shared by users. The Delhi HC delivered the judgment on the plea filed by Baba Ramdev and Patanjali Ayurved Ltd. requesting the global takedown of certain videos which were alleged to be defamatory in nature.


Chance or Skill: Fantasy Sports Leagues and Gambling

Posted on December 3, 2019 (updated December 13, 2019) by Tech Law Forum @ NALSAR

This piece has been authored by Karthik Subramaniam, a second year student at NALSAR University of Law. It discusses the debate surrounding Fantasy Gaming Leagues and gambling.

Spectator sports have been popular since time immemorial, and fantasy sports leagues across the world have gained huge fan bases and manage to rake in massive revenues. The National Football League (NFL), the premier American football tournament in the United States, managed to generate revenue of around 13,680 million USD in 2017, which surpasses the GDP of almost 64 countries. Many individuals see these leagues as geese that lay golden eggs, thereby motivating them to get involved.

Sports fantasy leagues are games in which participants accumulate points based on the statistical accomplishments of the athletes they have selected for their teams through a draft. One of the earliest published accounts of fantasy sports involves Oakland businessman Wilfred “Bill” Winkenbach, who devised fantasy golf in the latter part of the 1950s in the United States. With sports leagues such as the National Football League (NFL), the National Basketball Association (NBA), Major League Soccer (MLS), Major League Baseball (MLB) and the National Hockey League (NHL) present in North America, fantasy sports leagues boasted almost 56.8 million players in 2015. With people spending an estimated 11 billion USD on online fantasy leagues in 2013, the need for regulation in the area becomes all the more important. The spending of so much money on a game where the pay-outs are highly uncertain draws a very fine line between such spending and the illegal act of gambling.

The Unlawful Internet Gambling Enforcement Act of 2006 (UIGEA) is a piece of legislation in the United States that regulates online gambling and ensures the protection of individuals operating in this rapidly booming field. The UIGEA prohibits gambling businesses from knowingly accepting payments in connection with the participation of another person in a bet or wager that involves the use of the Internet and that is unlawful under any federal or state law. The presence of this legislation has made gambling over the internet illegal; in fact, betting on sports is illegal in all American states apart from Nevada. The reason fantasy football and other such fantasy sports fall outside the ambit of a gamble or wager is the UIGEA itself. The legislation provides that games of “skill” played for a certain reward (money in most cases) can be permitted to continue online, while games that involve an element of “chance” cannot. The UIGEA includes a specific exemption for fantasy sports from being considered gambling, provided that they meet three essential requirements: 1) the value of the prize involved is entirely independent of the number of players involved, 2) the outcome is based on the relative knowledge and skill of the participants and determined by statistical results, and 3) the outcome cannot be determined by the score of the game or based solely on one individual player’s performance in a single real sporting event. These specific exceptions were carved out to accommodate the slowly growing (in 2005) industry of online fantasy gaming. In a letter dated February 1, 2006, the top lawyers from the NFL, NBA, NHL and MLB asked members of Congress to co-sponsor the UIGEA, which included the fantasy carve-out. This legality has allowed many states to impose taxes on fantasy gaming, letting them join the money bandwagon as well.
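The three-pronged carve-out described above is, in effect, a conjunctive test. The sketch below is an illustrative, non-authoritative paraphrase in code; the class and field names are invented for this example and are not drawn from the statute’s text:

```python
from dataclasses import dataclass

@dataclass
class FantasyContest:
    prize_fixed_in_advance: bool       # prize value independent of entry count
    outcome_reflects_skill: bool       # determined by participants' knowledge/skill
    based_on_single_event_score: bool  # outcome rests on one real game's score
    based_on_single_athlete: bool      # outcome rests on one athlete's performance

def qualifies_for_uigea_exemption(c: FantasyContest) -> bool:
    """Returns True only if all three statutory conditions are met."""
    return (
        c.prize_fixed_in_advance
        and c.outcome_reflects_skill
        and not (c.based_on_single_event_score or c.based_on_single_athlete)
    )

# A season-long league with a fixed prize pool satisfies all three prongs;
# a wager on one athlete's performance in a single game does not.
season_league = FantasyContest(True, True, False, False)
single_game_prop = FantasyContest(True, True, True, False)
```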

The Indian standpoint rests on the same differentiation between games of chance and games of skill as laid out in the American context.

The fantasy sports league sector in India has been gaining steam for a while. Recently, in April 2019, Dream11, a Mumbai-based fantasy gaming start-up, became India’s first gaming unicorn. Online gaming revenue is expected to increase from around INR 2,000 crore in 2014 to around INR 11,900 crore in 2023. The sector has undergone a massive change over the last few years, with people rapidly gaining interest.

The current Indian legislations that deal with gaming are the Public Gambling Act, 1867 (PGA) and the Prize Competitions Act, 1955 (PCA). The 1867 Act, being a pre-Independence legislation, offers a colonial approach to gambling and puts forward a largely anti-gambling rhetoric.[1] While this legislation predominantly criminalises gambling, it also tries to distinguish between betting on a “game of chance” and staking on a “game of skill”, so as to provide a safe harbour to activities such as wagering on horse races, which were extremely popular amongst the British. While the application of this legislation was limited to the erstwhile British presidencies, the adoption of the principle espoused by the statute in most state legislations illustrates a similar mindset among Indian states.

Very much like the UIGEA, the Indian position on declaring online fantasy games legal rests on the emphasis given to “games of skill”. While the PGA does not clearly specify which games constitute games of “mere skill”, Indian courts have by and large adopted the “dominant factor test” or “predominance test”, which requires the court to analyse the case and verify whether chance or skill “is the dominating factor in determining the result of the game”.[2] Courts have recognised that no game is a game of pure skill alone and almost all games involve an element, albeit infinitesimal, of chance.
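The predominance test lends itself to a toy illustration. The numeric weighting below is entirely hypothetical (courts weigh skill and chance qualitatively, not arithmetically), but it captures the idea that a game carrying some element of chance can still be a “game of skill”:

```python
def classify_game(skill_weight: float, chance_weight: float) -> str:
    """Label a game by its dominating factor; the weights are assumed estimates."""
    total = skill_weight + chance_weight
    if total == 0:
        raise ValueError("weights must not both be zero")
    return "game of skill" if skill_weight / total > 0.5 else "game of chance"

# Even a skill-heavy fantasy contest carries some chance (injuries, weather),
# yet remains a "game of skill" under the predominance approach.
print(classify_game(skill_weight=0.8, chance_weight=0.2))  # game of skill
print(classify_game(skill_weight=0.3, chance_weight=0.7))  # game of chance
```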

Based on the above, Sports Fantasy games have been exempt from the provisions of the PGA and have been declared as a legitimate business activity protected under Article 19(1)(g) of the Constitution of India. With a high potential for growth in the Indian market, foreign entities have been looking at exploring possibilities in the country. The classification of games into those involving “skill” and those involving “chance”, in India has been very vague, and often left up to the interpretation of courts. In an era of growing acceptance and evolution there is a requirement of an effective legislation specifically classifying activities as illegal gambling, separating it from those involving preponderance of skill.

[1] Vivek Benegal, Gambling Experiences, Problems and Policy in India: A Historical Analysis, 108 Addiction 2062–2067 (December 2013).

[2]Dr. K.R. Lakshmanan v. State of Tamil Nadu, AIR 1996 SC 1153; State of Andhra Pradesh v. K. Satyanarayana, AIR 1968 SC 825.


Artificial Intelligence is a Road Map to Transmogrification of Legal Industry

Posted on September 30, 2019 by Tech Law Forum NALSAR

This piece, taking an optimistic view of the use of AI in the legal industry, has been authored by Priyal Agrawal and Laxmi Rathore. They are currently in their 3rd year at the Kirit P. Mehta School of Law, NMIMS, Mumbai.

“In the long term, artificial intelligence and automation are going to be taking over so much of what gives humans a feeling of purpose.” – Matt Bellamy

Artificial intelligence refers to computer-based systems that perform tasks which typically require human intelligence. In this process, computers use rules to analyze data, study patterns and gather insights from the data. Artificial intelligence companies persistently find ways of evolving technology that will handle arduous tasks in various sectors with enhanced speed and accuracy. Artificial intelligence has transformed nearly all professional sectors, including the legal sector. It is finding its way into the legal profession, and there is a plethora of software solutions available that can substitute the humdrum and tedious work done by lawyers. In the legal profession, the changes are diverse, with software solutions displacing paperwork, documentation and data management.

This blog analyzes the use of AI in the legal industry. It describes various AI tools which are used in the legal sector, and gives an insight into the use of AI in the Indian Judiciary system to reduce pendency of cases. Finally, we discuss the challenges in the implementation of AI in the legal field.

In the legal field, artificial intelligence can be applied to provide digital counsel in the areas of due diligence, prediction technology, legal analytics, document automation, intellectual property and electronic billing. One such tool is Ross Intelligence. This software has natural language search capabilities that enable lawyers to ask questions and receive information such as related case laws, recommended readings and secondary sources. Prediction technology is software which forecasts a litigation’s probable outcome. In 2004, a group of professors from Washington University tested their algorithm’s accuracy in predicting Supreme Court judgments in 628 cases argued in 2002. The algorithm’s results were compared to the findings of a team of experts, and it proved the more accurate predictor, correctly predicting 75 percent of the outcomes compared to the experts’ 59 percent accuracy. In 2016, JP Morgan developed an in-house legal technology tool named COIN (Contract Intelligence). It draws out 150 attributes from 12,000 commercial credit agreements and contracts within a few seconds. According to the organization, this equates to 36,000 hours of legal work by its lawyers.

In an interview, UK law firm Slaughter and May reviewed Luminance, the AI tool it currently uses. The tool is designed to assist with contract reviews, especially due diligence exercises during mergers and acquisitions. The firm found that the tool freed its lawyers to spend more time on higher-value work, and that it fits well into the firm’s existing M&A due diligence workflows. The documents the tool helps review are already stored in a virtual data room; the only additional step is to introduce the documents into the solution itself.

India is also adopting the use of artificial intelligence in the legal field. One of India’s leading law firms, Cyril Amarchand Mangaldas, is incorporating artificial intelligence into its processes for contract analysis and review, in partnership with the Canadian AI assistant Kira Systems. This software will analyze contracts and flag risky provisions, improving the effectiveness and accuracy, and scaling up the speed, of the firm’s delivery model for legal services and research.

In the Indian judicial system, where a plethora of cases is pending, artificial intelligence can play a significant role in reducing the burden. A deadweight of almost 7.3 lakh cases is left pending per year. A large amount of legal research is required by advocates to argue their cases, and the use of AI can accelerate legal research and enhance the judicial process. In this regard, a young advocate named Karan Kalia developed a comprehensive software program for the speedy disposal of trial court cases and presented it to the Supreme Court’s E-Committee led by Justice Madan B Lokur. This software instantly offers a trial judge appropriate case law, while also indicating its reliability.

AI enables lawyers to gain nonpareil insight into the legal realm and complete legal research within seconds. AI can balance the expenditure required for legal research by bringing about uniformity in the quality of research. AI tools help review only those documents which are relevant to the case, rather than requiring humans to review every document. AI can analyze data to make quality predictions about the outcome of legal proceedings in a competent manner, and in certain cases, better than humans. Lawyers and law firms can shift their attention to clients rather than spending time on legal research, making optimum use of constrained human resources. They can present arguments and evidence digitally, get them processed, and submit them faster.

Although AI faces some challenges, these can be overcome with time. The major concern surrounding AI is data protection: AI is used without any legal framework, which creates risks for information assurance and security. A stringent framework is needed to regulate AI, safeguard individuals’ private data and provide safety standards. A few technical barriers will also limit the implementation of AI technologies. It is difficult to construct algorithms that capture the law in a useful way, and the lack of digitization of data is a further constraint. The complexity of legal reasoning acts as a potential barrier to implementing effective legal technologies. However, these issues will eventually be rectified with continuous usage and time.

The introduction of AI in the legal sector will not substitute lawyers. In reality, technology will increase the efficiency and productivity of lawyers rather than replace them. The roles of lawyers will shift, rather than decline, and become more interactive with technological applications in their field. None of the AI tools aims to replace a lawyer; instead, they increase the authenticity and accuracy of research and enable more result-oriented advice to clients. As McAfee and Brynjolfsson have pointed out, “Even in those areas where digital machines have far outstripped humans, people still have vital roles to play.”

The use of AI will manifest a new broom that sweeps clean, i.e., it will bring about far-reaching changes in the legal field. Over the next decade, the use of AI-based software is likely to increase manifold. This will lead to advancement and development in the functionality of present lawyering technologies such as decision engines, collaboration and communication tools, document automation, e-discovery and research tools, and legal expert systems. Trending industry concepts like big data and unstructured databases will allow vendors to provide more robust performance. There will also be an influx of non-lawyer service providers entering the legal industry, some of whom will be wholly consumer-based, some lawyer-focused, and others selling their wares to both consumers and lawyers. The future for manual labor in law looks bleak, for the legal world is gearing up to function in tandem with AI.


The Alternative Facts of Virtual Reality

Posted on July 8, 2019 by Tech Law Forum @ NALSAR

[Ed Note: The following post is part of the TLF Editorial Board Test 2019-20. It has been authored by Rhea Reddy, a third year student of NALSAR University of Law.]

Recently, Facebook announced its plans to develop a full-body virtual reality system.  The company aims to create life-like avatars of users to provide for a more immersive social media and gaming experience. These detailed avatars will be brought into VR simulations so that users can play sports or interact with each other in the (digital) flesh. The avatars are intended to be anatomically correct, down to the last detail of muscle and skin. They would further replicate the real-time movements of users, along with their clothing and facial expressions. Though this technology may be a long way from being implemented, it would be prudent to discuss its legal implications because of the threat it poses, particularly to democratic processes.

VR has already raised concerns about privacy and data collection by companies, which can be read about here. However, in this post, the focus is on how Facebook’s proposed technology would help propagate fake news and stifle dissent within a country. The unchecked spreading of fake news, primarily through social media, is currently a major concern for many countries. This has reached the point where there have been multiple instances of people being lynched based on rumours forwarded through WhatsApp. Facebook’s failures to tackle fake news are also widely known, with it not even having fact-checking partners in countries like Hungary. In addition, its algorithm traps users in ‘filter bubbles’ based on the content they had previously engaged with, tailoring their news feed to their interests. Users are then able to access only the news and other information that conform to their existing views. This opinion-based segregation makes it difficult for legitimate journalism to counter the spread of fake news, as the potential for exposure to conflicting viewpoints is greatly reduced.
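The ‘filter bubble’ effect described above can be sketched with a toy ranking function. The scoring rule here is invented purely for illustration and is not Facebook’s actual algorithm; it simply shows how ranking by past engagement pushes conflicting viewpoints out of view:

```python
def rank_feed(posts, past_engagement_topics):
    """Order posts by overlap with topics the user has engaged with before."""
    def score(post):
        return sum(1 for t in post["topics"] if t in past_engagement_topics)
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "topics": {"politics-a"}},
    {"id": 2, "topics": {"politics-b"}},           # the dissenting viewpoint
    {"id": 3, "topics": {"politics-a", "sports"}},
]
feed = rank_feed(posts, past_engagement_topics={"politics-a"})
# Posts matching prior engagement float to the top; post 2 sinks to the bottom.
print([p["id"] for p in feed])  # [1, 3, 2]
```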

Virtual Reality would only add to this fake news problem. Ordinarily, people form memories and opinions based on real-life experiences and observations. VR, however, blurs the divide between real life and simulation by immersing users in an experience they perceive to be real. In other words, VR isn’t real but it feels real. For this reason, it can have long term psychological impacts on users even after they leave the virtual world. Facebook’s proposed technology goes one step further by proposing to develop life-like simulations of users and environments. By presenting users with an ‘objective’ reality, it would have an intensive immersion effect that may be used to manipulate the mind, emotions, and consequently, the behaviour of users. Fake news would then be thought of as ‘objectively’ experienced in real life, rather than just on a screen.

In this way, the proposed VR technology can control what users experience and, thereby, effectively sell them a particular message. Life-like avatars of people ranging from politicians to celebrities can be manipulated to propagate a false message or a certain viewpoint. Before long, news events could be simulated using the avatars of news anchors. With Facebook looking to emulate even the user’s body language and body movements, these simulations will be nearly indistinguishable from the real users. This problem is exacerbated when malicious hackers make use of VR to propagate their agenda. For instance, Russia has already managed to interfere with the election of the POTUS by spreading articles on social media. If it subsequently manages to obtain an influential individual’s body-mapped information, it can use such information to manipulate the masses. Therefore, the proposed technology would allow for greater damage to be inflicted upon democratic processes.

With the question becoming what constitutes reality itself and due to the extent of harm that may be caused by this proposed technology, the need for its regulation becomes all the more necessary. Without legal sanctions, one cannot hope for Facebook to remove fake news on its own, especially since it has previously refused to do so. However, any attempts at prescriptive legislation that aim to block content before it is even posted would threaten the right to freedom of speech. This is harmful because it may allow for the censorship of legitimate journalism before it can be cross-verified as real news, thereby impeding the discovery of truth through open discussions. This, in turn, may lead to self-censorship and have a chilling effect on citizens in the country. Therefore, prescriptive legislation should not be used to address such a complex issue.

In the past, governments have not responded adequately to the challenges created by new technology. As a case in point, existing laws in India are insufficient to deal with the fake news problem. Instead of being mandated to comply with positive controls, social media platforms have been provided a safe harbour by Section 79 of the IT Act, 2000. This section protects companies like Facebook from liability for any actions committed by their users unless they are made aware of a particular post on their platform. In addition, the companies are only to observe an ambiguous requirement of due diligence while discharging their duties under this Act. Further, such companies are required to censor content only when directed to do so by a court. Therefore, censorship of fake news could only happen after delayed bureaucratic and legal processes. However, with the quick spread of misinformation, and the intensity of immersion and manipulation by VR technology, this delayed reactive process is largely ineffective.

Due to the inadequacy of current reactive legislation, there is a need for more effective regulation. But, since such regulation can potentially be misused, great care must be taken before introducing laws regulating fake news in democratic countries. This need for caution makes itself more apparent when observing previous attempts to regulate fake news in countries such as Singapore, Germany, and Russia. Singaporean law-makers attempted to deal with fake news by forcing corrections to be added to online content that they deemed to be false. These corrections would not affect the original content of the articles, but would instead add the facts next to the falsehood. But how would this apply to videos with life-like simulations? Moreover, even if textual disclaimers could be inserted into every fake simulation being distributed on all platforms, mere text would not be very effective against the emotional impact of purposefully evocative virtual reality simulations. Further, in Germany and Singapore, authorities have found it difficult to differentiate misinformation and hate speech from satire. In Russia, the government is even allowed to block sites and delete articles with which it disagreed by branding them as ‘fake news’. Therefore, the current restrictive framework existing across the world is largely inadequate and allows for governments to take on authoritarian characteristics.

In this way, regulation may allow the government to become an arbiter of the truth, giving authorities the power to control what is shown on social media platforms. The Netflix series, Black Mirror, has already depicted a preview of this future of VR and augmented reality [AR]. In its episode titled ‘Men against Fire’, it focuses on AR technology that makes soldiers see their ‘enemies’ as aggressive mutants given the name ‘roaches’. In reality, these roaches were just terrified citizens who were deemed to be genetically inferior by the government. By programming them to look like the enemy, this technology allowed the government to make unknowing soldiers key pawns in the genocide. Even though this is a drastic example, it does show how VR in the hands of governments, religious and cultural authorities, etc. has the potential to obscure the truth and further a particular agenda.

Until a more comprehensive regulatory framework that includes checks to authoritarian tendencies comes into force, certain measures may be adopted to improve societal resistance to fake news. Firstly, governments could limit the liability protections offered to intermediaries. Among other things, this may be done by requiring companies to censor fake news on their own or by holding them accountable for the defamation of individuals on their sites. Increased liability may then encourage intermediaries to better screen the content they permit on their platforms. Secondly, companies could be committed to editing any simulated avatars of users in such a manner so as to ensure that they cannot be confused with the real versions. Lastly, the encouragement of media literacy could also be pursued. A project launched in Italy aimed to teach citizens, as part of the country’s high school education curriculum, how to identify suspect URLs or reach out to experts online. This can help citizens themselves become more aware of potential falsehoods.

In conclusion, advancements in VR may allow for videos to be manipulated by changing how real people appear to behave. These fake videos have the potential to lead to inter and intra-state conflicts. In the guise of protecting citizens from such videos, governments may resort to the dangerous weapon of censorship. For these reasons, solutions to prevent the spread of fake news at its outset must be devised, without compromising the right to free speech or devolving the country into an authoritarian regime. Only then would Virtual Reality not threaten the collapse of objective reality.


#SaveYourInternet – How Europe’s New Copyright Directive Threatens to Colonise the Internet

Posted on July 8, 2019 by Tech Law Forum @ NALSAR

[Ed Note: The following post is part of the TLF Editorial Board Test 2019-20. It has been authored by Manasvin Andra, a third year student of NALSAR University of Law.]

Controlling the public’s access to the Internet has always been considered the tool of the demagogue. We take it as being par for the course when restrictions are imposed on users in countries such as China and Myanmar, but a ban imposed by Facebook even on members of the far-right sees intense debate erupt over the perceived violation of the recipient’s freedom of speech.

This respect for one’s right to speak his/her mind freely owes a lot to the foundational place that is accorded to the right of free speech and expression by influential charters like the Universal Declaration of Human Rights, and it is clear to see that these freedoms have been extended with equal verve and vigour to apply to relatively new forms of media such as the Internet.

Placed against this background, the adoption by the European Union of the Directive on Copyright in the Digital Single Market is unusual, troubling and even sinister, as the law poses a grave threat to citizens’ right to privacy and freedom of speech and expression, even as it professes its benevolent intentions.

But what exactly does the new Directive entail, and how will free speech be impacted now that it has been adopted? Can the law pass muster if challenged in the Court of Justice for the European Union? What is the prevailing position of law on the issue?

It is these issues that the present post shall attempt to discuss.

I. The EU Directive on Copyright – What is it and how does it work?

The deliberations for a new EU-wide copyright law were opened due to the ineffectiveness of the previous Directive, which was implemented in 2001 and whose provisions were found unsuitable for today’s digital market. While that Directive succeeded in achieving some of its goals, its major failure was its complete ineffectiveness on the issue of fair remuneration to content creators, which has contributed significantly to the “value gap” (i.e., the difference in revenues) that currently exists between internet companies and content creators.

The newly implemented Directive aims to address this discrepancy through Article 17 (Draft Article 13), which imposes an obligation on intermediary platforms to obtain licenses for the content uploaded by users.

This means that for every video that is uploaded to its platform, YouTube must obtain a license from the uploader, failing which it must ensure the “non-availability” of the relevant video. Essentially, it means that the company loses the right to display content if it fails to acquire a license, and it must therefore ensure that the video is no longer available for public viewing.

The obligation imposed on intermediaries to negotiate licenses with users presents some very obvious difficulties, the most pertinent of which is the immense financial liability that companies will inevitably have to bear.

This is because in order to comply with the law online platforms will have to pre-emptively acquire a license from every single creator for the billions of copyrighted works that are created every day, an impossible task considering that a copyright arises automatically when a new work is created.

All of this ultimately means that in order to escape liability internet platforms will have to install automated filters, which will lead to a handful of companies monitoring content that is uploaded by a vast majority of Internet users across the world.

II. What does the Directive mean for the free use of the Internet?

The current model for dealing with copyright infringements is the notice-and-takedown process, through which intermediary platforms can escape liability when their users infringe copyright so long as they are unaware of the infringement, provided that they also act quickly to remove it once knowledge of the copyrighted content reaches them. However, the new model changes this paradigm by instituting a system where filtering is done before the content is uploaded, rather than after a violation is reported.
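As a rough schematic (with invented function names and a toy licence set, not any platform’s real implementation), the two liability models can be contrasted as follows:

```python
LICENSED_WORKS = {"song-a", "film-b"}  # hypothetical licence database

def notice_and_takedown(platform: set, noticed_item: str) -> None:
    """Reactive model: content is removed only after a specific complaint."""
    platform.discard(noticed_item)

def upload_filter(upload: str, licences=LICENSED_WORKS) -> bool:
    """Proactive model: anything not matched to a licence never goes live."""
    return upload in licences

platform = {"song-a", "home-video"}
notice_and_takedown(platform, "home-video")  # removed once a notice arrives
print(upload_filter("song-a"))               # licensed, goes live: True
print(upload_filter("parody-of-song-a"))     # no licence match, blocked: False
```

Note how the filter blocks the parody pre-upload even though a parody may be a lawful exception, which is precisely the over-blocking concern raised against Article 17.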

The problem with this kind of filtering structure is that it renders any exceptions to the law completely useless, as even sophisticated algorithms lack the ability to correctly distinguish genuine videos, parodies and memes from actual copyright infringements. The filters would operate in such a manner as to automatically block any work that contains unlicensed copyrighted material, leaving users with no choice but to complain to the intermediaries in order to gain the benefit of the exceptions.

It is this aspect of the Directive that has led to Article 17 being branded a ‘meme ban’, as despite the EU carving out an exception for “quotation, criticism, review, caricature, parody and pastiche”, the use of filters means that there is no way to correctly distinguish whether a given upload is parody or outright infringement.

Furthermore, developing the kind of filters required by the Directive is an extremely expensive process, with only a few companies possessing the resources needed to develop their own filtering software.

This means that while major tech companies will carry on unharmed it will be the small and medium companies that will bear the brunt of the new law, as the compulsion to adopt technologies created by other companies will lead to control of the Internet vesting in the hands of multinational entities such as Google and Facebook.

As Pirate Party MEP and prominent critic of the Copyright Directive Julia Reda puts it, “it is naive to think that this censorship infrastructure will not be used for purposes other than fighting copyright infringement”, which demonstrates the dangers of the amount of control that the law purports to hand over to private MNCs.

Both of the above measures can somewhat be justified if the Directive effectively addressed the prevailing “value gap”, but there is little to no indication that such a move will actually work.

The reason is simple: the immense liability that intermediary platforms face under the new law means that they will adopt large-scale filtering rather than risk incurring penalties by attempting to negotiate licenses with content creators, so that only large artistes with deep pockets – and not the small creators whom the Directive purports to help – will benefit from the change in status quo.

III. What does current case law mean for the legality of the Directive?

Given the controversy surrounding the Directive it was perhaps inevitable that its legality would eventually be called into question, and so it proved as Poland became the first member state to file a case against the law in the CJEU.

However, this leads us to a question – what is the position of existing case law on the issues that have dominated the discourse surrounding the Directive, namely, filtering and general monitoring of the Internet?

As it turns out, these questions are not new for the EU’s top Court, as it dealt with a very similar matter in the 2012 SABAM/Netlog case. There, SABAM sought a decree against Netlog which would force the company to install a filtering system to scan for potential copyright infringements, on the grounds that users were using SABAM’s copyrighted content in uploading their works.

If this sounds familiar it is because such a measure is exactly what the new Directive contemplates, but crucially, the CJEU in the case held that such a filtering system could not be deemed permissible.

The Court arrived at this decision by finding that such a large-scale filtering infrastructure would violate users’ fundamental right to the protection of their personal data under Article 8 of the Charter, as well as their freedom to receive or impart information under Article 11.

According to the Court, such a requirement would not amount to striking a fair balance between SABAM’s right to intellectual property and Netlog (and its users’) freedom to conduct business, the right to protection of personal data and the freedom to receive or impart information, and it therefore refused to grant the injunction.

The ramifications of this ruling are clear – filtering/general monitoring of infringements has been held to violate users’ right to privacy and freedom of information, and therefore Article 17 of the Copyright Directive stands contrary to the existing case law on the issue of filtering.

Conclusion

Article 17 has clear and adverse consequences for content creators and ordinary users – but existing case law offers opponents a glimmer of hope that the provision will be scrapped before it can be implemented fully.

The blatant violation of users’ right to privacy and the prospect of a few companies controlling the entirety of the Internet has understandably resulted in many fearing the loss of freedom that is so unique to the Internet, and it is clear that content creators will be casualties rather than beneficiaries if Article 17 is allowed to remain on the books.

The need of the hour is to find better and more effective ways of ensuring that creators are compensated fairly for their content, but colonising the Internet, as the European Union has purported to do with its new law, can never be accepted as a solution.

Read more

Self Driving Cars and the Accountability of Autonomous Artificial Intelligence

Posted on July 8, 2019 by Tech Law Forum @ NALSAR

[Ed Note: The following post is part of the TLF Editorial Board Test 2019-20. It has been authored by Vedaarth Uberoi, a third year student of NALSAR University of Law.]

From Terminator to Robocop, Wall-E to HAL-9000, artificial intelligence and fully autonomous machines have long been a part of popular culture and science fiction, and with recent technological advances they are coming ever closer to reality. This raises the question of what the legal consequences of such a development would be. In practical terms, recent controversies over accidents involving autonomous cars are a useful avenue to explore a legal conundrum that is sure to become more prevalent in the future.

The very texture of urban life is completely altered when a person cannot look a driver in the eye to judge their intentions, or when a two-ton truck is run by an array of sensors and computers, whose decisions are foreign to human reasoning.

Those in favour of fully autonomous cars point to the overall improvements in road safety that would accompany the advent of self-driving cars. Ninety-four percent of car crashes in America are caused by driver error (speeding and drunk driving, among other examples), and both fully and partially autonomous cars could help reduce that number substantially. Even so, crashes, injuries, and fatalities would not end entirely even if self-driving cars became ubiquitous; those figures are nonetheless expected to eventually number far fewer than the people killed in car crashes today.

The problem is that Rome wasn’t built in a day: the introduction of autonomous self-driving cars would not instantly bring about a complete change, but would instead unfold in gradual stages as autonomous technology slowly propagates through the consciousness of the market and society. During that period, which could last decades, the social and legal status of robocar safety would be judged against inadequate and unsuitable existing standards, practices, and sentiments.

Who is to blame? 

In 2018, University of Brighton researcher John Kingston analyzed three legal theories of criminal liability that could apply to an entity controlled by artificial intelligence.

Perpetrator via another – the programmer or the user could be held liable for directly instructing the AI entity to commit the crime.

Natural and probable consequence – the programmer or the user could be held liable for causing the AI entity to commit a crime as a consequence of its natural operation. For example, if a human obstructs the work of a factory robot and the AI decides to squash the human as the easiest way to clear the obstruction to continue working, if this outcome was likely and the programmer knew or should have known that, the programmer could be held criminally liable.

Direct liability – the AI system has demonstrated, of its own independent volition, the necessary elements of liability in criminal law. Legally, courts may be capable of assigning criminal liability to the AI system of an existing self-driving car for speeding; however, it is not clear that this would be a useful thing for a court to do.

If one is to direct that question of liability specifically with respect to car accidents and ask “who do I sue,” a plaintiff in a traditional car crash would assign blame to the driver or the car manufacturer, depending on the cause of the crash. In a crash involving an autonomous car, a plaintiff can be understood to have four options to pursue.

Operator of the vehicle: The viability of a claim against the operator will depend on the level of autonomy. For instance, if the autonomous technology allows the passenger to cede full control to the vehicle, then the passenger will likely not be found at fault for a crash caused by the technology.

However, in any situation where the human and the computer algorithms are expected to share control of the car (as is the prevalent form of self-driving system in the present day), it is very tricky to hand that control back and forth. It should be noted that Waymo, the Alphabet subsidiary pursuing driverless technology, has consistently argued against such systems where control of a vehicle is handed back and forth between the driver and the algorithms. The company has instead pushed for perfected automation technology that totally eliminates the role of a human driver.

Car manufacturer: A plaintiff will need to determine whether a manufacturer such as GM had a part in installing the autonomous technology into the vehicle.

Company that created the finished autonomous car: Volvo is an example of a manufacturer who has pledged to take full responsibility for accidents caused by its self-driving technology.

In 2015, Volvo issued a press release stating that it would accept full liability whenever its cars are in autonomous mode, and announced that it would pay for any injuries or damage caused by its fully autonomous software, which it expected to start selling in 2020. President and Chief Executive of Volvo Cars Håkan Samuelsson went further, urging “regulators to work closely with car makers to solve controversial outstanding issues such as questions over legal liability in the event that a self-driving car is involved in a crash or hacked by a criminal third party.”

Company that created the autonomous car technology: Claims may also lie against companies such as Google that develop the software behind the autonomous car, and against those manufacturing the sensor systems that allow a vehicle to detect its surroundings.

Overall, there exists broad consensus that self-driving cars implicate the manufacturer of the vehicle more than its operator. That has different implications for a company like GM, which manufactures and sells cars, than for Google, which has indicated that it doesn’t plan to make cars, only the technology that runs them.

Still, since the law is set by precedents pursued by legal action, other interpretations of self-driving car liability are possible. A different interpretation might compare operating autonomous test cars to taking dangerous or experimental equipment on city roads. There’s an argument to be made that a pedestrian death at the hands of an autonomous car, even one that would have been unavoidable, is no different from a human-driven car with a new, experimental combustion engine that malfunctions and blows up on a city road or interstate.

Product Liability v. Personal Liability

Liability for incidents involving self-driving cars is a developing area of law and policy that will determine who is liable when a car causes physical damage to persons or property.

As autonomous cars shift the responsibility of driving from humans to autonomous car technology, there is a need for existing liability laws to evolve in order to fairly identify the appropriate remedies for damage and injury. As higher levels of autonomy are commercially introduced, the insurance industry stands to see greater proportions of commercial and product liability lines, while personal automobile insurance shrinks.

In a white paper titled “Marketplace of Change: Automobile Insurance in the Era of Autonomous Vehicles,” KPMG estimated that personal auto accounted for 87% of loss insurance in the United States in 2013, while commercial auto accounted for 13%. By 2040, personal auto is projected to fall to 58%, while commercial auto rises to 28% and products liability gains 14%. This reflects the view that personal liability will fall as the responsibility of driving shifts to the vehicle, and that the overall pie representing losses covered by liability policies will shrink as autonomous cars cause fewer accidents.

Availability of Crash Data and Fixing Liability

University of South Carolina law professor Bryant Walker Smith has noted that with automated systems, considerably more data will typically be available than with human-driver crashes, allowing more reliable and detailed assessment of liability. He has also predicted that comparisons between how an automated system responded and how a human would have or should have responded will be used to help determine fault.

The challenge in this new ecosystem with regards to fixing liability is that some of the potentially liable parties may also have disproportionate control over the sensor data. There is a risk that one of these parties may alter the data to steer the liability decision in its favour, using the wireless and USB interfaces that vehicles have.

That means we must not only record tamper-free sensor data, but also any interactions with the vehicle, perhaps through media such as blockchain technology.
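One way to sketch the idea is a toy hash chain (illustrative only, not a production blockchain; all field names are assumptions): each log entry commits to the hash of the previous one, so silently editing any recorded sensor reading or vehicle interaction breaks every later link.

```python
import hashlib
import json

def append_entry(log, record):
    """Append a record whose hash also covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True) + prev_hash
    log.append({"record": record, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(log):
    """Recompute the whole chain; tampering anywhere makes it fail."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True) + prev_hash
        if entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"t": 0, "speed_kmh": 42, "brake": False})
append_entry(log, {"t": 1, "speed_kmh": 55, "brake": True})
assert verify(log)

log[0]["record"]["speed_kmh"] = 30   # a liable party quietly edits the data...
assert not verify(log)               # ...and the tampering becomes detectable
```

On its own this is not enough: the party holding the log could simply rebuild the entire chain. In practice the chain head would need to be anchored with parties outside the data holder’s control, which is where distributed-ledger designs come in.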

Dystopian End to Public Roads for Private Citizens?

There is an argument that autonomous cars could erode citizens’ rights to the public streets. Given sufficient economic incentive to pursue public-private partnerships between municipalities and technology companies, cities, counties, and states might choose to adopt industry-friendly regulatory policy in exchange for changes to the urban environment.

Eventually, should autonomous cars become widespread, it might become more expedient just to close certain roads to pedestrians, bicyclists, and human drivers so that computer cars can operate at maximum efficiency. It’s happened before: Jaywalking laws were essentially invented to transform streets into places for cars.

Uber, Google, and other influential companies with substantial interests in the field of autonomous driving might see this vulnerability as a sign that it is time to get more serious about legal protection of their interests, and this might work to the detriment of citizens’ private rights and interests in their roads.

Read more

Mackinnon’s “Consent of The Networked” Deconstruction (Part III)

Posted on July 7, 2019 (updated November 12, 2019) by Prateek Surisetti

SERIES INTRODUCTION

Rebecca MacKinnon’s “Consent of the Networked: The Worldwide Struggle for Internet Freedom” is an interesting read on free speech on the internet, in the context of a world where corporations are challenging the sovereignty of governments. Having read the book, I will be familiarizing readers with some of the themes and ideas discussed in MacKinnon’s work.

In Part I, we discussed censorship in the context of authoritarian governments.

In Part II, we discussed the practices of democratic governments vis-à-vis online speech.

In Part III, we will discuss the influence of corporations on online speech.

Essentially, the discussion will revolve around the interactions between three stakeholders: netizens, corporations providing internet-based products, and governments (both autocratic and democratic). Each of the stakeholders has varied interests or agendas and works with or against the others depending on the situation.

Governments wish to control corporations’ online platforms to pursue political agendas, while corporations wish to attract users and generate profits, even as they must acquiesce to government demands to access markets. The ensuing interactions between corporations and governments affect netizens’ online civil liberties across the world.

CORPORATIONS

The relevance of corporations, in the context of online free speech, deserves considerable analysis.

First, corporations might be motivated to clandestinely breach customers’ trust to fulfil other aims. If a customer is made aware of such occurrences, she might opt out of using the product. For instance, corporations might violate customer privacy through the illegal collection of data.

But even absent deception, corporations’ activities raise quite a few concerns.

The Power of Code

Activities in cyberspace take place within the boundaries of software programmes, or code. The architecture of the platforms we use is a far better influencer of activity than any law. In the real world, our inability to breathe underwater is not because any law prohibits it, but because the laws of science prevent it. Similarly, programmes are the laws of science in cyberspace. We have already seen that corporations exercise immense influence over netizens, yet programmers rarely consider building the protection of civil liberties into their designs.

Consequences for Political Speech

Now, let us peruse a few incidents that highlight the issues with corporation programmers being oblivious to the value of free speech. The consequences are especially grave for political activists.

Political speech is intended to reach the masses. Therefore, activists necessarily publish content on platforms that are popular with the public. Often, however, these platforms aren’t designed for activism, and their policies aren’t consistent with the interests of activists. On such platforms, political content is liable to be blocked for reasons along the lines of “graphic violence” or “copyright violation”. For instance, Flickr removed political content directed against the Egyptian regime because Flickr’s policy disallowed publication of content created by others, even where the original publishers had consented to re-publication. There was no deception involved and the corporation was merely enforcing its own guidelines, but the consequences were detrimental to the cause of Egyptian dissent.

Programmers in California, unaware of the various lived experiences across the world, are taking decisions that affect people worldwide. The values espoused by the decision makers aren’t always as significant in other contexts.  Mark Zuckerberg may wax eloquent about “radical transparency” and his belief that complete transparency will “make for a more tolerant society in which people eventually accept that everybody sometimes does bad or embarrassing things”, but a Chinese journalist publishing anonymous posts criticizing her government’s policies might differ.

Indeed, Facebook had promoted itself as an agent of political change and activism, but terminated a popular activist page on grounds of being managed by fake accounts (activists created accounts with details of fictional people to avoid incarceration by the authoritarian government), which was against its policy.

Digital Sovereigns, Accountability and Creative Commons

MacKinnon develops the discussion further by conceptualizing “Digital Sovereigns”. The term refers to corporations, such as Google and Facebook, which exercise influence over cyberspace in a manner that resembles authoritarian sovereigns in real space. For instance, Facebook has a set of policies, and once users “consent” to them, Facebook exercises considerable influence without users having any tangible representation in the formulation and alteration of the policies they are governed by.

The existence of Digital Sovereigns raises the issue of legitimacy.

Corporations exercise considerable influence over netizens, but unlike democratic governments, aren’t accountable to them. Therefore, legitimacy is a concern as corporations, similar to authoritarian governments, are being allowed to exercise tremendous control without any accountability measures.  In MacKinnon’s words “political activists are increasingly hostage to the whims of corporate self-governance”.

Facebook and Google might have dedicated and competent departments to regulate content on their platforms, but it is fundamentally problematic that unelected and therefore unaccountable entities (corporation employees) act as judge, jury and executioner, taking decisions that have considerable impact on online speech.

Further, even when corporations adopt policies that are beneficial to the values of free speech, users continue to remain at the mercy of the corporation. Parallels can be drawn to a dictatorial regime with a benevolent leader.

Therefore, MacKinnon suggests the passage of legislation requiring private entities to adhere to baseline regulatory rules. Corporations shouldn’t be allowed to tweak their policies according to their whims and fancies merely because users have consented. Further, consumers should pressure corporations towards accountability, just as activists have driven authoritarian governments towards accountability.

While certain corporations exercise considerable control, the rise of the “Digital Commons” has limited the influence of corporations. The Digital Commons covers a set of platforms that allow people to share and develop open source software. Such platforms allow netizens to develop software that suits their needs. Thereby, the netizens’ dependence on corporations is reduced. Examples of such initiatives include the Linux software, the General Public License programme and the “Creative Commons”.

The corporations’ coders/programmers produce influential software, but there also exist netizens who code (as in the case of the Digital Commons) and consequently provide alternative platforms. Control is thus exercised by different sets of programmers, and a corporation’s influence over its customers is inversely proportional to the popularity of netizen-produced software.

Corporations & Governments

Now, let us look at corporations’ activities under governmental pressure.

Given that the internet empowers the citizenry by providing a platform to express themselves, governments seek to control it. Facebook and Google provide a global network for the people to voice their concerns against the government and even take action. Hence, especially in the context of authoritarian regimes, governments value control over such platforms to quell dissent.

Corporations are interested in meeting the requirements of customers to further their business. But at the same time, corporations need to comply with the requirements of governments (especially authoritarian governments) to access the markets under those governments’ control. For instance, Google wasn’t able to function in China without the Chinese government’s patronage. Even though the Chinese government denied involvement, Google had to exit China due to military-grade computer attacks.

Further, the users of a corporation’s products may suffer if the relationship between the government and the corporation is opaque. Essentially, when customers are unaware of the details of that relationship, they may face greater restrictions on their freedoms (through manipulation of products) than they consented to. We have already analysed this in the context of Chinese censorship.

Here, let us look at examples of responses from corporations to government pressure.

Even where corporation policies are attuned to protecting user privacy, the effect of government influence can’t be underestimated. Research In Motion, the manufacturer of BlackBerry cell-phones, was known for its messaging platform. The platform was designed to be inaccessible even by BlackBerry itself, but the U.A.E., Saudi Arabia and India have been pressing relentlessly for access. The corporation hasn’t disclosed whether it acceded to these demands.

In China, Yahoo, under pressure from the Chinese government, aided in the imprisonment of a Chinese critic of the government. Shi Tao had sent documents criticizing the Chinese government via Yahoo’s email services and Yahoo revealed details essential for his conviction. After all, the journalist had “agreed” to Yahoo’s Terms & Conditions, which allowed Yahoo to disclose data when required by law.  Also, if Yahoo resisted, its employees would have had to risk arrest.

MacKinnon suggests raising awareness among users of the significance of civil liberties in order to improve the situation. Once users recognize the importance of online speech, it is expected that they will pressure corporations to refrain from acceding to such government “requests” and to be more transparent about their activities vis-à-vis the government.

To Regulate or Not to Regulate

Keeping these issues in mind, can programmers be made to be sensitive to civil liberties through legislation?

In this regard, corporations argue that regulation of software will stifle innovation. For instance, if legislation to protect consumer privacy prohibited the use of user data, programmers would be restricted from designing solutions tailored to individual consumer needs. Programmers wouldn’t be able to use location data to find restaurants in the vicinity. Advertisements wouldn’t reach their intended audience, causing inefficiencies.

Further, corporations argue that legislators aren’t suited to deal with the complexities of technological innovation. Corporations contend that legislators’ lack of expertise in technology would likely lead to decisions that are counterproductive for all stakeholders involved.

Notwithstanding the corporations’ contentions, and the fact that democratic governments aren’t devoid of blemishes in the realm of online speech regulation, there at least exist accountability measures to bring governments to task. While I have attempted to provide a brief overview of the debates surrounding regulation, the jury is still out on the question.

Again, MacKinnon suggests an alternate route of raising public awareness around civil liberties as a possible solution. When the market doesn’t inherently value civil liberties and legislators aren’t suited for intervening, awakening consciousness in consumers would push corporations to respect civil liberties and build protections into the design of their products.

Even in the status quo, $3 trillion of the $25 trillion U.S. market is sourced from socially responsible investors, and with greater awareness, investors might place greater value upon the protection of civil liberties.

Conclusion

Here, we have attempted to (a) understand the extent of influence that corporations, through code, exercise over netizens, (b) comprehend the consequences of foreign corporations exercising such influence, (c) analyse the concerns of legitimacy surrounding “Digital Sovereigns” and (d) understand the debate surrounding governmental regulation of cyberspace.

MacKinnon discussed Net Neutrality as well, but I have dealt with the same themes here.

 


Read more

Network Neutrality Around The World : A Basic Overview

Posted on July 26, 2018 (updated November 12, 2019) by Prateek Surisetti

Network Neutrality (NN) refers to a network wherein participants are effectively blind to the nature of data flowing through the network. Another way of defining NN is as a network wherein participants are restricted from treating data flows differentially. The definitions provided above are, in cliché speak, two sides of the same coin: even if a participant can distinguish the nature of data flowing through a network, the participant is considered effectively blind so long as it doesn’t interfere with the data flow. I have discussed the basic concepts and issues surrounding NN here.

A particular manner of interference includes discriminatory pricing, wherein Internet Service Providers provide different bandwidths to content providers (Netflix, YouTube, et cetera) based upon the extent of payment received.
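A toy allocator makes the contrast concrete (the numbers and names are illustrative assumptions, not real tariffs): a neutral ISP is blind to who the traffic belongs to and splits capacity evenly, while a discriminating ISP weights bandwidth by what each content provider pays.

```python
def allocate(capacity_mbps, flows, neutral=True):
    """Split link capacity across content-provider flows.

    Each flow is a (name, payment) pair; under neutrality, payment is ignored.
    """
    if neutral:
        share = capacity_mbps / len(flows)
        return {name: share for name, _ in flows}
    total_paid = sum(paid for _, paid in flows)
    return {name: capacity_mbps * paid / total_paid for name, paid in flows}

flows = [("big_incumbent", 90), ("small_startup", 10)]

print(allocate(100, flows, neutral=True))   # both flows get 50.0 Mbps
print(allocate(100, flows, neutral=False))  # 90.0 Mbps vs 10.0 Mbps
```

The discriminatory case is what the Indian debate centred on: the provider that pays more crowds out the one that cannot, regardless of what users actually want.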

In India, the debate took off when Airtel charged customers for voice calls made over apps like WhatsApp, Skype and other OTT (Over-The-Top) services. This was bound to happen, as most Internet Service Providers (Airtel among them) also control traditional services such as telephony, whose profits are diminishing due to content providers like Skype.

TRAI, the Telecom Regulatory Authority of India, after releasing a consultation paper in March 2015, prohibited discriminatory pricing through the “Prohibition of Discriminatory Tariffs for Data Services Regulations, 2016”. For theoretical clarity, we need to understand that the absence of discriminatory pricing doesn’t necessarily mean that a network is neutral, but the majority of the debate has been framed as discriminatory pricing versus NN, and therefore the prohibition of discriminatory pricing has been characterized as NN. Essentially, any interference with data traffic is antithetical to network neutrality, whether on the basis of discriminatory pricing (allowing those paying more a greater bandwidth) or through other mechanisms (e.g. blocking the flow of data for vested political or commercial interests).

Various experts in the field had suggested that the Indian government go the US way, i.e. effectuate rules only after seeing how the absence of NN rules plays out, rather than regulating from the beginning. But TRAI thought otherwise.

In the USA, ISPs are required to declare their network management practices and performance characteristics, in addition to there being a restriction on blocking lawful content and unreasonable discrimination. Though this is a step forward, the Federal Communications Commission (the US regulatory body) has left mobile network service providers out of the ambit of its regulations, for unknown reasons.

The EU, or European Union, now requires ISPs to fully disclose all their network management practices. The simple principle followed is that customers should know what they are getting into; the regime therefore focusses on reducing information asymmetry. Essentially, the field has been left open for further regulation in the event that anti-competitive practices set in in the future.

At the moment, only India, Chile, Netherlands and Brazil unequivocally espouse the principles of Net Neutrality in their laws.

Chile, despite having very low internet penetration of around 28%, has called for an out-and-out application of Net Neutrality. Further, it has even banned “zero rating”. Zero rating is an arrangement whereby successful and financially mighty content providers such as Facebook agree with ISPs to provide end users free access to their websites (neither the ISP nor the content provider charges a fee). Though NN would allow upcoming companies to compete better with established content providers such as Facebook, banning zero rating also reduces the chances of increased internet penetration (which would otherwise have reached the populace free of cost). Though the outright imposition of NN laws might seem beneficial, considering Chile’s low internet penetration, I feel it would have been better to start regulating a little later.

The Netherlands, on the other hand, has almost 100% internet penetration. In this case, it makes more sense to adopt a neutral internet. Nevertheless, other considerations, of a distinct kind, are at play here. Most of the large content providers are based in the US, and European companies would have been put at a disadvantage had NN not been pushed for. Additionally, another downside of furthering NN has been increased tariffs for voice calls.

The Brazilian government has restricted corporations not only from charging differentially, but also from collecting metadata. Furthermore, corporations can be held accountable even if they store the data abroad, so companies such as Yahoo and Bing, based in the US, can also be held to account. However, enforcement concerns have been raised.

CONCLUSION

Over the course of this piece, we have reviewed the varying degrees of NN protection afforded by different jurisdictions. We can classify them into two categories:

  1. Absolute NN Regime (e.g. India, Chile).
  2. No restrictions with focus on reducing information asymmetry (e.g. EU).

Finally, I would like to direct your attention to www.thisisnetneutrality.org, which provides a beautiful interactive map/infographic that displays NN information for jurisdictions across the world. Apart from categorizing countries into those with NN protections, those considering protection and those with no protections, the infographic provides additional information regarding each individual jurisdiction too.

Chief References:

Refer here, here, here and here.

 


Read more

Is Protecting Internet Intermediaries and Forgetting their Users Wrong

Posted on March 1, 2018 by Tech Law Forum @ NALSAR

Ed Note: The following is a guest post by Abhijeet Singh Rawaley, a student of NALSAR University of Law.

The law surrounding online intermediary liability in cases concerning copyright infringement has posed a major interpretive challenge in Indian jurisprudence. The division bench of the Hon’ble High Court of Delhi attempted to resolve the same in its December 2016 judgment in MySpace Inc. v. Super Cassettes Industries Ltd. While the case dealt with a host of issues in copyright law, this post limits its analysis to a critique of the judgment’s discussion and holding concerning the role played by online intermediaries. It is devoted to understanding how we can create a framework of internet governance that not only protects intermediaries where they merit protection from liability, but also makes them more accountable and responsible actors wielding significant command over a valuable medium such as the internet. The interpretative impediment in the case arose from the prima facie discord between Section 79 and the proviso to Section 81 of the Information Technology Act, 2000.

Both provisions were introduced by the landmark amendment of 2008 (in effect from 2009), which transplanted the “safe harbour” provisions and the “notice-and-takedown” regime into Indian internet law. The seminal significance of the exemption granted to online intermediaries for third-party content hosted by them cannot be disputed. Intermediaries are the channels, forums and platforms through which the internet becomes what it is. Section 2(1)(w) of the IT Act defines them as entities which “receive, store or transmit” information on behalf of any other person through the internet, and then provides a non-exhaustive illustrative list of “telecom service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-market places and cyber cafes” as intermediaries. In Rishabh Dara’s words, online “intermediaries are widely recognized as essential cogs in the wheel of exercising the right to freedom of expression on the Internet”. Provided that they fulfil certain due-diligence conditions, Section 79 exempts intermediaries from liability arising out of any third-party content hosted by them.

Underscoring the primacy of the IT Act, Section 81 enacts that the Act “shall have effect notwithstanding anything inconsistent therewith contained in any other law for the time being in force.” However, the exemption granted by Section 79 and the non obstante clause in Section 81 are seemingly undercut by the proviso to Section 81, which carves out a special protection, over and above the IT Act, for the operation of the rights conferred by the Copyright Act, 1957 and the Patents Act, 1970. The sections were misconstrued to mean that intermediaries, regardless of their due diligence, could still be held liable for copyright- and patent-infringing third-party content hosted by them.

In specific reference to technology law, MySpace v. SCIL principally dealt with the question of whether the proviso to Section 81 negates and nullifies the exemption granted to intermediaries by Section 79. The Court held that it does not because, in its view, Section 79 grants only a “measured privilege” in favour of an online intermediary and not a “blanket immunity from liability.” The Court did not, however, distinguish the actors on the internet landscape by the roles they play. While Section 79 exempts only intermediaries, it leaves the door open for third-party uploaders to be held liable for the copyright-infringing content they upload. It also implies that an intermediary remains liable for wrongful content that it itself uploads.

Rationalizing the insertion of the proviso to Section 81, the Court observed that, but for it, copyright holders “would have been unable to pursue legal recourse” to protect their copyrights. This shows how the Court, without explicitly referring to it, applied the common law principle of ubi jus ibi remedium (where there is a right, there is a remedy) to the virtual space. It read Section 79 harmoniously with the proviso to mean that the former restricts claims of copyright (and concomitantly the remedies that flow from it) against intermediaries as a specific class of actors on the internet. Of course, this exclusion of claims against intermediaries does not happen by itself. The scheme of the law manifests a quid pro quo: intermediaries can avail the protection of Section 79 only by fulfilling the conditions set out in Sections 79(2) and (3) and abiding by the due-diligence framework created in the Information Technology (Intermediaries Guidelines) Rules, 2011, as clarified in 2013. This framework builds on the principle of self-regulation and mandates intermediaries to put in place specific “rules and regulations, privacy policy and user agreement for access-or usage of the intermediary’s computer resource” by its users. It makes intermediaries the arbiters of complaints received from any person aggrieved by information hosted by them that falls within the four corners of Rule 3(2) of these Rules.

Apart from covering cases of intellectual property infringement, the ambit of information subject to control under the 2011 Rules ranges from blasphemy to defamation to obscenity, to information that may be invasive of another’s privacy or that may threaten the unity, integrity, defence, security or sovereignty of India. What is most problematic about the process of controlling information within the ambit of Rule 3(2) is that the ‘notice and takedown’ regime established by Rule 4 does not mandatorily require forwarding the notice to the user alleged to have uploaded the objectionable content. The regime only suggests that the intermediary may work either with the uploader of the information or with the complainant. Since the complainant is involved from the very first instance of its complaint, there is no countervailing obligation on intermediaries to approach the users who uploaded the allegedly objectionable content. Hence, the regime to protect the affected person’s rights fails to take into account the uploader’s perspective and makes a mockery of the cherished principle of audi alteram partem (hear the other side). While the requirement to hear the other side may have little value in grave cases, such as the publication of content that is pornographic or invasive of someone’s privacy, it is nonetheless important in cases of copyright infringement. When the uploader of allegedly infringing content is given no chance to respond to a takedown notice, his possible plea of ‘fair use’ is altogether ignored. It is therefore submitted that the notice-and-takedown regime should be overhauled to mandatorily take into account uploaders’ responses to the notices received. This would ensure that the virtual web space becomes fairer to the different stakeholders it caters to. It might be argued that ordinary contractual remedies give the uploader (the intermediary’s user) some recourse by alleging breach of his contract with the intermediary. But pursuing those remedies entails only a post-facto engagement between the two stakeholders, and it is far less efficient than administering publication and removal of content directly through a reworked takedown mechanism.
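The proposed overhaul can be pictured as a simple workflow. The sketch below is purely illustrative: the class and function names are hypothetical and are not drawn from the IT Rules or from any platform’s actual system. It shows a takedown pipeline in which forwarding the notice to the uploader and recording a response (such as a ‘fair use’ plea) is a mandatory first step before any content is disabled, rather than the optional engagement that Rule 4 currently contemplates.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TakedownNotice:
    """A hypothetical complaint under Rule 3(2) (illustrative only)."""
    complainant: str
    content_id: str
    ground: str                              # e.g. "copyright infringement"
    uploader_response: Optional[str] = None  # filled in after hearing the uploader

def process_notice(notice: TakedownNotice,
                   notify_uploader: Callable[[TakedownNotice], Optional[str]],
                   disable_content: Callable[[str], None]) -> str:
    # Step 1 (mandatory under the proposed regime): forward the notice to the
    # uploader and record any response before deciding anything.
    notice.uploader_response = notify_uploader(notice)

    # Step 2: decide only after both sides are heard (audi alteram partem).
    # A fair-use plea flags the content for adjudication instead of removal.
    if notice.uploader_response and "fair use" in notice.uploader_response.lower():
        return "retained pending adjudication of fair-use plea"

    # No response (or no defence raised): the content is disabled.
    disable_content(notice.content_id)
    return "disabled"
```

The point of the sketch is structural: the uploader notification is not a branch the intermediary may skip, but a step the pipeline cannot proceed without.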

Hence, while we focus on protecting intermediaries from liability for third-party content, we must not ignore their larger-than-life dominance over the internet. While the judgment in MySpace v. SCIL is indeed laudable in protecting intermediaries, it pays only lip service to the question of protecting the users who upload information. While the Court recognized that intermediaries provide the channels of information, one must not forget that these channels would be worth nothing without the information they host. There is a need to make intermediaries more responsible and accountable in protecting and promoting the speech and expression of internet users. Our support for exempting intermediaries from liability for third-party content should therefore be resolute, but it must not come at the cost of muting and ignoring their users, the ordinary people using the internet.


© 2023 Tech Law Forum @ NALSAR | Powered by Minimalist Blog WordPress Theme