Network Neutrality (NN) refers to a network whose participants are effectively blind to the nature of the data flowing through it. Another way of defining NN is a network whose participants are restricted from treating data flows differentially. These two definitions are, in effect, two sides of the same coin: even if a participant can distinguish the nature of the data flowing through a network, the participant is considered effectively blind so long as it does not interfere with the data flow. I have discussed the basic concepts and issues surrounding NN here.
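The distinction between the two definitions can be made concrete with a toy sketch (plain Python; the class and field names are hypothetical and do not reflect any real router or carrier API). A neutral participant applies the same rate to every flow; a non-neutral one inspects the traffic and treats some flows differently:

```python
# Toy illustration of neutral vs. differential treatment of data flows.
# All names here are made up for the example.

from dataclasses import dataclass

@dataclass
class Packet:
    source: str
    kind: str   # e.g. "video", "voip", "web" -- visible to the carrier
    size: int   # bytes

def neutral_rate(packet: Packet, base_rate: int) -> int:
    # A neutral network forwards every packet at the same rate,
    # regardless of what the data is or who sent it.
    return base_rate

def throttled_rate(packet: Packet, base_rate: int) -> int:
    # A non-neutral network distinguishes the nature of the data and
    # acts on it -- here, video from a (fictional) rival is slowed down.
    if packet.kind == "video" and packet.source == "rival-streaming.example":
        return base_rate // 10
    return base_rate

p = Packet(source="rival-streaming.example", kind="video", size=1500)
print(neutral_rate(p, 1000))    # 1000 -- same rate for everyone
print(throttled_rate(p, 1000))  # 100  -- differential treatment
```

Note that `throttled_rate` can *see* the packet's kind and source; the network is still "effectively blind" in the NN sense only if it never acts on that information, which is exactly the equivalence drawn above.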
Month: July 2018
Huawei v ZTE: SEPs, Injunctions and the Points of Interface between the ECJ Case and Indian Jurisprudence: Part I
[Ed Note: This post is the first part of a two part series authored by Vaibhav Laddha, a student of NALSAR University of Law.]
Technology product markets today are inherently international. Products designed in Germany may be manufactured in Korea or China and sold in India. This cross-cutting, global nature of technology products has created a need for standardisation to ensure technical interoperability. Standards which ensure this include WiFi (wireless networking), MP3 (digital content encoding) and 4G (wireless telecommunications). These standards reduce communication costs and increase efficiency. For this reason, various standard-setting organisations (SSOs) have been formed, which primarily facilitate coordination between the different stakeholders in a market by setting standards.
Huawei v ZTE: SEPs, Injunctions and the Points of Interface between the ECJ Case and Indian Jurisprudence: Part II
[Ed Note: This post is the second part of a two part series authored by Vaibhav Laddha, a student of NALSAR University of Law. The first part can be found here.]
The Indian telecommunications market is one of the largest in the world, and is therefore an important market for the key participants in the telecommunications industry. Indian jurisprudence on FRAND practices for SEPs is underdeveloped at this stage, consisting of a handful of decisions by the Delhi High Court and the Competition Commission of India. The rules that govern SEPs have not been clearly defined, and the positions adopted by the Delhi High Court and the Competition Commission of India have differed greatly.
YouTube and Censorship
Since February, YouTube has been shutting down channels that talk about marijuana in any shape or form. Around the second week of May, the disappearance of several channels at once sent users into a panic. When they reached out to the company, YouTube provided arbitrary reasons, citing violations of its community guidelines. However, the videos in question appeared to adhere to those guidelines, making their removal all the more questionable. Even Marijuana Televisión, a Spanish channel that focuses on the medicinal use of marijuana and has no association with negative portrayals of the drug, was taken down. Several content creators have spoken out against these arbitrary shutdowns.
A user by the name of Paul Joseph Watson uploaded a video critiquing the music video of Childish Gambino’s song ‘This is America’, claiming that it propagated a social justice narrative aligned with the popular ‘Black Lives Matter’ movement without paying heed to key facts that contradicted that narrative. YouTube blocked the video on the grounds that it contained ‘content that may be inappropriate or offensive’. It is worth noting that the original music video never received a warning of any kind, and is going strong on YouTube with over 329 million views. Watson took to Twitter to announce that his video had been blocked. While YouTube later reinstated the video, the episode raised questions about what kinds of videos YouTube censors.
In 2017, PragerU sued the company for allegedly censoring conservative videos, citing its right to upload content under the First Amendment. However, the lawsuit went in YouTube’s favour, with the judge ruling that YouTube did not operate as a ‘state actor’ subject to the First Amendment. The primary question the court dealt with was whether private companies like YouTube fell within the ambit of the First Amendment. District Judge Lucy Koh cited Lloyd Corp v Tanner, which held that a mall was allowed to prevent citizens from distributing handbills on its premises, as precedent for the decision, departing from Marsh v Alabama, in which an appellant was allowed to distribute religious texts in a privately owned company town. The judgment further stated that YouTube was not a ‘public forum’ in which speech is protected under the First Amendment.
Not classifying YouTube as a public forum is problematic, as in recent times YouTube and other social media have come to be important mediums for the public to discuss and debate issues at large. This viewpoint has been echoed by the Supreme Court in Packingham v North Carolina, where it observed that the relationship between the First Amendment and the Internet can no longer be considered static and that the courts could not arbitrarily ban people from using parts of the Internet. Social media sites have grown to be an integral part of people’s lives, and as such constitute a ground for discussion on important topics. It is therefore unreasonable, in light of their important stature in society today, to exclude them from being classified as public forums.
It is true that companies like YouTube are privately owned and must protect their business interests and reputation. However, if the law gives them the ability to regulate content to the extent of striking out information contrary to their views, they are essentially being handed a free hand to set stringent rules on what content is and is not allowed on the site. One argument commonly tossed around is that if social media sites were considered public forums, their own community guidelines would themselves violate the First Amendment, since regulating what people say is the very antithesis of the purpose of the law. This argument is flawed, however, because some basic regulation of what is allowed on a site is necessary: the removal of hate speech, misleading or discriminatory information and other malicious content. The removal of content should not extend to respectful, well-thought-out opinions on contemporary issues.
George Washington once said, “If freedom of speech is taken away, then dumb and silent we may be led, like sheep to the slaughter.” While he said it in the context of framing a constitutional order for America, his words ring true in contemporary times as well: freedom of speech remains one of the most essential qualities of the modern age. Social media is a place where all of us can express whatever we want, and instead of unreasonable restrictions on what thoughts we may project, balanced regulation can be achieved at the most basic level. If opinions are stifled and one side of the debate is cut out, it leaves an unfinished picture, which can bring about conjecture and assumption at the very least, and misconduct of the highest order at worst.
Regulation of Artificial Intelligence: The Way Ahead
The above examples are just a few of the more glamorous applications of what is called “artificial intelligence” (AI). AI is the simulation of human intelligence processes by computer systems; these processes include learning, computing, reasoning and the like. AI-powered machines are programmed using mathematical algorithms that can discover patterns and generate insights from the data they are exposed to. These algorithms enable them to perform certain tasks that have been mathematically “fed” into their “brain”, thereby dictating their working. The global AI market is estimated to grow at 36.1% from 2016 to 2024 to reach a valuation of 3,061 billion USD. Thus, the staggering potential of AI technology cannot be overstated, as will also be seen through the scope and impact of its applications.
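What it means for an algorithm to "discover patterns from data it is exposed to" can be illustrated with a deliberately tiny sketch (plain Python, no real AI library; the data and function names are invented for the example). The decision rule is not hand-coded; it is derived from labelled examples:

```python
# Minimal sketch of "learning a pattern from data": a one-feature
# classifier that picks its decision threshold from labelled examples
# rather than having the rule written in by a programmer.

def fit_threshold(examples):
    # examples: list of (feature_value, label) pairs, label 0 or 1.
    # The "learning" step: place the threshold midway between the
    # largest value seen for class 0 and the smallest for class 1.
    max0 = max(x for x, y in examples if y == 0)
    min1 = min(x for x, y in examples if y == 1)
    return (max0 + min1) / 2

def predict(threshold, x):
    # Apply the learned pattern to new, unseen data.
    return 1 if x > threshold else 0

# Toy training data: small values belong to class 0, large to class 1.
data = [(1.0, 0), (2.0, 0), (3.0, 0), (7.0, 1), (8.0, 1), (9.0, 1)]
t = fit_threshold(data)   # 5.0 -- discovered from the data, not hard-coded
print(predict(t, 2.5))    # 0
print(predict(t, 7.5))    # 1
```

Real AI systems use vastly more elaborate algorithms and millions of parameters, but the principle is the same: the behaviour is dictated by what the data "feeds" into the model, not by an explicit rule.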
There are three main categories of AI – Narrow AI, General AI and Super AI – in increasing order of complexity and automation capacity. Narrow AI, that is, machines which fulfil a particular purpose only, is surprisingly pervasive in our daily lives – from “Cortana” on Windows operating systems to IBM’s “Watson”. General AI, largely not yet achieved, refers to machines that can perform several automated tasks at human capacity but are still guided by human control; an example is Tony Stark’s “Jarvis” in the Iron Man movies. Super AI, also not yet achieved, refers to fully automated robots with capacity beyond human ability – in other words, superhumans, like Arnold Schwarzenegger’s character in “Terminator”.
With high potential of use invariably comes high potential of misuse, as illustrated by Cambridge University’s ground-breaking report co-authored by 26 renowned experts on technology. Apart from technological errors, AI-powered machines are at constant risk of being hacked and used maliciously. For instance, the program of a driverless car can be tampered with to turn it into a kill machine. AI can also be used to create highly realistic fake audio and video to induce certain consequences, such as the alleged use of pro-Trump bots during the previous elections and their impact. Further, AI machines make our day-to-day lives more vulnerable to invasion of privacy, as it becomes that much easier to keep a secret watch on us at all times – an example being AI-powered machines used for state-sponsored underhand surveillance, as is allegedly happening in China. To cut a long story short, the actual and potential ways to misuse AI are unending. This is where the possibility of regulation arises.
It is important to note that there already exist laws which regulate the use of AI, such as those on privacy, data protection and cybersecurity. For example, even under the current regulatory framework, using an AI-powered machine to snoop into people’s houses will attract punishment under the relevant laws. Thus, the presence of existing regulation narrows the regulatory scope of AI. In this light, the requirement for regulation now arises either to cope better with the current situation or to cope with a completely new situation unforeseen by the existing framework.
Looking at the present stage of AI’s development, the need of the hour is specific rules to cope better with the current situation, just as there are safety guidelines for hazardous industries or permissible lead levels in foodstuffs. Similar industry-centric rules, like technical guidelines and disclosure requirements based on the nature of particular AIs, will go a long way in ensuring accountability today. Such targeted regulation has several advantages, as has been seen with the USA’s driverless-car regulatory policy. Firstly, specific regulation is an explicit statement by the law that it has an eye on AI, which will deter potential misusers. Secondly, regulation will help achieve quality control; for example, it will ensure quality driverless cars that actually help reduce accidents by eliminating the human errors associated with driving. Thirdly, and most importantly, regulation will increase people’s faith in AI. A person is more likely to sit in a driverless car which he knows must legally meet certain safety standards than in one which need not, since the former gives him a better assurance of safety. Further, if people become more secure about the AI machines they presently use, the popularity of AI will only increase, leading to a win-win situation. Thus, specific regulation will actually boost the AI industry rather than pull it down.
Having established the requirement of specific rules to beef up the current framework, the question now is whether, at this stage, additional rules are required in order to be prepared for a change in the current situation – that is, the development of General and Super AI, machines which operate at or beyond human capacity. In this regard, there are fears that superhuman robots might escape human control and prove an existential threat to humanity, much like the way Dr. Banner’s experiments in the “Hulk” movie turned out. Proponents of this “existential threat” theory have compared AI experiments to “children playing with a bomb”. As Elon Musk puts it, if AI goes out of human control, it would be an “immortal dictator from which we can never escape.” What Musk and several others, including Stephen Hawking, are saying is that because of the destructive potential of uncontrolled AI, its regulation should be proactive rather than reactive. It is thus argued that the mere possibility of out-of-control robots (whenever they might develop) is enough to justify regulating now, the risk of unpreparedness being too great.
On the other hand, opponents of the “existential threat” theory, like Mark Zuckerberg and Bill Gates, argue, firstly, that such a situation is much too far-fetched and uncertain. Secondly, even if we decide to regulate, there is simply nothing specific to regulate against; such regulation is like shooting arrows in the dark. Legislation requires some kind of base, and that is absent here, since there is no concrete idea of when such robots will develop or what their characteristics will be. We cannot overreact and make laws, especially when regulation comes at the cost of curbing innovation. Any law regulating AI will have the effect of limiting the freedom to experiment and curtailing technological growth, just as imposing sanctions on news channels has a chilling effect on the freedom of speech and expression. Further diminishing the need to regulate is the fact that the wide but uncertain scope of AI already induces an inbuilt chilling effect: to a scientist, irrespective of the legal framework, the fear that he might create armed cyborgs is a chilling effect in itself. Moreover, each country will try to outdo the others in AI development, creating a prisoner’s dilemma that makes domestic regulation all the more difficult. Thus, the “existential threat” theory does not prompt regulation today.
The above arguments present a regulatory dilemma. While on one hand it is imperative to be prepared before it is too late in case the “existential threat” theory comes true, there is no concrete way to know what exactly to prepare against and when. In other words, it is too early to regulate AI now but there is no saying when it will become too late to regulate AI.
The answer to this dilemma is to take the middle ground: develop broad general principles (say, the mandatory presence of a “kill switch” in every AI machine) and wait for the right time to come up with specifics – this “right time” being when technological progress clearly points towards the development of advanced forms of AI, with some clue as to what their specific characteristics will be. Existing principles along these lines include Asimov’s evergreen “Three Laws of Robotics” and the recently formulated Asilomar AI Principles, a set of 23 guidelines to curb potential AI-related harm signed by about 1,200 AI researchers and over 2,300 others. Greater, more inclusive collaboration at the international level is essential to formulate more such principles that will ensure preparedness to cope with the future.
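The "kill switch" principle can be sketched in a few lines (a hypothetical illustration, not a real standard or anyone's actual implementation; `threading.Event` simply stands in for whatever external interlock a regulation might mandate). The essential design choice is that the agent checks the externally controlled flag before every action, so the override always pre-empts the agent's own logic:

```python
# Hypothetical sketch of a mandatory "kill switch": the agent's control
# loop cannot take any action unless an externally writable flag permits it.

import threading

class Agent:
    def __init__(self):
        self.enabled = threading.Event()
        self.enabled.set()          # running by default
        self.steps = 0

    def kill(self):
        # The external override: once cleared, the loop below stops
        # before the next action, regardless of the agent's "goals".
        self.enabled.clear()

    def run(self, max_steps=1000):
        while self.steps < max_steps:
            if not self.enabled.is_set():   # checked before every action
                break
            self.steps += 1                  # stand-in for a real action

agent = Agent()
agent.kill()
agent.run()
print(agent.steps)  # 0 -- the kill switch pre-empts every action
```

A broad principle of this kind says nothing about what the agent does, which is precisely why it can be laid down now, before the characteristics of advanced AI are known.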
To conclude, AI should presently be regulated in two ways – through specific industry-centric rules and universal general principles. While the former will help tighten the existing framework and ensure greater accountability and public safety, the latter will keep mankind prepared for future developments in the AI field, considering that the consequences of neglecting possible AI development are too severe. Such a balanced regulatory framework will thus help cope with both the realities and the implications of AI progress.
Other supporting links:
https://analyticsindiamag.com/russia-prepares-future-wars-array-ai-based-arsenal/
https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html
http://issues.org/33-4/perspective-should-artificial-intelligence-be-regulated/