[Ed Note: The following post is part of the TLF Editorial Board Test 2019-20. It has been authored by Rhea Reddy, a third year student of NALSAR University of Law.]
Recently, Facebook announced its plans to develop a full-body virtual reality system. The company aims to create life-like avatars of users to provide a more immersive social media and gaming experience. These detailed avatars will be brought into VR simulations so that users can play sports or interact with each other in the (digital) flesh. The avatars are intended to be anatomically accurate, down to the last detail of muscle and skin, and would replicate users' real-time movements, clothing, and facial expressions. Though this technology may be a long way from implementation, it would be prudent to discuss its legal implications now, given the threat it poses, particularly to democratic processes.
VR has already raised concerns about privacy and data collection by companies, which can be read about here. However, in this post, the focus is on how Facebook's proposed technology would help propagate fake news and stifle dissent within a country. The unchecked spread of fake news, primarily through social media, is currently a major concern for many countries. It has reached the point where there have been multiple instances of people being lynched based on rumours forwarded through WhatsApp. Facebook's failures to tackle fake news are also widely known; it does not even have fact-checking partners in countries like Hungary. In addition, its algorithm traps users in 'filter bubbles' based on the content they have previously engaged with, tailoring their news feeds to their interests. Users are then able to access only news and other information that conform to their existing views. This opinion-based segregation makes it difficult for legitimate journalism to counter the spread of fake news, as the potential for exposure to conflicting viewpoints is greatly reduced.
Virtual Reality would only add to this fake news problem. Ordinarily, people form memories and opinions based on real-life experiences and observations. VR, however, blurs the divide between real life and simulation by immersing users in an experience they perceive to be real. In other words, VR isn't real, but it feels real. For this reason, it can have long-term psychological impacts on users even after they leave the virtual world. Facebook's proposed technology goes a step further, developing life-like simulations of users and their environments. By presenting users with an 'objective' reality, it would have an intense immersive effect that could be used to manipulate the minds, emotions, and consequently the behaviour of users. Fake news would then be perceived as 'objectively' experienced in real life, rather than merely seen on a screen.
In this way, the proposed VR technology can control what users experience and thereby effectively sell them a particular message. Life-like avatars of people, ranging from politicians to celebrities, can be manipulated to propagate a false message or a certain viewpoint. Before long, news events could be simulated using the avatars of news anchors. With Facebook looking to emulate even a user's body language and movements, these simulations would be nearly indistinguishable from the real users. This problem is exacerbated when malicious actors make use of VR to propagate their agendas. For instance, Russia has already interfered with a United States presidential election by spreading articles on social media. If it were subsequently to obtain an influential individual's body-mapped data, it could use that information to manipulate the masses. The proposed technology would therefore allow far greater damage to be inflicted upon democratic processes.
With the very question of what constitutes reality at stake, and given the extent of harm this proposed technology may cause, the need for its regulation becomes all the more pressing. Without legal sanctions, one cannot hope for Facebook to remove fake news on its own, especially since it has previously refused to do so. However, any attempt at prescriptive legislation that aims to block content before it is even posted would threaten the right to freedom of speech. This is harmful because it may allow for the censorship of legitimate journalism before it can be verified as real news, thereby impeding the discovery of truth through open discussion. This, in turn, may lead to self-censorship and have a chilling effect on citizens. Therefore, prescriptive legislation should not be used to address such a complex issue.
In the past, governments have not responded adequately to the challenges created by new technology. As a case in point, existing laws in India are insufficient to deal with the fake news problem. Instead of being mandated to comply with positive controls, social media platforms have been provided a safe harbour by Section 79 of the IT Act, 2000. This section protects companies like Facebook from liability for the actions of their users unless they are made aware of a particular post on their platform. In addition, companies need only observe an ambiguously defined requirement of 'due diligence' while discharging their duties under the Act. Further, such companies are required to censor content only when directed to do so by a court. Censorship of fake news can therefore occur only after delayed bureaucratic and legal processes. Given the speed at which misinformation spreads, and the intensity of immersion and manipulation that VR technology enables, this delayed reactive process is largely ineffective.
Due to the inadequacy of current reactive legislation, there is a need for more effective regulation. But since such regulation can potentially be misused, great care must be taken before introducing laws regulating fake news in democratic countries. This need for caution becomes more apparent when observing previous attempts to regulate fake news in countries such as Singapore, Germany, and Russia. Singaporean law-makers attempted to deal with fake news by forcing corrections to be added to online content that they deemed false. These corrections would not affect the original content of the articles, but would instead place the facts next to the falsehood. But how would this apply to videos with life-like simulations? Moreover, even if textual disclaimers could be inserted into every fake simulation distributed on all platforms, mere text would do little to counter the emotional impact of purposefully evocative virtual reality simulations. Further, in Germany and Singapore, authorities have found it difficult to differentiate misinformation and hate speech from satire. In Russia, the government is even allowed to block sites and delete articles with which it disagrees by branding them as 'fake news'. The restrictive frameworks currently in place across the world are therefore largely inadequate, and they allow governments to take on authoritarian characteristics.
In this way, regulation may allow the government to become an arbiter of truth, giving authorities the power to control what is shown on social media platforms. The Netflix series Black Mirror has already depicted a preview of this future of VR and augmented reality [AR]. Its episode titled 'Men Against Fire' focuses on AR technology that makes soldiers see their 'enemies' as aggressive mutants dubbed 'roaches'. In reality, these roaches were terrified citizens whom the government had deemed genetically inferior. By programming soldiers to see these citizens as the enemy, the technology allowed the government to make unknowing soldiers key pawns in a genocide. Even though this is a drastic example, it shows how VR in the hands of governments, religious and cultural authorities, and the like has the potential to obscure the truth and further a particular agenda.
Until a more comprehensive regulatory framework, one that includes checks on authoritarian tendencies, comes into force, certain measures may be adopted to improve societal resistance to fake news. Firstly, governments could limit the liability protections offered to intermediaries. Among other things, this may be done by requiring companies to censor fake news on their own or by holding them accountable for the defamation of individuals on their sites. Increased liability may then encourage intermediaries to better screen the content they permit on their platforms. Secondly, companies could commit to editing any simulated avatars of users so that they cannot be confused with the real versions. Lastly, media literacy could be promoted. A project launched in Italy aimed to teach citizens, as part of the country's high school curriculum, how to identify suspect URLs and how to reach out to experts online. Such efforts can help citizens themselves become more aware of potential falsehoods.
In conclusion, advancements in VR may allow videos to be manipulated by changing how real people appear to behave. These fake videos have the potential to trigger inter- and intra-state conflicts. In the guise of protecting citizens from such videos, governments may resort to the dangerous weapon of censorship. For these reasons, solutions that prevent the spread of fake news at the outset must be devised, without compromising the right to free speech or allowing the country to devolve into authoritarianism. Only then would Virtual Reality not threaten the collapse of objective reality.