Welcome to our fortnightly newsletter, where our reporters Kruttika Lokesh and Dhananjay Dhonchak put together handpicked stories from the world of tech law! You can find other issues here.
Zoom sued by shareholder for ‘overstating’ security claims
In 2006, Clive Humby, a British mathematician, said with incredible foresight that “data is the new oil”. Fast forward to 2019, and we see how data has been singularly responsible for big-tech companies approaching and surpassing the trillion-dollar valuation mark. The ‘big four’ tech companies (Google, Apple, Facebook and Amazon) have incredibly large reserves of data, both in terms of data collection (owing to the sheer number of users each company retains) and in terms of access to the data that is generated through this usage. With an increasing number of applications and avenues for data to be used, the need to standardize the data economy manifests itself strongly, with more countries recognizing the need for specific laws concerning data.
Standards may be defined as technical rules and regulations that ensure the smooth working of an economy. They are required to increase compatibility and interoperability, as they set up the framework within which agents must work. With every new technology that is invented, the question arises as to how it fits with existing technologies. This question is addressed by standardization. By determining the requirements to be met for safety, quality, interoperability and so on, standards establish the moulds into which newer technologies must fit. Standardization is one of the key reasons for the success of industrialization. Standardization associations have helped economies function by assuring consumers that the products being purchased meet a certain level of quality. The ISO (International Organization for Standardization), BIS (Bureau of Indian Standards), SCC (Standards Council of Canada) and BSI (British Standards Institution) are examples of highly visible organisations that stamp their seal of approval on products that meet a publicly set level of requirements under their regulations. There are further standard-setting associations that specifically look into the regulation of safety and usability of certain products, such as food, electronics, automobiles etc. These standards are deliberated upon in detail and are based on discussions with sectoral players, users, the government and other interested parties. Given that they are generally arrived at by consensus, the parties involved are in a position to benefit by working within the system.
Currently, the data economy functions without much regulation. Apart from laws on data protection and a few other regulations concerning storage, data itself remains an under-regulated commodity. While multiple jurisdictions are recognizing the need for laws concerning data usage, collection and storage, it is safe to say that the legal world still needs to catch up.
In this scenario, standardization provides a useful solution, as it seeks to ensure compliance by emphasizing mutual benefit, as opposed to laws that penalize non-adherence. A market player in the data economy is bound to benefit from standardization, as they have readily accessible information regarding the compliance standards for the technology they are creating. By standardizing methods for the collection, use, storage and sharing of data, the market becomes more open because of the increased availability of information, which benefits players by removing entry barriers. Additionally, a standard mark pertaining to data collection and usage gives consumers the assurance that the data being shared will be used in a safe and quality-tested manner, thereby increasing their trust. Demand and supply tend to match because there is information symmetry, in the form of known standards, between the supplier and consumer of data.
As per rational choice theory, an agent in the economy who has access to adequate information (such as an understanding of costs and benefits, and the existence of alternatives) and who acts on the basis of self-interest will pick the available choice that maximizes their gains. On this understanding, an agent in the data economy would reap higher benefits from increased standardization, as it would create avenues for access and usage in a market that is currently heading towards an oligopoly.
The internet has revolutionized the manner in which we share data, phenomenally increasing the amount of data available. Anyone with access to the internet can deploy any sort of data onto it – be it an app, a website or visual media. With internet access coming to be seen as an almost essential commodity, the number of users and the number of devices connected to the internet will continue to grow. ‘Big Data’ remained a buzzword for a good part of this decade (the 2010s), and as Big Data gets even bigger, transparency is often compromised as a result. Users are generally unaware of how the data collected from them is stored and used, or who has access to it. Although terms and conditions concerning certain data and its collection sometimes specify these things, they are overlooked more often than not, with the result that users remain in the dark.
There are three main areas where standardization would help the data economy –
With the increasing application of processed information to solve our everyday problems, the data economy is currently booming; however, large parts of this economy are controlled by a limited number of players. Standardization in this field would ensure that we move towards increased competition instead of a data oligopoly, ultimately leading to faster and healthier growth of the data economy.
The Delhi High Court ordered Facebook, Google and Instagram to remove search results, posts and any content containing allegations of sexual harassment against artist Subodh Gupta. This includes the blocking/removal of social media posts, articles and Google Search result links. The allegations were made about a year ago by an unknown co-worker of Gupta on the anonymous Instagram account ‘Herdsceneand’. They were also posted on Facebook and circulated by news reporting agencies. An aggrieved Subodh Gupta then filed a civil defamation suit, stating these allegations to be false and malicious. Noting the seriousness of the allegations, the Court passed an ex-parte order asking the Instagram account holder, Instagram, Facebook and Google to take down this content. The Court has now directed Facebook to produce the identity of the person behind the account ‘Herdsceneand’ in a sealed cover.
A student from National Law School of India, Bengaluru filed a petition in the Kerala High Court seeking a ban on the mobile application Telegram. The reason cited in the petition is that the app has no checks and balances in place: there is no government regulation, no local office, and the lack of encryption keys means that the sender of a message cannot be traced. It was only in June this year that Telegram refused to hand over chat details of an ISIS module to the National Investigation Agency. Compared to apps such as WhatsApp, Telegram offers a greater degree of secrecy. One of the features Telegram boasts of is its ‘secret chat’ mode, which notifies users if someone has taken a screenshot, prevents users from forwarding messages, and so on. Further, there are fewer limits on the number of people who can join a channel, which makes moderating the dissemination of information even more difficult. It is for this reason that Telegram has been dubbed the ‘app of choice’ for many terrorists. It is also claimed that the app is used for transmitting vulgar and obscene content, including child pornography. Several countries, such as Russia and Indonesia, have banned the app due to safety concerns.
In a significant ruling, the European Court of Justice held that Facebook can be ordered to take down posts globally, and not just in the country that makes the request. The ruling extends the reach of the EU’s internet-related laws beyond its own borders, and the decision cannot be appealed further. It stemmed from a case involving defamatory comments posted on the platform about an Austrian politician, who demanded that Facebook erase the original comments worldwide, and not just from the Austrian version of the platform. The decision raises questions about the jurisdiction of EU laws, especially at a time when countries outside the bloc are passing their own laws regulating the matter.
The Digital Trade Agreement was signed by the USA and Japan on October 7, 2019. The Agreement is an articulation of both nations’ stance against data localization. It cements free cross-border data flows and, through Article 20, allows for open access to government data. Articles 12 and 13 ensure that there are no restrictions on the flow of electronic data across borders, while Article 7 ensures that no customs duties are imposed on digital products transmitted electronically. Further, neither country’s parties can be forced to share source code when sharing software during sale, distribution, etc. The first formal articulation of the free flow of digital information was the Data Free Flow with Trust (DFFT), a key feature of the Osaka Declaration on Digital Economy. The agreement furthers the Trump administration’s efforts to cement America’s standing as tech-friendly, at a time when most other countries are introducing reforms to curb the practices of internet giants like Google and Facebook and to protect the rights of consumers. American rules, such as Section 230 of the Communications Decency Act, shield companies from lawsuits related to content moderation. America presently appears to hope that its permissive and liberal laws will become the framework for international law.
In the previous part, I dealt with certain privacy concerns that may arise with respect to voice sampling and how various jurisdictions have approached them. In this part, I will critique the manner in which the Supreme Court in Ritesh Sinha has arrogated legislative power to itself, by terming the absence of legislative authorization for voice sampling of accused persons a procedural anomaly, and by extending its power to fill such assumed voids through an invocation not only of the principle of ejusdem generis but also of the “principle of imminent necessity”.
This is strange, since reference is made to Ram Babu Misra, where the Court had earlier examined Section 73 of the Indian Evidence Act, 1872 and whether it afforded the Magistrate the power to direct the accused to give a specimen of her writing during the course of investigation. In the absence of such a provision, such powers were denied. Subsequently, Section 311A (inserted vide the Code of Criminal Procedure (Amendment) Act, 2005) afforded the Magistrate the power to direct any person to submit specimen signatures or handwriting. In this regard, the Supreme Court in Sukh Ram held that the powers provided by the Amendment were prospective and not retrospective in nature, and that such a direction was therefore impermissible where it had not been provided for.
In the present case, the Supreme Court notes that “procedure is the handmaid, not the mistress, of justice and cannot be permitted to thwart the fact-finding course in litigation”. This is prima facie problematic, given that the maxim is chiefly relevant in civil matters, where dilemmas are resolved by bypassing procedure in the interest of justice. In criminal matters, the State holds an instrument of enquiry against the accused, with the balance of power weighing heavily against the individual. The jurisprudential trend of privileging crime control interests, and of opposing oppression or coercion only in cases which would affect the reliability of the evidence, has thus continued. It is relevant here to look at the right against self-incrimination, explored by Abhinav Sekhri in his article ‘The right against self-incrimination in India: the compelling case of Kathi Kalu Oghad’, as one that originally arose as a protection against the State through procedural safeguards and substantive remedies.
In this case, the Court refers to Puttaswamy to hold that the right to privacy must “bow down to compelling public interest”. However, in Puttaswamy, Justice Chandrachud had cited A K Roy vs Union of India, whereby the Constitution Bench of the Supreme Court recognised that “…[p]rocedural safeguards are the handmaids of equal justice and …, [that] the power of the government is colossal as compared with the power of an individual…” (emphasis mine), that preventive detention finds its basis in law, and is thus permissible under the Constitution.
Indeed, Maneka’s reference to R.C. Cooper in understanding permissible restrictions on personal liberty is of assistance, noting that any abrogation of the rights of individuals must fulfil the test of reasonableness. Irrespective of whether the demand for an individual’s voice sample is a permissible restriction of the individual’s right to privacy guaranteed under the Constitution, the order itself must find a basis in law. Alas, the same cannot be said of the present matter.
As this is a policy decision entrusted to the State, it is curious to see how Courts have time and again found justification for intruding into the halls of the Legislature. This was also recognised in the Puttaswamy judgment, where deference to the wisdom of law-enacting and law-enforcing bodies was urged. “Silence postulates a realm of privacy,” wrote Justice Chandrachud. While this right is not absolute, it is for the Courts to protect the individual from the State’s powers and to adjudge whether laws and actions pursue legitimate aims of the State, not to become an arm of the State itself. The part of the Kharak Singh judgment which was upheld had recognised the importance of the existence of a “law” in terming something either constitutional or unconstitutional, and on that basis held the relevant regulation unconstitutional.
Presently, it is the Court which has taken on the burden of creating law encroaching on the accused’s rights. This is even after alluding to the Legislature’s possible choice to be “oblivious and despite express reminders chooses not to include voice sample”, providing only for a few tests (though in Selvi, the Court recognised the impropriety and impracticality of looking into legislative intent, given the lack of “access to all the materials which would have been considered by the Parliament”).
Curiously, in affording the Judicial Magistrate the power to order voice sampling for “the purpose of investigation into a crime”, there is ambiguity as to the stage at which this power can be invoked, the manner in which it can be invoked, and who can invoke it. Ordinarily, medical examinations under Sections 53/53A/54 of the CrPC have been read to be conducted at the instance of “the investigating officer or even the arrested person himself…[or] at the direction of the jurisdictional court.” We may also look at Section 53 of the CrPC, under which medical examination can occur only when there is sufficient material on record to justify it, and is impermissible otherwise.
Finally, the Court has not only failed to illustrate the existence of an imminent necessity for making such an alteration or conferring such a power; it has also failed to explain in what contexts Courts can invoke such a maxim, and has not developed it in any detail. One might note that the principle of necessity is one generally afforded to individuals in cases of private defence or emergency, excusing them from acts that would ordinarily make them liable for certain crimes. Curiously, there is no mention of an affidavit from the police administration, and no studies have been cited. Mere legislative delay as a justification for imminent necessity in light of certain advancements does not seem sound.
In light of the same, given Navtej, NALSA and Puttaswamy, and the failure of the Legislature to amend at least the Special Marriage Act to recognize the rights of LGBTQI individuals to marry and be with the individual of their choice, should the same not also have been provided for? Can the same be taken as a justification to abrogate digital privacy rights in a world of evolving technologies, by mandating backdoors? At what stage does the Legislature’s refusal also amount to the Legislature’s laxity? Does this apply only to social developments, or to technological developments as well? If the Legislature was in fact aware of voice exemplars (as has been observed) and chose not to incorporate them into the relevant sections and clauses, can the same be read as legislative delay or refusal? Whether this aspect of the judgment, invoking “imminent necessity”, will be read to provide justification for some other transformation is yet to be seen.
The Court had a path available to it through Selvi, and indeed Justice Desai had charted it, invoking precedents which permitted such a reading. However, the Court in this reference judgment seems to have (unnecessarily) gone the extra mile by invoking this principle of imminent necessity. Whereas the former is a matter of difference in opinion, the latter is a clear bypass of the Legislature’s powers at the Court’s own pleasure. We may take heed of Justice H.R. Khanna’s dissent in the ADM Jabalpur case: when the means don’t matter, when procedure is no longer insisted upon, the ends can only lead us to arbitrariness, a place devoid of personal liberty.
I conclude by noting Lord Camden’s dictum in Entick vs Carrington, which we would now find through our Article 21 protection (“No person shall be deprived of his life or personal liberty except according to procedure established by law” (emphasis mine)), and which has also been read into the right against self-incrimination through Selvi:
If it is law, it will be found in our books. If it is not to be found there, it is not law.
Nearly threescore years ago, in Kathi Kalu Oghad, an eleven-judge bench of the Supreme Court of India decided the question of the extent of the constitutional protection against self-incrimination (vide Article 20(3)). The Supreme Court therein deviated from the notion of self-incrimination as inclusive of “every positive volitional act which furnishes evidence”, laid down in M.P. Sharma, and recognised a distinction between “to be a witness” and “to furnish evidence”. The present judgment arose from a difference of opinion in the division bench of the Supreme Court in Ritesh Sinha regarding the permissibility of ordering an accused to provide a voice sample. In this part, I will discuss voice sampling and its interactions with privacy, and look at how different jurisdictions have approached voice spectrography – whether it would be violative of an individual’s right to privacy and their right against self-incrimination. Finally, I will make a short point on technological developments and their interaction with criminal law. In the next part, I will deal with the Court’s failure to simply rely upon Selvi to expand the definition, and how it instead created the doctrine of “imminent necessity” (a principle generally present in criminal law for private defence!) to justify the Court’s intervention into the halls of the Legislature in light of “contemporaneous realities/existing realities on the ground”.
The Investigating Authority seized a mobile phone from Dhoom Singh, allegedly an associate of the accused-appellant Ritesh Sinha, and wanted to verify whether a recorded conversation was between the two individuals; it therefore needed the appellant’s voice sample. Accordingly, summons was issued and the appellant was ordered to give his voice sample. He challenged this before the High Court, which rejected his challenge. Aggrieved, he appealed to the Supreme Court, and as a result of a split verdict the matter was referred to a larger bench. The opinions of Justice Desai and Justice Alam in the division bench have been sufficiently explored earlier by Gautam Bhatia and Abhinav Sekhri. Therein, both Justices were of one mind that voice sampling is not violative of the right against self-incrimination, differing on the permissibility of voice sampling in the absence of an explicit provision permitting it.
In this reference judgment, Chief Justice Ranjan Gogoi traces the history of the right against self-incrimination, referencing (then) Chief Justice B.P. Sinha’s observation that documents which do not incriminate by themselves, but are “only materials for comparison in order to lend assurance to the Court that its inference based on other pieces of evidence is reliable”, would not be violative of Article 20(3).
Recognising the limitations under Sections 53 and 53A of the Code of Criminal Procedure, 1973, reference is made to the 87th Law Commission Report, which suggested an amendment to the Identification of Prisoners Act, 1920 to specifically empower a Judicial Magistrate to compel an accused person to give a voice print. No such action has been taken in that regard.
In Selvi, ‘personal liberty’ in the context of self-incrimination was understood as one whereby involuntariness is avoided, summing up this right in three points: (1) to prevent custodial violence and other third-degree methods, protecting the dignity and bodily integrity of the person being examined and serving as “a check on police behaviour during the course of investigation”; (2) to put the onus of proof on the prosecution; and (3) to ensure the reliability of evidence, since involuntary statements could mislead “the judge and the prosecutor… resulting in a miscarriage of justice …[with] false statements …likely to cause delays and obstructions in the investigation efforts”. The third point is consistent with the majority view in Kathi Kalu Oghad, which found “specimen handwriting or signature or finger impressions by themselves…[to not be testimony since they are] wholly innocuous because they are unchangeable…[that they] are only materials for comparison in order to lend assurance to the Court that its inference based on other pieces of evidence is reliable.” While there was a hesitation in Selvi to read everything under the sun as “such other tests”, it was recognised that, through an invocation of ejusdem generis, the phrase could be extended to other physical examinations, but not to examinations involving testimonial acts. In this regard, we may consider Gautam Bhatia’s analysis of Selvi, which digs deep into this issue. As an aside, beyond the question of the content of either the “said” or the “statement” itself, it would be of assistance to also look at the nature of police systems, whereby even in a post-Miranda setting in the US, the reality and nature of voluntariness is suspect.
The position that exemplars by themselves are not statements is consistent across various courts. That is, since handwriting, signatures, etc. exist within or emanate from the individual, the individual is not considered to have been made to give something that could not otherwise be seen, the evidence remaining unaltered irrespective of any compulsion to give it.
In Levack, the Supreme Court of Appeal of South Africa held, firstly, that sound (and consequently voice exemplars) could be considered a ‘distinguishing feature’ under Section 37(1)(c) of the Criminal Procedure Act of 1977; and secondly, that voice exemplars, being ‘autoptic evidence’ derived from the accused’s own bodily features, could be distinguished as not being testimonial or communicative in nature.
This echoes the view taken by the Supreme Court of the United States (SCOTUS) in Dionisio, which recognized that voice samples (exemplars) taken for the purposes of identification are not violative of an individual’s rights under the Fourth and Fifth Amendments, since they are mere physical characteristics, obtained as identifiers and not for their testimonial or communicative content (see also Gilbert and Wade). Further, the Court relied on Katz, which held that Fourth Amendment protection would not be offered “for what ‘a person knowingly exposes to the public…’”. Therefore, “[n]o person can have a reasonable expectation that others will not know the sound of his voice, any more than he can reasonably expect that his face will be a mystery to the world.”
In Jalloh vs. Germany, the Strasbourg Court observed that the right against self-incrimination guaranteed under Article 6(1) would not extend to material obtained from the accused person through the use of compulsory powers which has an “existence independent of the will of the suspect such as, inter alia, documents acquired pursuant to a warrant, breath, blood, urine, hair or voice samples and bodily tissue for the purpose of DNA testing” (emphasis mine).
The failure of legal systems to account for technological changes which may assist in the collection of evidence or other crime-control uses is termed a ‘pacing problem’, and comprises two dimensions: firstly, the basing of existing legal frameworks on a static rather than dynamic view of society and technology; and secondly, the slowing down of legal institutions with respect to their capacity to adjust to changing technologies.
The Legislature’s failure to provide for handwriting samples for two decades, even after the Supreme Court’s and Law Commission’s mention of the same, has been noted by Abhinav Sekhri. Admittedly, the benefits of voice sampling for identification are evident, and it has even been used before. However, this judgment fails to clarify under which section such power has been conferred. If it were to exist under the Identification of Prisoners Act, there may be some semblance of relief through Section 7, which mandates the destruction or handing over of such measurements and photographs to individuals in certain cases.
The DNA Bill, as introduced in the Lok Sabha, allows for the removal of collected DNA on certain conditions (vide Sections 31(2)-(3)); however, even then, removal occurs only on a police report, an order of the court or a written request (the method varying with the incident). Contrary to other jurisdictions, and even to Section 7 of the Identification of Prisoners Act, the status quo is thus one of retention, not automatic removal.
In trying to keep up with technological advancements, the Court has thus failed to recognise the importance of procedure in criminal matters and has instead produced procedural uncertainty; it is all the more curious that Selvi, which would have been sufficient justification, was not invoked even once in this case.
In 2018, Anthony Clement Rubin and Janani Krishnamurthy filed PILs before the Madras High Court, seeking a writ of mandamus to “declare the linking of Aadhaar or any one of the Government authorized identity proofs as mandatory for the purpose of authentication while obtaining any email or user account.” The main concern of the petitioners was the traceability of social media users, which would be facilitated by linking their social media accounts with a government identity proof; this in turn could help combat cybercrime. The case was heard by a division bench of the Madras HC, and its scope was expanded to include the curbing of cybercrime with the help of online intermediaries. In June 2019, the Internet Freedom Foundation became an intervener in the case to provide expertise in the areas of technology, policy, law and privacy. Notably, the Madras HC dismissed the prayer asking for the linkage of social media and Aadhaar, stating that it violated the SC judgement on Aadhaar, which held that Aadhaar is to be used only for social welfare schemes.
Facebook later filed a petition seeking the transfer of the case to the Supreme Court. Currently, the hearing before the SC has been deferred to 13 September 2019, and proceedings before the Madras HC will continue. Multiple news sources reported that the TN government, represented by the Attorney General of India K.K. Venugopal, argued for linking social media accounts and Aadhaar before the SC. However, Medianama has reported that this is not being considered at the moment and that the Madras HC has categorically denied it.
Adding to the chaos and despair for the Rohingyas, the Bangladeshi government banned the use of mobile phones in the region and restricted mobile phone companies from providing service there. The companies have been given a week to comply with the new rules. The reason cited for the ban was that refugees were misusing their cell phones for criminal activities. The situation in the region has worsened over the past two years, with the extreme violation of human rights said by UN officials to be reaching the point of genocide. This ban on mobile phones would further worsen the situation for the Rohingya by increasing their detachment from the rest of the world, making their lives in the refugee camps even more arduous.
Alphabet Inc.’s Google and YouTube will pay a $170 million penalty to the Federal Trade Commission to settle allegations that YouTube collected the personal information of children by tracking their cookies and earned millions through targeted advertisements, all without parental consent. The FTC Chairman, Joe Simons, condemned the company for publicizing its popularity with children to potential advertisers while blatantly violating the Children’s Online Privacy Protection Act. The company had claimed to advertisers that it need not comply with child privacy laws since it has no users under the age of 13. Additionally, the settlement mandates that YouTube create policies to identify content aimed at children and notify creators and channel owners of their obligation to collect parental consent. YouTube has also announced that it will soon launch YouTube Kids, which will carry only child-friendly content and no targeted advertising. Several prominent Democrats at the FTC have criticized the settlement, despite it being the largest fine in a child privacy case so far, since the penalty is seen as a pittance in contrast to Google’s overall revenue.
Recently, researcher Sanyam Jain located unsecured online servers containing phone numbers of over 419 million Facebook users, including users from the US, the UK and Vietnam. In some cases, the user’s real name, gender and country could also be identified. The database was completely unsecured and could be accessed by anybody. The leak increases the possibility of SIM-swapping or spam-call attacks against the affected users. It occurred despite Facebook’s statement in April that it would be more dedicated to the privacy of its users and would restrict access to data to prevent data scraping. Facebook has attempted to downplay the leak by claiming that it actually covers only 210 million users, given multiple duplicates in the leaked data; however, Zack Whittaker, Security Editor at TechCrunch, has pointed out that there is little evidence of such duplication. The data appears to be old, since the company has since changed its policy so that users can no longer be searched for by phone number. Facebook has claimed that there is no actual evidence of a serious breach of user privacy.
Addressing growing data protection concerns, Mozilla Firefox will now block third-party tracking cookies and cryptominers through its Enhanced Tracking Protection feature. To avail of this feature, users will have to update to Firefox 69, which enforces stronger security and privacy options by default. The browser’s Enhanced Tracking Protection will now remain turned on by default as part of the standard setting, though users will have the option to turn the feature off for particular websites. Mozilla claims that this update will not only restrict companies from building user profiles by tracking browsing behaviour, but will also improve performance, user interface and battery life on systems running Windows 10/macOS.
Delhi airport will run a three-month trial of a facial recognition system at its T3 terminal, called the Biometric Enabled Seamless Travel experience (BEST). With this technology, a passenger’s entry is automatically registered at various points such as check-in and security. The Portuguese company Vision-Box has provided the technical and software support for the system. Although participation is voluntary during the trial, whether it will remain voluntary once the system is officially incorporated after a successful trial run is still an open question.
In one of the first cases of its kind, a British court ruled that police use of live facial recognition is lawful and does not violate privacy or human rights. The case was brought by Cardiff resident Ed Bridges, who alleged that the system had recorded him at least twice without permission, in violation of his human rights, including the right to privacy. The court arrived at its decision after finding that “sufficient legal controls” were in place to prevent improper use of the technology, including the deletion of data unless it concerned a person identified from the watchlist.
This post, authored by Mr. Srikanth Lakshmanan, is part of TLF’s blog series on Account Aggregators. Other posts can be found here.
Mr. Srikanth Lakshmanan is the founder of CashlessConsumer, a consumer collective working on digital payments. The collective seeks to increase awareness, help consumers understand the technology, and represent consumers’ perspectives and concerns in the digital payments ecosystem, with the goal of moving towards a fair cashless society with equitable rights.
On 28th June 2019, the National Crime Records Bureau (NCRB) released a Request for Proposal for an Automated Facial Recognition System (AFRS), to be used by police officers across the country to detect potential criminals and suspects.
The AFRS has potential uses in modernising the police force, gathering information, identifying criminals, suspects and missing persons, and verifying identities.
In 2018, the Ministry of Civil Aviation launched a facial recognition system to be used for airport entry called “DigiYatra”. The AFRS system is built on similar lines but has a much wider coverage and different purpose. States in India have taken steps to introduce Facial Recognition Systems to detect potential criminals, with Telangana launching its system in August 2018.
The Automated Facial Recognition System (AFRS) will be a mobile and web application which will be hosted and managed by the National Crime Records Bureau (NCRB) data centre but will be used by all police stations across the country.
The AFRS works by comparing the image of an unidentified person, captured through CCTV footage, with the images stored at the NCRB’s data centre. This allows the data centre to match the images and identify potential criminals and suspects.
The system is intended to match facial images despite changes in facial expression, angle, lighting and direction, and despite beards, hairstyles, glasses, scars, tattoos and other marks.
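The RFP does not specify the matching algorithm, but modern facial recognition systems typically convert each face into a numerical “embedding” vector and compare vectors rather than raw pixels, which is what makes matching robust to changes in lighting or hairstyle. As a purely illustrative sketch (the embeddings, gallery and threshold below are hypothetical and not drawn from the NCRB documents):

```python
import math

def cosine_similarity(a, b):
    # Similarity between two embedding vectors: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match(probe, gallery, threshold=0.8):
    # Return the gallery record whose embedding is most similar to the
    # probe image's embedding, but only if it clears the threshold;
    # otherwise report no match (the person is not in the database).
    best_id, best_score = None, threshold
    for record_id, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = record_id, score
    return best_id

# Toy two-dimensional "embeddings" stand in for the high-dimensional
# vectors a real face model would produce.
gallery = {"suspect_A": [1.0, 0.0], "suspect_B": [0.0, 1.0]}
print(match([1.0, 0.1], gallery))  # close to suspect_A's embedding
print(match([0.5, 0.5], gallery))  # too far from both: no match
```

The threshold is the policy-relevant knob: set it low and innocent people are flagged as matches; set it high and genuine matches are missed, which is precisely the accuracy trade-off the criticisms below turn on.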
The NCRB has proposed to integrate the AFRS with multiple existing databases. These include the Crime and Criminal Tracking Network & Systems (CCTNS), introduced in 2009 after the Mumbai attacks as a nationwide integrated database of criminal incidents connecting the FIR registrations, investigations and chargesheets of police stations and higher offices; the Inter-operable Criminal Justice System (ICJS), a computer network which enables judicial practitioners and agencies to electronically access and share information; and the Khoya Paya portal, which is used to trace missing children.
In August 2017, the Supreme Court, in the historic judgment of K.S. Puttaswamy vs. Union of India, declared the right to privacy a fundamental right under Article 21 of the Indian Constitution. The Court asserted that the government must carefully balance individual privacy against the legitimate concerns of the state, even where national security is at stake. It also held that any invasion of privacy must satisfy a triple test, i.e. legality (backing by law), need (a legitimate state concern) and proportionality (the least invasive means), to ensure a fair and reasonable procedure free of selective targeting and profiling.
Privacy infringement through executive action without legal sanction would violate the fundamental right to privacy and disregard the Supreme Court’s directive. Cyber experts are of the view that such a system could become a tool of government abuse and put citizens’ privacy at risk; since the country lacks a data protection law, citizens would be left vulnerable to privacy abuses.
Moreover, investigating agencies in the United States, such as the FBI, operate what is probably the largest facial recognition system in the world, and cyber experts and international institutions have criticised the Chinese government for using surveillance and facial recognition to monitor the Uighur community. The record elsewhere is hardly reassuring: live facial recognition trials by police in the UK have reportedly achieved an accuracy of hardly 2%, making the technology unreliable, and cities like London face calls to discontinue it to safeguard citizens’ privacy.
Finally, such a tracking system impinges upon human dignity by treating every person as a potential criminal or suspect. There are no clear guidelines on where the cameras are to be placed, and they would put every individual, including the innocent, under surveillance. Such surveillance would create fear among citizens, with long-term chilling effects.
A rising crime rate poses a daunting challenge to investigating agencies, and robust measures must be undertaken to counter it. However, such measures should be ably backed by law and should not impinge upon the dignity and the right to privacy of citizens.
The data protection law drafted by the Justice Srikrishna Committee should be enacted by Parliament to give legal sanction to such surveillance. Furthermore, the AFRS should be used cautiously to prevent any violation of the fundamental right to privacy.
The AFRS has the potential to bring a paradigm shift in the criminal justice system, provided its use is well-intentioned and stays within a democratic framework that guarantees the right to privacy and limits state surveillance.
The purpose of this series is to analyze the bare text of the Data Principal Rights espoused in the Bill (Chapter VI), namely the Right to Confirmation and Access, Right to Correction, Right to Data Portability and the Right to be Forgotten, in light of the text used in the European legislations to espouse the same values. Each post will deal with each of the above rights.
Part I of the series can be accessed here.
INTRODUCTION TO POST
Over the course of the ensuing section, I shall contrast the text of the confirmation-and-access provision of the Personal Data Protection Bill (PDPB) (India), S. 24, with the corresponding provision of the General Data Protection Regulation (GDPR) (European Union), Art. 15.
For the purposes of convenience, I have reproduced the relevant provisions below. (Emphasis supplied)
Personal Data Protection Bill (India)
“24. Right to confirmation and access. —
(1) The data principal shall have the right to obtain from the data fiduciary—
(a) confirmation whether the data fiduciary is processing or has processed personal data of the data principal;
(b) a brief summary of the personal data of the data principal being processed or that has been processed by the data fiduciary;
(c) a brief summary of processing activities undertaken by the data fiduciary with respect to the personal data of the data principal, including any information provided in the notice under section 8 in relation to such processing activities.
(2) The data fiduciary shall provide the information as required under this section to the data principal in a clear and concise manner that is easily comprehensible to a reasonable person.…
General Data Protection Regulation (European Union)
Right of access by the data subject

“The data subject shall have the right to obtain from the controller confirmation as to whether or not personal data concerning him or her are being processed, and, where that is the case, access to the personal data and the following information:
(a) the purposes of the processing;
(b) the categories of personal data concerned;
(c) the recipients or categories of recipient to whom the personal data have been or will be disclosed, in particular recipients in third countries or international organisations;
(d) where possible, the envisaged period for which the personal data will be stored, or, if not possible, the criteria used to determine that period;
(e) the existence of the right to request from the controller rectification or erasure of personal data or restriction of processing of personal data concerning the data subject or to object to such processing;
(f) the right to lodge a complaint with a supervisory authority;
(g) where the personal data are not collected from the data subject, any available information as to their source;
(h) the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.
The right provides “data subjects”/“data principals” (the terms used by the GDPR and the PDPB, respectively, for the natural persons to whom the data relates) with the authority to demand certain information pertaining to their personal data from the “controllers”/“data fiduciaries” (the terms used by the GDPR and the PDPB, respectively, for the entities which determine the purpose and means of processing) dealing with that data. The right reduces the information asymmetry between those to whom the personal data pertains and those who process or control it. Refer here for a summary.
At first glance, the Indian draft-legislation’s provision “Right to Confirmation and Access” (S. 24) might seem to be rather abstract and vague in comparison to its European counterpart, but closer inspection reveals that both are quite similar. While the GDPR provides guidelines within a mostly self-contained provision, the PDPB’s S. 24 cross-references S. 8, which contains the list of necessary information disclosure obligations placed on the “data fiduciary”.
Though there is a considerable degree of textual similarity between the two jurisdictions, certain differences in orientation are evident from the language of the provisions.
The Indian Bill, admirably, places explicit emphasis on the accessibility of disclosures: S. 24(2) mandates that they be “easily comprehensible”. Wherever a power imbalance exists, those with access to expertise and other resources are better placed to abuse the system by burying information in complex legalities. Such statutory protections reduce the likelihood of resource-rich “fiduciaries” using complexity to overwhelm citizens who cannot process technical information.
Furthermore, the Indian draft legislation requires a “brief summary” (which must disclose the statutorily prescribed information), whereas its European counterpart imposes no such requirement. The legislative intent appears consistent with the logic of accessibility mentioned above: preventing the provision of information that cannot be processed meaningfully.
Listing the specific data that needs to be disclosed could enable “fiduciaries” to utilize the provision as an avenue to avoid disclosure of other unlisted, but relevant information. I submit that an additional sub-section requiring disclosure of all relevant information over and above the statutorily mandated disclosures (a general overarching clause, in addition to the prescribed disclosure requirements) would have tilted the balance favourably towards data privacy.
Additionally, the Indian Bill does not place as much emphasis on profiling (the processing of personal data to analyse or predict a data subject’s behaviour, characteristics, location, etc.; the GDPR’s Art. 4(4) and the PDPB’s S. 2(33) define the term in varying detail, but the definitions are of similar import) as its European counterpart. Though the PDPB refers to profiling and allied restrictions throughout, profiling finds no mention in Chapter VI (Data Principal Rights). Reading the documents as a whole, the EU legislation places greater restrictions on profiling than the PDPB. The Indian Bill has instead preferred to allow profiling subject to a Data Protection Impact Assessment (S. 33), overseen by the Data Protection Authority of India (established under Chapter X of the Bill).
Lastly, the European legislation (Art. 15(4)) clarifies that a request for information as a matter of right cannot abrogate others’ “rights and freedoms”. Though S. 27(2) of the PDPB refers to the balancing of rights in the context of the “Right to Be Forgotten”, S. 24 contains no such weighing of rights. Given the many and varied instances of legitimately conflicting rights, leaving the balance to the judiciary on a case-by-case basis seems prudent.
Image taken from here.