
Tech Law Forum @ NALSAR

A student-run group at NALSAR University of Law


Tag: Privacy

Facial Recognition and Data Protection: A Comparative Analysis of laws in India and the EU (Part I)

Posted on April 3, 2021 by Tech Law Forum NALSAR

[This two-part post has been authored by Riddhi Bang and Prerna Sengupta, second year students at NALSAR University of Law, Hyderabad. Part II can be found here]

With the wave of machine learning and technological development has come facial recognition technology (FRT). From invention to wide accessibility, this technology has grown rapidly in the past few years. Facial recognition falls under the aegis of biometric data, which includes the distinctive physical characteristics or personal traits of a person that can be used to verify that individual. FRT primarily works through pattern recognition: it detects and extracts patterns from data and matches them against patterns stored in a database by creating a biometric ‘template’. The technology is being increasingly deployed, especially by law enforcement agencies, and thus raises major privacy concerns. It also attracts controversy due to potential data leaks and various inaccuracies. In fact, in 2020, the UK Court of Appeal ruled that facial recognition technology employed by law enforcement agencies, such as the police, violated human rights because it gave police officers “too broad a discretion” in implementing the technology. It is argued that despite the multifarious purposes this technology purports to serve, its use must be regulated.
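
To make the template-matching step concrete, here is a minimal, illustrative sketch, not drawn from the post and not any particular vendor's system, of how an FRT pipeline typically compares faces: an image is reduced to a fixed-length numeric 'template', and a probe template is matched against enrolled templates in a database using a similarity score. The extract_template function is a hypothetical stand-in for a trained face-embedding model.

```python
import numpy as np

def extract_template(face_image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a face-embedding model: reduces a face
    image to a fixed-length numeric 'template'. Real FRT systems use a
    trained neural network here; this toy version just averages pixel
    blocks so the example runs without any model or dataset."""
    flat = face_image.astype(float).ravel()
    blocks = np.array_split(flat, 128)                    # 128-dimensional template
    template = np.array([block.mean() for block in blocks])
    return template / (np.linalg.norm(template) + 1e-9)   # unit-normalise

def match(probe: np.ndarray, database: dict, threshold: float = 0.9):
    """Compare a probe template against enrolled templates using cosine
    similarity and return the best match above the threshold, if any."""
    best_id, best_score = None, -1.0
    for person_id, enrolled in database.items():
        score = float(np.dot(probe, enrolled))            # cosine similarity of unit vectors
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Toy usage: enrol one synthetic "face", then probe with a slightly noisy copy.
rng = np.random.default_rng(seed=0)
enrolled_face = rng.integers(0, 256, size=(64, 64))
database = {"person_42": extract_template(enrolled_face)}
probe_face = enrolled_face + rng.integers(-5, 6, size=(64, 64))
print(match(extract_template(probe_face), database))      # expect ("person_42", ~1.0)
```

The privacy concerns discussed in the post follow from this design: once a database of templates exists, any new image can be matched against it.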

Read more

Managing Regulatory Turbulence: Of Privacy, Consent and Drones

Posted on October 29, 2020 by Tech Law Forum @ NALSAR

[Samraat Basu is a technology and data protection lawyer and Naveen Jain is a corporate lawyer specialising in M&A and PE/VC funding.]

The Indian regulatory landscape regarding the use of remotely piloted and autonomous drones has been evolving over the last few years. In June, the Government of India released the draft Unmanned Aircraft System Rules, 2020 (“UAS Rules”) to regulate the use of drones.

Read more

Building safe consumer data infrastructure in India: Account Aggregators in the financial sector (Part II)

Posted on December 30, 2019 by Tech Law Forum @ NALSAR

TLF is proud to bring you a two-part guest post authored by Ms. Malavika Raghavan, Head, Future of Finance Initiative and Ms. Anubhutie Singh, Policy Analyst, Future of Finance Initiative at Dvara Research. This is the second part of a two-part series that undertakes an analysis of the technical standards and specifications present across publicly available documents on Account Aggregators. Previously, the authors looked at the motivations for building AAs and some consumer protection concerns that emerge in the Indian context.

Account Aggregators (AAs) appear to be an exciting new infrastructure for those who want to enable greater data sharing in the Indian financial sector. The key data being shared will be extensive personal information about individuals like us, detailing our most intimate and sensitive financial transactions and potentially non-financial data too. This places individuals at the heart of these technical systems. Should the systems be breached, misused or otherwise exposed to unauthorised access, the immediate casualty will be the privacy of the people whose information is compromised. Of course, this will also have an impact on data quality across the financial sector.

Read more

Building safe consumer data infrastructure in India: Account Aggregators in the financial sector (Part I)

Posted on December 30, 2019 by Tech Law Forum @ NALSAR

TLF is proud to bring you a two-part guest post authored by Ms. Malavika Raghavan, Head, Future of Finance Initiative and Ms. Anubhutie Singh, Policy Analyst, Future of Finance Initiative at Dvara Research. Following is the first part of a two-part series that undertakes an analysis of the Account Aggregator system. Click here for the second part.

The Reserve Bank of India (RBI) released Master Directions on Non-Banking Financial Companies – Account Aggregators (Master Directions) in September 2016, and licences for India’s first Account Aggregators (AAs) were issued last year. From these guidelines and related documents, we understand that the purpose of Account Aggregator (AA) is to collect and share:

Read more

Data Protection of Deceased Individuals: The Legal Quandary

Posted on December 5, 2019 by Tech Law Forum @ NALSAR

This post has been authored by Purbasha Panda and Lokesh Mewara, fourth- and fifth-year students at NLU Ranchi. It discusses data protection laws for deceased individuals and the legal justifications for post-mortem privacy.

Post-mortem privacy is defined as the right of a person to preserve and control what formulates his/her reputation after death. It is inherently linked with the idea of dignity after death. One school of thought asks how there can be any threat to the reputation of a person who no longer exists. Another argues that when a person’s public persona or reputation is harmed after death, it may not be the deceased who is defamed, but the ante-mortem person who could be. A further question arises: when a person dies, do the surviving interests of the dead person become the interests of others, are his interests alone protected, or is it both?

Private law justification of post-mortem privacy

There is an English principle, “Actio personalis moritur cum persona”, which means that a personal cause of action dies with the person, implying a negative attitude towards claims surviving death. However, certain EU states following the civilian tradition have allowed protection of the data of the deceased. Article 40(1) of the French Data Protection Act regulates the processing of data after an individual’s death. Under that article, individuals can give instructions to data controllers providing general or specific indications about the retention, erasure and communication of their personal data after their death.

In the US case of In Re Ellsworth[1], Yahoo, as a webmail provider, refused access to the surviving family of a US marine killed in action. Yahoo argued that the privacy policy of the company aims to protect the privacy of third parties who have interacted with the deceased individual’s account. The family, on the other hand, argued that they should be able to see the emails he sent to them, as well as the emails he sent to others, since Yahoo follows a policy of deleting an account once the account user dies; pursuant to this policy, there was an imminent danger that the emails would be lost forever. The court allowed Yahoo to adhere to its privacy policy and did not allow login and password access to the deceased individual’s account, but instead gave the alternative of providing the family with a CD containing copies of the emails in the dead person’s account. The ratio in this case raises certain questions about where proprietary rights in the content of an email are placed. Is it a transfer of property rights, or is there some other mode of transferring the content of the email to the legal heirs? One view is that since the deceased is the author of those emails, copyright could vest with him and subsequently be transferred to his legal heirs, giving them a right to approach the court to access the emails. Another view is that Yahoo was vested with proprietary rights in the emails, which could be made available to the family members on a court order. There are, however, certain practical problems with granting rights in the content of an email.

Justice Edwards-Stuart in Fairstar Heavy Transport N.V. v. Adkins[2] attempted to hypothesise a possible right to property over the contents of an email. The case dealt with an employer’s request to access the content of emails, relating to the business affairs of his company, stored on the personal computer of his ex-employee. The question before the Queen’s Bench was whether the claimant had any proprietary rights over the content of the emails. The court held that the contents of an email cannot be subjected to proprietary rights, and that the employer therefore had no enforceable proprietary claim over them. While deciding whether a proprietary right over the contents could exist, the court considered five possible ways of construing such a right. The first is that title over the content of the email remains throughout with the creator or his principal. The second is that upon an email being sent, title to the content passes to the recipient (drawing an analogy with the passing of title in a letter under the principles of transfer of property). The third is that the recipient of an email has a licence to use its content for any legitimate purpose consistent with the circumstances in which it was sent. The fourth is that the sender has a licence to retain the content and use it for any legitimate purpose. The fifth, finally, is that title over the content is shared between the sender and all the recipients in the chain. The court analysed the viability of each of these methods of construing a possible property right over information.

The court held that the implication of adopting the first method would be that the creator of an email could assert title to its content against the rest of the world. The court opined that this would be strange and would have far-reaching, impractical consequences: if title over the content of an email remained with the creator, such title would have to be exercisable in all possible forms, including by asking recipients down the chain to delete the content of the email, which is neither feasible nor practical, making the option redundant. The court also rejected the second method, on the ground that if an email were forwarded to multiple recipients, the question of who held title over its content at any given point of time would be extremely confusing. The third and fourth methods conflate the existence of a proprietary right over the content of an email with the nature of the use of such information, that is, whether its use is for legitimate or illegitimate purposes; the court held that the nature of use of information should not be a consideration in exercising a proprietary right of control. The fifth option was also rejected on the ground of compelling impracticality.

The advent of digital wills in India: the future of data protection for deceased individuals?

Section 1(4) of the Information Technology Act, 2000, read with the First Schedule of the Act, provides that the IT Act does not apply to a will as defined under clause (h) of Section 2 of the Indian Succession Act, 1925, including any other testamentary disposition. Looking at digital wills in foreign jurisdictions, the most discussed legislation is the “Fiduciary Access to Digital Assets and Digital Accounts Act”, enacted by Delaware, which became the first US state to give the executor of a will authority to take control of digital assets. The 2016 Delaware Code revolves around the concept of ‘digital assets’ and the idea of a ‘fiduciary’ as someone who can be trusted with the digital asset. The legislation defines a “digital asset” as data, text, emails, audio, video, images, sounds, social media content, health care records, health insurance records, computer source codes, computer programs and software, user names and passwords created, generated, sent, communicated, shared, received or stored by electronic means on a digital device. It defines a “fiduciary” as a personal representative appointed by a register of wills or an agent under a durable personal power of attorney, and provides that a fiduciary may exercise control over any and all rights in the digital assets and digital accounts of an account holder to the extent permitted under state or federal law.

Data Protection Bill

The Data Protection Bill, 2018 provides for the “right to be forgotten” under Section 27. It refers to the ability of individuals to limit, de-link, delete, or correct the disclosure of personal information on the internet that is misleading, embarrassing, irrelevant, or anachronistic. When an individual passes away, his sensitive personal data remains online, and in the absence of regulation his rights can be infringed as many times as a data fiduciary wishes, with no remedy available, because the Bill does not address the case of deceased individuals. The dynamic nature of data is such that it is not deleted on its own once the person is dead. The other provisions that apply to living individuals could be applied to deceased individuals as well. Under Section 10 of the Personal Data Protection Bill, 2018, the data fiduciary can store data only for a limited period of time and can use the information only for the purpose for which it was collected. If the data principal wants to amend or remove any information, he has the right to do so, and the data fiduciary cannot, without legal basis, prevent him from doing so. The current data protection regime fails to recognise and fulfil the need to protect these digital rights. It is pertinent to consider whether the concepts of a “digital asset” and a “fiduciary”, as present in the Delaware legislation, can be emulated in India. Protection of data after death involves questions of digital succession as well as intellectual property rights, which are inheritable, and this has to be taken into consideration while framing legislation on post-mortem privacy. The number of internet users in India was estimated at 566 million as of December 2018, registering annual growth of 18%. Considering this growth in internet use, it is pertinent to have a proper legal framework for the protection of the data of deceased individuals.

[1]In re Estate of Ellsworth, No. 2005-296, 651- DE (Mich. Prob. Ct. May 11, 2005).

[2] Fairstar Heavy Transport NV v. Adkins [2012] EWHC 2952 (TCC).

Read more

Metadata by TLF: Issue 6

Posted on October 10, 2019 by Tech Law Forum @ NALSAR

Welcome to our fortnightly newsletter, where our Editors put together handpicked stories from the world of tech law! You can find other issues here.

Delhi HC orders social media platforms to take down sexual harassment allegations against artist

The Delhi High Court ordered Facebook, Google and Instagram to remove search results, posts and any content containing allegations of sexual harassment against artist Subodh Gupta. This includes the blocking/removal of social media posts, articles and Google Search result links. The allegations were made about a year ago by an unknown co-worker of Gupta through the anonymous Instagram account ‘Herdsceneand’, and were also posted on Facebook and circulated by news agencies. An aggrieved Subodh Gupta then filed a civil defamation suit, stating these allegations to be false and malicious. Noting the seriousness of the allegations, the Court passed an ex-parte order asking the Instagram account holder, Instagram, Facebook and Google to take down this content. The Court has now directed Facebook to produce the identity of the person behind the account ‘Herdsceneand’ in a sealed cover.

Further Reading:

  1. Trisha Jalan, Right to be Forgotten: Delhi HC orders Google, Facebook to remove sexual harassment allegations against Subodh Gupta from search results, Medianama (1 October 2019).
  2. Akshita Saxen, Delhi HC Orders Facebook, Google To Take Down Posts Alleging Sexual Harassment by Artist Subodh Gupta [Read Order], LiveLaw.in (30 September 2019).
  3. Aditi Singh, Delhi HC now directs Facebook to reveal identity of person behind anonymous sexual harassment allegations against Subodh Gupta,  Bar & Bench (10 October 2019).
  4. The Wire Staff, Subodh Gupta Files Rs. 5-Crore Defamation Suit Against Anonymous Instagram Account, The Wire (1 October 2019)
  5. Dhananjay Mahapatra, ‘MeToo’ can’t become a ‘sullying you too’ campaign: Delhi HC, Times of India (17 May 2019).
  6. Devika Agarwal, What Does ‘Right to be Forgotten’ Mean in the Context of the #MeToo Campaign, Firstpost (19 June 2019).

Petition filed in Kerala High Court seeking a ban on ‘Telegram’

A student from the National Law School of India University, Bengaluru filed a petition in the Kerala High Court seeking a ban on the mobile application Telegram. The reason cited in the petition is that the app has no checks and balances in place: it is subject to no government regulation, has no office in India, and the absence of encryption keys means that the person sending a message cannot be traced. Only in June this year, Telegram refused to hand over chat details of an ISIS module to the National Investigation Agency. Compared to apps such as WhatsApp, Telegram offers a greater degree of secrecy. One of the features Telegram boasts of is its ‘secret chat’ mode, which notifies users if someone has taken a screenshot, disables the forwarding of messages, and so on. Further, there are fewer limits on the number of people who can join a channel, which makes moderating the dissemination of information even more difficult. It is for this reason that Telegram has been dubbed the ‘app of choice’ for many terrorists. It is also claimed that the app is used to transmit vulgar and obscene content, including child pornography. Several countries, such as Russia and Indonesia, have banned the app over safety concerns.

Further Reading:

  1. Soumya Tiwari, Petition in Kerala High Court seeks ban on Telegram, cites terrorism and child porn, Medianama (7 October 2019).
  2. Brenna Smith, Why India Should Worry About the Telegram App, Human Rights Centre (17 February 2019).
  3. Benjamin M., Why Are So Many Countries Banning Telegram?, Dogtown Media (11 May 2019).
  4. Vlad Savov, Russia’s Telegram ban is a big convoluted mess, The Verge (17 April 2018).
  5. Megha Mandavia, Kerala High Court seeks Centre’s views on plea to ban Telegram app, The Economic Times (4 October 2019). 
  6. Livelaw News Network, ‘Telegram Promotes Child Pornography, Terrorism’: Plea In Kerala HC Seeks Ban On Messaging App, Livelaw.in (2 October 2019).

ECJ rules that Facebook can be ordered to take down content globally

In a significant ruling, the European Court of Justice held that Facebook can be ordered to take down posts globally, and not just in the country that makes the request. The ruling extends the reach of the EU’s internet-related laws beyond its own borders, and the decision cannot be appealed further. It stemmed from a case involving defamatory comments posted on the platform about an Austrian politician, who demanded that Facebook erase the original comments worldwide and not just from the Austrian version of the platform. The decision raises questions about the jurisdiction of EU laws, especially at a time when countries outside the bloc are passing their own laws regulating the matter.

Further Reading:

  1. Adam Satariano, Facebook Can Be Forced to Delete Content Worldwide, E.U.’s Top Court Rules, The New York Times (3 October 2019).
  2. Chris Fox, Facebook can be ordered to remove posts worldwide, BBC News (3 October 2019).
  3. Makena Kelly, Facebook can be forced to remove content internationally, top EU court rules, The Verge (3 October 2019).
  4. Facebook must delete defamatory content worldwide if asked, DW (3 October 2019).

USA and Japan sign Digital Trade Agreement

The Digital Trade Agreement was signed by the USA and Japan on October 7, 2019. The Agreement articulates both nations’ stance against data localisation and cements cross-border data flows. Article 20 provides for open access to government data, Articles 12 and 13 ensure that there are no restrictions on the movement of electronic data across borders, and Article 7 ensures that no customs duties are imposed on digital products transmitted electronically. Neither country can require parties to share source code as a condition of the sale or distribution of software. The first formal articulation of the free flow of digital information was the Data Free Flow with Trust (DFFT) concept, a key feature of the Osaka Declaration on Digital Economy. The agreement furthers the Trump administration’s effort to cement America’s standing as tech-friendly at a time when most other countries are introducing reforms to curb the practices of internet giants like Google and Facebook and to protect the rights of consumers. American rules, such as Section 230 of the Communications Decency Act, shield companies from lawsuits related to content moderation. The US presently appears to hope that its permissive and liberal laws will become the framework for international rules.

Further Reading:

  1. Aditi Agarwal, USA, Japan sign Digital Trade Agreement, stand against data localisation, Medianama (9 October 2019).
  2. U.S.-Japan Digital Trade Agreement Text, Office of the United States Trade Representative (7 October 2019).
  3. Paul Wiseman, US signs limited deal with Japan on ag, digital trade, Washington Post (8 October 2019).
  4. FACT SHEET U.S.-Japan Digital Trade Agreement, Office of the United States Trade Representative (7 October 2019).
  5. David McCabe and Ana Swanson, U.S. Using Trade Deals to Shield Tech Giants From Foreign Regulators, The New York Times (7 October 2019).

Read more

Compelled to Speak: The Right to Remain Silent (Part II)

Posted on September 13, 2019 by Tech Law Forum @ NALSAR

This is the second part of a two-part post by Benjamin Vanlalvena, a final year law student at NALSAR University of Law. In this post, he critiques a recent judgement by the Supreme Court which allowed Magistrates to direct an accused to give voice samples during investigation, without his consent. Part 1 can be found here.

Judicial discipline and the doctrine of imminent necessity

In the previous part, I dealt with certain privacy concerns that arise with respect to voice sampling and how various jurisdictions have approached them. In this part, I critique the manner in which the Supreme Court in Ritesh Sinha has conferred legislative power on itself by terming the absence of legislative authorisation for voice sampling of accused persons a procedural anomaly, and by extending its power to fill such assumed voids through an invocation not only of the principle of ejusdem generis but also of the “principle of imminent necessity”.

This is strange, since reference is made to Ram Babu Misra, where the Court had earlier examined whether Section 73 of the Indian Evidence Act, 1872 afforded the Magistrate the power to direct the accused to give her specimen writing during the course of investigation. In the absence of such a provision, that power was denied. Subsequently, Section 311A (inserted vide the Code of Criminal Procedure (Amendment) Act, 2005) afforded the Magistrate the power to direct any person to submit specimen signatures or handwriting. In this regard, the Supreme Court in Sukh Ram held that the powers introduced by the Amendment were prospective and not retrospective in nature, and that such a direction was therefore impermissible where it was not provided for.

In the present case, the Supreme Court notes that “procedure is the handmaid, not the mistress, of justice and cannot be permitted to thwart the fact-finding course in litigation”. This is prima facie problematic, given that the maxim is relevant in civil matters, where dilemmas are resolved by bypassing procedure in the interest of justice. In criminal matters, the State wields an instrument of enquiry against the accused, with the balance of power weighing heavily against the individual. The jurisprudential trend of privileging crime-control interests, and opposing oppression or coercion only in cases which would affect the reliability of the evidence, has thus continued. It is relevant here to see the right against self-incrimination, as explored by Abhinav Sekhri in his article ‘The right against self-incrimination in India: the compelling case of Kathi Kalu Oghad’, as one that originally arose as a protection against the State through procedural safeguards and substantive remedies.

In this case, the Court refers to Puttaswamy to hold that the right to privacy must “bow down to compelling public interest”. However, in Puttaswamy, Justice Chandrachud had cited A.K. Roy vs Union of India, where the Constitution Bench of the Supreme Court recognised that “…[p]rocedural safeguards are the handmaids of equal justice and …, [that] the power of the government is colossal as compared with the power of an individual…” (emphasis mine), and that preventive detention is permissible under the Constitution because it finds its basis in law.

Indeed, Maneka’s reference to R.C. Cooper in understanding permissible restrictions on personal liberty is of assistance, noting that any abrogation of the rights of individuals must satisfy the test of reasonableness. Irrespective of whether the demand for an individual’s voice sample is a permissible limitation of the individual’s right to privacy guaranteed under the Constitution, the order itself must find a basis in law. Alas, the same cannot be said of the present matter.

As this is a policy decision entrusted to the State, it is curious to see how Courts have time and again found justification for intruding into the halls of the Legislature. The same was also recognised in the Puttaswamy judgment, where deference to the wisdom of law-enacting or law-enforcing bodies was sought. “Silence postulates a realm of privacy,” wrote Justice Chandrachud. While this is not an absolute right, it is for the Courts to protect the individual from the State’s powers and to adjudge whether laws and actions pursue legitimate aims of the State; it is not for the Courts to confer power and become an arm of the State itself. The part of the Kharak Singh judgment that was upheld had recognised the importance of the existence of a “law” in determining whether something is constitutional or unconstitutional, and accordingly held the relevant regulation unconstitutional.

Presently, it is the Court which has taken on the burden of creating law encroaching on the accused’s rights. This is even after alluding to the Legislature’s possible choice to be “oblivious and despite express reminders chooses not to include voice sample” and to provide only for a few tests (though in Selvi, the Court recognised the impropriety and impracticality of looking into legislative intent, given the lack of “access to all the materials which would have been considered by the Parliament”).

Curiously, in affording the Judicial Magistrate the power to order voice sampling for “the purpose of investigation into a crime”, there is ambiguity as to the stage at which this power can be invoked, the manner in which it can be invoked, and who can invoke it. Ordinarily, medical examinations under Sections 53/53A/54 of the CrPC have been read to be conducted at the instance of “the investigating officer or even the arrested person himself…[or] at the direction of the jurisdictional court.” We may also look at Section 53 of the CrPC, under which a medical examination can occur only when there is sufficient material on record to justify it, and is impermissible otherwise.

Finally, the Court has not only failed to illustrate the existence of an imminent necessity to make such an alteration or confer such a power; it has also failed to explain in what context Courts can invoke such a maxim, and has not developed it in any detail. One might note that the principle of necessity is generally available to individuals in cases of private defence or emergency, excusing them from acts that would ordinarily make them liable for certain crimes. Curiously, there is no mention of an affidavit from the police administration, and no studies have been cited. Mere legislative delay, invoked as a justification for imminent necessity in light of certain advancements, does not seem sound.

In light of the same, given Navtej, NALSA, and Puttaswamy, and the failure of the Legislature to amend at least the Special Marriage Act to recognise the rights of LGBTQI individuals to marry and be with the person of their choice, should the Court not also have provided for the same? Can this reasoning be taken as a justification to abrogate digital privacy rights in a world of evolving technologies by mandating backdoors? At what stage does the Legislature’s refusal amount to legislative laxity? Does this apply only to social developments, or to technological developments as well? If the Legislature was in fact aware of voice exemplars (as has been observed) and chose not to incorporate them into the relevant sections and clauses, can that be read as legislative delay or as refusal? Whether this aspect of the judgment, invoking “imminent necessity”, will be read to provide justification for some other transformation remains to be seen.

Conclusion

The Court had a path available to it through Selvi, and indeed Justice Desai had charted it, invoking precedents which permitted such a reading. However, the Court in this reference judgment seems to have (unnecessarily) gone the extra mile by invoking this principle of imminent necessity. Whereas the former is a matter of difference in opinion, the latter is a clear bypass of the Legislature’s powers at the Court’s own pleasure. We may take heed of Justice H.R. Khanna’s dissent in the ADM Jabalpur case: when the means don’t matter, when procedure is no longer insisted upon, the ends can only lead us to arbitrariness, a place devoid of personal liberty.

I conclude by noting Lord Camden’s dictum in Entick vs Carrington (which we would now find in our Article 21 protection: “No person shall be deprived of his life or personal liberty except according to procedure established by law” (emphasis mine), also read into the right against self-incrimination through Selvi):

If it is law, it will be found in our books. If it is not to be found there, it is not law.

 

Click here for Part I.

Read more

Compelled to Speak: The Right to Remain Silent (Part I)

Posted on September 13, 2019 by Tech Law Forum @ NALSAR

This is the first part of a two-part post by Benjamin Vanlalvena, a final year law student at NALSAR University of Law. In this post, he critiques a recent judgement by the Supreme Court which allowed Magistrates to direct an accused to give voice samples during investigation, without his consent. Part II can be found here.

Nearly threescore years ago, in Kathi Kalu Oghad, an eleven-judge bench of the Supreme Court of India decided the question of the extent of constitutional protection against self-incrimination (vide Article 20(3)). The Supreme Court therein deviated from the notion of self-incrimination as including “every positive volitional act which furnishes evidence”, laid down in M.P. Sharma, and recognised a distinction between “to be a witness” and “to furnish evidence”. The present judgment arose from a difference of opinion in a division bench of the Supreme Court in Ritesh Sinha regarding the permissibility of ordering an accused to provide a voice sample. In this part, I discuss voice sampling and its interaction with privacy, and look at how different jurisdictions have treated voice spectrography – whether it violates the individual’s right to privacy and the right against self-incrimination. Finally, I make a short point on technological developments and their interaction with criminal law. In the next part, I deal with the Court’s failure simply to rely upon Selvi to expand the definition, and with how it instead created the doctrine of “imminent necessity” (a principle generally found in criminal law in relation to private defence!) to justify the Court’s intervention into the halls of the Legislature in light of “contemporaneous realities/existing realities on the ground”.

Facts

The Investigating Authority had seized a mobile phone from one Dhoom Singh, allegedly an associate of the accused-appellant Ritesh Sinha, and wanted to verify whether a recorded conversation was between the two individuals; it therefore needed the appellant’s voice sample. Accordingly, summons was issued and the appellant was ordered to give his voice sample. He challenged this before the High Court, which negatived his challenge. Aggrieved, he appealed to the Supreme Court, and as a result of a split verdict the matter was referred to a larger bench. The opinions of Justice Desai and Justice Aftab Alam in the division bench have been explored earlier by Gautam Bhatia and Abhinav Sekhri. Both Justices were of one mind that voice sampling is not violative of the right against self-incrimination, but differed on its permissibility, given the absence of an explicit provision permitting it.

Voice Sampling and Privacy

In this reference judgment, Chief Justice Ranjan Gogoi traces the history of the right against self-incrimination by referencing (then) Chief Justice B.P. Sinha’s observation that documents which by themselves do not incriminate but are “only materials for comparison in order to lend assurance to the Court that its inference based on other pieces of evidence is reliable” would not be violative of Article 20(3).

Recognising the limitations of Sections 53 and 53A of the Code of Criminal Procedure, 1973, reference is made to the 87th Law Commission Report, which suggested an amendment to the Identification of Prisoners Act, 1920 to specifically empower a Judicial Magistrate to compel an accused person to give a voice print. No such action has been taken in that regard.

In Selvi, ‘personal liberty’ in the context of self-incrimination was understood as requiring the avoidance of involuntariness, summing up the right in three points: (1) preventing custodial violence and other third-degree methods, to protect the dignity and bodily integrity of the person being examined and to serve as “a check on police behaviour during the course of investigation”; (2) putting the onus of proof on the prosecution; and (3) ensuring the reliability of evidence, since involuntary statements could mislead “the judge and the prosecutor… resulting in a miscarriage of justice …[with] false statements …likely to cause delays and obstructions in the investigation efforts”. The third point is consistent with the majority view in Kathi Kalu Oghad, which found “specimen handwriting or signature or finger impressions by themselves…[to not be testimony since they are] wholly innocuous because they are unchangeable…[that they] are only materials for comparison in order to lend assurance to the Court that its inference based on other pieces of evidence is reliable.” While there was hesitation in Selvi to read everything under the sun into “such other tests”, it was recognised that, through an invocation of ejusdem generis, the phrase could be extended to other physical examinations, but not to examinations which involve testimonial acts. In this regard, we may consider Gautam Bhatia’s analysis of Selvi, which digs deep into this issue. As an aside, beyond the content of either the “said” or the “statement” itself, it is also worth looking at the nature of police systems: even in a post-Miranda setting in the US, the reality and nature of voluntariness is suspect.

The position that exemplars by themselves are not statements is consistent across various courts. That is, since handwriting, a signature, etc. exist within, or emanate from, the individual, the individual is not considered to have been made to give something that could not otherwise be seen, because the evidence is not altered by the compulsion to provide it.

In Levack, the Supreme Court of Appeal of South Africa held, firstly, that sound (and consequently voice exemplars) could be considered a ‘distinguishing feature’ under Section 37(1)(c) of the Criminal Procedure Act of 1977; and secondly, that voice exemplars, being ‘autoptic evidence’ derived from the accused’s own bodily features, could be distinguished as not being testimonial or communicative in nature.

This echoes the view taken by the Supreme Court of the United States (SCOTUS) in Dionisio, which recognised that voice samples (exemplars) taken for the purpose of identification do not violate the individual’s rights under the Fourth and Fifth Amendments, including the right against self-incrimination, since they are mere physical characteristics obtained as identifiers and not for their testimonial or communicative content (see also Gilbert and Wade). Further, the Court relied on Katz, which held that Fourth Amendment protection would not be offered “for what ‘a person knowingly exposes to the public…’”. Therefore, “[n]o person can have a reasonable expectation that others will not know the sound of his voice, any more than he can reasonably expect that his face will be a mystery to the world.”

In Jalloh vs. Germany, the Strasbourg Court observed that the right against self-incrimination guaranteed under Article 6(1) would not extend to material obtained through the use of compulsory powers from the accused person which have an “existence independent of the will of the suspect such as, inter alia, documents acquired pursuant to a warrant, breath, blood, urine, hair or voice samples and bodily tissue for the purpose of DNA testing”. (emphasis mine).

The Pacing Problem

The failure of legal systems to consider technological changes which may assist in the collection of evidence or in other crime-control uses is termed the ‘pacing problem’, and comprises two dimensions: first, existing legal frameworks rest on a static rather than dynamic view of society and technology; second, legal institutions have slowed in their capacity to adjust to changing technologies.

The Legislature’s failure to provide for handwriting samples for two decades, even after the Supreme Court and the Law Commission flagged the issue, has been noted by Abhinav Sekhri. Admittedly, the benefits of voice sampling for identification are evident, and it has even been used before. However, this judgment fails to clarify under which provision such power has been conferred. If it were to exist under the Identification of Prisoners Act, there may be some semblance of relief through Section 7, which mandates the destruction of such measurements and photographs, or their handing over to individuals, in certain cases.

The DNA Bill, as introduced in the Lok Sabha, allows for the removal of collected DNA profiles on certain conditions (vide Section 31(2)-(3)). Even then, removal occurs only on a police report, an order of the court, or a written request (the method varying with the nature of the incident). Contrary to other jurisdictions, or even Section 7 of the Identification of Prisoners Act, the status quo is thus one of retention, not automatic removal.

In trying to keep up with technological advancements, the Court has thus failed to recognise the importance of procedure in criminal matters and has instead produced procedural uncertainty; it is even more curious that Selvi, which would have been sufficient justification, was not invoked even once in this case.

 

Click here for Part II.

Read more

Metadata by TLF: Issue 4

Posted on September 10, 2019 by Tech Law Forum @ NALSAR

Welcome to our fortnightly newsletter, where our Editors put together handpicked stories from the world of tech law! You can find other issues here.

Facebook approaches SC in ‘Social Media-Aadhaar linking case’

In 2018, Anthony Clement Rubin and Janani Krishnamurthy filed PILs before the Madras High Court, seeking a writ of mandamus to “declare the linking of Aadhaar or any one of the Government authorized identity proofs as mandatory for the purpose of authentication while obtaining any email or user account.” The main concern of the petitioners was the traceability of social media users, which would be facilitated by linking their social media accounts with a government identity proof; this in turn could help combat cybercrime. The case was heard by a division bench of the Madras HC, and its scope was expanded to include the curbing of cybercrime with the help of online intermediaries. In June 2019, the Internet Freedom Foundation became an intervener in the case to provide expertise in the areas of technology, policy, law and privacy. Notably, the Madras HC dismissed the prayer asking for the linkage of social media and Aadhaar, stating that it would violate the SC judgment on Aadhaar, which held that Aadhaar is to be used only for social welfare schemes.

Facebook later filed a transfer petition before the Supreme Court. The hearing before the SC has been deferred to 13 September 2019, and the proceedings before the Madras HC will continue. Multiple news sources reported that the TN government, represented by the Attorney General of India K.K. Venugopal, argued for linking social media accounts and Aadhaar before the SC. However, Medianama has reported that such linking is not being considered at the moment and that the Madras HC has categorically denied it.

Further Reading:

  1. Aditi Agrawal, SC on Facebook transfer petition: Madras HC hearing to go on, next hearing on September 13, Medianama (21 August 2019).
  2. Nikhil Pahwa, Against Facebook-Aadhaar Linking, Medianama (23 August 2019).
  3. Aditi Agrawal, Madras HC: Internet Freedom Foundation to act as an intervener in Whatsapp traceability case, Medianama (28 June 2019).
  4. Aditi Agrawal, Kamakoti’s proposals will erode user privacy, says IIT Bombay expert in IFF submission, Medianama (27 August 2019).
  5. Prabhati Nayak Mishra, TN Government Bats for Aadhaar-Social Media Linking; SC Issues Notice in Facebook Transfer Petition, LiveLaw (20 August 2019).
  6. Asheeta Regidi, Aadhaar-social media account linking could result in creation of a surveillance state, deprive fundamental right to privacy, Firstpost (21 August 2019).

Bangladesh bans Mobile Phones in Rohingya camps

Adding to the chaos and despair of the Rohingya, the Bangladeshi government has banned the use of mobile phones in the refugee camps and restricted mobile phone companies from providing service in the region. The companies have been given a week to comply with the new rules. The reason cited for the ban was that refugees were misusing their cell phones for criminal activities. The situation in the region has worsened over the past two years, and according to UN officials the extreme violation of human rights is reaching the point of genocide. The ban on mobile phones will further worsen the situation in the camps by deepening the refugees’ detachment from the rest of the world, making their lives even more arduous.

Further Reading:

  1. Nishta Vishwakarma, Bangladesh bans mobile phones services in Rohingya camps, Medianama (4 September 2019).
  2. Karen McVeigh, Bangladesh imposes mobile phone blackout in Rohingya refugee camp, The Guardian (5 September 2019).
  3. News agencies, Bangladesh bans mobile phone access in Rohingya camps, Aljazeera (3 September 2019).
  4. Ivy Kaplan, How Smartphones and Social Media have Revolutionised Refugee Migration, The Globe Post (19 October 2018).
  5. Abdul Aziz, What is behind the rising chaos in Rohingya camps, Dhaka Tribune (24 March 2019).

YouTube to pay $170 million penalty for collecting the data of children without their consent

Alphabet Inc.’s Google and YouTube will pay a $170 million penalty to the Federal Trade Commission to settle allegations that YouTube collected the personal information of children by tracking their cookies and earned millions through targeted advertisements, all without parental consent. The FTC Chairman, Joe Simons, condemned the company for publicising its popularity with children to potential advertisers while blatantly violating the Children’s Online Privacy Protection Act. The company had claimed to advertisers that it did not need to comply with child privacy laws since it does not have any users under the age of 13. The settlement mandates that YouTube create policies to identify content aimed at children and notify creators and channel owners of their obligation to collect consent from parents. In addition, YouTube has announced that it will soon be launching YouTube Kids, which will carry no targeted advertising and only child-friendly content. Several prominent Democratic commissioners at the FTC have criticised the settlement, despite it being the largest fine in a child-privacy case so far, since the penalty is seen as a pittance compared to Google’s overall revenue.

Further Reading:

  1. Avie Schneider, Google, YouTube To Pay $170 Million Penalty Over Collecting Kids’ Personal Info, NPR (4 September 2019).
  2. Diane Bartz, Google’s YouTube To Pay $170 Million Penalty for Collecting Data on Kids, Reuters (4 September 2019).
  3. Natasha Singer and Kate Conger, Google Is Fined $170 Million for Violating Children’s Privacy on YouTube, New York Times (4 September 2019).
  4. Peter Kafka, The US Government Isn’t Ready to Regulate The Internet. Today’s Google Fine Shows Why, Vox (4 September 2019).

Facebook Data Leak of Over 419 Million Users

Recently, researcher Sanyam Jain located unsecured servers online that contained phone numbers of over 419 million Facebook users, including users from the US, the UK and Vietnam. In some cases, the records also revealed the user’s real name, gender and country. The database was completely unsecured and could be accessed by anybody. The leak increases the possibility of SIM-swapping or spam-call attacks against the users whose data has been exposed. The leak has happened despite Facebook’s statement in April that it would be more dedicated to the privacy of its users and would restrict access to data to prevent data scraping. Facebook has attempted to downplay the effects of the leak by claiming that only about 210 million records are affected because the leaked data contains multiple duplicates; however, Zack Whittaker, Security Editor at TechCrunch, has highlighted that there is little evidence of such duplication. The data appears to be old, since the company has since changed its policy so that users can no longer be searched for by phone number. Facebook has claimed that there is no actual evidence of a serious breach of user privacy.

Further Reading:

  1. Zack Whittaker, A huge database of Facebook users’ phone numbers found online, TechCrunch (5 September 2019).
  2. Davey Winder, Unsecured Facebook Server Leaks Data Of 419 Million Users, Forbes (5 September 2019).
  3. Napier Lopez, Facebook leak contained phone numbers for 419 million users, The Next Web (5 September 2019).
  4. Kris Holt, Facebook’s latest leak includes data on millions of users, Engadget (5 September 2019).

Mozilla Firefox 69 is here to protect your data

Addressing growing data protection concerns, Mozilla Firefox will now block third-party tracking cookies and cryptominers through its Enhanced Tracking Protection feature. To avail of this feature, users will have to update to Firefox 69, which enforces stronger security and privacy options by default. The browser’s Enhanced Tracking Protection will now remain turned on by default as part of the standard setting, although users will have the option to turn the feature off for particular websites. Mozilla claims that the update will not only restrict companies from building user profiles by tracking browsing behaviour but will also improve performance, the user interface, and the battery life of systems running Windows 10/macOS.

Further Reading:

  1. Jessica Davies, What Firefox’s anti-tracking update signals about wider pivot to privacy trend, Digiday (5 September 2019).
  2. Jim Salter, Firefox is stepping up its blocking game, ArsTechnica (9 June 2019).
  3. Ankush Das, Great News! Firefox 69 Blocks Third Party Cookies, Autoplay Videos & Cryptominers by Default, It’s Foss (5 September 2019).
  4. Sean Hollister, Firefox’s latest version blocks third-party trackers by default for everyone, The Verge (3 September 2019).
  5. Shreya Ganguly, Firefox will now block third-party tracking cookies and cryptomining by default for all users, Medianama (4 September 2019).

Delhi Airport T3 terminal to use ‘Facial Recognition’ technology on a trial basis

Delhi airport is starting a three-month trial of a facial recognition system at its T3 terminal. The system is called the Biometric Enabled Seamless Travel experience (BEST). With this technology, a passenger’s entry is automatically registered at various points such as check-in and security. The Portuguese company Vision-Box has provided the technical and software support for the technology. The system is voluntary during the trial run, and if the trial is successful it will be officially incorporated; whether it will remain voluntary after that is still to be answered.

Further Reading:

  1. Soumyarendra Barik, Facial Recognition tech to debut at Delhi airport’s T3 terminal; on ‘trial basis’ for next three months, Medianama (6 September 2019).
  2. PTI, Delhi airport to start trial run of facial recognition system at T3 from Friday, Livemint (5 September 2019).
  3. Times Travel Editor, Delhi International Airport installs facial recognition system for a 3 month trial, Times Travel (6 September 2019).
  4. Renée Lynn Midrack, What is Facial Recognition, Lifewire (10 July 2019).
  5. Geoffrey A. Fowler, Don’t smile for surveillance: Why airport face scans are a privacy trap, The Washington Post (10 June 2019).

UK Court approves use of facial recognition systems by South Wales Police

In one of the first cases of its kind, a British court ruled that police use of live facial recognition systems is lawful and does not violate privacy and human rights. The case was brought by Cardiff resident Ed Bridges, who alleged that his right to privacy had been violated by the system, which he claimed had recorded him at least twice without permission, and who sought a declaration that its use violated human rights, including the right to privacy. The court arrived at its decision after finding that “sufficient legal controls” were in place to prevent improper use of the technology, including the deletion of data unless it concerned a person identified from the watch list.

Further Reading:

  1. Adam Satariano, Police Use of Facial Recognition Is Accepted by British Court, New York Times (4 September 2019).
  2. Owen Bowcott, Police use of facial recognition is legal, Cardiff high court rules, The Guardian (4 September 2019).
  3. Lizzie Dearden, Police used facial recognition technology lawfully, High Court rules in landmark challenge, The Independent (4 September 2019).
  4. Donna Lu, UK court backs police use of face recognition, but fight isn’t over, New Scientist (4 September 2019).

Read more

Explainer on Account Aggregators

Posted on August 15, 2019 by Tech Law Forum @ NALSAR

This post has been authored by Vishal Rakhecha, currently in his 4th year at NALSAR University of Law, Hyderabad, and serves as an introduction for TLF’s upcoming blog series on Account Aggregators. 

A few days ago, Nandan Nilekani unveiled an industry body for Account Aggregators (AAs) by the name of ‘Sahamati’. He claimed that AAs would revolutionise the field of fintech and give users more control over their financial data, while also making the transfer of financial information (FI) a seamless process. But what exactly are AAs, and how do they make the transfer of FI seamless?

Read more