
Tech Law Forum @ NALSAR

A student-run group at NALSAR University of Law


Category: Internet Governance

Delhi HC’s order in Swami Ramdev v. Facebook: A hasty attempt to win the ‘Hare and Tortoise’ Race

Posted on January 6, 2020 by Tech Law Forum @ NALSAR

This post has been authored by Aryan Babele, a final year student at Rajiv Gandhi National University of Law (RGNUL), Punjab and a Research Assistant at Medianama.

On 23rd October 2019, the Delhi HC delivered a judgment authorizing Indian courts to issue “global take down” orders to internet intermediary platforms like Facebook, Google and Twitter for illegal content uploaded, published and shared by users. The Delhi HC delivered the judgment on a plea filed by Baba Ramdev and Patanjali Ayurved Ltd. requesting the global takedown of certain videos alleged to be defamatory in nature.


Standardizing the Data Economy

Posted on October 17, 2019 (updated December 13, 2019) by Tech Law Forum @ NALSAR

This piece has been authored by Namratha Murugeshan, a final year student at NALSAR University of Law and member of the Tech Law Forum.

In 2006, Clive Humby, a British mathematician, said with incredible foresight that “data is the new oil”. Fast forward to 2019, and data has singularly been responsible for big-tech companies approaching and surpassing the trillion-dollar valuation mark. The ‘big 4’ tech companies – Google, Apple, Facebook and Amazon – have incredibly large reserves of data, both in terms of data collection (owing to the sheer number of users each company retains) and in terms of access to the data collected through this usage. With an increasing number of applications and avenues for data to be used, the requirement of standardizing the data economy manifests itself strongly, with more countries recognizing the need to have specific laws concerning data.

What is standardization?

Standards may be defined as technical rules and regulations that ensure the smooth working of an economy. They are required to increase compatibility and interoperability, as they set up the framework within which agents must work. With every new technology that is invented, the question arises as to how it fits with existing technologies. This question is addressed by standardization. By determining the requirements to be met for safety, quality, interoperability etc., standards establish the molds into which newer technologies must fit. Standardization is one of the key reasons for the success of industrialization. Associations of standardization have helped economies function by assuring consumers that the products being purchased meet a certain level of quality. The ISO (International Organization for Standardization), BIS (Bureau of Indian Standards), SCC (Standards Council of Canada) and BSI (British Standards Institution) are examples of highly visible organisations that stamp their seal of approval on products that meet the publicly set level of requirements as per their regulations. There are further standard-setting associations that specifically look into regulating the safety and usability of certain products, such as food, electronics, automobiles etc. These standards are deliberated upon in detail and are based on discussions with sectoral players, users, the government and other interested parties. Given that they are generally arrived at by consensus, the parties involved are in a position to benefit by working within the system.

Standards for the data economy

Currently, the data economy functions without much regulation. Apart from laws on data protection and a few other regulations concerning storage, data itself remains an under-regulated commodity. While multiple jurisdictions are recognizing the need to have laws concerning data usage, collection and storage, it is safe to say that the legal world still needs to catch up.

In this scenario, standardization provides a useful solution, as it seeks to ensure compliance by emphasizing mutual benefit, as opposed to laws which would penalize non-adherence. A market player in the data economy is bound to benefit from standardization, as they have readily accessible information regarding the compliance standards for the technology they are creating. By standardizing methods for the collection, use, storage and sharing of data, the market becomes more open because of increased availability of information, which benefits the players by removing entry barriers. Additionally, a standard-mark pertaining to data collection and usage gives consumers the assurance that the data being shared will be used in a safe and quality-tested manner, thereby increasing their trust. Demand and supply tend to match, as there is information symmetry in the form of known standards between the supplier and consumer of data.

As per rational choice theory, an agent in the economy who has access to adequate information (such as an understanding of costs and benefits, and the existence of alternatives) and who acts on the basis of self-interest will pick the available option that maximizes their gains. Given this understanding, an agent in the data economy would see higher benefits from increased standardization, as it would create avenues for access and usage in a market that is currently heading towards an oligopoly.
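The rational-choice argument above can be reduced to a one-line cost-benefit comparison. The sketch below is purely illustrative: the option names and the numbers attached to them are invented for the example, not drawn from any real market data.

```python
# Illustrative sketch of rational choice: an agent with full information
# about costs and benefits picks the option with the highest net gain.

def best_option(options):
    """Return the option whose net gain (benefit - cost) is largest."""
    return max(options, key=lambda o: o["benefit"] - o["cost"])

# Hypothetical choices facing a market player in the data economy.
market_choices = [
    {"name": "proprietary, non-standard format", "benefit": 5, "cost": 4},
    {"name": "standardized data format", "benefit": 8, "cost": 2},
]

print(best_option(market_choices)["name"])  # -> standardized data format
```

With the (made-up) numbers above, the standardized option wins because lower entry barriers and information symmetry raise its benefit while compliance costs stay low, which is the intuition the paragraph describes.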

How can the data economy be standardized?

The internet has revolutionized the manner in which we share data and has phenomenally increased the amount of data available online. Anyone with access to the internet can publish any sort of data – an app, a website, visual media etc. With internet access coming to be seen as an almost essential commodity, the number of users and of devices connected to the Internet will continue to grow. Big Data remained a buzzword for a good part of this decade (the 2010s), and with Big Data getting even bigger, transparency is often compromised as a result. Users are generally unaware of how the data collected from them is stored or used, or who has access to it. Although terms and conditions sometimes specify these things, they are overlooked more often than not, with the result that users remain in the dark.

There are three main areas where standardization would help the data economy –

  1. Data Collection
  2. Data Access
  3. Data Analytics

  1. Data Collection – Standardizing the process of data collection has both supply-side and demand-side benefits. On the supply side, the collection of data across various platforms such as social media, personal-use devices, networking devices etc. would be streamlined based on the purpose for which the data is being harvested. Simpler language in terms and conditions and broad specifications of data collection would help the user make an informed choice about whether to allow data collection. This would involve seeking permissions from the user by categorizing data collection and making those categories known to the user. On the demand side, streamlined data collection would help those collecting data accumulate high-quality data suited to specific uses. It would also make for effective compliance with the principle of purpose limitation, as required by a significant number of data protection laws across the globe. Purpose limitation is a two-element principle: data must be collected from a user for “explicit, specified and legitimate” purposes only, and data should be processed and used only in a manner compatible with the purpose for which it was collected. Standardization furthers purpose limitation because once data providers are aware of how their data is going to be used, they can make a legitimate claim to check its usage by data collectors and seek stricter compliance requirements.

  2. Data Access – Standardizing data access would go a long way in breaking down the oligopoly of the four big tech companies over data by creating mechanisms for access to it. As of now, there is no simple method for data sharing across databases and amongst industry players. With the monetization of data rising with increasing fervor, access and exchange will be crucial to ensure that the data economy does not stagnate or develop exceedingly high barriers to entry. Further, by setting standards for access to data, stakeholders will be able to participate in discussions regarding the architecture of data access.

  3. Data Analytics – This is the domain that remains in the exclusive control of big tech companies. While an increasing number of entities are adopting data analytics, big tech companies have access to enormous amounts of data that have given them a head start. Deep Blue, Alexa and Siri are examples of the outcomes of data analytics by IBM, Amazon and Apple respectively. Data analytics is the categorization and processing of collected data; it involves putting the data resource to use in creating newer technologies that cater to people's needs. Data analytics requires investment that is often significantly beyond the reach of the general population. However, it is extremely important to ensure that the data economy survives. The consistent search for the next big thing in data analytics has so far given us Big Data, Artificial Intelligence and Machine Learning (a subset of AI), indicating that investments in data collection and processing pay off. Further, data analytics has a larger implication for how we work and what aspects of our lives we let technology take over. The search for smarter technologies and algorithms will ensure that the data economy thrives, and will consequently have an impact on the market economy. Standardization of this infrastructure would ensure fairer norms for access to and usage of collected data.
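The purpose-limitation principle described under Data Collection can be pictured as a simple compliance check: processing is allowed only for a purpose compatible with what the user consented to at collection time. The sketch below is a minimal illustration; the user ID, purpose labels and consent record are all hypothetical and not taken from any actual law or system.

```python
# Minimal sketch of a purpose-limitation check, assuming a hypothetical
# record of the purposes each user consented to at collection time.

CONSENTED_PURPOSES = {
    "user-123": {"order fulfilment", "fraud prevention"},
}

def processing_allowed(user_id, purpose):
    """Allow processing only for a purpose the user explicitly consented to."""
    return purpose in CONSENTED_PURPOSES.get(user_id, set())

print(processing_allowed("user-123", "order fulfilment"))      # True
print(processing_allowed("user-123", "targeted advertising"))  # False
```

A standardized consent record like this is what would let data providers verify, after the fact, that collectors are using their data only for the stated purposes.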

With the increasing application of processed information to solve our everyday problems, the data economy is currently booming; however, large parts of this economy are controlled by a limited number of players. Standardization in this field would ensure that we move towards increased competition instead of a data oligopoly, ultimately leading to the faster and healthier growth of the data economy.


Metadata by TLF: Issue 6

Posted on October 10, 2019 (updated December 20, 2020) by Tech Law Forum @ NALSAR

Welcome to our fortnightly newsletter, where our Editors put together handpicked stories from the world of tech law! You can find other issues here.

Delhi HC orders social media platforms to take down sexual harassment allegations against artist

The Delhi High Court ordered Facebook, Google and Instagram to remove search results, posts and any content containing allegations of sexual harassment against artist Subodh Gupta. This includes the blocking/removal of social media posts, articles and Google Search result links. The allegations were made about a year ago by an unnamed co-worker of Gupta on the anonymous Instagram account ‘Herdsceneand’. These allegations were also posted on Facebook and circulated by news reporting agencies. An aggrieved Subodh Gupta then filed a civil defamation suit, claiming these allegations to be false and malicious. Noting the seriousness of the allegations, the Court passed an ex-parte order asking the Instagram account holder, Instagram, Facebook and Google to take down the content. The Court has now directed Facebook to produce the identity of the person behind the account ‘Herdsceneand’ in a sealed cover.

Further Reading:

  1. Trisha Jalan, Right to be Forgotten: Delhi HC orders Google, Facebook to remove sexual harassment allegations against Subodh Gupta from search results, Medianama (1 October 2019).
  2. Akshita Saxen, Delhi HC Orders Facebook, Google To Take Down Posts Alleging Sexual Harassment by Artist Subodh Gupta [Read Order], LiveLaw.in (30 September 2019).
  3. Aditi Singh, Delhi HC now directs Facebook to reveal identity of person behind anonymous sexual harassment allegations against Subodh Gupta, Bar & Bench (10 October 2019).
  4. The Wire Staff, Subodh Gupta Files Rs. 5-Crore Defamation Suit Against Anonymous Instagram Account, The Wire (1 October 2019).
  5. Dhananjay Mahapatra, ‘MeToo’ can’t become a ‘sullying you too’ campaign: Delhi HC, Times of India (17 May 2019).
  6. Devika Agarwal, What Does ‘Right to be Forgotten’ Mean in the Context of the #MeToo Campaign, Firstpost (19 June 2019).

Petition filed in Kerala High Court seeking a ban on ‘Telegram’

A student from the National Law School of India University, Bengaluru filed a petition in the Kerala High Court seeking a ban on the mobile application Telegram. The reason cited for the petition is that the app has no checks and balances in place: there is no government regulation, no office in India, and the absence of encryption keys ensures that the sender of a message cannot be traced. It was only in June this year that Telegram refused to hand over the chat details of an ISIS module to the National Investigation Agency. Compared to apps such as WhatsApp, Telegram offers a greater degree of secrecy. One of the features Telegram boasts of is its ‘secret chat’ mode, which notifies users if someone has taken a screenshot, prevents the forwarding of messages, etc. Further, there are fewer limits on the number of people who can join a channel, which makes moderating the dissemination of information even more difficult. It is for this reason that Telegram has been dubbed the ‘app of choice’ for many terrorists. It is also claimed that the app is used for transmitting vulgar and obscene content, including child pornography. Several countries, such as Russia and Indonesia, have banned the app due to safety concerns.

Further Reading:

  1. Soumya Tiwari, Petition in Kerala High Court seeks ban on Telegram, cites terrorism and child porn, Medianama (7 October 2019).
  2. Brenna Smith, Why India Should Worry About the Telegram App, Human Rights Centre (17 February 2019).
  3. Benjamin M., Why Are So Many Countries Banning Telegram?, Dogtown Media (11 May 2019).
  4. Vlad Savov, Russia’s Telegram ban is a big convoluted mess, The Verge (17 April 2018).
  5. Megha Mandavia, Kerala High Court seeks Centre’s views on plea to ban Telegram app, The Economic Times (4 October 2019). 
  6. Livelaw News Network, ‘Telegram Promotes Child Pornography, Terrorism’: Plea In Kerala HC Seeks Ban On Messaging App, Livelaw.in (2 October 2019).

ECJ rules that Facebook can be ordered to take down content globally

In a significant ruling, the European Court of Justice held that Facebook can be ordered to take down posts globally, and not just in the country that makes the request. The ruling extends the reach of the EU’s internet-related laws beyond its own borders, and the decision cannot be appealed further. It stemmed from a case involving defamatory comments posted on the platform about an Austrian politician, who demanded that Facebook erase the original comments worldwide and not just from its Austrian version. The decision raises questions about the jurisdiction of EU laws, especially at a time when countries outside the bloc are passing their own laws regulating the matter.

Further Reading:

  1. Adam Satariano, Facebook Can Be Forced to Delete Content Worldwide, E.U.’s Top Court Rules, The New York Times (3 October 2019).
  2. Chris Fox, Facebook can be ordered to remove posts worldwide, BBC News (3 October 2019).
  3. Makena Kelly, Facebook can be forced to remove content internationally, top EU court rules, The Verge (3 October 2019).
  4. Facebook must delete defamatory content worldwide if asked, DW (3 October 2019).

USA and Japan sign Digital Trade Agreement

The Digital Trade Agreement was signed by the USA and Japan on October 7, 2019. The Agreement is an articulation of both nations’ stance against data localization. The trade agreement cements cross-border data flows. Additionally, it allows for open access to government data through Article 20. Articles 12 and 13 ensure that there are no restrictions on the movement of electronic data across borders. Further, Article 7 ensures that there are no customs duties on digital products which are electronically transmitted. Parties in neither country can be forced to share source code when software is shared during sale, distribution, etc. The first formal articulation of the free flow of digital information was seen in the Data Free Flow with Trust (DFFT), a key feature of the Osaka Declaration on Digital Economy. The agreement is in furtherance of the Trump administration’s efforts to cement America’s standing as tech-friendly, at a time when most other countries are introducing reforms to curb the practices of internet giants like Google and Facebook and to protect the rights of consumers. American rules, such as Section 230 of the Communications Decency Act, shield companies from lawsuits related to content moderation. America presently appears to hope that its permissive and liberal laws will become the framework for international laws.

Further Reading:

  1. Aditi Agarwal, USA, Japan sign Digital Trade Agreement, stand against data localisation, Medianama (9 October 2019).
  2. U.S.-Japan Digital Trade Agreement Text, Office of the United States Trade Representative (7 October 2019).
  3. Paul Wiseman, US signs limited deal with Japan on ag, digital trade, Washington Post (8 October 2019).
  4. FACT SHEET: U.S.-Japan Digital Trade Agreement, Office of the United States Trade Representative (7 October 2019).
  5. David McCabe and Ana Swanson, U.S. Using Trade Deals to Shield Tech Giants From Foreign Regulators, The New York Times (7 October 2019).


Metadata by TLF: Issue 5

Posted on September 25, 2019 (updated December 20, 2020) by Tech Law Forum @ NALSAR

Welcome to our fortnightly newsletter, where our Editors put together handpicked stories from the world of tech law! You can find other issues here.

RBI Releases Discussion Paper on Guidelines for Payment Gateways and Payment Aggregators

The RBI on 17th September released a discussion paper on comprehensive guidelines for the activities of payment aggregators and payment gateway providers. It acknowledged that payment aggregators and payment gateways form a crucial link in the flow of transactions and therefore need to be regulated. The RBI has suggested that these entities be governed by the Payment and Settlement Systems Act, 2007, which requires all ‘payment systems’ (as defined in the Act) to be authorised by the RBI. Additionally, different frameworks have been proposed for regulating payment aggregators and payment gateways, and full and direct regulation has been discussed in detail. This would require payment aggregators and gateway services to fully comply with any guidelines issued by the RBI.

Further Reading:

  1. Trisha Jalan, RBI proposes regulation, licensing of payment aggregators and gateways, Medianama (18 September 2019).
  2. Full regulation by RBI will require payment gateways, aggregators to be incorporated in India, The Hindu (18 September 2019).
  3. Shayan Ghosh, RBI could bring payment aggregators, gateways under direct supervision, livemint (18 September 2019).
  4. RBI paper on payment gateways: Maintain Rs. 100 crore net worth or wind up operations, Moneycontrol (19 September 2019).

Twitter removes more than ten thousand accounts across six countries

Political turmoil and instability in countries are often aggravated by the internet and various online portals. In light of this, Twitter has decided to remove more than ten thousand accounts across six countries. These accounts were found to be actively stoking unrest in countries already in the grip of political turmoil. Twitter removed more than four thousand accounts in the United Arab Emirates and China, around a thousand in Ecuador, and more than two hundred in Spain.

Twitter has been making an active effort for the past year to identify and remove accounts that inflame sensitive issues in countries facing crises. Online portals even have the power to sway election processes in democratic countries. In order to curb these impending threats, Twitter has been removing certain accounts on its platform. Even though thousands of new accounts are created every day, and several people have termed this removal process arduous and never-ending, these measures have to be taken.

Further Reading:

  1. Trisha Jalan, Twitter removes 10,000 accounts from six countries for political information operations, Medianama (23 September 2019).
  2. Ingrid Lunden, Twitter discloses another 10,000 accounts suspended for fomenting political discord globally, TechCrunch (20 September 2019).
  3. Abrar Al-Heeti, Twitter reportedly removes over 10,000 accounts that discourage voting, CNET (2 November 2018).
  4. Christopher Bing, Twitter deletes over 10,000 accounts that sought to discourage voting, Reuters (3 November 2018).

California passes AB 5 Bill requiring businesses to hire workers as employees

California legislators approved a landmark Bill on 11 September 2019 that has the potential to disrupt the gig economy. The Bill, known as “AB 5”, requires companies like Uber and Lyft to treat contract workers as employees, which gives hundreds of thousands of California workers basic labour rights for the first time. Apart from its immediate impact, the move by the California legislature might set off a domino effect in New York, Washington State and Oregon, where stalled moves to reclassify drivers might witness renewed momentum. The move has been criticised by ride-hailing firms Uber and Lyft, which built their businesses on inexpensive labour, and the companies have warned that recognizing drivers as employees could destroy their businesses.

Further Reading:

  1. Kate Conger and Noam Scheiber, California Bill Makes App-Based Companies Treat Workers as Employees, New York Times (11 September 2019).
  2. Manish Singh, California passes landmark bill that requires Uber and Lyft to treat their drivers as employees, Tech Crunch (11 September 2019).
  3. Rosie Perper, California passes landmark bill to treat contract workers as employees, sending it to the governor for signature, Business Insider (11 September 2019).
  4. Alexia Fernandez Campbell, California just passed a landmark law to regulate Uber and Lyft, Vox (18 September 2019).
  5. Andrew J. Hawkins, California just dropped a bomb on the gig economy — what’s next?, The Verge (September 18, 2019).

Microsoft Announces Change in Policies

Microsoft has stated that most large tech companies will change the manner in which content is moderated on their social media platforms, irrespective of whether the US Congress implements new laws. Its Chief Legal Officer and President, Brad Smith, has indicated that most companies will take the initiative irrespective of U.S. lawmakers. The statement was made in light of the recent Christchurch shootings, which were livestreamed on several social media platforms. Further, major tech companies are responding to changes in laws around the world. S. 230 of the U.S. Communications Decency Act, 1996 presently protects these companies from being sued over content uploaded by their users. Microsoft itself has claimed that it has refused government requests for facial recognition software out of fear that it may be misused. The President of Microsoft has called on other tech companies to stop following the “if it’s legal, it’s acceptable” approach, since companies need to start refusing to sell their products to certain clients irrespective of the legality of the action. However, the ACLU’s senior legislative counsel has accused Microsoft of continuing to sell software that can track faces and fear in real time, leading to violations of privacy.

Further Reading:

  1. Sheila Dang, Microsoft’s Brad Smith: Tech companies won’t wait for U.S. to act on social media laws, Reuters (13 September 2019).
  2. Alex Hern, Microsoft boss: tech firms must stop ‘if it’s legal, it’s acceptable’ approach, The Guardian (20 September 2019).
  3. Tom Simonite, Microsoft’s Top Lawyer Becomes a Civil Rights Crusader, MIT Technology Review (8 September 2019).
  4. Microsoft’s Brad Smith: Tech Companies Won’t Wait For U.S. To Act On Social Media Laws, Communications Today (15 September 2019).


Metadata by TLF: Issue 4

Posted on September 10, 2019 (updated December 20, 2020) by Tech Law Forum @ NALSAR

Welcome to our fortnightly newsletter, where our Editors put together handpicked stories from the world of tech law! You can find other issues here.

Facebook approaches SC in ‘Social Media-Aadhaar linking case’

In 2018, Anthony Clement Rubin and Janani Krishnamurthy filed PILs before the Madras High Court, seeking a writ of Mandamus to “declare the linking of Aadhaar or any one of the Government authorized identity proof as mandatory for the purpose of authentication while obtaining any email or user account.” The main concern of the petitioners was the traceability of social media users, which would be facilitated by linking their social media accounts with a government identity proof; this in turn could help combat cybercrime. The case was heard by a division bench of the Madras HC, and its scope was expanded to include the curbing of cybercrime with the help of online intermediaries. In June 2019, the Internet Freedom Foundation became an intervener in the case to provide expertise in the areas of technology, policy, law and privacy. Notably, the Madras HC dismissed the prayer asking for the linkage of social media and Aadhaar, stating that it violated the SC judgement on Aadhaar, which held that Aadhaar is to be used only for social welfare schemes.

Facebook later filed a transfer petition before the SC. Currently, the hearing before the SC has been deferred to 13 September 2019 and the proceedings at the Madras HC will continue. Multiple news sources reported that the TN government, represented by the Attorney General of India K.K. Venugopal, argued for linking social media accounts and Aadhaar before the SC. However, Medianama has reported that the same is not being considered at the moment and that the Madras HC has categorically denied it.

Further Reading:

  1. Aditi Agrawal, SC on Facebook transfer petition: Madras HC hearing to go on, next hearing on September 13, Medianama (21 August 2019).
  2. Nikhil Pahwa, Against Facebook-Aadhaar Linking, Medianama (23 August 2019).
  3. Aditi Agrawal, Madras HC: Internet Freedom Foundation to act as an intervener in Whatsapp traceability case, Medianama (28 June 2019).
  4. Aditi Agrawal, Kamakoti’s proposals will erode user privacy, says IIT Bombay expert in IFF submission, Medianama (27 August 2019).
  5. Prabhati Nayak Mishra, TN Government Bats for Aadhaar-Social Media Linking; SC Issues Notice in Facebook Transfer Petition, LiveLaw (20 August 2019).
  6. Asheeta Regidi, Aadhaar-social media account linking could result in creation of a surveillance state, deprive fundamental right to privacy, Firstpost (21 August 2019).

Bangladesh bans Mobile Phones in Rohingya camps

Adding to the chaos and despair of the Rohingyas, the Bangladeshi government banned the use of mobile phones and restricted mobile phone companies from providing service in the region. The companies have been given a week to comply with the new rules. The reason cited for the ban was that refugees were misusing their cell phones for criminal activities. The situation in the region has worsened over the past two years, and the extreme violation of human rights is, according to UN officials, reaching the point of genocide. The ban on mobile phones would further worsen the situation by deepening the Rohingyas' detachment from the rest of the world, making their lives in the refugee camps even more arduous.

Further Reading:

  1. Nishta Vishwakarma, Bangladesh bans mobile phones services in Rohingya camps, Medianama (4 September 2019).
  2. Karen McVeigh, Bangladesh imposes mobile phone blackout in Rohingya refugee camp, The Guardian (5 September 2019).
  3. News agencies, Bangladesh bans mobile phone access in Rohingya camps, Al Jazeera (3 September 2019).
  4. Ivy Kaplan, How Smartphones and Social Media have Revolutionised Refugee Migration, The Globe Post (19 October 2018).
  5. Abdul Aziz, What is behind the rising chaos in Rohingya camps, Dhaka Tribune (24 March 2019).

YouTube to pay $170 million penalty for collecting the data of children without their consent

Alphabet Inc.’s Google and YouTube will pay a $170 million penalty to the Federal Trade Commission. The payment settles allegations that YouTube collected the personal information of children by tracking their cookies and earned millions through targeted advertisements without parental consent. The FTC Chairman, Joe Simons, condemned the company for publicizing its popularity with children to potential advertisers while blatantly violating the Children’s Online Privacy Protection Act. The company had claimed to advertisers that it did not need to comply with child privacy laws since it did not have any users under the age of 13. Additionally, the settlement mandates that YouTube create policies to identify content aimed at children and notify creators and channel owners of their obligation to collect consent from parents. YouTube has already announced that it will soon launch YouTube Kids, which will not have targeted advertising and will carry only child-friendly content. Several prominent Democrats at the FTC have criticized the settlement, despite it being the largest fine in a child privacy case so far, since the penalty is seen as a pittance compared to Google’s overall revenue.

Further Reading:

  1. Avie Schneider, Google, YouTube To Pay $170 Million Penalty Over Collecting Kids’ Personal Info, NPR (4 September 2019).
  2. Diane Bartz, Google’s YouTube To Pay $170 Million Penalty for Collecting Data on Kids, Reuters (4 September 2019).
  3. Natasha Singer and Kate Conger, Google Is Fined $170 Million for Violating Children’s Privacy on YouTube, New York Times (4 September 2019).
  4. Peter Kafka, The US Government Isn’t Ready to Regulate The Internet. Today’s Google Fine Shows Why, Vox (4 September 2019).

Facebook Data Leak of Over 419 Million Users

Recently, researcher Sanyam Jain located unsecured online servers that contained phone numbers of over 419 million Facebook users, including users from the US, UK and Vietnam. In some cases, the records also revealed the user’s real name, gender and country. The database was completely unsecured and could be accessed by anybody. The leak increases the possibility of SIM-swapping or spam-call attacks against the users whose data has been exposed. The leak happened despite Facebook’s statement in April that it would be more dedicated to the privacy of its users and would restrict access to data to prevent data scraping. Facebook has attempted to downplay the effects of the leak by claiming that the actual number of affected users is only 210 million, since the leaked data contains multiple duplicates; however, Zack Whittaker, Security Editor at TechCrunch, has highlighted that there is little evidence of such duplication. The data appears to be old, since the company has since changed its policy so that users can no longer be searched for by phone number. Facebook has claimed that there appears to be no actual evidence of a serious breach of user privacy.

Further Reading:

  1. Zack Whittaker, A huge database of Facebook users’ phone numbers found online, TechCrunch (5 September 2019).
  2. Davey Winder, Unsecured Facebook Server Leaks Data Of 419 Million Users, Forbes (5 September 2019).
  3. Napier Lopez, Facebook leak contained phone numbers for 419 million users, The Next Web (5 September 2019).
  4. Kris Holt, Facebook’s latest leak includes data on millions of users, Engadget (5 September 2019).

Mozilla Firefox 69 is here to protect your data

Addressing growing data protection concerns, Mozilla Firefox will now block third-party tracking cookies and cryptominers through its Enhanced Tracking Protection feature. To avail of this feature, users will have to update to Firefox 69, which enforces stronger security and privacy options by default. The browser's Enhanced Tracking Protection will now remain turned on by default as part of the standard setting, though users will have the option to turn the feature off for particular websites. Mozilla claims that this update will not only restrict companies from building a user profile by tracking browsing behaviour, but will also improve the performance, user interface and battery life of systems running Windows 10/macOS.

Further Reading:

  1. Jessica Davies, What Firefox’s anti-tracking update signals about wider pivot to privacy trend, Digiday (5 September 2019).
  2. Jim Salter, Firefox is stepping up its blocking game, ArsTechnica (9 June 2019).
  3. Ankush Das, Great News! Firefox 69 Blocks Third Party Cookies, Autoplay Videos & Cryptominers by Default, It’s Foss (5 September 2019).
  4. Sean Hollister, Firefox’s latest version blocks third-party trackers by default for everyone, The Verge (3 September 2019).
  5. Shreya Ganguly, Firefox will now block third-party tracking cookies and cryptomining by default for all users, Medianama (4 September 2019).

Delhi Airport T3 terminal to use ‘Facial Recognition’ technology on a trial basis

Delhi airport will be starting a three-month trial of a facial recognition system at its T3 terminal, called the Biometric Enabled Seamless Travel experience (BEST). With this technology, a passenger's entry is automatically registered at various points such as check-in and security. Portuguese company Vision-Box has provided the technical and software support for the system. If the trial run is successful, the system will be officially incorporated; and though it is voluntary during the trial, the pertinent question of whether it will remain voluntary after official incorporation is still to be answered.

Further Reading:

  1. Soumyarendra Barik, Facial Recognition tech to debut at Delhi airport’s T3 terminal; on ‘trial basis’ for next three months, Medianama (6 September 2019).
  2. PTI, Delhi airport to start trial run of facial recognition system at T3 from Friday, Livemint (5 September 2019).
  3. Times Travel Editor, Delhi International Airport installs facial recognition system for a 3 month trial, Times Travel (6 September 2019).
  4. Renée Lynn Midrack, What is Facial Recognition, Lifewire (10 July 2019).
  5. Geoffrey A. Fowler, Don’t smile for surveillance: Why airport face scans are a privacy trap, The Washington Post (10 June 2019).

UK Court approves use of facial recognition systems by South Wales Police

In one of the first cases of its kind, a British court ruled that police use of live facial recognition systems is legal and does not violate privacy or human rights. The case was brought by Cardiff resident Ed Bridges, who alleged that the system had violated his right to privacy by recording him at least twice without permission, and sought a declaration that its use violated human rights, including the right to privacy. The court arrived at its decision after finding that “sufficient legal controls” were in place to prevent improper use of the technology, including the deletion of data unless it concerned a person identified from the watch list.

Further Reading:

  1. Adam Satariano, Police Use of Facial Recognition Is Accepted by British Court, New York Times (4 September 2019).
  2. Owen Bowcott, Police use of facial recognition is legal, Cardiff high court rules, The Guardian (4 September 2019).
  3. Lizzie Dearden, Police used facial recognition technology lawfully, High Court rules in landmark challenge, The Independent (4 September 2019).
  4. Donna Lu, UK court backs police use of face recognition, but fight isn’t over, New Scientist (4 September 2019).


Sahamati: Self Regulatory Organisation for Financial Data Sharing Ecosystem

Posted on September 6, 2019December 4, 2020 by Tech Law Forum @ NALSAR

This post, authored by Mr. Srikanth Lakshmanan, is part of TLF’s blog series on Account Aggregators. Other posts can be found here. 

Mr. Srikanth Lakshmanan is the founder of CashlessConsumer, a consumer collective working on digital payments to increase awareness, understand technology, represent consumers in digital payments ecosystem to voice perspectives, concerns with a goal of moving towards a fair cashless society with equitable rights. 


Article 13 of the EU Copyright Directive: A license to gag freedom of expression globally?

Posted on August 9, 2019August 4, 2019 by Tech Law Forum @ NALSAR

The following post has been authored by Bhavik Shukla, a fifth year student at National Law Institute University (NLIU) Bhopal. He is deeply interested in Intellectual Property Rights (IPR) law and Technology law. In this post, he examines the potential chilling effect of the EU Copyright Directive.

 

Freedom of speech and expression is the cornerstone of the European Union (“EU”) Member States; so much so that its censorship would be the death of the most coveted human right. Europe possesses the strongest and most institutionally developed structure for freedom of expression, through the European Convention on Human Rights (“ECHR”). In 1976, the European Court of Human Rights observed in Handyside v. United Kingdom that a “democratic society” could not exist without pluralism, tolerance and broadmindedness. However, the recently adopted EU Directive on Copyright in the Digital Single Market (“Copyright Directive”) seeks to alter this fundamental postulate of European society by introducing Article 13. Through this post, I intend to deal with the contentious aspect of Article 13 of the Copyright Directive, limited to its chilling impact on the freedom of expression. Subsequently, I shall elaborate on how the Copyright Directive possesses the ability to drive censorship globally.

Collateral censorship: Panacea for internet-related issues in the EU

The adoption of Article 13 of the Copyright Directive hints at the EU's implementation of a collateral censorship-based model. Collateral censorship occurs when a state holds one private party, “A”, liable for the speech of another private party, “B”. The problem with such a model is that it vests the power to censor content primarily in a private party, namely “A” in this case. The implementation of this model is known to have an adverse effect on the freedom of speech, and the adoption of the Copyright Directive has contributed to producing such an effect.

The Copyright Directive envisages a new concept of online content sharing service providers (“service providers”), which refers to a “provider… whose main purpose is to store and give access to the public to significant amount of protected subject-matter uploaded by its users…” Article 13(1) of the Copyright Directive states that such service providers shall perform an act of “communication to the public” as per the provisions of the Infosoc Directive. Further, Article 13(2a) provides that service providers shall ensure that “unauthorized protected works” shall not be made available. However, this Article also places service providers under an obligation to provide access to “non-infringing works” or “other protected subject matter”, including those covered by exceptions or limitations to copyright. The Copyright Directive’s scheme of collateral censorship is evident from the functions entrusted to the service providers, wherein they are expected to purge their networks and websites of unauthorized content transmitted or uploaded by third parties. A failure to do so would expose service providers to liability for infringement of the content owner’s right to communication to the public, as provided in the Infosoc Directive.

The implementation of a collateral censorship model will serve as a conduit for a crackdown on the freedom of expression. The reason emanates from the existence of certain content which necessarily falls within the grey area between legality and illegality. Stellar examples of such content are memes and parodies, and it is primarily in respect of such content that problems of censorship may arise. To bolster this argument, consider Facebook, the social media website which boasts 1.49 billion daily active users. As per an official report in 2013, users were uploading 350 million photos a day, a number that has risen exponentially since. When intermediaries like Facebook are faced with implementing the Copyright Directive, the sheer volume of data being uploaded or transmitted will necessarily require them to employ automated mechanisms for detecting and flagging infringing material. The accuracy of such software in detecting infringing content has been the major point of contention against its implementation. Even though content like memes and parodies may be flagged as infringing by such software, automated blocking of content is prohibited under Article 13(3) of the Copyright Directive. This raises the question of human review of purportedly infringing content. In this regard, first, it is impossible for any human agency to review such large tracts of data even after filtration by an automated system. Second, even where such content is somehow reviewed, a human agent may not be able to correctly decide on its legality.

This scenario will compel service providers to take down the scapegoats of content, memes and parodies, wherever they may even remotely expose them to liability. Such actions by service providers will certainly censor freedom of expression. Another problem arising from this framework is its adverse effect on net neutrality: entrusting service providers with blocking access to content may lead to indiscriminate blocking of certain types of content.

Though the Copyright Directive provides certain safeguards in this regard, they are latent and ineffective. Consider, for example, the “complaints and redress mechanism” provided by Article 13(2b) of the Copyright Directive. This mechanism offers recourse only after the actual takedown or blocking of access to content. This is problematic because users are either unaware that such mechanisms exist, do not have the requisite time and resources to prove the legality of their content, or are simply fed up with repeated takedowns. An easy way to understand these concerns is through YouTube's ongoing unjustified takedowns of content, which place content owners under the same burdens as expressed above. Regardless of the reason for content owners' inaction, the effect is censorship.

The EU Copyright Directive’s tryst with the world

John Perry Barlow stated in his Declaration of the Independence of Cyberspace that “Cyberspace does not lie within your borders”. This statement is true to a large extent: cyberspace and the internet do not lie within any country's borders; their existence is cross-border. Does this mean that the law in the EU affects the content we view in India? It certainly does!

The General Data Protection Regulation (“GDPR”) applies to countries beyond the EU. The global effect of the Copyright Directive is similar, as service providers do not distinguish European services from those of the rest of the world. It only makes sense for websites in this situation to adopt a mechanism which applies unconditionally to each user regardless of his/her location. This is the same line of reasoning that service providers adopted when they revised user and privacy policies in every country upon the introduction of the GDPR. Thus, the adoption of these stringent norms by service providers in all countries alike, owing to the omnipresence of internet-based applications, may lead to a global censorship motivated by European norms.

The UN Special Rapporteur had envisaged that Article 13 would have a chilling effect on the freedom of expression globally. Subsequent to the Directive’s adoption, the Polish government protested against its applicability before the CJEU on the ground that it would lead to unwarranted censorship. Such action is likely to be followed by dissenters of the Copyright Directive, namely Italy, Finland, Luxembourg and the Netherlands. In light of this fierce united front, hope hinges on these countries to prevent the implementation of censoring laws across the world.


Mackinnon’s “Consent of The Networked” Deconstruction (Part I)

Posted on July 7, 2019November 12, 2019 by Prateek Surisetti

SERIES INTRODUCTION

Rebecca MacKinnon’s “Consent of the Networked: The Worldwide Struggle for Internet Freedom” (2012) is an interesting read on online speech. Having read the book, I will be familiarizing readers with some of the themes discussed in it.

In Part I, we will discuss censorship in the context of authoritarian governments.

In Part II, we will be dealing with the practices of democratic governments vis-à-vis online speech.

In Part III, we shall discuss the influence of corporations on online speech.

Essentially, the discussion will revolve around the interactions between three stakeholders: netizens, corporations providing internet-based products, and governments (both autocratic and democratic). Each stakeholder has varied interests and agendas and works with or against the others depending on the situation.

Governments wish to control corporations’ online platforms to pursue political agendas and corporations wish to attract users and generate profits, while also having to acquiesce to government demands to access markets. The ensuing interactions, involving corporations and governments, affect netizens’ online civil liberties across the world.

PART I: AUTHORITARIAN GOVERNMENTS (THE CHINESE MODEL)

“Networked Authoritarianism” is the exercise of authoritarianism, by a government, through the control over the network used by the citizens. MacKinnon explains the phenomenon through an explanation of the Chinese government’s exercise of control over the Chinese networks.

Interestingly, much of the Chinese citizenry is unaware of the infamous Tiananmen Square protests. The government, with compliant corporates (corporations comply in order to access Chinese markets), works in an opaque manner to manipulate the information reaching the people. The people are not even aware that manipulation is taking place!

The government does allow discussion, but within the limits prescribed by it. This is the concept of “Authoritarian Deliberation”. Considerable discussion occurs on the “e-parliament” (a website where the Chinese public is allowed to make suggestions on issues of policy) and the Chinese government has stated that it cares about public opinion, but any discussion that could potentially lead to unrest is screened out. In other words, the government is engendering a false sense of freedom amongst its populace.

Now, let us have a look at the modus operandi of such Chinese censorship.

Modus Operandi

Firstly, the Chinese networks are connected to the global networks through 8 gateways. Each of these gateways contains data filters that block websites containing specific restricted keywords. As a slight aside, it is pertinent to note that western corporations, such as Forcepoint and Narus, provide software that assists authoritarian governments in censorship and surveillance.
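The gateway filtering described above can be pictured with a toy sketch. This is purely illustrative (the blocklist and the simple substring match are hypothetical); real national firewalls use far more sophisticated deep-packet inspection, DNS tampering and connection resets:

```python
# Toy illustration of keyword-based gateway filtering: a request is
# refused if its URL or page text contains any restricted keyword.

RESTRICTED_KEYWORDS = {"tiananmen", "protest"}  # hypothetical blocklist

def is_blocked(url: str, page_text: str = "") -> bool:
    # Normalize to lowercase and scan both the URL and the content.
    haystack = (url + " " + page_text).lower()
    return any(keyword in haystack for keyword in RESTRICTED_KEYWORDS)

assert is_blocked("https://example.org/tiananmen-square-history")
assert not is_blocked("https://example.org/weather")
```

The point of the sketch is that such filtering is blunt: it blocks by pattern, not by meaning, which is why it must be paired with the corporate self-regulation and astroturfing described below.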

Now, Chinese netizens can access global networks through certain technical means, but there is little incentive to do so, as the Chinese have their own government-compliant versions of Twitter, Facebook and Google (Weibo, RenRen & Kaixin001, and Baidu respectively), with which people are content. Given the size of the Chinese market, investors abound; consequently, there is no dearth of products.

Secondly, as mentioned earlier, the Chinese government forces corporations to manage their platforms in compliance with the government's standards. Content from offshore servers of non-compliant corporations is blocked by the data filters. But if a corporation intends to work in China, it will have to self-regulate and ensure that its platforms comply with the censorship policy.

Thirdly, in addition to censorship, the Chinese government also manipulates discussions through “Astroturfing”. Originally a marketing term, it refers to the practice of paying people a fee to propagate views beneficial to the payer. The 50 Cent Army (named for the fee paid per post) is the common term for those paid by the Chinese government.

Apart from astroturfing, there also exist people who voluntarily spread propaganda on the internet. While the Chinese government can disavow knowledge of their activities, it gives them special treatment to carry out their agendas.

Through the approach described above, the Chinese government has manipulated its populace with remarkable success. From this example we learn that mere access to the internet does not ensure political reform; what matters is the authoritarian government's ability to manipulate the networks. Other countries, too, have successfully prevented unrest by manipulating speech on their networks.

Censorship in Other Countries

Iran, too, has successfully manipulated networks. The Iranian government was able to restrict communications and debilitate the Green Movement, an uprising against the then-president. Even if the government is not actually monitoring communications, if enough people believe it is doing so, the government will have achieved its purpose.

The Russian government, instead of using online tools to restrict content, restricts speech through offline methods in the form of defamation laws and threat of physical consequences. Even the Chinese take offline retaliatory measures. We will discuss one such example (Shi Tao) in Part III.

Now, let us look at a few of the approaches or policies that democratic countries have adopted to tackle censorship in repressive regimes.

Approaches to Tackling Authoritarian Censorship

Initially, policies attempted to ensure that netizens could access an uncensored internet. Such access was expected to create political consciousness and, consequently, revolution against repressive regimes. Hence, government funding was aimed at circumvention technology that would help netizens access uncensored cyberspace. Ironically, while the public treasury is used to fund circumvention technology, American corporations aid censorship by providing censorship technology to authoritarian regimes.

But there are other approaches as well. Certain policy experts, who believe that free speech precedes democracy, favour encouraging citizens under repressive regimes to host and develop content. Advocates of this approach argue that it would be more effective at building communities of dissent than attempting to provide access to offshore content. Further, since the content is generated by citizens of the repressive state itself, this approach does not portray the U.S. as an enemy of the authoritarian state, leading to fewer complications.

Lastly, some experts have suggested that democratic countries should set their own houses in order instead of interfering with other regimes. Laws in even the most democratic of countries can be draconian: for instance, the U.K. was set to allow the disconnection of a user's internet access upon three copyright violations. Such laws serve as a justification for authoritarian regimes to censor.

Conclusion

Here, using Chinese censorship as an example, we have attempted to understand (a) the concepts of “networked authoritarianism” and “authoritarian deliberation”, (b) the online and offline methods of censorship employed by authoritarian governments (gateway regulation, corporate compliance, “astroturfing”, et cetera) and (c) approaches adopted by democracies to tackle censorship by repressive regimes.

In Part II, we will discuss the effects of actions by democratic governments on online speech.

 

Image taken from here.


The Dark Web: To Regulate Or Not To Regulate, That Is The Question

Posted on December 29, 2018December 29, 2018 by Shweta Rao

[Ed Note: In an interesting read, Shweta Rao of NALSAR University of Law brings us up to speed on the debate regarding regulation of the mysterious “dark web” and provides us with a possible way to proceed as far as this hidden part of the web is concerned.]

Human Traffickers, Whistleblowers, Pedophiles, Journalists and Lonely-Hearts Chat-room participants all find a home on the Dark Web, the underbelly of the World Wide Web that is inaccessible to the ordinary netizen.  The Dark Web is a small fraction of the Deep Web, a term it is often confused with, but the distinction between the two is important.

The Dark Web, unlike the Deep Web, is only accessible through anonymising tools, as distinguished from non-anonymous surface web gateways like Google, Bing etc. One such tool is The Onion Router (Tor), one of the most popular means of accessing the dark web, which derives its name from the similarity of the platform's multilayered encryption to the layers of an onion. Dark Web sites also require users to enter a unique Tor address, with an additional security layer of a password input. These access restrictions are what distinguish the Dark Web from the Deep Web, which may be accessed through Surface Web applications. Further, the Deep Web may, owing to its discreet nature, seem to occupy a fraction of the World Wide Web, when in actuality it is estimated to be 4000-5000 times larger than the Surface Web and hosts around 90% of the internet's web traffic. The Dark Web, in contrast, occupies a minuscule amount of space, with fewer than 45,000 Dark Web sites recorded in 2015. Thus, the difference between the Deep and Dark Web lies not in their respective content, but in the requirements and means of access to the two spaces, along with the quantity of web traffic they attract.
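The "onion" metaphor can be made concrete with a toy sketch. This is not real cryptography (Tor uses proper public-key circuit negotiation and per-hop symmetric ciphers; the XOR "cipher" and relay names here are purely illustrative), but it shows the core idea: the sender wraps a message in one layer per relay, and each relay peels exactly one layer:

```python
# Toy illustration of onion-style layered encryption (NOT real crypto).

def xor_layer(data: bytes, key: bytes) -> bytes:
    # XOR is symmetric: applying the same key twice recovers the data.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(message: bytes, relay_keys: list[bytes]) -> bytes:
    # Encrypt innermost layer first, so the first relay's layer is outermost.
    for key in reversed(relay_keys):
        message = xor_layer(message, key)
    return message

def peel(onion: bytes, relay_keys: list[bytes]) -> bytes:
    # Each relay, in path order, removes only its own layer.
    for key in relay_keys:
        onion = xor_layer(onion, key)
    return onion

keys = [b"entry-node", b"middle-node", b"exit-node"]  # hypothetical relays
onion = wrap(b"hello hidden service", keys)
assert peel(onion, keys) == b"hello hidden service"
```

Because no single relay can remove more than its own layer, no relay sees both who sent the message and what it says, which is what gives Tor users their anonymity.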

The Dark Web has existed nearly as long as the Internet itself, having begun as a parallel project to the US Department of Defense's (USDD's) 1960s ARPANET project. The USDD allowed the Dark Web to be publicly accessible via Tor so that it could mask its own communications. Essentially, if the Dark Web were used only for USDD communications, there would be no anonymity: anyone who made their way into the system would know that all communications were those of the USDD. By allowing the public to access it via Tor, the USDD could hide its communications within the stampede of general traffic passing through the network.

While the Internet became a household name by the late 90s, the Dark Web remained obscure until 2013, when it gained infamy following the arrest of Ross William Ulbricht (aka the Dread Pirate Roberts), the operator of Silk Road, a marketplace for illegal goods and services.

While fully regulating a structure such as the Dark Web is a near-impossible feat, this arrest pushed the previously obscure Dark Web into the spotlight, putting prosecutors and law enforcement agencies across the world on alert. This new-found attention to the workings of the Dark Web is the junction at which the debate over regulation policies emerges.

The debate on the surveillance of the Dark Web broadly has two branches. The first, which has gained force since the exposure of Silk Road, advocates more frequent and stricter probes into the activities of the Dark Web. The second weighs increased regulation against breaches of privacy, privacy being one of the main reasons for using tools such as Tor.

In order to understand the reasoning behind either branch's stance, it is essential to look at the breakup of the Dark Web and its various uses, each finding its place at a different point along a spectrum with legality and illegality as its extremes.

The Dark Web, as mentioned previously, occupies a fraction of the space on the Deep Web, with fewer than 50,000 websites currently functioning, and the number of fully active sites is even lower. Legal activities take up about 55.9% of the total space on the Dark Web, while the rest hosts illegal activities such as counterfeiting, child pornography, illegal arms dealing and drug peddling, among others. Activities such as whistleblowing and hacking, given their contextual, scenario-based character, do not allow themselves to be placed in either category and fall into a “grey area” of sorts.

With such a substantial share of the activity on the Dark Web being illegal, the call for increased regulation seems reasonable. However, regular residents of this corner of the internet often differ, and their objection hinges, as mentioned earlier, on the issue of privacy.

Privacy has become a buzzword across the globe in recent years, as various nations have had to re-evaluate the rights attached to their citizens' information amidst the boom of the data wars. From the General Data Protection Regulation (GDPR) in the EU to the Puttaswamy case in India, the Right to Privacy has been thrown into the spotlight, and its relevance only grows as corporations both large and small mine information from users across platforms. Privacy has thus become the need of the hour, and the privacy the Dark Web provides has been one of its biggest USPs. It has harboured anyone requiring the shield of privacy, including political whistleblowers who have released vital information on violations against citizens in tyrannical regimes and democracies alike. Edward Snowden, whose claim to infamy indeed centred on privacy and surveillance, has used and continues to use the Dark Web to shield his communications from his location in Moscow.

In the age of #FakeNews targeting the journalism community, protecting the private Tor gateways that many journalists use to secure their sensitive information seems paramount. But despite what the creators of Tor would like to believe, the bulk of the active traffic (which differs from the actual number of sites present) in the aforementioned “illegal” branch of the Dark Web is that of child pornography distribution.

However, this need not spell the end of the sphere of privacy that Tor creates within the Dark Web. By the same logic the USDD used, given that the heightened activity of child pornography and abuse sites is a known factor, authorities can single out threads of heightened activity within the Dark Web without compromising its integral cloak of privacy. The American FBI used this tactic successfully in the Playpen case, where it singled out the thread of rapid activity created by a website called Playpen, which had over 200,000 active accounts participating in the creation and viewing of child pornography. The FBI singled out the site's traffic due to its dynamic activity and, once the source of the activity was precisely determined, in an unprecedented move extracted the Playpen website from the Dark Web onto a federal server. It was then able to access the IP addresses of over 1,000 users, who were prosecuted, with the creator of the site receiving 30 years in prison. All of this was done without breaching the privacy of other Tor users.

Thus, whilst Hamlet's existential question may offer no middle ground, the status of regulation of the Dark Web could be settled by building on past precedent and employing better non-invasive surveillance methods, along with international cooperation, so as to respect its true intended purpose.


TechLaw Symposium at NALSAR University of Law, Hyderabad – Press Note

Posted on October 4, 2018December 4, 2020 by Tech Law Forum @ NALSAR

[Ed Note : The following press note has been authored by Shweta Rao and Arvind Pennathur from NALSAR University of Law. Do watch  this space for more details on the symposium!]

On the 9th of September, NALSAR University of Law's Tech Law Forum conducted its first-ever symposium, with packed panels discussing a variety of issues under the broad theme of the Right to Privacy. The symposium took place against the backdrop of the recent draft Data Protection Bill and the report released by the Srikrishna Committee.



© 2025 Tech Law Forum @ NALSAR | Powered by Minimalist Blog WordPress Theme