
Tech Law Forum @ NALSAR

A student-run group at NALSAR University of Law

Menu
  • Home
  • Newsletter Archives
  • Blog Series
  • Editors’ Picks
  • Write for us!
  • About Us

Category: Data Protection

Examining Artificial Intelligence and Privacy in the light of COVID-19

Posted on May 13, 2020 (updated November 1, 2020) by Tech Law Forum @ NALSAR

[This post has been authored by Suvam Kumar, a 3rd year student at National Law University, Jodhpur.]

The COVID-19 pandemic has exposed the frailty of mankind’s societies and systems. In spite of the tremendous progress made by humans in several fields of life, we have been rendered helpless by the rapid and uncontrolled spread of the coronavirus. In these crucial times, the role of Artificial Intelligence (“AI”) becomes very important, and countries like China, the USA, Canada, Australia, and India have leaned on AI to fight the pandemic. The use of AI has also been approved by the World Economic Forum (“WEF”), which has emphasized the role of AI as a panacea in fighting this pandemic. However, the widespread use of AI is not without its own challenges and risks. There are serious concerns regarding the application of AI in the health sector, especially during a pandemic like COVID-19; however, they can be mitigated by a legal regime that regulates AI effectively and conscientiously.

Read more

Metadata by TLF: Issue 9

Posted on May 9, 2020 (updated December 20, 2020) by Tech Law Forum @ NALSAR

Welcome to our fortnightly newsletter, where our reporters Kruttika Lokesh and Dhananjay Dhonchak put together handpicked stories from the world of tech law! You can find other issues here.

Zoom sued by shareholder for ‘overstating’ security claims

Read more

Metadata by TLF: Issue 8

Posted on May 9, 2020 (updated December 20, 2020) by Tech Law Forum @ NALSAR

Welcome to our fortnightly newsletter, where our reporters Kruttika Lokesh and Dhananjay Dhonchak put together handpicked stories from the world of tech law! You can find other issues here.

Supreme Court quashes RBI circular and permits cryptocurrency trading

Read more

Welcoming The Era of Technology Friendly Laws in India

Posted on January 2, 2020 (updated November 1, 2020) by Tech Law Forum @ NALSAR

This brief introduction to regulation of autonomous vehicles has been authored by Khushi Sharma and Aarushi Kapoor, second year students of Hidayatullah National Law University (HNLU), Raipur. [Ed. Note: This article was written before the 2019 Personal Data Protection Bill had been made public. Click here for the new Bill.]

India was the 7th largest manufacturer of commercial vehicles in 2017-18, and its automobile sector is the 4th largest in the world; both are key factors driving India’s economic growth. Technological advancement is both a cause and an effect of this growth. Innovation now rules India and the world to such an extent that the very idea of a car running by itself, with one just sitting back and relaxing, seems a reality. Driverless vehicles made their Indian debut at Defexpo 2016 in New Delhi, where Novus Drive, a driverless shuttle, was introduced. However, a recurring question is ‘Are we really ready yet?’

Read more

Building safe consumer data infrastructure in India: Account Aggregators in the financial sector (Part II)

Posted on December 30, 2019 (updated November 1, 2020) by Tech Law Forum @ NALSAR

TLF is proud to bring you a two-part guest post authored by Ms. Malavika Raghavan, Head, Future of Finance Initiative and Ms. Anubhutie Singh, Policy Analyst, Future of Finance Initiative at Dvara Research. This is the second part of a two-part series that undertakes an analysis of the technical standards and specifications present across publicly available documents on Account Aggregators. Previously, the authors looked at the motivations for building AAs and some consumer protection concerns that emerge in the Indian context.

Account Aggregators (AA) appear to be an exciting new infrastructure for those who want to enable greater data sharing in the Indian financial sector. The key data being shared will be extensive personal information about individuals like us – detailing our most intimate and sensitive financial transactions and potentially non-financial data too. This places individuals at the heart of these technical systems. Should the systems be breached, misused or otherwise exposed to unauthorised access, the immediate casualty will be the privacy of the people whose information is compromised. Of course, this will also have an impact on data quality across the financial sector.

Read more

Building safe consumer data infrastructure in India: Account Aggregators in the financial sector (Part I)

Posted on December 30, 2019 (updated August 11, 2022) by Tech Law Forum @ NALSAR

TLF is proud to bring you a two-part guest post authored by Ms. Malavika Raghavan, Head, Future of Finance Initiative and Ms. Anubhutie Singh, Policy Analyst, Future of Finance Initiative at Dvara Research. Following is the first part of a two-part series that undertakes an analysis of the Account Aggregator system. Click here for the second part.

The Reserve Bank of India (RBI) released Master Directions on Non-Banking Financial Companies – Account Aggregators (Master Directions) in September 2016, and licences for India’s first Account Aggregators (AAs) were issued last year. From these guidelines and related documents, we understand that the purpose of Account Aggregator (AA) is to collect and share:

Read more

Data Protection of Deceased Individuals: The Legal Quandary

Posted on December 5, 2019 (updated December 13, 2019) by Tech Law Forum @ NALSAR

This post has been authored by Purbasha Panda and Lokesh Mewara, fourth and fifth years from NLU Ranchi. It discusses the data protection laws for deceased individuals, and the legal justifications for post-mortem privacy. 

Post-mortem privacy is defined as the right of a person to preserve and control what constitutes his or her reputation after death. It is inherently linked with the idea of dignity after death. One school of thought questions how there can be a threat to the reputation of a person who no longer exists. Another school of thought argues that when a person’s public persona or reputation is harmed after death, it may not be the dead person who is defamed, but the ante-mortem person could be. A further question arises: when a person dies, do the surviving interests of the dead person become the interests of others, are his interests alone protected, or both?

Private law justification of post-mortem privacy

There is an English principle, “Actio personalis moritur cum persona”, which means that a personal cause of action dies with the person, reflecting a negative attitude towards post-mortem claims. However, certain EU states following the civilian tradition have allowed protection of the data of the deceased. Article 40(1) of the French Data Protection Act regulates the processing of data after an individual’s death. As per the article, individuals can give instructions to data controllers providing general or specific indications about the retention, erasure and communication of their personal data after their death.

In the US case of In Re Ellsworth[1], Yahoo, as a webmail provider, refused the surviving family of a US marine killed in action access to his account. Yahoo argued that the company’s privacy policy aims to protect the privacy of third parties who have interacted with the deceased individual’s account. The family, on the other hand, argued that they should be able to see the emails he sent to them, as well as the emails he sent to others, since Yahoo follows a policy of deleting an account once the account user dies; pursuant to this policy, there was an imminent danger that the emails would be lost forever. The court allowed Yahoo to stick to its privacy policy: it did not allow login and password access to the deceased individual’s account, but instead gave the alternative of providing the family with a CD containing copies of the emails in the dead person’s account. The ratio in this case raises certain questions as to where proprietary rights in the content of an email are placed. Is it a transfer of property rights, or is there some other mode of transferring the content of an email to the legal heirs? One view is that, since the deceased is the author of those emails, copyright could vest with him and could subsequently be transferred to his legal heirs, giving them a right to approach the court to access the emails. Another view is that Yahoo was vested with proprietary rights in the emails, which could be made available to the family members on a court order. There are, however, certain practical problems with granting rights over the content of an email.

Edwards-Stuart J, in Fairstar Heavy Transport N.V. v. Adkins[2], considered whether there could be a right of property in the contents of an email. The case dealt with a request by an employer to access the content of emails, relating to the business affairs of the company, held on the personal computer of its ex-employee. The question that came before the Queen’s Bench was whether the claimant had any proprietary rights over the content of the emails. The court held that the contents of an email cannot be the subject of proprietary rights, and that the employer therefore had no enforceable proprietary claim over them. In deciding whether such a proprietary right could exist, the court identified five possible ways of construing it. The first is that title to the content of the email remains throughout with the creator or his principal. The second is that, upon an email being sent, title to the content passes to the recipient (drawing an analogy with the passing of title in a letter under the principles of transfer of property). The third is that the recipient of an email has a licence to use the content for any legitimate purpose consistent with the circumstances in which it was sent. The fourth is that the sender of the email has a licence to retain the content and use it for any legitimate purpose. The fifth is that title to the content of the email is shared between the sender and all the recipients in the chain. The court then analysed each of these methods of construing a possible right of property over the information.

The court held that the implication of the first method would be that the creator of an email could assert his title to the content against the rest of the world. The court opined that this would be strange and would have far-reaching, impractical consequences. If title to the content of an email remained with the creator, such title would have to be exercisable in all its forms, which means it should also allow the creator to ask recipients down the chain to delete the content of the email. Such an exercise of title is neither feasible nor practical, making this option quite redundant. The court also rejected the second method, on the ground that if an email were forwarded to multiple recipients, the question of who held title over its content at any given point of time would be extremely confusing. The third and fourth methods conflate the existence of a proprietary right over the content of an email with the nature of the use of that information, that is, whether it is used for legitimate or illegitimate purposes; the court held that the nature of the use of information should not be an important consideration in exercising a proprietary right of control. The fifth option was also rejected on the ground of compelling impracticality.

The advent of digital wills in India: the future of data protection for deceased individuals?

If we look at the Information Technology Act, 2000, Section 1(4) of the IT Act read with its First Schedule provides that the IT Act does not apply to a will as defined under clause (h) of Section 2 of the Indian Succession Act, 1925, including any other testamentary disposition. Among foreign jurisdictions, the most discussed legislation on digital wills is the “Fiduciary Access to Digital Assets and Digital Accounts Act”. This legislation was enacted by Delaware, which became the first state in the United States to give the executor of a will the same authority over digital assets and digital accounts as over physical assets. The 2016 Delaware Code revolves around the concept of a ‘digital asset’ and the idea of a ‘fiduciary’ as someone who can be trusted with the digital asset. The legislation defines a “digital asset” as data, text, emails, audio, video, images, sounds, social media content, health care records, health insurance records, computer source codes, computer programs and software, user names and passwords created, generated, sent, communicated, shared, received or stored by electronic means on a digital device. It also defines a “fiduciary” as a personal representative appointed by a Register of Wills or an agent under a durable personal power of attorney. It provides that a fiduciary may exercise control over any and all rights in the digital assets and digital accounts of an account holder, to the extent permitted under state or federal law.

Data Protection Bill

The Data Protection Bill, 2018 provides for the “right to be forgotten” under Section 27. It refers to the ability of individuals to limit, de-link, delete, or correct the disclosure of personal information on the internet that is misleading, embarrassing, irrelevant, or anachronistic. When an individual passes away, however, his sensitive personal data remains online, and in the absence of regulation his rights can be infringed by a data fiduciary any number of times without remedy, since the Bill does not take the case of deceased individuals into consideration. The dynamic nature of data is such that it is not deleted on its own once the person is dead. The other provisions, which apply to living individuals, could be applied in cases of deceased individuals as well. Under Section 10 of the Personal Data Protection Bill, 2018, the data fiduciary can store data only for a limited period of time and can use the information only for the purpose for which it was collected. If the data principal wants to amend or remove any information, he has the right to do so, and the data fiduciary cannot, without the backing of a law, prevent him from doing so. The current data protection regime fails to recognize and fulfil the need for protection of these digital rights. It is pertinent to consider whether the concepts of a “digital asset” and a “fiduciary”, as present in the Delaware legislation, can be emulated in India. Protection of data after death involves questions of digital succession as well as intellectual property rights, which are inheritable, and this has to be taken into consideration while framing legislation on post-mortem privacy. The number of internet users in India was estimated at 566 million as of December 2018, registering annual growth of 18%. Considering this growth in internet use, it is pertinent to have a proper legal framework for the protection of the data of deceased individuals.

[1] In re Estate of Ellsworth, No. 2005-296, 651-DE (Mich. Prob. Ct. May 11, 2005).

[2] Fairstar Heavy Transport NV v. Adkins, [2012] EWHC 2952 (TCC).

Read more

Metadata by TLF: Issue 7

Posted on November 14, 2019 (updated December 20, 2020) by Tech Law Forum @ NALSAR

Welcome to our fortnightly newsletter, where our Editors put together handpicked stories from the world of tech law! You can find other issues here.

Israeli spyware ‘Pegasus’ used to snoop on Indian activists, journalists, lawyers

In a startling revelation, the Facebook-owned messaging app WhatsApp revealed that a spyware known as ‘Pegasus’ has been used to target and surveil Indian activists and journalists. The revelation came to light after WhatsApp filed a lawsuit against the Israeli NSO Group, accusing it of using servers located in the US and elsewhere to send malware to approximately 1,400 mobile phones and devices. For its part, the NSO Group has consistently claimed that it sells its software only to government agencies and that it is not used to target particular subjects. The Indian government sought a detailed reply from WhatsApp but has expressed dissatisfaction with the response received, with the Ministry of Electronics and Information Technology stating that the reply has “certain gaps” which need to be further investigated.

Further reading:

  1. Sukanya Shantha, Indian Activists, Lawyers Were ‘Targeted’ Using Israeli Spyware Pegasus, The Wire (31 October 2019).
  2. Seema Chishti, WhatsApp confirms: Israeli spyware was used to snoop on Indian journalists, activists, The Indian Express (1 November 2019).
  3. Aditi Agrawal, Home Ministry gives no information to RTI asking if it bought Pegasus spyware, Medianama (1 November 2019).
  4. Shruti Dhapola, Explained: What is Israeli spyware Pegasus, which carried out surveillance via WhatsApp?, The Indian Express (2 November 2019).
  5. Akshita Saxena, Pegasus Surveillance: All You Want To Know About The Whatsapp Suit In US Against Israeli Spy Firm [Read Complaint], LiveLaw (12 November 2019).

RBI raises concerns over WhatsApp Pay

Adding to WhatsApp’s woes in India just after the Pegasus spyware incident, the RBI has asked the National Payments Corporation of India (NPCI) not to permit WhatsApp to go ahead with the full rollout of its payment service, WhatsApp Pay. The central bank has expressed concerns over WhatsApp’s non-compliance with data processing regulations, as current rules allow data processing outside India only on the condition that the data is brought back to servers located in the country, with no copies left on foreign servers.

Further Reading:

  1. Karan Choudhury & Neha Alawadhi, WhatsApp Pay clearance: RBI raises data localisation concerns with NPCI, Business Standard (7 November 2019).
  2. Aditi Agarwal, ‘No payment services on WhatsApp without data localisation’, RBI to SC, Medianama (9 October 2019).
  3. Sujata Sangwan, WhatsApp can’t start payments business in India, YOURSTORY (9 November, 2019).
  4. Yatti Soni, WhatsApp Payments India Launch May Get Delayed Over Data Localisation Concerns, Inc42 (9 October 2019).
  5. Priyanka Pani, Bleak future for messaging app WhatsApp’s payment future in India, IBS Intelligence (9 November 2019).

Kenya passes new Data Protection Law

The Kenyan President, Uhuru Kenyatta, recently approved a new data protection law in conformity with the standards set by the European Union. The new law was passed after it was found that existing data protection rules were not keeping pace with growing investments from firms such as Safaricom and Amazon. There was growing concern that tech giants such as Facebook and Google would be able to collect and utilise data across the African continent without any restrictions and consequently violate the privacy of citizens. The new law places specific restrictions on the manner in which personally identifiable data can be handled by the government, companies and individuals, and violations can attract fines of up to three million shillings or prison sentences.

Further reading:

  1. Duncan Miriri, Kenya Passes Data Protection Law Crucial for Tech Investments, Reuters (8 November 2019).
  2. Yomi Kazeem, Kenya’s Stepping Up Its Citizens’ Digital Security with a New EU-Inspired Data Protection Law, Quartz Africa (12 November 2019).
  3. Kenn Abuya, The Data Protection Bill 2019 is Now Law. Here is What that Means for Kenyans, Techweez (8 November 2019).
  4. Kenya Adds New Data Regulations to Encourage Foreign Tech Entrants, Pymnts (10 November 2019).

Google gains access to healthcare data of millions through ‘Project Nightingale’

Google has been found to have gained access to the healthcare data of millions through its partnership with the healthcare firm Ascension. The venture, named ‘Project Nightingale’, allows Google to access health records, names and addresses without informing patients, in addition to other sensitive data such as lab results, diagnoses and records of hospitalisation. Neither doctors nor patients need to be told that Google can access the information, though the company has defended itself by stating that the deal amounts to “standard practice”. The firm has also stated that it does not link patient data with its own data repositories; however, this has not stopped individuals and rights groups from raising privacy concerns.

Further reading:

  1. Trisha Jalan, Google’s Project Nightingale collects millions of Americans’ health records, Medianama (12 November 2019).
  2. Ed Pilkington, Google’s secret cache of medical data includes names and full details of millions – whistleblower, The Guardian (12 November 2019).
  3. James Vincent, The problem with Google’s health care ambitions is that no one knows where they end, The Verge (12 November 2019).
  4. Rob Copeland & Sarah E. Needleman, Google’s ‘Project Nightingale’ Triggers Federal Inquiry, Wall Street Journal (12 November 2019).

Law professor files first ever lawsuit against facial recognition in China

Law professor Guo Bing sued the Hangzhou Safari Park after it suddenly made facial recognition registration a mandatory requirement for visitor entry. The park had previously used fingerprint recognition to allow entry; it switched to facial recognition as part of the Chinese government’s aggressive rollout of the technology, meant to boost security and enhance consumer convenience. While it has been speculated that the lawsuit might be dismissed if pursued, it has stirred conversations among citizens over privacy and surveillance issues, which it is hoped will result in reform of the country’s existing internet laws.

Further reading:

  1. Xue Yujie, Chinese Professor Files Landmark Suit Against Facial Recognition, Sixth Tone (4 November 2019).
  2. Michael Standaert, China wildlife park sued for forcing visitors to submit to facial recognition scan, The Guardian (4 November 2019).
  3. Kerry Allen, China facial recognition: Law professor sues wildlife park, BBC (8 November 2019).
  4. Rita Liao, China Roundup: facial recognition lawsuit and cashless payments for foreigners, TechCrunch (10 November 2019).

Twitter to ban all political advertising

Twitter has decided to ban all political advertising, in a move that increases pressure on Facebook over its controversial stance of allowing politicians to advertise false statements. The policy was announced via CEO Jack Dorsey’s account on Wednesday and will apply to all ads relating to elections and associated political issues. However, the move may prove to have only symbolic impact, as political ads on Twitter are just a fraction of those on Facebook in terms of reach and impact.

Further reading:

  1. Julia Carrie Wong, Twitter to ban all political advertising, raising pressure on Facebook, The Guardian (30 October 2019).
  2. Makena Kelly, Twitter will ban all political advertising starting in November, The Verge (30 October 2019).
  3. Amol Rajan, Twitter to ban all political advertising, BBC (31 October 2019).
  4. Alex Kantrowitz, Twitter Is Banning Political Ads. But It Will Allow Those That Don’t Mention Candidates Or Bills., BuzzFeed News (11 November 2019).

Read more

Standardizing the Data Economy

Posted on October 17, 2019 (updated December 13, 2019) by Tech Law Forum @ NALSAR

This piece has been authored by Namratha Murugeshan, a final year student at NALSAR University of Law and member of the Tech Law Forum.

In 2006, Clive Humby, a British mathematician, said with incredible foresight that “data is the new oil”. Fast forward to 2019, and we see how data has been singularly responsible for big-tech companies approaching, and in some cases surpassing, the trillion-dollar valuation mark. The ‘big 4’ tech companies, Google, Apple, Facebook and Amazon, have incredibly large reserves of data, both in terms of data collection (owing to the sheer number of users each company retains) and in terms of access to the data collected through this usage. With an increasing number of applications and avenues for data use, the need to standardize the data economy manifests itself strongly, with more countries recognizing the need to have specific laws concerning data.

What is standardization?

Standards may be defined as technical rules and regulations that ensure the smooth working of an economy. They are required to increase compatibility and interoperability as they set up the framework within which agents must work. With every new technology that is invented the question arises as to how it fits with existing technologies. This question is addressed by standardization. By determining the requirements to be met for safety, quality, interoperability etc., standards establish the molds in which the newer technologies must fit in. Standardization is one of the key reasons for the success of industrialization. Associations of standardization have helped economies function by assuring consumers that the products being purchased meet a certain level of quality. The ISO (International Standards Organization), BIS (Bureau of Indian Standards), SCC (Standards Council of Canada), BSI (British Standards Institute) are examples of highly visible organisations that stamp their seal of approval on products that meet the publicly set level of requirements as per their regulations. There are further standard-setting associations that specifically look into the regulation of safety and usability of certain products, such as food safety, electronics, automobiles etc. These standards are deliberated upon in detail and are based on a discussion with sectoral players, users, the government and other interested parties. Given that they are generally arrived at based on a consensus, the parties involved are in a position to benefit by working within the system.

Standards for the data economy

Currently, the data economy functions without much regulation. Apart from laws on data protection and a few other regulations concerning storage, data itself remains an under-regulated commodity. While multiple jurisdictions are recognizing the need to have laws concerning data usage, collection and storage, it is safe to say that the legal world still needs to catch up.

In this scenario, standardization provides a useful solution, as it seeks to ensure compliance by emphasizing mutual benefit, as opposed to laws which penalize non-adherence. A market player in the data economy is bound to benefit from standardization, as they have readily accessible information regarding the compliance standards for the technology they are creating. By standardizing methods for the collection, use, storage and sharing of data, the market becomes more open because of the increased availability of information, which benefits players by removing entry barriers. Additionally, a standard mark pertaining to data collection and usage gives consumers the assurance that the data being shared will be used in a safe and quality-tested manner, thereby increasing their trust. Demand and supply tend to match as there is information symmetry, in the form of known standards, between the supplier and consumer of data.

As per Rational Choice theory, an agent in the economy who has access to adequate information (such as an understanding of costs and benefits, and of the existence of alternatives) and who acts on the basis of self-interest will pick the available choice that maximizes their gains. On this understanding, an agent in the data economy would gain more from increased standardization, as it would create avenues for access and usage in a market that is currently heading towards an oligopoly.

How can the data economy be standardized?

The internet has revolutionized the manner in which we share data and has phenomenally increased the amount of data available. Anyone with access to the internet can deploy any sort of data onto it – be it an app, a website, visual media, etc. With internet access coming to be seen as an almost essential commodity, the number of users and of devices connected to the Internet will continue to grow. Big Data remained a buzzword for a good part of this decade (the 2010s), and with Big Data getting even bigger, transparency is often compromised as a result. Users are generally unaware of how the data collected from them is stored and used, or of who has access to it. Although the terms and conditions governing data collection sometimes specify these things, they are overlooked more often than not, with the result that users remain in the dark.

There are 3 main areas where standardization would help the data economy –

  1. Data Collection
  2. Data Access
  3. Data Analytics


  1. Data Collection – Standardizing the process of data collection has both a supply-side and a demand-side benefit. On the supply side, the collection of data across various platforms such as social media, personal-use devices, networking devices etc. would be streamlined based on the purpose for which the data is being harvested. Simpler language in terms and conditions and broad specification of the categories of data collected would help the user make an informed choice about whether they want to allow data collection. This would also simplify seeking permissions from the user, by categorizing data collection and making those categories known to the user. On the demand side, streamlined data collection would help with accumulating the high-quality data required for specific uses by those collecting it. It would also make for effective compliance with purpose limitation, as required by a significant number of data protection laws across the globe. Purpose limitation is a two-element principle: data must be collected from a user for “explicit, specified and legitimate” purposes only, and data should be processed and used only in a manner that is compatible with the purpose for which it was collected. Standardization helps here because once data providers are aware of how their data is going to be used, they can make a legitimate claim to check its usage by data collectors and seek stricter compliance (see the illustrative sketch after this list).


  2. Data Access – Standardizing data access would go a long way in breaking down the oligopoly of the four big tech companies over data by creating mechanisms for access to it. As of now, there is no simple method for data sharing across databases and amongst industry players. With the monetization of data rising with increasing fervor, access and exchange will be crucial to ensure that the data economy does not stagnate or have exceedingly high barriers to entry. Further, by setting standards for access to data, stakeholders will be able to participate in discussions regarding the architecture of data access.


  3. Data Analytics – This is the domain that remains in the exclusive control of big tech companies. While an increasing number of entities are adopting data analytics, big tech companies have access to enormous amounts of data that has given them a head start. Deep Blue, Alexa and Siri are examples of the outcomes of data analytics by IBM, Amazon and Apple respectively. Data analytics is the categorization and processing of collected data, and involves putting the data resource to use to create newer technologies that cater to the needs of people. It requires investment that is often significantly beyond the reach of the general population, yet it is extremely important to ensure that the data economy survives. The consistent search for the next big thing in data analytics has so far given us Big Data, Artificial Intelligence and Machine Learning (a subset of AI), indicating that investments in data collection and processing pay off. Further, data analytics has a larger implication on how we work and on what aspects of our life we let technology take over. The search for smarter technologies and algorithms will ensure that the data economy thrives, and will consequently have an impact on the market economy. Standardization of this infrastructure would ensure fairer norms for access to and usage of collected data.
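To make the purpose-limitation point under Data Collection above a little more concrete, here is a minimal, purely illustrative sketch in Python. It is not drawn from any statute, standard or existing library; the ConsentRecord structure, its field names and the compatibility check are all hypothetical, and the compatibility test is deliberately reduced to simple set membership.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class ConsentRecord:
    """Hypothetical record fixing the purposes declared at collection time."""
    user_id: str
    declared_purposes: frozenset  # the "explicit, specified and legitimate" purposes
    collected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def permits(self, processing_purpose: str) -> bool:
        # Second element of the principle: later processing must be compatible
        # with (here, simply contained in) the purposes declared at collection.
        return processing_purpose in self.declared_purposes


def process(record: ConsentRecord, purpose: str) -> None:
    if not record.permits(purpose):
        # A real compliance system would log and escalate this, not just raise.
        raise PermissionError(f"purpose '{purpose}' not covered by consent of {record.user_id}")
    print(f"processing data of {record.user_id} for '{purpose}'")


if __name__ == "__main__":
    consent = ConsentRecord("user-42", frozenset({"credit_scoring", "fraud_detection"}))
    process(consent, "fraud_detection")            # allowed: declared at collection
    try:
        process(consent, "targeted_advertising")   # rejected: incompatible purpose
    except PermissionError as err:
        print(err)
```

Real-world compatibility tests are of course richer than set membership (an assessment of compatible use rather than an exact match), but the overall shape, declaring purposes at collection and checking every use against that declaration, is the part that standardization could make uniform across data collectors.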

With the increasing application of processed information to solve our everyday problems, the data economy is currently booming; however, large parts of this economy are controlled by a limited number of players. Standardization in this field would ensure that we move towards increased competition instead of a data oligopoly, which will ultimately lead to the faster and healthier growth of the data economy.

Read more


Recent Posts

  • The Fate of Section 230 vis-a-vis Gonzalez v. Google: A Case of Looming Legal Liability
  • Paid News Conundrum – Right to fair dealing infringed?
  • Chronicles of AI: Blurred Lines of Legality and Artists’ Right To Sue in Prospect of AI Copyright Infringement
  • Dali v. Dall-E: The Emerging Trend of AI-generated Art
  • BBC Documentary Ban: Yet Another Example of the Government’s Abuse of its Emergency Powers
  • A Game Not Played Well: A Critical Analysis of The Draft Amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
  • The Conundrum over the legal status of search engines in India: Whether they are Significant Social Media Intermediaries under IT Rules, 2021? (Part II)
  • The Conundrum over the legal status of search engines in India: Whether they are Significant Social Media Intermediaries under IT Rules, 2021? (Part I)
  • Lawtomation: ChatGPT and the Legal Industry (Part II)

Categories

  • 101s
  • 3D Printing
  • Aadhar
  • Account Aggregators
  • Antitrust
  • Artificial Intelligence
  • Bitcoins
  • Blockchain
  • Blog Series
  • Bots
  • Broadcasting
  • Censorship
  • Collaboration with r – TLP
  • Convergence
  • Copyright
  • Criminal Law
  • Cryptocurrency
  • Data Protection
  • Digital Piracy
  • E-Commerce
  • Editors' Picks
  • Evidence
  • Feminist Perspectives
  • Finance
  • Freedom of Speech
  • GDPR
  • Insurance
  • Intellectual Property
  • Intermediary Liability
  • Internet Broadcasting
  • Internet Freedoms
  • Internet Governance
  • Internet Jurisdiction
  • Internet of Things
  • Internet Security
  • Internet Shutdowns
  • Labour
  • Licensing
  • Media Law
  • Medical Research
  • Network Neutrality
  • Newsletter
  • Online Gaming
  • Open Access
  • Open Source
  • Others
  • OTT
  • Personal Data Protection Bill
  • Press Notes
  • Privacy
  • Recent News
  • Regulation
  • Right to be Forgotten
  • Right to Privacy
  • Social Media
  • Surveillance
  • Taxation
  • Technology
  • TLF Ed Board Test 2018-2019
  • TLF Editorial Board Test 2016
  • TLF Editorial Board Test 2019-2020
  • TLF Editorial Board Test 2020-2021
  • TLF Editorial Board Test 2021-2022
  • TLF Explainers
  • TLF Updates
  • Uncategorized
  • Virtual Reality

Tags

AI Amazon Antitrust Artificial Intelligence Chilling Effect Comparative Competition Copyright copyright act Criminal Law Cryptocurrency data data protection Data Retention e-commerce European Union Facebook facial recognition financial information Freedom of Speech Google India Intellectual Property Intermediaries Intermediary Liability internet Internet Regulation Internet Rights IPR Media Law News Newsletter OTT Privacy RBI Regulation Right to Privacy Social Media Surveillance technology The Future of Tech TRAI Twitter Uber WhatsApp

© 2025 Tech Law Forum @ NALSAR | Powered by Minimalist Blog WordPress Theme