Tech Law Forum @ NALSAR

A student-run group at NALSAR University of Law


A Critical Analysis of the Publicly Available Data Exemption in the Digital Personal Data Protection Act

Posted on December 18, 2025 by Tech Law Forum NALSAR

[This article has been co-authored by Arpanjot Kaur and Shubham Thakare, third-year students at the National Law School of India University (NLSIU), Bengaluru. It critically examines Section 3(c)(ii) of India’s Digital Personal Data Protection Act, which exempts “publicly available” personal data from protection, arguing that its vague language creates a massive loophole that undermines the Act’s purpose. The authors contend that this blanket exemption diverges from international standards and fails the test established in the Puttaswamy judgment, particularly on grounds of legality and proportionality.]

Introduction: A Constitutional Promise Undone

The Digital Personal Data Protection Act, 2023 (“DPDPA”) arrived with the weight of constitutional expectation. The law was enacted nearly six years after the Supreme Court’s nine-judge bench in Justice K.S. Puttaswamy affirmed the right to privacy as a fundamental right under Article 21. The DPDPA was thus meant to be the legislative architecture that would give this right practical effect. The Puttaswamy judgment conceptualised privacy as an essential condition for human dignity, autonomy, and the freedom to shape one’s identity in a world increasingly mediated by data. It recognised that data privacy is not a peripheral concern but is central to democratic life, and warned that unchecked data processing can create power asymmetries that chill freedom of speech and enable pervasive surveillance. To guard against these harms, the Court imposed a positive obligation on the state to enact and enforce effective data protection legislation.

The Court also held that any such law must meet a three-part test when it imposes a permissible restriction on privacy: first, the restriction must be grounded in law; second, it must pursue a legitimate state aim; and third, it must satisfy the doctrine of proportionality. Proportionality is the most exacting of these requirements, since it entails a further multilevel inquiry: the state’s action must be necessary, must be the least intrusive means available, and must strike a fair balance between its objectives and the rights of the individual. It is against this rigorous constitutional standard that the DPDPA must be measured.

In this context, we argue that a single, deceptively simple provision within the Act, §3(c)(ii), subverts the statute’s purpose and fails the three-step constitutional standard set by Puttaswamy. By creating a blanket exemption for “publicly available” personal data, the Act carves out a vast and lawless category of data to which the principles of data protection do not apply. We develop this argument through three lines of inquiry. First, a close reading of the provision exposes how the vagueness of its language creates space for troubling consequences. Second, we situate the DPDPA’s approach within the broader field of international data protection law and show that permitting such an exemption makes it a radical outlier. Third, we apply the Puttaswamy test to argue that §3(c)(ii) is constitutionally untenable, since it creates a paradox in which a statute meant to protect privacy ends up legally sanctioning its forfeiture.

Textual Indeterminacy and its Normative Consequences

A close look at the text of §3(c)(ii) shows that it is rife with ambiguities that could undermine the Act’s entire purpose of protecting personal data. Its imprecise language provides a veneer of reasonableness while enabling expansive and harmful data processing practices.

The first limb of the exemption refers to personal data “made publicly available by the Data Principal.” At first glance, this seems to apply to voluntary and intentional acts, such as creating a public social media profile or writing a blog under one’s own name (just as we are doing right now). In such cases, it might be reasonable for the platform hosting that content to use the data in limited, functional ways, such as displaying posts or enabling searches. However, digital services today plainly use the data of users well beyond such functional purposes, a phenomenon often described as surveillance capitalism. The Cambridge Analytica scandal of 2016 showed clearly how this can play out. Millions of Facebook users shared their data in what felt like ordinary, everyday ways, such as posting updates for friends or answering a light-hearted personality quiz. What they did not know was that this information was being quietly harvested and repurposed to build detailed psychological profiles. Those profiles were then sold to political campaigns and used to target voters, all without the knowledge or consent of the people who had originally shared their data.

This phenomenon of using data for unrelated purposes is known as “context collapse” in the digital sphere. Information shared in one context, for a specific purpose and intended for a particular audience (e.g., a tweet to friends, a public blog post on a niche topic), can be stripped of that context when scraped and aggregated by data brokers. Obviously, an individual who posts their private images and opinions on a public forum does not reasonably expect that data to be used in voter targeting.

These examples of context collapse bring to light a flaw in the exemption’s drafting. The first limb refers categorically to personal data made “publicly available”, without any qualification. Such broad wording may give data fiduciaries a green light to use publicly shared data for unrelated purposes, as happened in the Cambridge Analytica scandal. The text of this limb thus makes the mistake of treating all public disclosures as equivalent.

The second limb, exempting data “caused to be made publicly available” by the data principal, is even more perilous. The Act does not define what it means to “cause” data to become public. Does it require a direct act, or is a passive or incidental role sufficient? Must there be intent (mens rea) or knowledge? To understand how ambiguous this word is, consider a scenario where an individual is tagged in a public photo by a friend, or their name appears in the public minutes of a local community meeting they attended. Have they “caused” this data to be made public? Without a clear threshold for agency or intent, this phrase grants data fiduciaries almost unlimited discretion to claim data as exempt. It creates a perverse incentive for entities to engineer situations where individuals indirectly contribute to the public disclosure of their data, stripping it of protection even against their wishes.

The third limb exempts data made public under a legal obligation. This covers information in public records such as property registries, corporate filings, and court documents. Such disclosures are mandated by law for specific purposes of transparency and accountability. The exemption, however, severs the link between the purpose of disclosure and the legality of subsequent processing. The danger of this severance can be seen in what happened with U.S. land records. These records were digitised for efficiency and convenience; but instead of serving their function as a centuries-old notice system, they turned into a source of commodified personal data, bought and sold by commercial firms for marketing and profiling purposes. While this third limb understandably aims to enhance transparency, it also erodes privacy and exposes individuals to risks such as identity theft and targeted attacks.

The cumulative effect of these shortcomings is the creation of a large class of personal data that exists entirely outside the purview of an Act that paradoxically aims to protect personal data. Once data is deemed publicly available, the individual loses all statutory rights under the DPDPA, including the rights to access, correct, or erase their data, and the right to seek redress for its misuse. The resulting chilling effect on speech and association cannot be overstated, as individuals may be left with no option but to withdraw from public digital life to avoid the risk of losing control over their personal information.

A Radical Outlier in International Data Protection

A close look at data protection laws in other jurisdictions makes it evident that the DPDPA’s treatment of publicly available data positions India as an outlier. While data regulations around the world recognise that personal data remains protected even when it is shared in the public domain, the DPDPA, in contrast, takes the approach of radical exclusion of such data from its scope.

The European Union’s GDPR is widely considered the global gold standard of data regulation. After its enactment in 2016, the Regulation became a classic case of the Brussels effect and a model for many other laws around the world, including in Brazil, Japan, Singapore, South Africa, South Korea, Sri Lanka, and Thailand. Unlike the DPDPA, the GDPR provides no blanket exemption for data that is made public. The very premise of the GDPR’s application is that where data can be used to identify a living person, it is personal data and falls within the Regulation’s scope, irrespective of whether it is made public or kept private. Personal data made publicly available is thus treated in much the same way as non-public personal data: its processing still requires a valid legal basis under Article 6, such as consent, contractual necessity, or, most relevantly for data scraping by corporations, a “legitimate interest.” Even the legitimate interest ground is not a free pass for using personal data, since it requires the data controller to conduct a balancing test, weighing its interests against the fundamental rights and freedoms of the data subject. This ensures a case-by-case, contextual analysis; it is precisely this kind of safeguard that the DPDPA has abandoned. Under the GDPR, an individual whose public data is processed retains their full suite of rights, including the right to object to the processing and the right to erasure.

The United States offers another point of comparison. The U.S. has no single federal data protection law; instead, it relies on sector-specific federal statutes (such as HIPAA and GLBA) and FTC enforcement. Broad, GDPR-style protections exist only at the state level, led by the California Consumer Privacy Act (“CCPA”) and similar laws in a handful of other states. California is one of the most business-friendly states in the U.S., and one might expect its data privacy law to broadly exempt public data, like its Indian counterpart. The CCPA, however, adopts a more tailored and restrictive approach. While it exempts “publicly available information” from the scope of personal data, this exemption is narrowly drawn and limited to information lawfully made available through federal, state, or local government records. The California Privacy Rights Act (CPRA) of 2020 expanded this exemption modestly to include information that a business reasonably believes was made public by the consumer through widely distributed media, or shared without audience restrictions. Crucially, however, this does not grant a general licence to scrape any personal data found online. The exemption remains bounded by whether the disclosure was clearly intended to be public, and the phrase “business reasonably believes” adds a further safeguard. This demonstrates that even in a comparatively less comprehensive regulatory environment, lawmakers have recognised the need for clear guardrails to prevent the notion of “publicly available” from swallowing the broader framework of privacy rights.

By contrast, the DPDPA does not merely narrow the scope of protection for public data; it eliminates it entirely. This categorical exclusion is untethered from any consideration of the sensitivity of the data, the context of its disclosure, or the nature of its subsequent use. Such a practice finds no parallel in major democratic jurisdictions. It reflects a choice, whether deliberate or not, to prioritise administrative convenience and data monetisation over the protection of individual rights, effectively reversing the right to privacy that Puttaswamy conferred.

Failing the Constitutional Scrutiny of Puttaswamy

Having outlined the problems with Section 3(c)(ii), we now turn to the three-part test laid down in Puttaswamy to show that the provision’s constitutional validity rests on uncertain ground. As noted earlier, the Puttaswamy test requires that any restriction on privacy be grounded in law (legality), pursue a legitimate state aim, and satisfy the standard of proportionality. Section 3(c)(ii) fails on at least two of these counts.

First, the provision fails the requirement of legality. A law that restricts a fundamental right must be clear, precise, and non-arbitrary, so as to give citizens fair notice and to prevent abuse of power. The phrase “caused to be made publicly available” is incurably vague. It provides no definite standard for when an individual’s conduct results in the loss of their privacy rights, and instead leaves excessive discretion with data fiduciaries. As Gautam Bhatia has shown, Indian constitutional jurisprudence has long recognised that such vagueness is fatal. As early as F.N. Balsara (1951), the Supreme Court struck down statutory provisions for being “so wide and vague that it is difficult to define or limit their scope.” This principle was later (and famously) reaffirmed in Shreya Singhal (2015). On this reasoning, the impugned Section 3(c)(ii) falls foul of the vagueness doctrine under Article 14, and likewise cannot survive the legality requirement of Puttaswamy, since it is not a valid law.

Second, while the state may plausibly assert a legitimate aim for having such a broad provision, such as promoting economic growth, fostering AI innovation, or reducing compliance costs for businesses, the provision fails catastrophically at the third and most critical stage: proportionality.

Proportionality requires that the measure be necessary, be the least restrictive means available, and ensure that the benefit to the state outweighs the harm to the individual. §3(c)(ii) is demonstrably not the least intrusive means. The models adopted by the EU and California show that it is entirely possible to balance innovation with privacy without resorting to a blanket exclusion. The DPDPA’s choice of the most extreme and rights-effacing option available cannot be justified as necessary or minimally impairing. Furthermore, the provision fails to strike a fair balance between competing interests. It saddles the individual with the full risk of their personal data being misused, while handing data-processing entities a free pass to exploit any such data made public. The provision rests on the fiction that once information is in the public domain, it ceases to be personal. This runs squarely against the spirit of Puttaswamy, which located the right to privacy in the dignity and autonomy of the person, rather than in the accident of where their information happens to be found.

Conclusion

In conclusion, §3(c)(ii) of the DPDPA is unlikely to withstand constitutional scrutiny. As this piece has shown, it functions as a legislative override of the standards laid down by the Supreme Court in Puttaswamy. A statute that purports to safeguard data privacy cannot simultaneously create such a blanket exemption for publicly available personal data. Such an exemption leaves a vast category of personal data vulnerable to collection, processing, and exploitation without consent, oversight, or remedy. It marks a serious failure on the part of the state to discharge its constitutional obligation to protect the fundamental right to privacy, and it calls for urgent rectification, whether by the legislature in good time or by the judiciary in due course.
