[Varad Tiwari is a second-year student at Hidayatullah National Law University, Raipur. In this piece, the author examines the evolving legal landscape of data de-identification following the EDPS v. SRB judgment. Specifically, the piece argues for a “pragmatic” approach to pseudonymisation, suggesting that the status of data as “personal” should depend on the specific perspective of the holder and the realistic means available to re-identify it. By contextualising these European developments within the framework of India’s Digital Personal Data Protection (DPDP) Act, 2023, the article explores the compliance implications for Indian data controllers and the need for a nuanced interpretation of “identifiable” data in a high-tech regulatory environment.]
I. Introduction: Locating Pseudonymisation Within Data Protection Law
In its much-awaited judgment in EDPS v SRB (‘Judgment’), the Court of Justice of the European Union (‘CJEU’) addressed several contested questions concerning the scope of personal data under European data protection law. These included whether subjective assessments constitute personal data and whether data that has undergone pseudonymisation necessarily remains personal data. This article confines itself to the latter issue.
Pseudonymisation occupies a legally awkward position. It is designed to reduce privacy risks while preserving data utility, yet it does not permanently sever the link between data and identity. As a result, regulators have historically struggled to determine whether pseudonymised datasets should fall within the full scope of data protection obligations. The Judgment represents a decisive shift away from rigid categorisation towards a more contextual assessment grounded in practical re-identification risk.
This shift has direct relevance for Indian data controllers operating under the Digital Personal Data Protection Act, 2023 (‘DPDP Act’). Although the DPDP Act does not expressly regulate pseudonymisation, its definition of personal data closely mirrors that of the GDPR. The reasoning adopted by the CJEU therefore offers a valuable interpretive lens for Indian courts and regulators, particularly in light of increasing cross-border data flows.
II. Conceptual Foundations: What Is Pseudonymisation And Why Does It Matter?
Pseudonymisation refers to the processing of personal data in such a manner that it can no longer be attributed to a specific individual without the use of additional information, provided that such additional information is kept separately and protected through appropriate technical and organisational measures.
The defining feature of pseudonymisation is reversibility. Unlike anonymisation, which permanently removes identifiability, pseudonymisation merely displaces identifying elements. Its legal function is therefore not to exclude data from regulation altogether, but to lower the probability and impact of identification. In regulatory terms, it operates as a safeguard rather than a categorical exemption.
Historically, data protection authorities adopted a conservative approach to pseudonymisation. Any dataset from which identification was theoretically possible, regardless of practical constraints, was treated as personal data. This approach prioritised formal possibility over real-world capability. While normatively defensible from a rights-protective perspective, it resulted in heavy compliance burdens and discouraged responsible data sharing.
III. The Judgment In EDPS v SRB: Shifting From Possibility To Practical Capability
In EDPS v SRB, the CJEU rejected the absolutist position traditionally taken by regulators. The Court held that pseudonymised data does not constitute personal data in the hands of a recipient who does not possess the means ‘reasonably likely’ to re-identify the data subject.
The Court’s reasoning is explicitly relational. Identifiability is assessed not in the abstract, but with reference to a specific actor and a specific context. The relevant inquiry is whether the recipient has access to additional information and whether re-identification would be reasonably likely having regard to objective factors such as time, cost, labour, and available technology.
This reasoning marks a pragmatic recalibration of data protection law. It aligns legal obligations with actual risk rather than speculative possibility. The Court thereby acknowledges that data protection cannot operate on the assumption of universal omniscience without undermining proportionality.
IV. Remaining Ambiguity: The Absence Of A Foreseeability Standard
Despite its pragmatic orientation, the Judgment leaves a significant doctrinal gap. The Court does not clarify whether the assessment of reasonable likelihood must incorporate foreseeability.
A controller may conduct a diligent risk assessment and reasonably conclude that re-identification by the recipient is improbable. Subsequent technological developments or unforeseen data combinations may later render re-identification possible. The Judgment does not specify whether such retrospective possibility should invalidate the original assessment.
This omission has material compliance consequences. If liability is judged retrospectively, controllers are effectively held to a standard of hindsight rather than responsibility. Compliance becomes structurally unstable because controllers cannot account for unknown future capabilities. Without a foreseeability threshold, the reasonable likelihood test risks reintroducing the uncertainty it was meant to resolve.
V. Interpreting Pseudonymisation Under India’s DPDP Act, 2023
The DPDP Act defines personal data as data about an individual who is identifiable by or in relation to such data. This formulation closely tracks the GDPR. However, the Act does not articulate any framework for anonymisation or pseudonymisation.
In the absence of express statutory guidance, Indian courts are likely to adopt a purposive interpretation. Given that pseudonymisation is inherently a privacy-enhancing practice, it would be counterintuitive to interpret the definition of personal data in a manner that penalises its use. A recipient-centric assessment of identifiability, similar to that adopted by the CJEU, would therefore be doctrinally coherent.
Such an approach would also promote regulatory interoperability. Cross-border data transfers are routine in modern data ecosystems. Divergent standards on pseudonymisation increase transaction costs and compliance complexity. Aligning Indian interpretation with global standards would therefore advance both privacy protection and ease of doing business.
VI. Compliance Strategies For Indian Data Fiduciaries
With the Digital Personal Data Protection Rules, 2025 now notified and implementation timelines clarified, Indian data fiduciaries must operationalise pseudonymisation in a manner that withstands regulatory scrutiny.
First, contractual safeguards are essential. Recipients of pseudonymised data should be bound by strict use-limitation clauses, explicit prohibitions on re-identification attempts, access restrictions, and confidentiality obligations. Contracts should clearly specify who may access the data, for what purposes, and for what duration.
Second, technical and organisational controls should complement contractual safeguards. Sharing pseudonymised data within controlled and locally hosted virtual environments allows access to be time-bound and activity to be audited. Such environments reduce the risk of unauthorised duplication and facilitate real-time oversight.
Third, controlled environments enable dynamic reassessment of pseudonymisation techniques. As technological capabilities evolve, methods that were once effective may become inadequate. The ability to revise or revoke access without transferring data into the recipient’s infrastructure ensures continued compliance while allowing for smooth contractual termination.
VII. Conclusion: Towards A Responsible Controller Standard
The Judgment in EDPS v SRB signals a shift towards pragmatism by grounding identifiability in real-world capability rather than theoretical possibility. However, the absence of clarity on foreseeability limits its stabilising effect.
India should not replicate foreign standards uncritically. Instead, Indian regulators should refine them. Clarifying the role of foreseeability is the logical next step. A responsible controller standard should assess identifiability based on information that a data fiduciary is reasonably expected to possess at the time of disclosure. This approach would protect individuals without imposing impossible compliance burdens.
If developed carefully, such a standard would provide predictability, interoperability, and proportionality. These qualities are indispensable in a data protection regime that seeks not only to safeguard rights, but also to function in practice.