[Tanav Kansal is an advocate practicing before the courts in Delhi NCR. In this piece, the author interrogates the escalating tension between child safety mandates and data minimization principles, specifically through the lens of recent controversies surrounding age-inappropriate content on platforms like “India’s Got Latent.” By synthesizing the August 2025 Parliamentary Standing Committee Report with the DPDP Rules, the author proposes a novel, privacy-centric token-based model for age verification that moves beyond intrusive government ID collection. The analysis provides a roadmap for balancing the protection of minors with the fundamental right to digital anonymity, suggesting that technological tokens can satisfy regulatory compliance without compromising consumer privacy.]
Introduction
Recently, the YouTube comedy show India’s Got Latent (“IGL”) found itself amidst controversy and legal action over remarks made on the show and the influence such content could have on younger viewers. The makers insisted that the show caters solely to adults and that it is ultimately the responsibility of parents to keep their children away from content not made for them. In reality, however, no real mechanism stops a minor from opening IGL on YouTube, and parental control over every action of a minor is impractical, intrusive and unrealistic.
The IGL controversy has revived discussions on online child safety in India. It highlights concerns over children’s exposure to visuals, language and humour inappropriate for their age on streaming platforms such as YouTube and Netflix. As OTT platforms have become the default source of entertainment across generations, it is increasingly important to provide appropriate safeguards for minors on these platforms. The judiciary has also called for reform and safeguards to ensure a safe digital space for children, free from obscenity and vulgarity. In August 2025, the Parliamentary Standing Committee on Home Affairs, in its Report on Cybercrime Prevention, expressed concern about existing safeguards and suggested that platforms must adopt a ‘technologically robust mechanism’ to prevent minors from accessing inappropriate content.
However, improving age verification is not as straightforward as it sounds. Any attempt to make the system stricter has to balance child safety with privacy, convenience, and the realities of how families actually use streaming platforms. This blog examines the current age verification mechanism and proposes extending the token-based parental consent model from the Digital Personal Data Protection Rules (“DPDP Rules”) to OTT platforms, offering a reliable, privacy-conscious alternative.
Limitations of Current Age Verification
The current age verification method used by OTT platforms is extremely weak. Before streaming an age-sensitive video, most platforms simply ask the viewer to self-declare whether they are above 18 through a simple ‘yes’ or ‘no’ button. A minor can click “yes” intentionally or even accidentally, and the platform has no way of verifying whether the viewer is indeed a major. In some cases, the prompt may even make a minor more inquisitive about the content behind the age-gate. This lack of entry barriers is concerning because children mostly browse the internet unsupervised and can easily switch from educational videos to stand-up comedy clips like IGL that use explicit language. Exposure to such readily available online content is dangerous, as it can shape a child’s behaviour, judgment and expectations in ways parents may not anticipate. Designing an effective age-verification system is therefore necessary.
However, any system that seeks to determine the true age of a viewer will have to collect, store and process personal identity documents of the user. This will create an immediate privacy intrusion, especially where the data relates to children who cannot meaningfully consent. Moreover, repeated sharing of such identity documents also increases the risk of misuse and data breach.
These concerns are not unique to OTT platforms. India has recently dealt with balancing verifiable parental consent without compromising on privacy in the new data protection regime. Under the DPDP Rules, the platform must determine whether a user is a minor before processing their personal data, without the need to repeatedly collect ID proofs. To allow this, the DPDP framework provides a token-based verification method to convey consent, without transferring underlying identity documents. Although this mechanism is designed for data processing, it provides a model that OTT platforms can adopt and use to strengthen age gating on these platforms.
The Proposed Mechanism: Token-Based Age Verification
Rule 10 of the DPDP Rules requires platforms to obtain verifiable consent from a parent before processing the personal data of any individual below 18 years. To operationalize this requirement, the Rules allow three methods. The first is verification based on reliable identity and age details already available with the platform from previous checks. The second allows parents to voluntarily submit identity documents when asked by the platform to certify their age. The third, and most significant, is authenticating age through a trusted digital repository such as DigiLocker, instead of providing identity documents to the platform. In this method, the parent authenticates themselves on the trusted service, which then generates a one-time ‘virtual token’ certifying that the individual is an adult and affirming their consent on behalf of the child.
If a similar model is adapted for OTT platforms, the mechanism would follow a structured verification process. When a minor tries to watch an age-restricted show like IGL, a verification request would automatically be triggered. The parent could either upload identity documents such as a PAN card or Aadhaar card for verification, or opt for token-based authentication that allows the platform to verify age without collecting the underlying identity records, with the token valid only for that specific purpose. A minor would thus be deterred from watching age-inappropriate content without the supervision or approval of a parent.
Presently, the virtual token system is confined to parental consent for data processing under the DPDP framework. However, it offers a structured method of verifying age and consent without sensitive information circulating across platforms.
Why Tokenisation Matters
Tokenisation represents a significant shift in how personal information is verified. Applied in the context of OTT platforms, the tokenisation mechanism offers a privacy-conscious method of authenticating age. A privacy-centric approach is essential in any verification mechanism, especially since most verification methods demand identity documents without any constraints on the use of such data. Typically, the more robust the verification method, the more identity proof it demands. This creates a perverse outcome where protecting children from inappropriate content requires exposing them to data security risks. When dealing with the personal information of individuals who cannot even meaningfully consent, it therefore becomes critical to adopt mechanisms that have privacy at the core of their operation.
A token-based system reduces privacy risk by limiting what is shared and how often. This aligns with foundational principles in global and domestic data protection frameworks, including the OECD Guidelines, the EU’s GDPR and the DPDP Act. Tokens operationalize ‘privacy by design’ by embedding safeguards into the model itself. The platform receives a cryptographic answer to the very limited question of whether parental consent exists; no other personal information about the individual is shared, which implements the principle of ‘data minimization’. Data collection is restricted to what is necessary for verification, operationalizing the ‘purpose limitation’ principle as well. The token is short-lived, valid for a single transaction, after which it becomes redundant and is permanently deleted. Once the purpose is served, no data is retained in any form beyond the duration consented to.
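The ‘data minimization’ point can be made concrete with a toy token. In the sketch below, assumed purely for illustration, the token’s payload carries nothing but a consent flag and an expiry; a signature lets the platform check authenticity without ever seeing an identity document. The shared HMAC key and function names are assumptions of this post, not any prescribed DPDP format.

```python
import base64
import hashlib
import hmac
import json
import time

# Assumption for illustration: the trusted service and the platform share
# a symmetric signing key (a real deployment would likely use asymmetric
# signatures issued by the trusted service).
SECRET = b"demo-shared-key"


def mint_consent_token(secret: bytes, ttl: int = 300) -> str:
    # Data minimization: the payload holds only the consent claim and an
    # expiry -- no name, ID number, or document reference.
    claims = {"parental_consent": True, "exp": int(time.time()) + ttl}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = base64.urlsafe_b64encode(
        hmac.new(secret, payload, hashlib.sha256).digest())
    return (payload + b"." + sig).decode()


def verify_consent_token(secret: bytes, token: str) -> bool:
    # The platform learns a single bit: is there valid, unexpired consent?
    try:
        payload, sig = token.encode().split(b".")
    except ValueError:
        return False
    expected = base64.urlsafe_b64encode(
        hmac.new(secret, payload, hashlib.sha256).digest())
    if not hmac.compare_digest(expected, sig):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims.get("parental_consent") is True and claims["exp"] > time.time()


token = mint_consent_token(SECRET)
print(verify_consent_token(SECRET, token))        # True: genuine and fresh
print(verify_consent_token(b"wrong-key", token))  # False: wrong signer
```

Because the payload contains no identifier, even a leaked token reveals nothing about the parent or child, which is precisely the property the principles above require.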
Feasibility and Enforcement of Tokenization
While tokenisation offers an elegant solution on paper, its success will depend on how well it fits the complex realities of India. The government has expressed confidence in India’s digital public infrastructure: IT Minister Ashwini Vaishnaw has asserted that India possesses a “good digital architecture” built upon Aadhaar, UPI, and DigiLocker, which will aid the implementation of the DPDP Rules. Despite this digital ecosystem backing tokenisation, its translation into the streaming ecosystem is fraught with questions.
Introducing a verification mechanism will require enhanced compliance costs for all OTT platforms. Building token verification APIs, maintaining logs, and responding to audits are elaborate tasks that require significant capacity building. Large platforms like Netflix or YouTube will be able to absorb this cost of integrating a token-based verification system, but for smaller regional OTT players, the compliance load could be substantial. Without state-backed infrastructure or subsidies, the system might create barriers for regional platforms, potentially consolidating the market in the hands of a few well-resourced conglomerates.
Another operational challenge is accessibility. Virtual tokens presume that users, especially parents, can easily operate digital lockers or Aadhaar-linked IDs. However, many families in India have limited digital literacy and would find this process confusing. Repeatedly asking for token approval might also cause consent fatigue: if, for example, every subsequent video the child clicks requires fresh approval, verification becomes meaningless, as parents might share accounts or credentials with their children simply to avoid constant prompts.
The reliability of the authentication platform itself is worth examining. Even though platforms like DigiLocker operate with recognized technical safeguards and high security data centers, a digital system of its scale cannot be treated as entirely risk free. There have been public incidents where Aadhaar-linked digital identity systems have raised concerns regarding data linkage, reuse, and the potential consequences of centralised storage of sensitive identity information. Thus, the governance and security standards of the authentication platform would need to be subject to strong oversight and regular audit to ensure that confidence in the model is justified.
Lastly, questions about the quality of consent provided also arise. The consent given by the parent might not be meaningful, either because they do not understand what the consent is for, or because they consent under pressure from the child. Many parents are likely to approve verification requests without reading or questioning the details. A determined child can simply guide the parent through the prompt or operate the parent’s device outright, turning the consent flow into a predictable click-through.
Conclusion
Streaming platforms have become an indispensable part of everyday entertainment, education and culture. Yet the current approach to age verification is largely ineffective, leaving children exposed to unsuitable content online. There is a pressing need to strengthen entry barriers for children and to find solutions that are practical for Indian families and do not increase exposure to privacy risks.
The token-based model introduced under the DPDP Rules is a useful reference point because it shows that verification can be designed with privacy in mind. If adapted for streaming platforms, the ‘virtual token’ method can verify age and consent without increased risk of circulating personal identity documents. As India considers the next steps in enforcing a ‘technologically robust’ system to restrict minors’ access to inappropriate content, exploring such models is a reasonable part of the conversation. It reflects an effort to improve child safety without losing sight of privacy in the process.