Tech Law Forum @ NALSAR

A student-run group at NALSAR University of Law


Role of Intermediaries in Countering Online Abuse: Still a Work In Progress, Part II

Posted on June 30, 2015 by Kartik Chawla

This is the second in a two-part series by Jyoti Panday of Centre for Internet and Society, Bangalore, on the role of intermediaries in addressing online abuse. The first part of this post is available here.

SIZE MATTERS

The standards for blocking, reporting and responding to abuse vary across categories of platforms. For example, it may be easier to counter trolls and abuse on blogs or forums where the owner or an administrator monitors comments and user-generated content (UGC). Platforms usually outline monitoring and reporting policies and procedures, including the recourse available to victims and the action to be taken against violators. However, these measures are not always effective in curbing abuse, as users can simply create new accounts under different usernames. In Swati's case, for example, the anonymous user behind the @LutyensInsider account changed their handle to @gregoryzackim and @gzackim before deleting all tweets. Here, perhaps the fear of impending criminal charges was enough to silence the anonymous user, but that will not always be the case.

TACKLING THE TROLLS

As a general measure against online harassment, most large intermediaries offer privacy settings that restrict the audience for user posts and prevent strangers from contacting them. Platforms also publish monitoring policies outlining the procedures and mechanisms for users to register complaints or report abuse. Reporting and blocking mechanisms often rely on community standards and on users reporting unlawful content. Last week Twitter announced a new feature allowing lists of blocked users to be shared between users. The feature improves on the existing blocking mechanism and aims to make the service safer for people facing similar issues, but such efforts, like the standard policies defining permissible limits on content, have their limitations.

These mechanisms follow a one-size-fits-all policy. First, such community-driven efforts do not address differences of opinion and subjectivity. Swati, in defending her actions, stressed the "coarse discourse" prevalent on social media, though as this article points out, she might herself be considered guilty of using offensive and abusive language. Subjectivity, and the many possible interpretations of the same opinion, pave the way for many to take offence online. Earlier this month, Nikhil Wagle's tweets criticising Prime Minister Narendra Modi as a "pervert" were interpreted as "abusive", "offensive" and "spreading religious disharmony". While platforms are within their rights to establish policies for dealing with issues faced by users, there is a real danger of them doing so for "political reasons" and on the basis of "popularity" measures that may chill free speech. When many get behind a particular interpretation of an opinion, lawful speech may also be stifled, as Sreemoyee Kundu found out: a victim of online abuse, she had her account blocked by Facebook owing to multiple reports from a "faceless fanatical mob". Allowing users to set the standards of permissible speech is an improvement, but it runs the risk of mob justice, and platforms need to be vigilant in applying such standards.

While it may be in the interest of platforms to keep a hands-off approach to community policies, certain kinds of content may necessitate intervention by the intermediary. Private companies have increasingly modified their content policies to place reasonable restrictions on certain hateful behaviour in order to protect vulnerable or marginalised voices. Twitter's and Reddit's policy changes on revenge porn reflect a growing understanding among stakeholders that promoting the free expression of ideas may require recognising and protecting certain rights on the Internet. However, any approach to regulating user content must assess the effect of policy decisions on user rights. Google's stand on tackling revenge porn may be laudable, though its decision to push down 'piracy' sites in its search results could be seen to adversely impact the choices available to users. Terms of service implemented with subjectivity and a lack of transparency can and do lead to private censorship.

THE WAY FORWARD

Harassment is damaging because of the feeling of powerlessness it invokes in victims, and online intermediaries represent new forms of power through which users negotiate and manage their online identity. Content restriction policies and practices must address this power imbalance by adopting baseline safeguards and best practices. It is only fair, on principles of equality and justice, that intermediaries be held responsible for damage caused to users by the wrongdoing of other users, or when they fail to carry out their operations and services as prescribed by law. However, in its present state, the intermediary liability regime in India is not sufficient to deal with online harassment and needs to evolve into a more nuanced form of governance.

Any liability framework must evolve bearing in mind the slippery slope of overbroad regulation and differing standards of community responsibility. A balanced framework would therefore need to include elements of both targeted regulation and softer forms of governance, as liability regimes must balance fundamental human rights against the interests of private companies. Achieving this balance is often problematic, given that these companies are expected to act as adjudicators and may themselves be the target of the breach of rights, as in Delfi v. Estonia. Global frameworks such as the Manila Principles can be a way forward in developing effective mechanisms. Content restriction practices should always adopt the least restrictive means available, distinguishing between classes of intermediary. They must evolve with regard to the proportionality of the harm, the nature of the content and the impact on affected users, including the proximity of the affected party to the content uploader. Further, intermediaries and governments should communicate a clear mechanism for the review and appeal of restriction decisions, accommodating the right to be heard and reinstating wrongfully removed content.
