Module 3: Platforms’ Human Rights Duties

  • Under International Human Rights Law (IHRL), the protection of human rights is an obligation placed on States; private entities have no legal obligation to safeguard human rights beyond compliance with State-made regulations. However, States do have obligations under international law to protect individuals within their jurisdiction from rights violations by private actors.
  • Tech platforms are nonetheless increasingly expected to play an active role in upholding human rights. Platforms have a non-binding responsibility to do so under the UN Guiding Principles on Business and Human Rights (UNGPs), which are a useful source of further guidance.
  • The UNGPs clearly identify the need for private entities to undertake due diligence, both to avoid infringing on rights and to ensure access to the necessary mechanisms of redress.
  • A central motive behind the call for platforms to take on further human rights responsibilities is the impact of online moderation practices on users’ rights, since what constitutes acceptable expression online is largely determined by platforms’ moderation efforts.
    • Beyond removing terrorist and illegal content (such as content produced by a designated terrorist group or, in certain jurisdictions, content that justifies or excuses terrorism), platforms increasingly moderate content labelled as “grey area”, where links to violent extremist or hate ideologies are less explicit. They do so to err on the side of caution, with a view to ensuring the safety of the platform and its users even in cases where this content is not strictly illegal in a given jurisdiction.
    • Most content removal by social media platforms is conducted on the basis of violations of the platform’s own Content Standards, which often prohibit “harmful” content beyond any requirement or definition in law.
    • International Human Rights Standards explicitly state that limits to human rights, including to the right to freedom of expression, should be set by independent judicial authorities. In the specific context of online counterterrorism, platforms are often criticised for lacking transparent definitions of ‘terrorist actor’ or ‘terrorist content’ and for the impact such imprecision may have on lawful political expression.
  • Tech Against Terrorism recommends that tech platforms acknowledge how CTVE and content moderation practices may impact human rights, in order to take a proactive approach to human rights safeguarding.
  • This toolkit provides practical measures to examine your platform’s safeguarding capability and to address identified weaknesses, using international human rights law to define prohibited content so that any restriction of freedom of expression is lawful.
  • This toolkit also provides resources to increase your platform’s transparency when exercising its “quasi-legislative function” of determining where the limits of acceptable expression online must lie.

Human Rights for Business: Guiding Principles

  • In 2011, the UN Human Rights Council endorsed the UN Guiding Principles on Business and Human Rights (UNGPs). The UNGPs are considered part of the catalogue of documents comprising international human rights law.
  • The UNGPs are the principal authority enjoining respect for human rights on private enterprises, and are additionally concerned with remediating the negative impact of commercial operations on human rights.
  • The Principles provide important guidance on standards of due diligence, transparency, and remediation that companies should implement across all spheres of their activity. The UNGPs in tandem with the more general principles and conventions of international human rights can inform a practical human rights-based approach to content governance, from policy development to enforcement.

The UNGPs offer a blueprint for improving transparency and enhancing mechanisms of accountability. As per Principle 31, in order to ensure their effectiveness, non-judicial grievance mechanisms, both State-based and non-State-based, should be:

  • (a) Legitimate: enabling trust from the stakeholder groups for whose use they are intended and being accountable for the fair conduct of grievance processes;
  • (b) Accessible: being known to all stakeholder groups for whose use they are intended, and providing adequate assistance for those who may face particular barriers to access;
  • (c) Predictable: providing a clear and known procedure with an indicative time frame for each stage, and clarity on the types of process and outcome available and means of monitoring implementation;
  • (d) Equitable: seeking to ensure that aggrieved parties have reasonable access to sources of information, advice and expertise necessary to engage in a grievance process on fair, informed and respectful terms;
  • (e) Transparent: keeping parties to a grievance informed about its progress, and providing sufficient information about the mechanism’s performance to build confidence in its effectiveness and meet any public interest at stake;
  • (f) Rights-compatible: ensuring that outcomes and remedies accord with internationally recognized human rights;
  • (g) A source of continuous learning: drawing on relevant measures to identify lessons for improving the mechanism and preventing future grievances and harms.

Tech Companies, Moral Rights, Duties & Values

  • Human rights may be conceived of both as moral rights, which bind private persons, and as legal rights, which bind the State.
  • Certain scholars and practitioners of law contend that platforms have moral duties towards users. For example, Dr Jeffrey Howard, Associate Professor of Political Philosophy and Public Policy at UCL, argues that with regard to content moderation, platforms have a duty to rescue (platforms can protect people from harm) and a duty to avoid complicity (not providing a space that amplifies harmful expression).
  • This is particularly relevant to online counterterrorism and legal requirements to remove content. Howard states that such legal requirements may incentivise platforms to remove content quickly and therefore without proper assessment and consideration for human rights. Considering this risk, it may be more effective for the purpose of countering terrorism to enforce the moral duties of platforms.
  • Similarly, because platforms are not duty-bearers in the same way States are, the UK’s Independent Reviewer of Terrorism Legislation, Jonathan Hall KC, has argued that it is conceptually easier to avoid the language of fundamental rights and refer instead to values, in the sense of the priorities that should guide online counterterrorism, and that the principal online value is freedom of expression. He states that, to uphold the value of freedom of expression, platforms should focus on the accompanying values of transparency, truth, democracy, and individual autonomy or self-fulfilment, and explains how these values can be considered when a platform devises counterterrorism-related policy and/or processes.