Module 5: Inscribing Online Counterterrorism Policies In a Human Rights Framework
Having the right Trust & Safety framework in place, with detailed policies and clear enforcement practices ideally housed in a Trust & Safety Centre on your website, protects human rights by making the contract between your platform and your users clear and accountable. It also results in a better user experience and strengthens your brand’s reputation by making clear your commitment to both the freedom and safety of your users.
- We recommend that all information related to Content Standards, policies and moderation be hosted in a dedicated Trust & Safety Centre. This will improve accountability on the platform by further facilitating user access to key information about content moderation and counterterrorism efforts, while contributing to transparency overall. TikTok’s Safety Centre and Discord’s Safety Centre are both examples of this practice.

Building Human Rights-Compliant Counterterrorism Policy
Building your counterterrorism policy upon the following pillars will help you to translate a theoretical commitment to protecting human rights into product development and counterterrorism policy enforcement. These pillars will also strengthen your operations and brand in a competitive market in which users are increasingly conscious of their rights, of the agreement they enter into with your platform, and of the expectation that their chosen platform will meet it.
- Transparency: Transparency in this context can be defined as ‘the decision to make visible, or provide access to, the resources on which an exercise of public or private power may be based’. It is essential for users to understand how a platform respects their human rights, and it is the foundational principle of human rights-compliant policy, without which any further commitments are impractical.
- Clarity: Policy documents should be clear and use plain language where possible. They should also summarise key requirements for users and how platforms can respond to violative content, in order to improve accessibility and state clearly what is expected of both parties.
- Accountability: ‘Accountability is a cornerstone of the human rights framework, itself a system of norms that govern the relationship between “duty bearers” in authority and “rights holders” affected by their actions.’ Accountability is needed for your platform to correct misunderstandings and learn from mistakes.
- Oversight: Oversight of policy enforcement is crucial to ensure that targets are met and that, when they are not, lapses and mistakes are learnt from and, where justified, remediated. Where possible, independent oversight is desirable as the best means of achieving the highest degree of accuracy.
- Necessity & Proportionality: Any measure employed to restrict a right must be necessary for, and proportionate to, the aim it pursues. Considerations of necessity and proportionality help to limit restrictions on the right to freedom of expression.
- Legality: Legality can be defined as the assurance that rules and laws are sufficiently clear. This is a crucial consideration in drafting policy documents for users so that they understand the agreement they are accepting.
Including A Prohibition of Terrorism
Most platforms maintain internal lists of terrorist and violent extremist (TVE) entities meant to inform moderation enforcement. These lists may cover globally and locally designated terrorist groups, as well as violent extremist groups that are not designated in any jurisdiction. TVE lists are often needed to fill gaps in existing designation lists, particularly in the case of far-right violent extremist groups, which remain largely undesignated globally.
To build internal lists, some platforms rely on resources produced by counterterrorism and counter-hate experts. US-based platforms might refer to the Southern Poverty Law Center list or to those of other counter-hate groups, and Tech Against Terrorism recommends that platforms refer to our list of violent extremist organisations. Members of the EU Internet Forum (EUIF) can also refer to the EUIF wiki and its database of banned far-right violent groups in EU member states.
TCAP Designation
Whilst Tech Against Terrorism does not define terrorism, we have compiled a baseline list of groups for inclusion in the Terrorist Content Analytics Platform (TCAP). We invite platforms, in particular smaller platforms that may not have the resources to build their own lists, to refer to the TCAP Group Inclusion Policy when building an internal list.
The question of how to define terrorism is debated amongst academics and counterterrorism practitioners and can be highly political. Given the lack of global agreement on what constitutes terrorism, platforms’ lists of terrorist and violent extremist actors raise significant human rights concerns, in that the compilation of such lists may be arbitrary, procedurally deficient, or indeed discriminatory. Tech Against Terrorism therefore recommends that platforms:
- Publish as much detail as possible about the TVE list in their Content Standards policy.
- Prohibit both terrorist and violent extremist groups and explain why such groups may warrant inclusion.
- Invite experts and local civil society organisations to provide feedback on TVE lists, in particular to consider local context.
- Clarify the different tiers used in the TVE list, if any, explaining the criteria and considerations for each tier, as well as providing examples of TVE entities that fall within each tier.
- Refer explicitly to national and international designation lists, such as the United Nations Security Council designation list, to ensure that the TVE list is founded in the rule of law. Where groups fall outside these lists, include the authority and reasoning that justify the prohibition of associated content.
- List the types of TVE content and behaviour which are prohibited (e.g., branded material, supporter-generated content, promotion, re-creation) and provide practical examples of what constitutes TVE content. We recommend communicating to users in no uncertain terms that, out of caution, the moderation of material associated with a prohibited organisation may extend to all types of content and communications on the platform (see the sketch after this list).
- Include an exception for content that is educational, journalistic or which reports on a terrorist organisation.
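To illustrate how these recommendations might translate into practice, the sketch below models a single entry in an internal TVE list, with tiers, designation references, prohibited content types and exceptions. It is a minimal sketch only: the tier names, fields and example values are hypothetical assumptions for illustration and are not drawn from any actual platform list or from the TCAP Group Inclusion Policy.

```python
from dataclasses import dataclass, field
from enum import Enum


class Tier(Enum):
    """Hypothetical tiers; the real criteria for each tier should be published in the Content Standards."""
    DESIGNATED = "designated"                            # listed by the UN or a national/regional authority
    NON_DESIGNATED_VIOLENT_EXTREMIST = "non_designated"  # violent extremist group with no formal designation


class ProhibitedContentType(Enum):
    BRANDED_MATERIAL = "branded_material"
    SUPPORTER_GENERATED = "supporter_generated"
    PROMOTION = "promotion"
    RE_CREATION = "re_creation"


@dataclass
class TVEListEntry:
    """One entry in an internal TVE list, recording the authority and reasoning for inclusion."""
    name: str
    tier: Tier
    designation_sources: list[str] = field(default_factory=list)   # e.g. "UN Security Council Consolidated List"
    inclusion_rationale: str = ""                                   # required where no formal designation exists
    prohibited_content: list[ProhibitedContentType] = field(default_factory=list)
    exceptions: list[str] = field(default_factory=list)             # e.g. "educational", "journalistic", "reporting"

    def is_adequately_justified(self) -> bool:
        """An entry should rest on a formal designation or on a documented rationale."""
        return bool(self.designation_sources or self.inclusion_rationale)
```

Publishing the elements of such a structure (the tiers, criteria, sources and exceptions), rather than the internal list itself, is what gives users fair warning of how the list is compiled.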
Committing to Human Rights
- Tech Against Terrorism recommends that platforms include an explicit commitment to human rights and to freedom of expression online in their Content Standards. This commitment will establish that the platform’s counterterrorism efforts, as well as its content moderation practices more broadly, are fundamentally concerned with the protection of human rights.
- To better articulate a commitment to human rights, Tech Against Terrorism recommends developing an explicit human rights policy that details how the platform enacts this commitment through its policies and processes. For example, your platform could detail its practical efforts to safeguard users’ rights when enforcing content moderation by outlining the considerations made in such decisions to ensure that users’ freedom of expression and other rights are not infringed.
Building a detailed human rights policy
Having a human rights policy is important for tech companies to communicate clearly how they consider their potential impact on human rights, and how human rights considerations are practically implemented at all key steps of Trust & Safety operations, from policy development to enforcement actions and user appeals.
Publishing a human rights policy allows tech platforms to:
- Further enshrine their human rights practices in their policies and operations
- Build accountability towards their users and the general public by increasing transparency around online CTVE activities (as well as Trust & Safety more broadly) and human rights
- Outline what redress and human rights safeguards look like on their services
Ensuring your platform creates and makes publicly available a counterterrorism policy is important to provide both users and the public with clear standards and expectations. Such a policy should be the result of a deliberative process, informed by consultation of relevant laws and designation lists, and by academic and operational expertise, to ensure that the policy is as sensitive and responsive as possible to the threat to your platform. Formulating policy in this way, by reference to both state and non-state opinion, helps to ensure and reassure users that the requirement to moderate content cannot be subverted to remove legal and legitimate content, thereby threatening freedom of expression and non-discrimination.
A fully-fledged human rights policy should include the following explanations:
Tech Against Terrorism acknowledges that some of these recommendations might be resource-intensive and therefore difficult for small and medium platforms to achieve. For this reason, we indicate in bold the key recommendations to prioritise if resources are limited. We also offer a range of human rights and online counterterrorism support services for tech companies; for more information, please reach out at [email protected].
Content Moderation Decisions
The development of Community Guidelines and Terms of Service by tech platforms has created an online “parallel” to national counterterrorism legislation. With every platform setting its own rules on online expression, these redefinitions multiply and create fragmented norms around what is acceptable online. This practice impacts human rights in the broader sense, and digital rights more specifically, because platforms assume a legislative function by delimiting acceptable expression and behaviour online.
We recommend that platforms base their content moderation decisions on a clear legal basis and transparent processes to help mitigate the risk of infringing on human rights. Aligning limits on expression as closely as possible with those established in international law can help mitigate the risk that platforms misuse the power they are required to exercise:
- Making reference to international human rights standards can orient tech sector policies and practices towards upholding human rights in the digital space and safeguard against arbitrary moderation practices, including by offering guidance as to when human rights can reasonably be limited, for instance where required to balance the rights of different individuals or to protect national security or public order. Articles 19 and 20 of the ICCPR, as well as the six-part test of the Rabat Plan of Action, offer such guidance with regard to restricting freedom of expression.
ICCPR, Article 20: (1) Any propaganda for war shall be prohibited by law. (2) Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.
The UN Strategy and Plan of Action on Hate Speech defines hate speech as “any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor.” The Rabat Plan of Action was developed by the UN Office of the High Commissioner for Human Rights in 2013 to provide guidance on how to restrict freedom of expression in line with Article 20 of the ICCPR in a way that does not curtail free expression completely. It suggests a six-part test to assess whether speech should be criminalised: (1) context; (2) speaker; (3) intent; (4) content and form of the speech; (5) extent of the speech; (6) likelihood and imminence.
- Transparency: David Kaye, the former UN Special Rapporteur on Freedom of Expression, has emphasised that following an international human rights law framework will not in itself be sufficient, and calls for “radically better transparency”, capable of explaining the rationale underlying the policies, processes, and decisions of content moderation, and for a level of “industry-wide oversight and accountability” capable of scrutinising moderation decisions. (See the spotlight on transparency at the end of this module).
To increase the transparency of your content moderation policy, Tech Against Terrorism recommends that platforms:
- Ensure that prohibitions are enacted using clear definitions so that users have fair warning of what content is not permissible and can make informed decisions about their use of the platform.
- Explain in detail how Community Guidelines are enforced by naming the range of sanctions available (for example, from warning other users about sensitive content to outright removal), to emphasise proportionality in response to policy violations.
- Outline the process behind content moderation decisions, including: how evidence is collected and assessed when considering the initial complaint; whether users other than the complainant are invited to provide evidence in support or defence of an alleged violation; the admissibility of off-platform circumstances in decision making; and how, and with reference to what framework, the severity of an alleged violation is determined (a record structure along these lines is sketched after this list).
- Include clear guidelines on user reporting and appeals so that both complainants and defendants can contest decisions in an informed way.
- Where resources are available, refer ‘high level’ moderation decisions (namely, those relevant to terrorism and violent extremism which present the greatest complexity and could have a significant impact on human rights) to a panel of counterterrorism and digital rights experts. When cases are referred in this way, platforms should communicate candidly with the expert panel and provide the details of interim or provisional decisions. Platforms should act on the advice given by the counterterrorism and digital rights experts who review these cases.
- In the case of erroneous enforcement decisions that infringe significantly on human rights, publish a plan to review the moderation process that led to the infringement and make the findings public.
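As referenced in the list above, the sketch below shows one way a platform might record an individual moderation decision so that evidence, severity, sanction, expert referral and appeal status can be disclosed consistently. The field names, severity scale and sanction categories are hypothetical assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Sanction(Enum):
    """Illustrative range of sanctions, from least to most restrictive, to support proportionality."""
    NO_ACTION = "no_action"
    SENSITIVE_CONTENT_WARNING = "sensitive_content_warning"
    REMOVAL = "removal"
    ACCOUNT_SUSPENSION = "account_suspension"


@dataclass
class ModerationDecision:
    """Record of a single enforcement decision, retained for transparency reporting and appeals."""
    content_id: str
    policy_violated: str                                            # the relevant Content Standards clause
    evidence: list[str] = field(default_factory=list)               # references to the material reviewed
    off_platform_context: list[str] = field(default_factory=list)   # admissible external circumstances, if any
    severity: int = 1                                               # hypothetical 1-5 scale defined in policy
    sanction: Sanction = Sanction.NO_ACTION
    referred_to_expert_panel: bool = False                          # 'high level' terrorism/violent extremism cases
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    appeal_open: bool = True                                        # complainants and defendants can contest the decision

    def should_refer_to_experts(self, resources_available: bool) -> bool:
        """Refer the most complex, highest-impact cases to external experts where resources allow."""
        return resources_available and self.severity >= 4
```

Keeping a record like this also makes it easier to publish the findings of any review triggered by an erroneous enforcement decision, as recommended above.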
Details on Automated Moderation
Automated content moderation can heighten the risk to human rights, and in particular freedom of expression, because it lacks human context and can amplify mistakes or bias in the data which informs it.
Tech Against Terrorism recommends tech companies:
- Ensure a good balance between human and automated moderation to mitigate the risk of automated reviews that cannot import context and nuance into decision making (see the sketch after this list).
- Make publicly available information on how automated moderation is deployed, so that a balanced and informed discussion can take place to shape policy in this nuanced area.
- Set clear definitions applicable to content moderation to ensure that the power to amplify content or create filter bubbles, which is inherent to the function of automated moderation, is exercised consistently and within constraints.
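The sketch below illustrates one possible way of balancing automated and human moderation, as referenced in the first recommendation above: an automated classifier only actions content above a high-confidence threshold, and anything uncertain is routed to a human reviewer who can weigh context and nuance. The thresholds, labels and function names are hypothetical and would need to be calibrated, tested and documented by each platform.

```python
from dataclasses import dataclass

# Hypothetical thresholds; each platform would calibrate and publish its own.
AUTO_ACTION_THRESHOLD = 0.95   # above this, automated action with user notice and appeal
HUMAN_REVIEW_THRESHOLD = 0.60  # between the two thresholds, queue for human review


@dataclass
class ClassifierResult:
    content_id: str
    score: float  # model confidence that the content violates the TVE policy


def route_content(result: ClassifierResult) -> str:
    """Decide whether content is actioned automatically, sent for human review, or left up.

    Keeping a wide human-review band mitigates the risk that automated systems,
    which lack human context, amplify mistakes or bias in the data that informs them.
    """
    if result.score >= AUTO_ACTION_THRESHOLD:
        return "auto_action_with_user_notice_and_appeal"
    if result.score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review_queue"
    return "no_action"
```

Publishing how such thresholds are set, and how often automated decisions are overturned on appeal, supports the balanced and informed policy discussion recommended above.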