Module 6: Spotlight on Privacy and Transparency

Taking a Stand on Encryption

Concerns over the use of end-to-end encryption (E2EE) for terrorist and violent extremist purposes have motivated calls to break encryption, whether by providing law enforcement with general access to encrypted content or by introducing the proactive and systematic scanning of all encrypted content. However, encryption experts, digital rights advocates, and service providers agree that the systematic surveillance of encrypted content is technically unfeasible without compromising the security and privacy of all users.

Tech companies should undertake further work to counter the arguments against encryption and to raise awareness amongst the general public and policymakers, whether through privacy statements or other public documentation such as a dedicated blog, about the importance of E2EE for online privacy and security as a means of protecting the public against criminal actors.

Tech Against Terrorism recommends platforms:

  • Take the lead on developing resilient mitigation strategies focused on the early prevention and detection of terrorist and violent extremist use of their services. Metadata are available to detect terrorist and violent extremist use, and measures can be introduced to limit or prevent terrorist and violent extremist users from accessing encrypted services in the first place.
  • Emphasise the risks that both backdoors and the systematic monitoring of encrypted communication present to users’ right to privacy and freedom of expression. The breadth and ambiguity of legislation, both enacted and proposed, raises the prospect of increased surveillance of private communications in the future.
  • Place considerable emphasis on the argument that backdoors and systematic monitoring do not guarantee effectiveness or success in the fight against criminal activity.

To learn more, please see Tech Against Terrorism’s report on terrorist use of E2EE and related mitigation strategies.

Spotlight on: Transparency Reporting

Transparency is vital to ensure that the tech industry is accountable to the public and its users. Transparency, including transparency reporting, allows for public inspection of the extent to which fundamental freedoms, such as freedom of expression, the right to privacy, and the principle of non-discrimination, are respected across the internet. ‘Meaningful’ transparency includes publishing detailed information about policy and enforcement, including on the decision-making processes behind the moderation of content via proactive monitoring, user reporting, and other external content referrals (e.g., from governments and NGOs).

The process of transparency reporting can encourage and recognise meaningful action taken by tech companies to tackle terrorist use of the internet, and can yield crucial statistical insight into the threat. In contrast, a lack of transparency impedes efforts to assess the compliance of moderation practices with human rights standards. Transparency should therefore be considered a key aspect of counterterrorism online, and transparency reporting has been a core part of Tech Against Terrorism’s support for the tech sector since 2017, including in the Mentorship Programme, from which the following set of recommendations is drawn.

The EU Regulation on Terrorist Content Online (TCO) has made it a legal requirement for platforms within its remit to publish the following metrics (Section III, Article 7):

  • In their Terms and Conditions, hosting service providers (HSPs) must clearly state their policy for countering terrorist content including (where appropriate) a meaningful explanation of the functioning of specific measures and use of automated tools.
  • HSPs must publish a yearly transparency report about the actions taken regarding the identification and removal of terrorist content. The report must include:
    • Information about measures to identify and remove content
    • Information about measures to address the reappearance of content, in particular when using automated tools
    • The quantity of terrorist content removed or disabled in the EU following a removal order or specific measures, and the quantity of content not removed (when the order was dismissed)
    • The number and outcome of complaints handled by the HSP
    • The number and outcome of administrative or judicial review proceedings requested by the HSP
    • The number of cases in which the HSP was required to reinstate content following a review
    • The number of cases in which content was reinstated following a complaint from a user
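The Article 7 metrics above amount to a small, fixed reporting schema. As an illustration only, a platform preparing its yearly report might track them in a structure like the following sketch (all field names and figures here are hypothetical and not prescribed by the Regulation):

```python
from dataclasses import dataclass, asdict

# Hypothetical sketch of the yearly TCO Article 7 metrics.
# Field names are illustrative, not prescribed by the Regulation.
@dataclass
class TcoTransparencyReport:
    year: int
    identification_measures: str        # measures to identify and remove content
    reappearance_measures: str          # measures against reappearing content, incl. automated tools
    items_removed_or_disabled: int      # following removal orders or specific measures
    items_not_removed: int              # e.g. where the removal order was dismissed
    complaints_handled: int
    complaints_upheld: int
    review_proceedings_requested: int
    reinstated_after_review: int
    reinstated_after_user_complaint: int

    def to_dict(self) -> dict:
        """Serialise the report, e.g. for publication as JSON."""
        return asdict(self)

# Example usage with placeholder figures
report = TcoTransparencyReport(
    year=2023,
    identification_measures="User reporting and hash matching",
    reappearance_measures="Hash database rechecked on upload",
    items_removed_or_disabled=120,
    items_not_removed=3,
    complaints_handled=15,
    complaints_upheld=2,
    review_proceedings_requested=1,
    reinstated_after_review=1,
    reinstated_after_user_complaint=2,
)
print(report.to_dict()["items_removed_or_disabled"])  # 120
```

Keeping the metrics in one explicit structure like this makes it straightforward to publish the same figures consistently year on year, whatever format the final report takes.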

TCO reporting template: Via Tech Against Terrorism Europe (TATE), a consortium with partners across Europe, Tech Against Terrorism provides capacity-building support for small platforms to align their online counterterrorism response with the European Union’s TCO requirements. As part of this support, we have created a transparency reporting template modelled for compliance with the EU TCO requirements. If you are interested in accessing this template, please reach out at [email protected].

Tech Against Terrorism resources on transparency

We are wary of unrealistic expectations about what metrics transparency reports should include that fail to consider platform capacity, diversity, and functionality. We understand that transparency reports will vary from one platform to another, and that there is no standard template for what a transparency report should be.

  • Transparency Reporting Guidelines on Online Counterterrorism Efforts: To encourage meaningful transparency around online counterterrorism efforts, we have launched a set of dedicated guidelines seeking to improve transparency and accountability from both tech companies and governments. The Guidelines serve as a starting point for increased transparency: we ask companies to report on a small number of core metrics covering policies, processes, systems, and outcomes, and it is our aim that all governments and companies will report on this baseline. The Guidelines are meant to support tech companies in creating transparency reporting frameworks adapted to their different services, resources, and user bases.
  • Transparency reporting resources on the Knowledge Sharing Platforms, including:
    • Key recommendations for improving platforms’ transparency efforts and general accountability towards users, both via transparency reports and other publicly available policy documents.
    • Transparency Report Menu for platforms to build their own reports, informed by the metrics commonly reported on by different types of tech platforms and by Tech Against Terrorism’s mentorship and policy work in support of smaller platforms.
    • Transparency Reporting Benchmarks examining how platforms have established and organised their transparency reports.
  • Bespoke transparency reporting support: Our guidelines and transparency reporting resources complement the practical work Tech Against Terrorism carries out in support of the tech industry, particularly through its Policy Advisory and Response work.

Additional Resources on transparency reporting

In calling for increased transparency and accountability from tech companies, academics and digital rights advocates have also developed a number of resources and guidance to support platforms.

  • The Santa Clara Principles, launched by a coalition of digital rights advocates and academics, propose a “framework for the moderation of user-generated online content that puts human rights at the very center”. The Principles focus on a minimum of information which should be provided to describe platforms’ moderation efforts and their impact on users, and should “serve as a starting point, outlining minimum levels of transparency and accountability that we hope can serve as the basis for a more in-depth dialogue in the future”.
  • New America’s Open Technology Institute, in collaboration with the Berkman Center for Internet & Society, has created a Transparency Reporting Toolkit. This project uses research on the current state of transparency reporting and aims to identify best practices. The Toolkit also offers a template transparency report and provides more general reporting guidelines.
  • Access Now has also launched the Transparency Reporting Index, which provides an overview of transparency reporting practices across the tech sector.
  • The Organisation for Economic Cooperation and Development (OECD) has launched its Voluntary Transparency Reporting Framework (VTRF), a web portal allowing users to access and submit standardised transparency reports on platforms’ online counterterrorism efforts. The VTRF is the result of a two-year multi-stakeholder process and is meant to encourage standardised transparency reporting on countering terrorist and violent extremist content.

Transparency beyond key metrics

To ensure meaningful transparency reporting, Tech Against Terrorism recommends that transparency reports provide background and context on policy development over time and how it may impact enforcement.

For readers to have the full picture of what Trust & Safety means on a certain platform, transparency reports should include:

  • Background on how CTVE practices have evolved in response to the threat landscape, including in response to particular incidents that may have occurred on a platform
  • Information about policy enforcement and the range of moderation actions that can be taken in response to suspected violations
  • Information about the user appeals process
  • Links to relevant content standards and publicly available policy documents
  • Background on any engagement with third-party organisations to detect and counter terrorist and violent extremist use of the platform, and details of the results of this engagement

When beginning the process of creating a transparency report, your platform should keep in mind what the goals of reporting are from the perspective of human rights. Such goals include informing users and the general public about:

  • The counterterrorism efforts undertaken and the results they yield
  • The practical limitations on freedom of speech which may be necessary to ensure the safety of users both online and offline
  • Detection methods, including via reporting schemes and proactive moderation practices
  • Avenues for responding to potential human rights infringements, such as user appeals

What must be reported are the mitigation strategies in place, how these are balanced against users’ rights, and what the outcomes of this exercise are, rather than how much data is produced quantitatively. If conscious efforts have been made to leave certain metrics out of reporting, readers should be told why, to promote understanding and maintain good faith. Good reasons for limiting disclosure might include concern for the privacy of individual users and their data, maintaining the competitive advantage of companies that have invested in technological development, and not providing insights that could help malevolent actors circumvent automated content moderation tools.