Moderation does not always have to mean content deletion. Alternative approaches, of the kind suggested below, may be of interest to HSPs seeking to moderate content (pro-)actively beyond the requirements of the TCO Regulation. It is important to highlight that such alternative moderation approaches fall outside the scope of the TCO Regulation and can only be used when a platform has not received a removal order but wants to moderate non-terrorist, otherwise harmful content proactively. When you receive a removal order, the procedure is clear: you must remove the content, and no alternative moderation approach is an option.
Four such approaches are outlined below: hiding content, disengagement, pedagogical or communication-based tactics, and community empowerment.
HSPs can partially or completely hide content, rather than blocking it, if they believe that users may find the content offensive or objectionable even though it is legitimate, legal and permissible across the entire platform. Hiding content from members of a vulnerable group, or from users located in a country where the content is illegal (while it is permitted elsewhere), is one such response. Various technical functionalities can be used to hide content, such as login or paywall filters, a secure (search) mode that displays only age-appropriate content, and geo- or time-blocking.
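As a rough illustration of how such visibility rules might fit together, the sketch below combines a login filter, an age-verification check, a "secure mode" and geo-blocking into a single visibility decision. All class, field and function names are hypothetical assumptions for illustration only; they do not correspond to any specific platform's API or to anything required by the TCO Regulation.

```python
from dataclasses import dataclass, field

@dataclass
class Viewer:
    logged_in: bool
    age_verified: bool   # e.g. confirmed to be of the required age
    secure_mode: bool    # viewer has opted into an age-appropriate "secure" mode
    country: str         # country code derived from IP address or account settings

@dataclass
class ContentItem:
    content_id: str
    sensitive: bool        # potentially offensive or objectionable, but legal
    age_restricted: bool   # should only be shown to age-verified viewers
    blocked_countries: set = field(default_factory=set)  # where the content is illegal

def is_visible(item: ContentItem, viewer: Viewer) -> bool:
    """Decide whether to show, rather than delete, a legal but sensitive piece of content."""
    # Geo-blocking: hide the content where it is illegal, keep it elsewhere.
    if viewer.country in item.blocked_countries:
        return False
    # Login filter: sensitive content is shown to logged-in viewers only.
    if item.sensitive and not viewer.logged_in:
        return False
    # Age-appropriate display: hide age-restricted content from unverified viewers.
    if item.age_restricted and not viewer.age_verified:
        return False
    # Secure mode: hide sensitive content from viewers who opted into the filtered view.
    if item.sensitive and viewer.secure_mode:
        return False
    return True

# Example: the same post is hidden in one country and from logged-out users,
# but remains visible to this logged-in viewer.
viewer = Viewer(logged_in=True, age_verified=True, secure_mode=False, country="DE")
item = ContentItem("post-1", sensitive=True, age_restricted=False, blocked_countries={"FR"})
print(is_visible(item, viewer))  # True
```

The point of such a design is that the content itself is never deleted; only the set of viewers who can see it changes.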
Disengagement deprives certain content or users of engagement metrics, deters activity around a post, and can make posting such content generally unrewarding; however, the content and the user account remain on the platform. Disengagement restricts the prominence of posts or accounts. Typical tactics include disabling platform features (on many social networks, for instance, the ability to like, comment on, or share a post, so that it can only be read), demonetisation (depriving accounts of the ability to make money from their content), and de-verification (removing any certification of the account's or user's identity). Such sanctions can entail, and be compounded by, a change in how the platform's algorithm treats the content or account: downgraded content is harder to distribute and promote widely through the platform's mechanisms (usually recommendation algorithms).
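The sketch below is a minimal, hypothetical model of how a platform might represent such sanctions: a per-post policy object whose features can be switched off, plus a ranking multiplier that a recommendation system could use to downgrade the post. None of these names or values come from a real platform; they simply make the idea concrete.

```python
from dataclasses import dataclass

@dataclass
class EngagementPolicy:
    """Per-post restrictions a platform might apply instead of removal (hypothetical model)."""
    allow_likes: bool = True
    allow_comments: bool = True
    allow_sharing: bool = True
    monetised: bool = True
    verified_badge: bool = True
    ranking_multiplier: float = 1.0  # values below 1.0 downrank the post in recommendations

def disengage(policy: EngagementPolicy) -> EngagementPolicy:
    """Apply a typical disengagement sanction: the post stays online but becomes read-only,
    is demonetised, de-verified and downgraded by the recommendation algorithm."""
    policy.allow_likes = False
    policy.allow_comments = False
    policy.allow_sharing = False
    policy.monetised = False
    policy.verified_badge = False
    policy.ranking_multiplier = 0.1
    return policy

def ranking_score(base_relevance: float, policy: EngagementPolicy) -> float:
    """Downgraded content is harder to distribute widely via recommendations."""
    return base_relevance * policy.ranking_multiplier

# Example: the post remains on the platform but is barely recommended (0.8 x 0.1).
p = disengage(EngagementPolicy())
print(ranking_score(0.8, p))
```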
The goal of pedagogical or communication-based tactics is to offer users additional information so that they can ultimately decide for themselves whether or not they want to see the content. Ultimately, the platform decides which content receives such notices, what the notices contain, what category of harm users are warned about, and how much additional information is offered. A well-known example is the practice formerly used by Twitter of alerting users that a post may contain harmful content, such as misinformation or conspiracy narratives, and of showing the content only after they have actively confirmed, by clicking a button, that they wish to see it. Particularly in the case of political-ideological content that might promote radicalisation, counter-narratives and links to educational information can sensitise people to the possible effects of such content.
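A minimal sketch of such an interstitial "click-to-view" flow is shown below. It assumes a hypothetical warning record with a harm category, an explanatory note and an optional link to educational material; the rendering logic and all names are illustrative only, not any platform's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Warning:
    category: str                  # e.g. "misinformation", "conspiracy narrative"
    note: str                      # short explanatory text shown to the user
    info_url: Optional[str] = None # link to counter-narratives or educational material

def render_post(post_text: str, warning: Optional[Warning], user_confirmed: bool) -> str:
    """Show an interstitial notice first; reveal the content only after explicit confirmation."""
    if warning is None:
        return post_text
    if not user_confirmed:
        extra = f"\nMore information: {warning.info_url}" if warning.info_url else ""
        return (f"Warning: this post may contain {warning.category}.\n"
                f"{warning.note}{extra}\n[Click to view anyway]")
    return post_text

# Example: the content is shown only once the user actively clicks through.
w = Warning("misinformation", "Independent fact-checkers dispute claims in this post.")
print(render_post("Original post text", w, user_confirmed=False))
print(render_post("Original post text", w, user_confirmed=True))
```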
The premise of moderation mechanisms based on community empowerment is to allow users themselves to create the digital space they envisage. Such strategies may be of particular interest to platforms for which the idea of community is important, or whose moderation practices already depend to some extent on the support of users. These approaches are a form of distributed moderation. In addition to the up- and down-voting functionality already mentioned, they include the individual blocking or muting of specific accounts, which a large number of platforms already offer, and the use of admins or moderators drawn from the community itself. Closely linked to this is the concept of 'Trusted Flaggers', set out in Article 22 of the Digital Services Act. The term refers to flaggers who are particularly trustworthy and competent in assessing the illegality of content and reporting it objectively and quickly, and who represent collective (public-interest) concerns independently of any specific online platform. Content reported in this way should be processed by the HSP with priority and without delay.
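One way to give trusted-flagger reports priority is a simple priority queue in the report-handling pipeline, as in the hypothetical sketch below. The class names, priority values and report fields are assumptions made for illustration; the DSA itself does not prescribe any particular technical mechanism.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Report:
    priority: int                      # lower value = handled earlier
    content_id: str = field(compare=False)
    reporter: str = field(compare=False)
    trusted_flagger: bool = field(compare=False, default=False)

class ReportQueue:
    """Simple priority queue: trusted-flagger reports jump ahead of ordinary user reports."""
    def __init__(self) -> None:
        self._heap: list[Report] = []

    def submit(self, content_id: str, reporter: str, trusted_flagger: bool = False) -> None:
        priority = 0 if trusted_flagger else 1
        heapq.heappush(self._heap, Report(priority, content_id, reporter, trusted_flagger))

    def next_report(self) -> Report:
        return heapq.heappop(self._heap)

# Example: the ordinary report queues behind the trusted-flagger report.
q = ReportQueue()
q.submit("post-123", "user-a")
q.submit("post-456", "ngo-hotline", trusted_flagger=True)
print(q.next_report().content_id)  # "post-456" is processed first
```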
These alternative approaches to moderation can be relevant even where platforms are not compelled by the TCO Regulation to take action. Whatever form such measures take, the TCO Regulation accommodates a proactive approach: if, in the course of its own (pro-)active moderation measures, an HSP encounters content involving an imminent threat to life or a terrorist act, it must remove that content and immediately inform the competent authority of the EU Member State concerned (TCO Regulation, Art. 14(5)).
More details on the (technical) methods these alternative approaches require, as well as their advantages, disadvantages and case studies, are provided by Tech Against Terrorism here.