
Jamily Maiara Menegatti Oliveira (Master's in European Union Law from the School of Law of the University of Minho)
On 18 November 2025, the European Board for Digital Services, in cooperation with the European Commission, published its first annual report under Article 35(2) of the Digital Services Act (DSA). The report is dedicated to identifying the most prominent and recurring systemic risks associated with Very Large Online Platforms (VLOPs), as well as the corresponding mitigation measures.[1] The report holds institutional significance, inaugurating a new reporting cycle under the DSA. More importantly, it illustrates the European Union's initial steps in incorporating the structural impacts of digital platforms on the exercise of fundamental rights into a risk governance framework.
Although Article 34(1)(b) of the DSA expressly includes media freedom and pluralism within the fundamental rights potentially affected by systemic risks, the report does not treat media as a distinct category of analysis. The reference to media freedom and pluralism is subsumed within the broader context of freedom of expression and information, as well as considerations regarding access to a plurality of opinions, including those originating from media organisations. This methodological approach suggests a functional perspective on media freedom and pluralism, centred on the implications of content dissemination and moderation systems for civic discourse, and raises a legal question as to whether indirect safeguards suffice to uphold the democratic integrity of the digital public sphere.
Currently, the media have established a massive presence on online platforms, using them as primary channels for the distribution of journalistic content. Wide-reaching platforms, such as Facebook and X (formerly Twitter), are among the main drivers of traffic to news sites, particularly for younger users, who rely on online social media as their primary source of information.[2]
Platforms act as intermediaries that host and organise user publications and interactions, while also taking on the role of moderating the content generated. This moderation translates into the screening and management of publications, establishing criteria for visibility, reach and permanence. In this context, legal and academic scholarship refers to "algorithmic governance", which can be defined as the way in which large platforms exercise social ordering through automated mechanisms, whether operating alone or in combination with human supervision.[3]
Thus, in the contemporary digital ecosystem, recommendation algorithms play a central role in defining visibility and content circulation. These mechanisms select or suggest what each user will see, based on interests inferred from their connections and even the content they create or interact with.[4] Given this, the literature highlights that algorithmic moderation shapes the visibility of content and, consequently, is capable of directly influencing public discourse.[5]
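By way of purely illustrative example, and without suggesting that this reflects any platform's actual system, the following sketch shows how a recommendation ranker of the kind described above might score posts against interests inferred for a given user. All names, weights and data are hypothetical.

```python
# Purely illustrative sketch of interest-based recommendation ranking.
# All field names, weights and data are hypothetical; no real platform API is implied.

from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    topics: set[str]          # topics the post is about
    engagement_rate: float    # share of viewers who interact with it

@dataclass
class UserProfile:
    interests: dict[str, float] = field(default_factory=dict)  # topic -> inferred affinity

def score(post: Post, user: UserProfile) -> float:
    """Combine inferred topical affinity with overall engagement to rank a post."""
    affinity = sum(user.interests.get(t, 0.0) for t in post.topics)
    return 0.7 * affinity + 0.3 * post.engagement_rate

def recommend(posts: list[Post], user: UserProfile, k: int = 3) -> list[Post]:
    """Return the k posts the user is most likely to see in the feed."""
    return sorted(posts, key=lambda p: score(p, user), reverse=True)[:k]

if __name__ == "__main__":
    user = UserProfile(interests={"eu-law": 0.9, "sports": 0.2})
    posts = [
        Post("news_outlet", {"eu-law"}, 0.4),
        Post("friend", {"sports"}, 0.6),
        Post("brand", {"fashion"}, 0.8),
    ]
    for p in recommend(posts, user):
        print(p.author)
```

The point of the sketch is simply that what each user sees is the output of a scoring function over inferred interests, rather than a neutral chronological feed.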
In principle, these mechanisms can be useful tools for content management, as they reduce exposure to inappropriate or harmful material and promote reliable sources of information. However, the same logic of algorithmic selection that orders and prioritises content can also be used to restrict its visibility through automated moderation techniques. Among these practices, shadow banning stands out: it consists of silently altering or reducing the reach of certain publications under the pretext of ensuring a healthier information environment and moderating online discussions.[6]
Shadow banning practices can include the suppression of search suggestions, account-blocking measures, and the reduction of follower engagement through algorithmic governance. Content suppression thus takes on new contours: in traditional media, scrutiny of illegal content took place after publication, whereas the latest content moderation techniques on digital platforms seek to proactively reduce the exposure and impact that published content will potentially have.[7]
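To make the mechanism tangible, the following is a minimal, hypothetical sketch of how such silent demotion can work: a post is neither removed nor rejected, and its author receives no notification, yet a hidden penalty shrinks its reach and removes the account from search suggestions. The flag names and factors are invented for illustration and do not describe any real platform's moderation pipeline.

```python
# Purely illustrative sketch of "shadow banning" as a silent visibility penalty.
# Flag names and factors are hypothetical; no real platform's pipeline is implied.

VISIBILITY_PENALTY = 0.1  # hypothetical factor applied to flagged accounts

def adjusted_score(base_score: float, author: str, flagged_accounts: set[str]) -> float:
    """Down-rank flagged authors without removing the post or notifying anyone."""
    if author in flagged_accounts:
        return base_score * VISIBILITY_PENALTY  # post stays online, reach quietly shrinks
    return base_score

def search_suggestions(query: str, accounts: list[str], flagged_accounts: set[str]) -> list[str]:
    """Exclude flagged accounts from autocomplete, another typical shadow-banning technique."""
    return [a for a in accounts if a.startswith(query) and a not in flagged_accounts]

if __name__ == "__main__":
    flagged = {"news_outlet"}
    # The outlet's post would normally rank first, but is silently demoted.
    print(adjusted_score(0.9, "news_outlet", flagged))   # 0.09
    print(adjusted_score(0.5, "friend", flagged))        # 0.5
    # The outlet also disappears from search suggestions.
    print(search_suggestions("news", ["news_outlet", "newsroom_eu"], flagged))
```

Because nothing visible happens to the post itself, neither the author nor the audience can easily distinguish this demotion from ordinary ranking.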
In this context, the first report of the European Board for Digital Services, pursuant to Article 35(2) of the DSA, clarifies that systemic risks to freedom of expression and information, enshrined in Article 11 of the Charter of Fundamental Rights of the European Union (CFREU), are not linked to individual pieces of content considered illegal or problematic, but rather to the structural functioning of the systems used to disseminate, organise and moderate such content, such as recommendation systems, advertising mechanisms and content moderation practices. Based on the risk assessment reports submitted by VLOPs and observations from civil society organisations, the report identifies, among other risk factors, the excessive use of automated mechanisms without adequate human oversight, as well as deficiencies in the platforms' appeal systems. There are also risks related to the intentional manipulation of services, namely the abusive use of reporting mechanisms by users to silence legitimate speech, and the impact of recommendation systems on unequal exposure to opinions and content, affecting certain groups more intensely and, in particular, the plurality of voices in the digital public space.[8]
Shadow banning poses a heightened risk to the media, since the details of how the mechanism operates and the logic underpinning the procedure are not readily ascertainable. Media outlets' content may be automatically flagged or have its visibility changed unilaterally, whether through human review or algorithmic systems, without the author being able to determine whether it has, in fact, been subject to visibility restrictions. By contrast, when actions are taken in an unequivocal and obvious manner, such as suspensions or bans, the user is able to realise that their content is being suppressed, whereas shadow banning can be practised without the affected party even realising that their content is not reaching its intended audience. This opacity in content moderation can be used to suppress points of view. In practice, it has been found that, for instance, X and Facebook have reduced the visibility of content without first notifying the users affected.[9]
Although in theory the DSA prohibits shadow banning practices, the 2025 Media Pluralism Monitor (MPM) report indicates that they continue to form part of the moderation strategies employed by VLOPs. Rather than instantly downgrading media publications, platforms cause views to decrease gradually over time, making the loss of reach appear to be a mere algorithmic outcome.[10] Although platforms are in a phase of adaptation, the 2025 MPM points out that many essential monitoring tools are still under development and have therefore not yet reached full effectiveness.
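A brief, purely hypothetical sketch illustrates why such gradual demotion is hard to detect: if a silent penalty is folded into the normal decay of attention that any post experiences after publication, the faster drop in views looks indistinguishable from ordinary loss of interest. The decay rates below are invented and are not drawn from any platform or from the MPM report.

```python
# Purely illustrative sketch of gradual, hard-to-detect demotion over time.
# The decay rates are hypothetical and not drawn from any platform or the MPM report.

def expected_views(day: int, base: float, daily_decay: float) -> float:
    """Simple exponential decay of daily views after publication."""
    return base * (daily_decay ** day)

if __name__ == "__main__":
    # Ordinary post: interest naturally fades.
    # Demoted post: an extra silent penalty is folded into the same decay curve,
    # so the faster drop resembles normal loss of interest.
    for day in range(5):
        normal = expected_views(day, base=1000, daily_decay=0.8)
        demoted = expected_views(day, base=1000, daily_decay=0.8 * 0.6)
        print(f"day {day}: normal={normal:.0f}, demoted={demoted:.0f}")
```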
It is in this context that the European Media Freedom Act (EMFA) – Regulation (EU) 2024/1083 – takes on particular relevance as an indispensable regulatory complement to the DSA. While the DSA focuses primarily on the systemic risks arising from the operation of digital platforms and their algorithmic systems, the EMFA shifts the focus to the structural protection of the media as institutions essential to democratic public debate. By enshrining specific safeguards in terms of editorial independence, transparency in content distribution, protection of journalistic sources and resistance to economic or political interference, the EMFA recognises that media pluralism cannot be ensured indirectly, through the regulation of online discourse, but rather requires its own institutional guarantees.
The articulation between the DSA and the EMFA thus reveals a two-pronged European approach: on the one hand, the containment of systemic risks generated by the algorithmic governance of platforms; on the other, the strengthening of the autonomy and viability of the media as central actors in the public sphere. This ultimately raises the question of whether this regulatory combination will be sufficient to tackle opaque moderation practices, such as shadow banning, and to ensure that the digital transition does not result in a silent erosion of informational pluralism. The answer to this question will, to a large extent, depend on the effectiveness of the concrete application of these instruments and on the ability of European institutions to ensure that the protection of digital democracy does not remain merely at the normative level, but translates into real effects on the functioning of the European public sphere.
[1] European Board for Digital Services, First report of the European Board for Digital Services in cooperation with the Commission pursuant to Article 35(2) DSA on the most prominent and recurrent systemic risks as well as mitigation measures, 18 November 2025, 48, https://digital-strategy.ec.europa.eu/en/news/press-statement-european-board-digital-services-following-its-16th-meeting.
[2] Philip M. Napoli, “Social media and the public interest: governance of news platforms in the realm of individual and algorithmic gatekeepers”, Telecommunications Policy 39, no. 9 (2015): 751, https://doi.org/10.1016/j.telpol.2014.12.003.
[3] See Laura Savolainen, “The shadow banning controversy: perceived governance and algorithmic folklore”, Media, Culture & Society 44, no. 6 (2022): 1091–109, https://doi.org/10.1177/01634437221077174.
[4] See Alessandro Galeazzi et al., “Revealing the secret power: how algorithms can influence content visibility on Twitter/X”, Proceedings of the 33rd Network and Distributed System Security Symposium (NDSS 2026), ahead of print, 8 September 2025, 1, https://doi.org/10.48550/arXiv.2410.17390.
[5] Galeazzi et al., “Revealing the secret power: how algorithms can influence content visibility on Twitter/X”, 3.
[6] Galeazzi et al., “Revealing the secret power: how algorithms can influence content visibility on Twitter/X”, 1.
[7] Savolainen, “The shadow banning controversy: perceived governance and algorithmic folklore”.
[8] European Board for Digital Services, First report of the European Board for Digital Services in cooperation with the Commission pursuant to Article 35(2) DSA on the most prominent and recurrent systemic risks as well as mitigation measures, 13–15.
[9] Galeazzi et al., “Revealing the secret power: how algorithms can influence content visibility on Twitter/X”, 1.
[10] Tijana Blagojev et al., Monitoring media pluralism in the European Union: results of the MPM2025, EUI, RSC, Research Project Report, Centre for Media Pluralism and Media Freedom (CMPF), Country Reports, 2025, 29, https://hdl.handle.net/1814/92916.
Picture credit: by Nothing Ahead on pexels.com.