
Jamily Maiara Menegatti Oliveira (Master's in European Union Law from the School of Law of the University of Minho)
As we mentioned in last week’s post, on 18 November 2025, the European Board for Digital Services – in cooperation with the European Commission – published the first annual report under Article 35(2) of the Digital Services Act (DSA), dedicated to identifying the most prominent and recurring aspects of systemic risks associated with Very Large Online Platforms (VLOPs), as well as the respective mitigation measures.[1] This is an institutionally relevant document, not only because it inaugurates a new reporting cycle under the DSA, but above all because it reveals how the European Union is beginning to translate classic democratic concerns – such as the integrity of public debate and electoral processes – into the language of risk governance.
Among the various areas analysed, Section 3.3 stands out, in which the report identifies systemic risks associated with civic discourse, electoral processes and public safety. The decision to treat these elements as structural risks, rather than as isolated incidents or mere pathologies of the digital environment, is particularly significant. It highlights the recognition that the regular functioning of platforms – through their algorithmic systems, amplification models and moderation practices – can profoundly affect the conditions under which public opinion is formed in a democratic society.
Among the most recurrent risks, the report highlights the systemic dissemination of disinformation and misleading information regarding election dates, candidate eligibility, registration or voting procedures, as well as the circulation of narratives aimed at delegitimising election results through unfounded allegations of fraud or institutional interference. These phenomena are not analysed as isolated incidents, but as predictable effects of information ecosystems designed to maximise reach and engagement, often at the expense of information reliability.[2]
It should be noted that the DSA does not regulate elections or electoral processes as such but recognises that the structural functioning of digital platforms can generate systemic risks for electoral processes by affecting civic discourse, access to information and the integrity of public debate.
Indeed, algorithmic recommendation raises several concerns, such as the potential polarising effect and the ability to create “information bubbles”, as well as the risk of abuse of commercial and political power and the potentially manipulative effect that recommendations can have on users’ privacy, autonomy and self-determination.[3]
Although the current ease of access to diverse and pluralistic information on digital platforms can make the democratic process more participatory and inclusive,[4] new technologies also enable the dissemination of misinformation on a large scale, with unprecedented speed and targeting accuracy. As information flows become increasingly personalised, echo chambers and the dynamics of reinforcing pre-existing beliefs also intensify,[5] making citizens particularly vulnerable to the influence exerted by people, facts, States, organisations, software and devices, as well as their respective programmers and regulators.[6]
Digital platforms are currently essential tools for the exercise of democracy, not only because of the ease of interaction they enable between citizens, representatives and candidates (or would-be candidates), but also because of their dynamic nature and the ease with which content can be disseminated. However, this same flow of information can serve the opposite purpose, providing fertile ground for disinformation. In the electoral process, contests take shape through the dialectic between candidates, which makes disinformation in this field all the more worrying: information can be accessed instantly, and the spread of disinformation during an electoral process can significantly influence the electoral will, since content is commonly shared without checking its veracity and reliability.[7]
Disinformation can target the integrity of the electoral process by discrediting the voting and vote-counting systems, as well as candidates or political parties, with the aim of gaining an advantage over opponents.[8]
In traditional media, during election periods, regulations are usually put in place to ensure that voters have access to a plurality of voices. However, online platforms are not subject to the same levels of regulation, making them more susceptible to disinformation and online micro-targeting of voters, a political practice used to manipulate public opinion through data analysis and online content targeting.[9]
Particular concern is expressed about the impact of generative artificial intelligence on electoral processes. The report highlights risks associated with the use of chatbots and automatic content generation systems that can provide incorrect information about elections or candidates, as well as the circulation of synthetic content designed to mimic reliable sources or public figures.[10] In electoral contexts, characterised by high temporal and emotional sensitivity, these tools enable sophisticated forms of information manipulation, posing new challenges to the protection of democratic integrity.
In this context, it is also important to mention the contribution of Regulation (EU) 2024/1689, which establishes harmonised rules on artificial intelligence (AI Act), and strengthens the protection of democratic processes in the digital environment. Under Article 6(2), in conjunction with Annex III, AI systems designed to influence the outcome of an election or referendum, or the electoral behaviour of natural persons in the exercise of their right to vote, are classified as high-risk AI systems. This regulatory classification reflects the explicit recognition that certain applications of artificial intelligence pose structural risks to the integrity of democratic processes, justifying the imposition of enhanced obligations in terms of design, use and supervision. The link between the DSA and the AI Act thus reveals a complementary European approach: while the DSA focuses on identifying and mitigating systemic risks associated with the functioning of digital platforms, the AI Act intervenes directly on specific technologies that could seriously compromise citizens’ political self-determination.
An additional element worth highlighting is the time dimension of electoral processes, which poses specific challenges to the risk-based governance approach adopted by the DSA. Election campaigns are characterised by short periods, high communication intensity and rapid circulation of content, which limits the effectiveness of mitigation mechanisms that are overly dependent on ex post assessments or gradual compliance procedures. In this context, the ability of platforms and competent authorities to identify and mitigate risks in a timely manner is particularly important, otherwise regulatory responses may prove to be structurally out of step with the speed of information manipulation during critical moments in democratic life.
Although the report does not expressly refer to the concept of the rule of law, the centrality given to civic discourse and the integrity of electoral processes reveals a material concern with the democratic quality of the digital public space. By framing disinformation, polarisation, information manipulation and intimidation of actors in the public debate as systemic risks, the Board and the Commission assume that the protection of democracy in the European Union depends, to a large extent, on the preservation of open, pluralistic and secure information ecosystems.
The analysis of the electoral aspect of the first report adopted under Article 35(2) of the DSA shows a significant evolution in the European approach to protecting democracy in the digital environment. By recognising that the risks to electoral processes stem largely from the structural functioning of digital platforms and their amplification and moderation systems, the European Union is moving away from purely reactive responses and towards a preventive model of risk governance. However, the effectiveness of this approach will depend on the ability to ensure swift, transparent and proportionate enforcement of the obligations imposed on platforms, especially during election periods marked by high informational sensitivity. Electoral integrity is thus established as one of the main criteria for assessing European regulatory ambition and its ability to safeguard a truly pluralistic and democratic digital public space.
[1] European Board for Digital Services, First report of the European Board for Digital Services in cooperation with the Commission pursuant to Article 35(2) DSA on the most prominent and recurrent systemic risks as well as mitigation measures, 18 November 2025, 48, https://digital-strategy.ec.europa.eu/en/news/press-statement-european-board-digital-services-following-its-16th-meeting.
[2] European Board for Digital Services, First report of the European Board for Digital Services in cooperation with the Commission pursuant to Article 35(2) DSA on the most prominent and recurrent systemic risks as well as mitigation measures, 20.
[3] See Natali Helberger et al., “Regulation of news recommenders in the Digital Services Act: empowering David against the Very Large Online Goliath”, Internet Policy Review, 26 February 2021, https://policyreview.info/articles/news/regulation-news-recommenders-digital-services-act-empowering-david-against-very-large.
[4] See Felisbela Lopes, “Os desafios do pluralismo e da liberdade dos média diante da transição digital: o caso do Serviço Público de Média”, in E-Book CitDig: Centro de Excelência Jean Monnet Em Cidadania Digital & Sustentabilidade Tecnológica – II Congresso Ibero-Americano Sobre Direito e Tecnologias Digitais Subordinado Ao Tema “Estado de Direito e Transição Digital”, ed. Alessandra Silveira and Maria Inês Costa (Pensamento Sábio – Associação para o conhecimento e inovação, 2025), 17, https://doi.org/10.21814/1822.97169.
[5] European Commission, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, Tackling online disinformation: a European Approach, Brussels, 26.4.2018, COM(2018) 236 final, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52018DC0236.
[6] See Jonathan Bridenbaker, “The digital citizen as technoliberal subject: the politics of constitutive rhetoric in the European Union’s digital decade”, Communication and Democracy 58, no. 2 (2024): 163, https://doi.org/10.1080/27671127.2024.2385912.
[7] See Rodrigo López Zilio, Direito Eleitoral (Editora Juspodivm, 2023), 504–5.
[8] Zilio, Direito Eleitoral, 506.
[9] Elda Brogi et al., “EU and media policy: conceptualising media pluralism in the era of online platforms. The experience of the Media Pluralism Monitor”, in Research Handbook on EU Media Law and Policy, ed. Pier L. Parcu and Elda Brogi (Edward Elgar Publishing, 2021), 28, https://doi.org/10.4337/9781786439338.00007.
[10] European Board for Digital Services, First report of the European Board for Digital Services in cooperation with the Commission pursuant to Article 35(2) DSA on the most prominent and recurrent systemic risks as well as mitigation measures, 20.
Picture credit: by Element5 Digital on pexels.com.
Author: UNIO-EU Law Journal (Source: https://officialblogofunio.com/2026/02/02/systemic-risks-digital-platforms-and-electoral-integrity-in-the-european-union-reflections-from-the-first-report-under-article-352-of-the-digital-services-act/)