EU digital governance – what money (alone) cannot buy

Bruno Saraiva [master’s student in European Union Law and Digital Citizenship & Technological Sustainability (CitDig) scholarship holder]

The stakes have never been higher – nor could they be. In today’s world, competition is no longer a byproduct of cohabitation but the very condition of survival – economic, diplomatic, military. And the arena of this competition is digital: data, computing, and the capacity to summon and shape the world’s resources at near-instant speed.[1] This is the new frontier of sovereignty and power. Against this backdrop, the European Union (EU) has wagered that funding, infrastructure, and regulation together – not raw scale alone – will secure its place in the digital age.

Funding as foundation

One must give credit where credit is due: the EU’s Artificial Intelligence (AI) innovation package reflects a cohesive, participatory and integration-oriented approach – though it could always be more comprehensive. Going beyond mere funding and regulatory flexibility, it offers a coherent, structured approach that emphasises not only technological development but also the education and empowerment of potential users regardless of background – a crucial wellspring of transformation. The further technological advancement diffuses outwards from academic institutions and research centres into society, the greater its potential to generate transformative difference. This is because such diffusion entails engagement with a wide range of issues, modes of thought, toolsets and problem-solving strategies, fostering a reflexive process that enriches both innovation and governance.[2] In this sense, diversity of contact and application operates as an engine of innovation, particularly when viewed through the lens of long-term economic development.

However, financing lies at the heart of any form of technological development. For this reason, the amendment of the EuroHPC Regulation [Regulation (EU) 2021/1173] must be praised. By wielding its formidable budget of around EUR 7 billion for the period 2021-2027, this initiative lays down the infrastructural prerequisites for digital sovereignty[3] and AI governance.[4] This, in turn, aids in the prevention of infrastructural capture, assuring the effectiveness of regulation – as well as AI creation and integration within the EU.

Funding and regulation as two sides of the same coin

To do this, the EuroHPC Joint Undertaking endeavours to coordinate the EU’s investment in high-performance computing (HPC) and supercomputing. As alluded to above, it does more than just foster industrial or scientific advancement. It does so in a manner that assures Europe’s strategic autonomy by supporting the creation and maintenance of native, European-owned (and -operated) high-performance computing infrastructure. Without infrastructure physically and legally based in Europe, regulatory efforts like the AI Act or the General Data Protection Regulation (GDPR) would depend on the goodwill of those polities which physically hold the underlying technological infrastructure. This effort is therefore a near prerequisite for the legal compliance, risk classification, and enforcement of AI, cloud services and data-derived innovation.[5] Furthermore, initiatives such as the EuroHPC User Days 2025 Awards, where innovation in European supercomputing was celebrated on 30 September and 1 October 2025, allow the EU to make noticeable “outstanding contributions to the European supercomputing ecosystem.”[6]

As the literature consistently underscores, applying GDPR safeguards to AI remains a continuous challenge.[7] As Sophie Stalla-Bourdillon highlights in her discussion of Opinion 28/2024,[8] the EuroHPC initiative plays a crucial role in providing the infrastructure necessary for secure, privacy-preserving environments. By enabling the processing of sensitive data within the stringent parameters established by the GDPR, EuroHPC represents a cornerstone in aligning technological capacity with the protection of fundamental rights.

Moreover, large-scale simulations and stress testing offer a pathway to developing independent metrics for detecting prohibited AI practices, as delineated in Article 5 of the AI Act. In the absence of such comparative modelling and analysis of how these techniques influence social media dynamics and human interaction, regulators would be compelled to rely predominantly on private enforcement mechanisms. This raises the spectre of the orchestrated deployment of prohibited AI practices concealed behind sectoral and territorial silos – thereby obscuring damaging behaviours in areas where technology intersects with uneven social contexts.[9]

On the other hand, as Mazzucato and Floridi highlight,[10] state-led investment shapes the very trajectory of technological innovation. Since this innovation has social consequences, one cannot discount the indirect shaping of a community’s political economy. We should keep this in mind and recognise the state-based risks of regulatory and infrastructural monopolies as regards information technology;[11] something the EU is particularly well suited to lead on, given its participatory and integration-led institutional framework. The creation of the AI Office, alongside established bodies such as the European Data Protection Board (EDPB), serves as a silent but powerful guarantee of the rule of law within Member States by “communitarising” regulation and moderating its impacts through group participation.

In this sense, EuroHPC manifests the EU as an “entrepreneurial regulator”, not just issuing regulation such as the AI Act and the Digital Services Act (DSA), but also providing the material infrastructure needed to enforce those rules and to foster the industrial competitiveness of the single digital market.

Regulation and industry notably intersect in the matter of content moderation at scale, an ongoing EU concern, as restated by the Commission’s guidelines on prohibited AI practices.[12] This, however, requires the processing of enormous streams of data – the very data that makes up the Union’s digital market.[13] As the CJEU’s judgment in Glawischnig-Piesczek v Facebook Ireland (C-18/18, 3 October 2019) and the aforementioned guidelines make clear, the vetting of digital “content” made available on their platforms is the platforms’ responsibility.

By maintaining high-performance computing capabilities within the physical and sovereign space of the EU, combating infrastructural capture becomes possible through trusted EU-hosted moderation technologies – preserving the effet utile of the DSA and its emphasis on the accountability of very large online platforms (VLOPs).

Lessons from history

Although it is obviously hard to quantify, even a superficial analysis suggests that the EU’s strategy compares favourably with the US government’s paradoxically laissez-faire approach to AI, which consists of removing barriers while making massive investments.

Although investment is a key element of technological development, it is not a panacea. While a direct comparison is complicated, the example provided by previous research cycles should be borne in mind. During the 1980s rush to take the lead in particle physics research, the US-led (and eventually US-only) Superconducting Super Collider (SSC) project found itself unable even to get off the ground, in contrast with the slower and less ambitious research objectives of Europe’s CERN. While the SSC was budgeted at USD 10 billion upon its untimely legislative demise, CERN’s contemporaneous effort cost the equivalent of USD 2.3 billion (while already running at twice its original budget). In the end, it would be CERN’s Large Hadron Collider that delivered the much sought-after Higgs boson, also known as the “God particle.”[14]

Though money talks, science often comes in whispers.[15] That is why we find the EU’s approach particularly convincing. It does not simply leverage investment and infrastructure creation but endeavours to create a structured economic sector – by focusing on the integration of small and medium-sized enterprises (SMEs). From a practical standpoint, it does so by encouraging the creation of special-purpose AI models and integrating them from conception into existing (or only now determinable) economic processes. This not only operates in concert with the EU’s circular economy strategy,[16] but also avoids the risks of blindly fostering innovation for the self-serving economic bounty it provides.

By recognising these technological developments for what they are – tools that are both potent and dangerous – one can reduce the risks of: (a) the development of gratuitous investment bubbles, and (b) the unforeseen societal effects born out of unmonitored technological development.

Regarding the first point, the integration of these tools into existing social and economic structures is essential to assure the permanence of the solutions developed and invested in. By allowing SMEs to become stakeholders in the development and perpetuation of these technologies, those technologies are much more likely to receive sustained support – not just funding, but attention and continuous use from a much wider pool of participants than elite practice would allow.

Funding, education, and the workforce

We must note, then, that this approach conceptually aligns the EU’s AI strategy with that of China, which, in addition to leveraging significant state funding and private-industry integration, is actively expanding higher education capacities – such as increasing undergraduate enrolment to cultivate AI talent in line with national strategic goals.[17]

The crucial difference lies in competence allocation: the EU Treaties do not empower the Union to interfere directly with educational curricula, including the insertion of AI studies, whereas Beijing’s centralised governance allows for national-level curricular directives. Since education remains a domain zealously guarded by the Member States, the EU instead exerts influence indirectly, primarily through the funding of research grants – such as Horizon Europe – that steer academia towards AI-related priorities.[18]

Given these limitations, the Union relies on a combination of soft-law instruments – such as recommendations, communications and codes of conduct – alongside financial programmes including Horizon Europe, Erasmus+ and Digital Europe. It further promotes cooperation through partnerships and transnational networks, most notably the Digital Education Action Plan (2021-2027) and the European Universities Initiative. Finally, vocational training frameworks under Article 166 TFEU are mobilised to prepare the workforce for industrial transformation, the green transition and the broader process of digitalisation, within which AI is increasingly framed as a strategic priority.[19]

We suggest that this competence marks the area where the EU’s strategy can most clearly distinguish itself from both the Chinese and US approaches to AI governance. By leveraging its formidable vocational training resources, the EU can help “narrow the gap” between industry and academia – both physically and culturally[20] – by activating this intermediate layer of education. We will explore this dimension further in a future blog post.

*

While the US strategy favours a “brute force” approach, both the EU and the People’s Republic of China place greater emphasis on the educational dimension of these tools. Long-term results remain uncertain, but in the short term, a wave of technically empowered graduates is likely to yield a workforce that not only uses AI more effectively and thoughtfully but also shapes the very models it creates. By privileging training and application over sheer production of AI models, these systems can be refined to suit specific social, industrial, or regulatory contexts – resulting in more optimised and specialised tools.

In this respect, the EU’s funding mechanisms and regulatory guidance consistently encourage a “limited model” approach. Rather than chasing scale for its own sake, Europe’s strategy aims to embed AI within existing economic and social structures, fostering resilience and inclusivity.

Taken together, this approach reflects a broader wager: that Europe can sustain competitiveness in the digital age not by outspending rivals, but by aligning investment, education, and regulation in a mutually reinforcing framework. The true measure of success will not be the number of models produced, but whether these tools strengthen democratic legitimacy, protect fundamental rights, and empower diverse actors across the single digital market.


[1] European Commission: European Political Strategy Centre, The future of European competitiveness. Part A, A competitiveness strategy for Europe (Publications Office of the European Union, 2025), 25, https://data.europa.eu/doi/10.2872/9356120.

[2]  European Commission: Directorate-General for Research and Innovation, Science, research and innovation performance of the EU, 2024 – A competitive Europe for a sustainable future (Publications Office of the European Union, 2024), 80, https://data.europa.eu/doi/10.2777/965670.

[3] “EUR 1,9 billion from the Digital European Program (DEP) to support the acquisition, deployment, upgrading and operation of the infrastructures, the federation of supercomputing services, and the widening of HPC usage and skills…”. See The European High Performance Computing Joint Undertaking (EuroHPC JU), “Discover EuroHPC JU – EuroHPC JU,” accessed September 1, 2025, https://www.eurohpc-ju.europa.eu/about/discover-eurohpc-ju_en.

[4]  “EUR 900 million from Horizon Europe (H-E) to support research and innovation activities for developing a world-class, competitive and innovative supercomputing ecosystem across Europe…”. See “Discover EuroHPC JU – EuroHPC JU.”

[5] As it relates to AI development and the AI Act in particular, HPC infrastructure is a requirement for the training and deployment of large AI models (including foundation models). The AI Act’s effectiveness relies on the capacity to audit, sandbox and test AI models and systems – something that requires the massive computational resources that this investment assures. “Discover EuroHPC JU – EuroHPC JU.”

[6] See EuroHPC JU, “EuroHPC User Days 2025 Awards: celebrating innovation in European supercomputing”, Press Release, October 2, 2025, https://www.eurohpc-ju.europa.eu/eurohpc-user-days-2025-awards-celebrating-innovation-european-supercomputing-2025-10-02_en.

[7] See Sandra Barbosa and Sara Félix, “Algorithms and the GDPR: an analysis of Article 22,” Anuário Da Proteção de Dados, 2021, 67–93; Paul De Hert and Guillermo Lazcoz, “Radical rewriting of Article 22 GDPR on machine decisions in the AI era,” European Law Blog, ahead of print, October 13, 2021, https://doi.org/10.21428/9885764c.acdeb23f; Article 29 Data Protection Working Party, “Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679 (Wp251rev.01),” February 6, 2018, https://ec.europa.eu/newsroom/article29/items/612053.

[8] Sophie Stalla-Bourdillon, “EDPB Opinion 28/2024 on personal data processing in the context of AI models: a step toward long-awaited guidelines on anonymisation?,” European Law Blog, ahead of print, January 12, 2025, https://doi.org/10.21428/9885764c.3518cb2b.

[9] This finds particular backing when squared with the findings of Amir Al-Maamari, “Between innovation and oversight: a cross-regional study of AI risk management frameworks in the EU, U.S., UK, and China,” version 1, preprint, arXiv, 2025, https://doi.org/10.48550/ARXIV.2503.05773. Here, the author suggests “the EU implements a structured, risk-based framework that prioritizes transparency and conformity assessments, while the U.S. uses decentralized, sector-specific regulations that promote innovation but may lead to fragmented enforcement. The flexible, sector-specific strategy of the UK facilitates agile responses but may lead to inconsistent coverage across domains. China’s centralized directives allow rapid large-scale implementation while constraining public transparency and external oversight.” This regulatory fragmentation can mask harmful AI uses across sectors and jurisdictions.

[10] See Mariana Mazzucato, The entrepreneurial state: debunking public vs. private sector myths, revised edition, Anthem Frontiers of Global Political Economy (Anthem Press, 2014); Luciano Floridi, The 4th revolution: how the infosphere is reshaping human reality, first edition (Oxford University Press, 2014).

[11] Something to which even the EU is not immune, as the CJEU’s judgment in Republic of Poland v European Parliament and Council of the European Union (C-401/19, 26 April 2022, ECLI:EU:C:2022:297) shows. Although the stated goal is admirable, namely to preserve freedom of expression and information that might be endangered by overly broad content filtering duties on platforms, the reality is that if Articles 17(4)(b) and (c) had been struck down, this would have severely limited EU-wide digital regulatory capacity. Regulation and rule-keeping are not neutral – in practice, excesses and deficits have much in common.

[12] See European Commission, “Commission publishes the Guidelines on prohibited artificial intelligence (AI) practices, as defined by the AI Act”, Policy and Legislation, 4 February 2025, accessed June 3, 2025, https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act.

[13] See Tarleton Gillespie, “Content moderation, AI, and the question of scale,” Big Data & Society 7, no. 2 (2020), https://doi.org/10.1177/2053951720943234; Paul M. Barrett Hendrix Justin, “Is generative AI the answer for the failures of content moderation?,” Tech Policy Press, April 3, 2024, https://techpolicy.press/is-generative-ai-the-answer-for-the-failures-of-content-moderation.

[14] See Michael Riordan, “A bridge too far: the demise of the superconducting super collider,” Physics Today 69, no. 10 (2016): 48–54, https://doi.org/10.1063/PT.3.3329.

[15] The “Draghi report” also highlights this dynamic in other high technology sectors while emphasising the need for sustained support: “The EU has developed a world-class space sector, despite much lower levels of funding, but is now starting to lose ground (…)”. See European Commission, The future of European competitiveness, 60.

[16] Namely in the textile and food, water and nutrition cycles, which, despite occurring naturally and being essential to human life, remain economically undervalued in marginalised regions. See European Commission: Directorate-General for Communication, Circular economy action plan: for a cleaner and more competitive Europe (Publications Office of the European Union, 2020), 13 and 14, https://data.europa.eu/doi/10.2779/05068. For a broader view on the connection between digital development and circular economy, see Balgopal Singh, “Product circularity in the digital circular economy,” in Sustainable innovations and digital circular economy, ed. Rubee Singh and Vikas Kumar (Springer Nature Singapore, 2025), https://doi.org/10.1007/978-981-96-1064-8_10.

[17] “China’s top universities expand enrolment to beef up capabilities in AI, strategic areas,”, Reuters, March 10, 2025, https://www.reuters.com/world/china/chinas-top-universities-expand-enrolment-beef-up-capabilities-ai-strategic-areas-2025-03-10/.

[18] Under the Treaties, the EU does not have the competence to harmonise national curricula, as stated by Articles 165(4) and 166(4) TFEU. Its role in education and vocational training is supportive, limited to fostering cooperation, mobility, the recognition of diplomas and the adaptation of skills to industrial change. By contrast, the Union enjoys broader competences to coordinate programmes, fund initiatives, and set strategic priorities in research and technological development (Articles 179-182 TFEU, providing the umbrella for the aforementioned EuroHPC initiative), industry (Article 173 TFEU, which coordinates investments like the AI Act’s innovation sandboxes, AI factories and the Circular Economy Action Plan) and environment (Articles 191 to 193 TFEU). This legal framework explains why EU strategies on AI, digital transition and circular economy rely on funding mechanisms (e.g. Horizon Europe, Erasmus+, Digital Europe Programme) and soft-law coordination, rather than binding curricula.

[19] European Commission: Directorate-General for Research and Innovation, Science, research and innovation performance of the EU, 190.

[20] Both this gap and the potential for “open science” are highlighted in the European Commission’s institutional literature. See European Commission: Directorate-General for Research and Innovation, Science, research and innovation performance of the EU, 207–12 and 277.


Picture credit: by Ron Lach on pexels.com.

 
Author: UNIO-EU Law Journal (Source: https://officialblogofunio.com/2025/10/14/eu-digital-governance-what-money-alone-cannot-buy/)