The EU Directive on violence against women and domestic violence – fixing the loopholes in the Artificial Intelligence Act

Inês Neves (Lecturer at the Faculty of Law, University of Porto | Researcher at CIJ | Member of the Jean Monnet Module team DigEUCit) 

March 2024: a significant month for both women and Artificial Intelligence

In March 2024 we celebrate women. But March was not only the month of women. It was also a historic month for AI regulation. And, as #TaylorSwiftAI has shown us,[1] they have a lot more in common than you might think.

On 13 March 2024, the European Parliament approved the Artificial Intelligence Act,[2] a European Union (EU) Regulation proposed by the European Commission back in 2021. While the law has yet to be published in the Official Journal of the EU, it is fair to say that it makes March 2024 a historic month for Artificial Intelligence (‘AI’) regulation.

In addition to the EU’s landmark piece of legislation, the Council of Europe’s path towards the first legally binding international instrument on AI has also progressed, with the finalisation of the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law.[3] Like the EU’s cornerstone legislation, it will be a ‘first of its kind’, aiming to uphold the Council of Europe’s legal standards on human rights, democracy and the rule of law in the regulation of AI systems. With its finalisation by the Committee on Artificial Intelligence, the way is now open for signature at a later stage. While the non-self-executing nature of its provisions is to be expected, doubts remain as to its full potential, given the high level of generality of those provisions and their declarative nature.[4]

Later, on 21 March, the United Nations (UN) General Assembly adopted a landmark resolution on the promotion of “safe, secure and trustworthy” AI systems that will also benefit sustainable development for all.[5] The resolution is a forerunner in this regard, as the first UN resolution in the field. Like the previous developments, it builds on the sui generis nature of AI, both as an enabler of the 17 Sustainable Development Goals and as a risk to international human rights law. The resolution is also concerned with the digital divide between AI champions and developing nations, which poses challenges to inclusive and equitable access to the benefits of AI, starting with the digital literacy gap.

In this text, we will focus on the AI Act as the development with the ‘most teeth’. It directly imposes requirements on specific AI systems and obligations on various actors in the AI lifecycle, from providers to importers, distributors, deployers and others.

As we will see, it is an improvement with respect to some AI systems and uses that may harm fundamental rights. However, it is not a panacea. In particular, we will highlight the insufficiency of its normative framework with regard to deepfakes, especially those targeting women.

As this text will show, the AI Act has loopholes that make the Commission’s proposal for a Directive on combating violence against women and domestic violence[6] another ‘first’ to watch. The Directive criminalises certain forms of violence against women across the EU, with a particular focus on online activity (‘cyberviolence’). The fact that it targets, among others, the non-consensual sharing of intimate images (including deepfakes) makes it a safer avenue when compared to the limited transparency requirements of the AI Act.

So the question here is: why do women need the EU Directive on violence against women and why is the AI Act not enough?

After briefly contextualising both the AI Act and the proposed Directive on violence against women and domestic violence, the bridges between them in relation to deepfakes will be considered.

The Artificial Intelligence Act as approved

The Artificial Intelligence Act, or as it is more commonly known, the AI Act, is seen as the most influential example of an attempt to regulate AI across the board. Ethics, the previously predominant approach, has given way to binding law – ‘hard law’.

Beyond the expectations placed on this EU legislation, which will shape or inspire the future governance of AI, including beyond the EU, the Regulation has been awaited with great anxiety and hope because of the benefits it will bring: to citizens, in terms of mitigating the risks of AI to health, safety and fundamental rights; and to businesses, whether providers, deployers, importers or distributors of AI, which will gain greater legal certainty as to what is expected of them. National public administrations will also benefit from increased citizen confidence in the use of AI.

In general, the Regulation, which is the result of a European Commission proposal of April 2021, pursues the goal of human-centred AI and faces a difficult balance: between protecting fundamental rights on the one hand, and ensuring EU leadership in a sector that is critical to it on the other.

This balance takes the form of a ‘mix’ of i) measures to support innovation (with a particular focus on SMEs and start-ups) and ii) harmonised, binding rules for the placing on the market, putting into service and use of AI systems in the EU. These rules are adapted to the intensity and scope of the potential risks involved. It is precisely this idea of proportionality that explains why, in addition to a set of prohibited practices (which pose an unacceptable risk to the health, safety and fundamental rights of citizens), there are also strict rules for high-risk systems and their operators, as well as specific obligations for certain AI systems (those designed to interact directly with natural persons, or that generate or manipulate content constituting deep fakes) and general-purpose AI models. In contrast, (other) low-risk AI systems will only be asked to comply with voluntary codes of conduct.

The paradigm shift – from ‘wait and see’ to legislation ‘with teeth’ – explains the set of rules dedicated to market oversight and surveillance, governance and enforcement. Indeed, although this is a Regulation – directly applicable in the Member States and therefore, unlike a Directive, not requiring transposition – Member States will still have a crucial role to play in terms of enforcement and will have to establish or designate at least a notifying authority and a market surveillance authority responsible for monitoring systems after they are placed on the market.

Moreover, as with other EU legislation, certain decisions are left to the Member States. To begin with, it will be up to them to decide on the objectives and offences for which real-time remote biometric identification in publicly accessible spaces may be allowed for law enforcement purposes (a practice generally prohibited by the Regulation). It will also be up to the competent national authorities to establish at least one AI regulatory sandbox at national level. Finally, it will be up to Member States to regulate whether fines may be imposed on public authorities and bodies, which are also subject to the obligations of the AI Act.

So, there is still a long way to go. Although the Regulation will enter into force on the twentieth day following its publication in the Official Journal of the EU, its application is deferred over time. Thus, in addition to a general applicability period of twenty-four months, there are shorter or longer periods: six months for the prohibitions, twelve months for governance and general-purpose AI models, and thirty-six months for certain high-risk AI systems.

Until then, all eyes are on the Member States and the European Commission.

The AI Act has been perhaps the most coveted, discussed, debated and trendy piece of EU legislation in recent times. And what it seeks to achieve is worthy and deserving of such prominence. But it is important to remember that there is still a lot of work to be done, and that the promises it makes will depend on its effective implementation.

From the EU’s first-ever wide action on combating violence against women and domestic violence to a ‘historic deal’

At present, there is no specific legislation on violence against women in the EU legal order. Although such violence is potentially covered by horizontal legislation on the general protection of victims of crime, it has become necessary to adopt legislation specifically aimed at preventing and combating it: i) by criminalising certain forms of violence, such as female genital mutilation, forced marriage and a number of forms of cyberviolence; and ii) by strengthening protection (before, during and after criminal proceedings), access to justice and support for victims, as well as ensuring cooperation and coordination of national policies and between competent authorities.

The priority is in line with the EU Gender Equality Strategy 2020-2025,[7] one of the objectives of which is to put an end to gender-based violence. This is why, in addition to preparing the EU’s accession to the Council of Europe Convention on preventing and combating violence against women and domestic violence (Istanbul Convention),[8] which was approved by Council decision on 1 June 2023,[9] the European Commission adopted the first comprehensive legal instrument at EU level to tackle violence against women – the proposal for a Directive on combating violence against women and domestic violence of 8 March 2022.

With regard to its ‘first core’ – the criminalisation of physical, psychological, economic and sexual violence against women across the EU, both offline and online – the Directive includes minimum rules on limitation periods, incitement, aiding and abetting, and attempt, as well as on the applicable criminal penalties. A second dimension (covering all victims of crime, not just women) focuses on the speedy processing of complaints, the effective and specialised handling of investigations, individual risk assessment, adequate support services, and the training and competence of police and judicial authorities and other national bodies.

Among the offences criminalised by the Directive are the non-consensual sharing of intimate or manipulated material, cyber stalking, cyber harassment and cyber incitement to violence or hatred.

Although the criminalisation of rape in the initial proposal was not included in the provisional agreement, due to a lack of consensus on the legal definition (the issue of consent and the ‘only yes means yes’ approach),[10] the Directive takes important steps to prevent and criminalise forms of cyberviolence. One example is the production or manipulation, and subsequent distribution to a multitude of end-users through information and communication technologies, of images, videos or other material creating the impression that another person is engaged in sexual activities, without that person’s consent. The Directive also requires Member States to take the necessary measures to ensure the rapid removal of such material, including the possibility for their competent judicial authorities to issue, at the request of the victim, binding orders to remove or disable access to it, addressed to the relevant intermediary service providers.

EU lawmakers reached a provisional agreement (“a historic deal”) on 6 February 2024,[11] which now needs to be formally adopted so that the text can be published in the Official Journal of the EU, opening a three-year period for its transposition by Member States.

Building bridges between the AI Act and the Directive on violence against women: the particular case of deepfakes

While applauded, the AI Act leaves us with the bittersweet feeling that a series of exemptions could reduce it to a dead letter, and that it depends heavily on the adoption of harmonised standards and common specifications to guide operators in complying with its requirements (especially for high-risk AI systems).

At the same time, it should also be recognised that the AI Act will by no means be a panacea for all AI ills, nor the remedy for the EU’s strategic dependencies. On the contrary, in addition to realpolitik, it is important not to overlook other pieces of national and EU legislation that are equally essential in building a human-centred and business-friendly AI ecosystem.

In fact, nothing in the Regulation repeals or displaces important sectoral or specific legislation. On the contrary, the AI Act needs such legislation to fulfil its objectives. For evidence of this, look no further than its response to deepfakes and the inadequacy of its transparency requirements to deal with practices that may constitute criminal offences.

Indeed, the only mandatory requirement for those who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic (‘deep fakes’) is to disclose clearly and conspicuously that the content has been artificially generated or manipulated, by labelling the AI output accordingly and disclosing its artificial origin.

This transparency requirement should not be interpreted as implying that the use of the system or its output is necessarily legitimate (and licit). Moreover, transparency may be an enabler of the implementation of the Digital Services Act (DSA),[12] particularly with regard to the obligations of providers of very large online platforms or very large online search engines to identify and mitigate systemic risks that may arise from the dissemination of artificially generated or manipulated content. However, neither the AI Act nor the DSA adequately protect women from deepfakes that specifically target them.

To begin with, deepfakes are classified as neither prohibited nor high-risk under the AI Act. As a result, they are subject (only) to transparency obligations regarding the labelling and detection of artificially generated or manipulated content. In addition to relying heavily on implementing acts or codes of practice, the disclosure of the existence of such generated or manipulated content need only be made in a manner that does not hamper the display or enjoyment of the work. Furthermore, there is no obligation to remove or suspend the content.

Transparency requirements are primarily intended to benefit those who see, hear or are otherwise exposed to the manipulated content. Transparency is a precondition for the free development of the personality of those recipients.

What about those who are harmed by deepfakes?

According to the “2023 State of Deepfakes: Realities, Threats and Impact” report by the start-up Home Security Heroes,[13] “The prevalence of deepfake videos is on an upward trajectory, with a substantial portion featuring explicit content. Deepfake pornography has gained a global foothold and commands a considerable viewership on dedicated websites, most of which have women as the primary subjects.” In fact, “99% of the individuals targeted in deepfake pornography are women.”

While a transparency requirement can protect the fundamental rights of recipients, and while deepfakes can be included in the assessment of systemic risks arising from the design, functioning and use of online services, as well as from potential misuse by recipients of the service, neither the AI Act nor the DSA does what the Directive proposes to do: i) criminalise these practices and ii) require the effective and rapid removal or blocking of access to such content by the relevant service providers.

It is therefore safe to say that, whatever its shortcomings, the Directive has the advantage of filling gaps in EU and national legislation on forms of violence that, while not exclusively affecting women, are clearly “targeted” at them. Thus, if the Directive on combating violence against women and domestic violence is a ‘first’, like the AI regulations, it is certainly a primus inter pares when it comes to combating violence against women.

[1] Josephine Ballon, “The deepfakes era: What policymakers can learn from #TaylorSwiftAI”, EURACTIV, 5 February 2024.

[2] European Parliament, “Artificial Intelligence Act: MEPs adopt landmark law”, Press Release, 13 March 2024.

[3] Council of Europe, “Artificial Intelligence, Human Rights, Democracy and the Rule of Law Framework Convention”, Newsroom, 15 March 2024.

[4] See the European Data Protection Supervisor (EDPS) statement in view of the 10th and last Plenary Meeting of the Committee on Artificial Intelligence (CAI) of the Council of Europe, drafting the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. See also Eliza Gkritsi, “Council of Europe AI treaty does not fully define private sector’s obligations”, EURACTIV, 15 March 2024.

[5] United Nations, “General Assembly adopts landmark resolution on artificial intelligence”, UN News, 21 March 2024.

[6] Proposal for a Directive of the European Parliament and of the Council on combating violence against women and domestic violence, COM/2022/105.

[7] Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions – A Union of Equality: Gender Equality Strategy 2020-2025, COM/2020/152 final.

[8] The Council of Europe Convention on preventing and combating violence against women and domestic violence (Istanbul Convention).

[9] Council of the EU, “Combatting violence against women: Council adopts decision about EU’s accession to Istanbul Convention”, Press release, 1 June 2023.

[10] Mared Gwyn Jones, “EU agrees first-ever law on violence against women. But rape is not included”, EURONEWS, 7 February 2024; Lucia Schulten, “EU fails to agree on legal definition of rape”, DW, 7 February 2024. This has led to criticism from social groups, who say the agreement is disappointing – see, inter alia, Amnesty International, “EU: Historic opportunity to combat gender-based violence squandered”, News, 6 February 2024; Clara Bauer-Babef, “No protections for undocumented women in EU directive on gender violence”, EURACTIV, 9 February 2024.

[11] European Parliament, “First ever EU rules on combating violence against women: deal reached”, Press release, 6 February 2024; European Commission, “Commission welcomes political agreement on new rules to combat violence against women and domestic violence”, 6 February 2024; and Caroline Rhawi, “Violence against Women: Historic Deal on First-Ever EU-wide Directive”, renew europe., 6 February 2024.

[12] Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act) (Text with EEA relevance), PE/30/2022/REV/1, OJ L 277, 27.10.2022.

[13] Home Security Heroes, “2023 State of Deepfakes: Realities, Threats, and Impact”.

Picture credits: Markus Winkler.

Author: UNIO-EU Law Journal.