Looking identity in the eye: brief considerations on the frontiers of biometric data and identity

José Vegar Velho [Guest Lecturer at the School of Law of the University of Minho | Commissioner at the Portuguese Data Protection Authority (Comissão Nacional de Proteção de Dados – CNPD)]

On 25 March 2024, the Portuguese DPA – CNPD – issued a decision temporarily limiting a biometric data processing operation, namely the collection of iris, eye, and face data in Portugal, which was being performed by a globally established private company and which, at the time, already directly affected about 300,000 persons in the national territory.[1]

Such data was claimed to be the basis of a universal ID, to be used as proof of personhood and of the human condition, that is, to establish that an individual is both human and unique – a digital ID.

This ID was presented as a global digital passport guaranteeing people a privacy-preserving way to authenticate themselves as humans online, in a world where intelligence is no longer a discriminator between people and AI.

It was also argued that the possibility for an individual to claim that he or she is a natural and unique person in the ID users’ network, without having to provide additional evidence of his or her identity, is a potentially useful functionality for a number of online services.

For this biometric data to be processed, potential adherents first had to install an application on their electronic devices, which also functioned as a cryptocurrency wallet, with each participant thereby receiving tokens corresponding to cryptocurrencies in return for their participation.

Following the significant media impact of these activities, the Portuguese DPA received several complaints. In summary, after due investigation, the material and legal grounds for its decision were the preliminary evidence suggesting that the processing involved minors’ data, the absence of guarantees that data subjects could exercise their right to erasure or withdraw their consent, and the failure to comply with the practical and legal requirements flowing from the right to be informed and from informed consent.

In the course of the proceedings, the data controller communicated to the CNPD that it had set up an establishment in the EU (Germany). Therefore, in view of the concept of ‘cross-border processing’ under the GDPR and the competence assigned to the supervisory authority of the Member State of the single establishment to act as lead supervisory authority, the Portuguese authority issued a new decision, holding that the one-stop-shop mechanism was applicable in this new context, and assumed the role of a Concerned Supervisory Authority (CSA) within the meaning of Article 4(22)(c) GDPR.[2]

Under the regime governing that status, set out mainly in Articles 55 to 58 and 60 to 66 of the GDPR, such a role generally entails a narrower and stricter scope for the exercise of the powers and competences of a national DPA than would otherwise be available to it within its national territory if it held the leading role.

Moreover, this mechanism also requires specific instruments of coordination between authorities which are themselves often governed by different domestic legal regimes — substantive, procedural, and organisational — differences that the GDPR does not fully resolve.

On a case-by-case basis, the dissonances caused by such plurality may not only create difficulties for the desired harmonisation but may also, more or less directly, give rise to disturbances arising from the projection of foreign law into the national legal systems of the other countries where the processing may actually be taking place, systems which may not share the same response mechanisms.

This asymmetry is all the more evident given that, in abstract terms, the lead authority may be located in a jurisdiction where the processing activities themselves are not carried out in practice. While that authority is nevertheless required to assume the lead role, the essential effects of its decisions, in accordance with the rules that govern them, may pragmatically impact other spaces and their citizens — who are the subjects of the fundamental rights at stake — and only secondarily its own. Moreover, in this scenario, the investigations required into important aspects of the processing itself, including in terms of evidence-gathering, may be harder to carry out successfully, given this misalignment.

This conclusion has, in other words, a significant methodological consequence: if the merits of the one-stop-shop mechanism are said to be, for instance, consistency in the harmonisation of enforcement, legal certainty for businesses, efficiency of supervision, and consistency for individuals, in reality there may be limit situations in which these goals subordinate the very value that originally shapes them and gives them effect, namely the protection of the fundamental rights of citizens, resulting in a principle-based inversion.

The Proposal for a Regulation laying down additional procedural rules relating to the enforcement of the GDPR,[3] now under trilogue discussions in the EU, intends to reinforce the cooperation mechanism, to enhance CSAs’ participation, and to streamline their role in the co-decision procedure. This legal instrument aims at overcoming, to some extent, the differences between Member States’ legal regimes that affect the enforcement of the GDPR. This initiative from the Commission is, in a way, an acknowledgement of the current shortcomings of the GDPR in effectively guaranteeing the fundamental right to data protection, in particular as concerns data processing activities carried out throughout the EU/EEA.

However, depending on which type of data triggers such actions, the consequences may differ, in particular when it comes to certain types of biometric data used for identification purposes.

Although those particular decisions and actions by the Portuguese DPA are not the main focus of this text, they may serve as a starting point for a deeper set of issues that present several challenges, not only from a data protection perspective but also from a wider ontological view of how to assimilate and understand such realities.

Questions such as “how does current legislation regulate biometric data?”, “what are its juridical nature and practical effects?”, or “which legal interests might it intersect?”, and, consequently, depending on how such data is considered from those perspectives, which particular conflicts may arise, are therefore of significant importance. They are not merely theoretical or academic discussions; they pose real challenges for a systematic approach to the legal limits of the protection of such data and its enforceability, as well as to the more classical notions of the personality of individuals and of the public interest.

In recent years, the use of biometric data has increased exponentially, owing mostly to its effectiveness in identifying a natural person and to technological advances that have made such processing more attractive and economically valuable, with a correspondingly amplified risk to the privacy of data subjects.

Article 4(14) of the GDPR defines biometric data as personal data resulting from specific technical processing relating to the physical, physiological, or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person.

However, under this definition, biometric data only constitutes a category of personal data when it falls within the pre-constitutive notion set out in Article 4(1) of the GDPR, that is, when it consists, first of all, of information relating to an identified or identifiable natural person.

The conclusion, therefore, is that when biometric data serves the purpose of identifying a natural person, it is indeed personal data; this does not mean, however, that all biometric data is necessarily personal data. One may assume, nevertheless, that biometric systems are usually used for the purpose of identifying natural persons in digital IDs, thus falling within the scope of the GDPR.

This being said, it is no surprise that, in those circumstances, the European legislator addresses this category under the exceptional regime designed in Article 9 of the GDPR: the processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health, or data concerning a natural person’s sex life or sexual orientation shall be prohibited, unless one of the conditions set out in its subsequent paragraphs is fulfilled, namely explicit consent.

The legal technique used to frame this regime consists of a somewhat open clause, which is not without its flaws, and which demands specific hermeneutical efforts and requirements in its application.

These types of legal formulation are methodologically characterised by posing a set of normative criteria to the interpreter which, by their generality, constitute a valuative set of legal hypotheses that can only achieve full meaning in contact with the particular cases at hand, as the legal norm is intended to apply to a whole domain of cases that should share qualitative features.

In other words, the legislator itself seeks to give the interpreter a relatively flexible space, one that serves to respond better to the plurality of practical life and to a multiplicity of circumstances. This concession, however, is not entirely free or subjective: the legislator provides the coordinates from which the interpreter should operate, sacrificing, in a way, strict legal certainty for a controlled effectiveness that is open to evolution and to time, something particularly important and welcome where technology is concerned.

However, although the technical intention of the legislator appears adequate, the situations to be regulated should be somewhat analogous in nature, in order for the criteria to be at the same time flexible and determinate, and capable of being articulated in a systematic and more effective manner.

That is to say, when using such a technique, the legislator should primarily be the one who sets the nucleus of the material equivalences, by establishing a regime intended to apply to a category of cases of the same species or value, leaving to the interpreter the practical, rational judgment of the similarities or dissimilarities that trigger the hypotheses of the norm, a judgment which functions as a mechanism for assimilating both realities.

Failing to do so may result in the application of solutions that the legislator did not envisage for such cases or, for lack of common features among the cases foreseen in the hypothesis (that is, if it is a mere repository of possibilities or exceptions), in a regime that may serve some of those groups well and others poorly.

All of this is even more challenging in the face of a prohibitive rule open to exceptions, which may create instability in authorities’ decision-making, reflected in a possible lack of predictability, and may also create difficulties for the agents who develop such technologies.

To put it simply, the reasons that justify the specific nature and limitations of the processing of, for instance, data related to political or philosophical views or trade union membership are distinct among themselves and share little in common with those concerning genetic data, biometric data, data concerning health, or data concerning a natural person’s sex life or sexual orientation. Nor are these categories equivalent in many respects; in some cases, they may even overlap. Although they all share the same main purpose of protection (e.g., guarding against the potential for discriminatory use), the nature of and reasoning behind each category would preferably demand more specific evaluations, also as regards the disposability of such data.

This becomes more obvious when one considers the concept of biometric data, often used in digital identities, regardless of the definitions mentioned above.

The risks and privacy conflicts posed by the processing of iris scans, for instance, may not be comparable to those posed by other biometric data, such as voice recognition data. One might even say that such data are special among the special and, by themselves, a digital identity in a strict sense.

The issue may be even more evident nowadays, with the increasing use of neurodata, which also has the potential to identify a natural person even beyond expressions of will or conscience. It is therefore often described as lying past the last frontier – how we biologically think, behaviour prediction, the manipulation of brain activity, and other aspects – and as able to transcend notions of declaration or teleological/practical judgment: that is, an identity of the identity, one that still lacks a distinct and stable legal regime reflecting its specific nature.

One can argue that data regarding sexual orientation or political opinions also present risks, and that their processing is therefore, as a rule, prohibited, but that such data can still, in theory, be proportionally enclosed within the concept of informational self-determination, mostly affecting the person, who may freely consent to their use for certain purposes. When we face certain specific cases of biometric data used for digital identities, however, the solution may not be so obvious, on more than one level.

As regards the legal design, one can adopt a maximalist approach to the ownership of such types of data, similar, let us say, to a right of personal or intellectual property, or a more restricted one, as many legal systems do, based on ethical or moral limitations and not conceding the possibility of absolute disposability of one’s personality.

Even the maximalist perspective, however, does not solve all the problems that may derive from the exception of free, explicit consent. Another question that may be posed concerns the economic advantage or monetisation that can be associated with such practices, which can play a decisive role in shaping the will of the subject and may virtually affect even the very concept of freedom from which one departs when accepting a maximalist approach. The economic vulnerability of the subjects, or other similar contexts, can also be regarded as digital exploitation.

Finally, all these arguments raise another issue, which is, all things considered, whether such mechanisms or IDs should be freely implemented by private companies, which legitimately pursue profit.

Considering the specific nature of such data, it is open to discussion whether a private company, sometimes located in another jurisdiction and subject to its laws, more or less restrictive or transparent, should be able to gather unique and immutable information about the citizens of other countries, becoming able, in the abstract, to represent and identify with precision a very significant sample of the population of a given country or state, which may raise issues of public interest and even of national security or sovereignty.

Among the most notable arguments that can be raised are the lack of democratic oversight of such companies’ activities and infrastructures, which strong self-governance alone does not fully remedy, especially under the umbrella of different jurisdictions, and the consequences that may arise from data breaches.

Each country may also choose to regulate these matters differently, which may pose difficulties of coordination and harmonisation between legal systems, even when they share a common space, or may encourage forum shopping.

In conclusion, innovation is desirable and should be incentivised, whether in the public or the private sector.

However, in order to achieve the proper balance between fundamental rights and technological advances, mainly as regards digital IDs, a stable and clear framework should be maintained, one that considers the specific nature and risks of each category of personal data, the differentiations required to achieve material similarities or equivalences, and harmonised solutions. This is not only essential for future innovation and for the investment decisions of all the players involved; it can also establish privacy as a valuable and profitable commodity, representing a real economic asset and added value for the citizens and for the companies that lead such initiatives, while promoting preventive cooperation with DPAs rather than the exercise of their corrective or punitive powers.


[1] See Comissão Nacional de Proteção de Dados, Deliberation 2024/137, 25 March 2024, AVG/2023/1205, available in English at https://www.cnpd.pt/media/imub4o4i/pt-sa-decision-worldcoin_temporary-limitation-of-processing_20240325.pdf, accessed on 18 September 2025.

[2] See Comissão Nacional de Proteção de Dados, Deliberation 2024/279, 9 July 2024, available in English at https://www.cnpd.pt/media/ko2ddsnf/deliberation_2024_279_worldcoin_en.pdf, accessed on 18 September 2025.

[3] See European Commission, Proposal for a Regulation of the European Parliament and of the Council laying down additional procedural rules relating to the enforcement of Regulation (EU) 2016/679, Brussels, 4.7.2023, COM(2023) 348 final, 2023/0202(COD).


Picture credit: by Maksim Goncharenok on pexels.com.

 