Iris collection as a proof of personhood: current trends on biometric recognition

Maria Inês Costa (PhD Candidate at the School of Law of the University of Minho. FCT research scholarship holder – UI/BD/154522/2023) 

In Portugal, more than 300,000 people have already “sold” their iris scan to Worldcoin Foundation, which in return offers them cryptocurrency. In March 2024, the Portuguese data protection authority (hereinafter, the CNPD) decided to suspend the company’s collection of iris and facial biometric data for 90 days in order to protect the right to the protection of personal data, especially of minors, following in the footsteps of Spain, which also temporarily banned the company’s activities for privacy reasons.[1]

In a statement, the CNPD explains that the company has already been informed of this temporary suspension, which will last until the investigation is completed and a final decision is made on the matter. The adoption of this urgent provisional measure comes in the wake of “dozens of reports” received by the CNPD in the last month, alleging the collection of data from minors without the authorisation of their parents or other legal representatives, as well as deficiencies in the information provided to data subjects and the impossibility of deleting data or revoking consent.[2] In the CNPD’s press release, one can read that “[g]iven the current circumstances, in which there is unlawful processing of the biometric data of minors, combined with potential infringements of other GDPR rules, the CNPD considered that the risk to citizens’ fundamental rights is high, justifying an urgent intervention to prevent serious or irreparable harm.”[3]

Its detailed suspension decision provides important information. Through the complaints, it was revealed that some data subjects only became aware of the risks involved in the processing of their data through media exposure of the matter, and that these risks were never properly explained to them. Furthermore, they were allegedly not provided with information on the processing carried out, namely on the data actually collected and for what purposes, nor on how to exercise the rights provided for in personal data protection law. Moreover, as reported by the media, a number of citizens authorise this data collection and subsequent processing because they are economically vulnerable and/or are not fully aware of the objectives and implications of their participation in the Worldcoin project.[4]

And what is this project all about? The Worldcoin Foundation, cofounded by OpenAI CEO Sam Altman, outlines the goals of scanning people’s iris and facial biometrics in its white paper entitled “A New Identity and Financial Network”.[5] According to the document, “[i]f successful, Worldcoin could considerably increase economic opportunity, scale a reliable solution for distinguishing humans from AI online while preserving privacy, enable global democratic processes, and show a potential path to AI-funded UBI.[6] Worldcoin consists of a privacy-preserving digital identity network (World ID) built on proof of personhood and, where laws allow, a digital currency (WLD).”

For the company, in a world where AI is becoming more and more powerful, there is an urgent need for “proof of personhood”, and the most viable way to issue it is through custom biometric hardware: the Orb. According to Worldcoin, the Orb captures high-quality iris images with more than an order of magnitude higher resolution than iris recognition standards require, and from these images a ‘World ID’ is created – a “digital identity solution enabling users to prove their uniqueness and humanity anonymously […]”.[7]

Even though Worldcoin presents its activities as privacy-preserving, recent complaints and bans offer a contrasting perspective. Indeed, in April 2022, Eileen Guo and Adi Renaldi of the MIT Technology Review published a long article exposing many of the company’s weaknesses and the challenges it presents. For instance, they interviewed Iyus Ruswandi, an Indonesian resident who was tempted to “sell” his iris to Worldcoin Indonesia in December 2021. Representatives would collect the scans in return for incentives ranging from “free cash (often local currency as well as Worldcoin tokens) to Airpods to promises of future wealth […] What they were not providing was much information on their real intentions.”[8] In the authors’ interview with Ruswandi, he stated that representatives even had to help residents set up emails and log on to the web, leading him to wonder why Worldcoin was targeting low-income communities in the first place, rather than crypto enthusiasts or communities.[9]

From the accounts gathered so far, it is possible to see how this practice has affected vulnerable communities, from populations without access to up-to-date digital literacy to children, who cannot validly consent to this type of practice. But there is also an inequality of knowledge between those who have sold their irises and the company, simply because the latter has apparently not been fully transparent about its operations. In this context, it is relevant to refer to Recital 20 of the EU AI Act, which states that “[i]n order to obtain the greatest benefits from AI systems while protecting fundamental rights, health and safety and to enable democratic control, AI literacy should equip providers, deployers and affected persons with the necessary notions to make informed decisions regarding AI systems. Those notions may vary with regard to the relevant context and can include […], in the case of affected persons, the knowledge necessary to understand how decisions taken with the assistance of AI will have an impact on them […]”.[10] And although this reference to AI literacy involves different groups of people, Article 4 of the AI Act states that “[p]roviders and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf […]”, putting the emphasis on those who work closely with the technology.

The scenario examined in this text is particularly worrying, especially as we are dealing with biometric data, which “can enable the authentication, identification or categorization of natural persons and the recognition of emotions of natural persons”.[11] As discussed in the European Parliament’s 2021 study “Biometric Recognition and Behavioural Detection”, to uniquely identify natural persons, “strong”[12] biometric identifiers must be captured, converted into digital data and ultimately into a standardised template. These identifiers can be captured by appropriate physical scanners with the active, conscious cooperation of the individual, remotely without such cooperation, or with the help of other existing data. Capturing biometric identifiers thus means converting a person’s unique physical characteristics into digital data, leading to the “datafication” of humans.
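The capture–template–match pipeline the study describes can be illustrated with a deliberately simplified sketch, loosely inspired by Daugman-style iris codes, in which two binary templates are compared via their normalised Hamming distance. Every name, the template length and the decision threshold below are illustrative assumptions for exposition only, not the parameters of any real system (including Worldcoin’s):

```python
import numpy as np

TEMPLATE_BITS = 2048     # illustrative template length
MATCH_THRESHOLD = 0.32   # illustrative decision threshold

def enroll(iris_features: np.ndarray) -> np.ndarray:
    """'Datafication': reduce measured physical features to a standardised binary template."""
    return (iris_features > np.median(iris_features)).astype(np.uint8)

def hamming_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of bits on which two templates disagree (0.0 = identical, ~0.5 = unrelated)."""
    return float(np.count_nonzero(a != b)) / a.size

def is_same_person(a: np.ndarray, b: np.ndarray) -> bool:
    return hamming_distance(a, b) < MATCH_THRESHOLD

# Simulated enrolment and later re-identification of the same eye.
rng = np.random.default_rng(0)
features = rng.normal(size=TEMPLATE_BITS)
stored = enroll(features)  # template kept in a reference database
fresh = enroll(features + rng.normal(scale=0.1, size=TEMPLATE_BITS))  # noisy re-scan
print(is_same_person(stored, fresh))  # matches despite sensor noise
```

The point of the sketch is precisely the asymmetry the report warns about: once the `stored` template sits in a reference database, anyone holding it can re-run the comparison against any fresh capture, in any context and for any purpose.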

Because the features that uniquely identify a person are part of that person’s body, their collection and use interfere with personal autonomy and dignity, the report stresses. Once a biometric template has been created and stored in a reference database, anyone in possession of that template can identify and locate the person anywhere in the world, putting them at serious risk of being tracked and monitored. The template can also be used to identify the individual for an unlimited number of purposes and situations.[13] In fact, while models that use “strong” biometrics reduce the risk of fraud and the difficulties posed by poor data quality or missing data, they “also increase ethical concerns, as they enable more efficient public surveillance and can be used for the creation of elaborate profiles”.[14]

According to Article 5(1)(h) of the AI Act, the use of “real-time” remote biometric identification systems[15] in publicly accessible spaces for the purposes of law enforcement is prohibited, unless and to the extent that such use is strictly necessary for purposes defined in the Regulation.[16] The use of “post” biometric identification systems (in deferred time)[17] is, in turn, considered a high-risk practice rather than a prohibited one, although the result is the same: massive identification of subjects without their consent or knowledge, something intrusive in nature. Much criticism has been directed at the fact that the latter is not prohibited at all, and that the former admits exceptions to its prohibition.[18] This highlights how the practice of biometric identification carries a very high risk of threatening basic rights and safeguards,[19] and ultimately democracy itself.

Now, when companies make biometric identification their main activity, and when the purposes for which all this sensitive data will be used are highly questionable, we step onto very dangerous territory. In this regard, it is relevant to consider Alfonso Ballesteros’ insights in his article “Digitocracy: ruling and being ruled”: “[d]igitocracy[20] seems to be a new form of government […] a new way to rule an unprecedented number of people smartly and efficiently. […] Rulers are no more modern technocratic humanists than mere rational entrepreneurs seeking to earn money. They are postmodern entrepreneurs [who] have been able to hybridise their economic interests with new postmodern ideas; in particular, those that blur the distinctions between artefacts and humans, and a declared pretension to be acting for the good of humanity.”[21]

These days, this is a matter for the most careful consideration: without strong enough safeguards, we will be increasingly subject to vested interests making use of our most sensitive information, and that could lead us down a path of no return. Thus, faced with an offer of “proof of personhood”, we should ask whether it is weakening our very own autonomy and dignity (in essence, our humanness) or whether, on the contrary, it will enhance and protect our life in coexistence with technology.

[1] See Elizabeth Howcroft, “Portugal orders Sam Altman’s Worldcoin to halt data collection”, Reuters, 26 March 2024. See also Expresso, “Worldcoin: Comissão de Proteção de Dados suspende recolha de dados da íris”, 26 March 2024.

[2] CNPD, “CNPD suspende recolha de dados biométricos”, 26 March 2024.

[3] The full text of the press release is available at:

[4] CNPD, “DELIBERAÇÃO/2024/137”, AVG/2023/1205, 4.

[5] Worldcoin Foundation, “A New Identity and Financial Network”, Worldcoin Whitepaper.

[6] UBI stands for universal basic income.

[7] Worldcoin Foundation, “World ID – The protocol to bring privacy-preserving global proof of personhood to the internet”.

[8] Eileen Guo and Adi Renaldi, “Human and technology – Deception, exploited workers, and cash handouts: how Worldcoin recruited its first half a million test users”, MIT Technology Review, 6 April 2022.

[9] Eileen Guo and Adi Renaldi, “Human and technology – Deception, exploited workers, and cash handouts: how Worldcoin recruited its first half a million test users”.

[10] European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts, P9_TA(2024)0138 (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)), (Hereinafter, EU AI Act or AI Act).

[11] EU AI Act, Recital 14.

[12] These are, according to the study, fingerprint, iris, or retina.

[13] European Parliament, “Biometric Recognition and Behavioural Detection – Assessing the ethical aspects of biometric recognition and behavioural detection techniques with a focus on their current and future use in public spaces”, Study requested by the JURI and PETI committees, Policy Department for Citizens’ Rights and Constitutional Affairs, Directorate-General for Internal Policies, PE 696.968, August 2021, 44.

[14] European Parliament, “Biometric Recognition and Behavioural Detection – Assessing the ethical aspects of biometric recognition and behavioural detection techniques with a focus on their current and future use in public spaces”, 14.

[15] EU AI Act, Recital 17: “[…] ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. […]”

[16] EU AI Act, Recital 33: “Those situations involve the search for certain victims of crime including missing people; certain threats to the life or to the physical safety of natural persons or of a terrorist attack; and the localisation or identification of perpetrators or suspects of the criminal offences listed in an annex to this Regulation, where those criminal offences are punishable by a custodial sentence or a detention order for a maximum period of at least four years in the Member State concerned in accordance with the law of that Member State. Such a threshold for the custodial sentence or detention order in accordance with national law contributes to ensuring that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems.”

[17] EU AI Act, Recital 17: “[…] In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned.”

[18] See Patrick Breyer, Sergey Lagodinsky and Kim van Sparrentak, “Protecting privacy: biometric mass surveillance and the AI Act”, The Greens/EFA in the European Parliament, 6 March 2024.

[19] For instance, “[…] AI systems identifying or inferring emotions or intentions of natural persons on the basis of their biometric data may lead to discriminatory outcomes and can be intrusive to the rights and freedoms of the concerned persons. Considering the imbalance of power in the context of work or education, combined with the intrusive nature of these systems, such systems could lead to detrimental or unfavourable treatment of certain natural persons or whole groups thereof.” – Recital 44, EU AI Act.

[20] “Digitalisation as a form of government”.

[21] Alfonso Ballesteros, “Digitocracy: ruling and being ruled”, Philosophies 5, 9 (2020): 11.

Picture credits: by Wojtek Pacześ on

Author: UNIO-EU Law Journal (Source: