On 21 April 2021, the European Commission unveiled its proposal for a regulation of the various use cases of artificial intelligence (AI) technologies within the European Union, including facial recognition technologies (FRTs). FRTs are based on artificial intelligence methods that apply so-called “deep learning” techniques using biometric databases. They can be used for authentication purposes (for instance, verifying one’s identity by recognising one’s face) and for identification (for example, matching a given face against a database of known faces). Facial recognition technologies have become part of citizens’ everyday life through different experiences, from unlocking one’s smartphone with one’s face to automatically identifying friends in pictures posted on social media. There are many potential applications for these technologies, be it for security purposes (border security, unlocking smartphones, online payments, access to public services…), marketing (targeted advertising), or even recreational purposes (face swapping, identification in social media posts).
In its legislative proposal, the European Commission opted for a risk-based approach that categorises AI technologies according to three levels of risk: unacceptable, high or low. Accordingly, four AI applications are forbidden by the proposal, as the Commission considers they bear an unacceptable level of risk. This is the case, for instance, for real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement (article 5(1)(d)), which the European executive has deemed contrary to the European Union’s values. However, the proposal includes three relatively broad exceptions to this ban. Police forces will be able to use such technologies to search for victims of criminal offences, including missing children; to locate victims or suspects of criminal offences punishable by prison sentences of at least three years; or to prevent a threat to the life or safety of persons, including a terrorist attack. As for the other use cases of AI services that rely on remote biometric identification, such as uses in the private sector, the European Commission proposes to classify them as high-risk applications. Their use can thus be authorised subject to certain guarantees, notably the creation of a risk management system (article 9), a minimal level of quality for the data used to train the algorithms (article 10), an obligation of transparency and information towards users (article 13) and human oversight (article 14).
The necessity of supervising the deployment of facial recognition technologies in order to protect the fundamental rights and freedoms of European citizens is at the heart of current debates. Indeed, more and more civil society actors — such as those who launched the Reclaim Your Face campaign — are denouncing the highly intrusive nature of these technologies. Although it forbids the processing of biometric data in principle, the General Data Protection Regulation (GDPR) comprises many exceptions. In a report published in June 2020, Renaissance Numérique noted that even though a comprehensive legal framework already exists, its enforcement remains fragmented and partly inefficient, thus endangering European citizens’ rights.
In line with its previous work on this matter, the think tank organised a seminar on 21 February 2021, aiming to establish a comparative analysis of the uses of FRTs in two European countries: France and the United Kingdom (UK). This European seminar was prepared in partnership with the British Embassy in Paris and the law firm Pinsent Masons, and brought together around fifty private and public actors, members of civil society and researchers. Comparing France and the United Kingdom’s uses and regulation of facial recognition technologies proves interesting in several respects. On the one hand, debates around these technologies are now well entrenched in both countries (albeit fairly recent). On the other hand, there are significant differences in the way these technologies are being deployed and regulated on both sides of the Channel.
This note draws on the discussions that took place during the seminar, and examines the major issues at stake in regulating facial recognition technologies in France, the United Kingdom, and Europe more broadly. The comparison allows us to outline an appropriate regulatory framework to address the challenges posed by such technologies.