Published on 13 October 2021

European legislation on artificial intelligence

Joint analysis by Renaissance Numérique and the Chair on the Legal and Regulatory Implications of AI of Grenoble Alpes University

In preparation for the first debate between the European Ministers of Telecommunications dedicated to the European legislation on artificial intelligence (AI Act), Renaissance Numérique and the Chair on the Legal and Regulatory Implications of Artificial Intelligence of Grenoble Alpes University are publishing their joint analysis of the draft regulation. In this note, submitted to the European Commission in the framework of a public consultation organised by the European executive, the co-authors examine four specific aspects of the proposal.

AUTHORS & CONTRIBUTORS

  • Mathias Becuywe, Research Engineer, Grenoble Alpes University

  • Stéphanie Beltran Gautron, Research Engineer, Grenoble Alpes University

  • Jennyfer Chrétien, General Director, Renaissance Numérique

  • Théodore Christakis, Professor of Law, Grenoble Alpes University

  • Jessica Galissaire, Studies Manager, Renaissance Numérique

  • Alexandre Lodie, Research Engineer, Grenoble Alpes University

  • Guillaume Morat, Senior Associate, Pinsent Masons

  • Marine Pouyat, Consultant, W Talents

  • Annabelle Richard, Partner, Pinsent Masons

  • Anaïs Trotry, PhD Candidate, Grenoble Alpes University

A problem of legibility and flexibility in the regulation

Two elements deserve particular attention:

  • The legal definition of artificial intelligence (AI) systems: does it reflect the reality and evolving nature of these systems?
  • The mechanisms for classifying systems as “high-risk”: the European Commission’s text provides for a pyramidal classification of AI systems based on a risk-based approach. The criteria and exceptions set out in the draft regulation (social scoring, real-time remote biometric identification systems, etc.) need to be clarified.

In this respect, it would be appropriate to include relevant stakeholders in the procedures for reviewing this piece of legislation.

“Although adopting an approach that would broaden the definition of AI techniques beyond software systems alone would have the merit of encompassing potential future developments of these technologies, it could also create legal uncertainty.”

The multi-stakeholder governance approach must be reinforced

As drafted, the proposal allows Member States a certain degree of flexibility in designating the national authorities responsible for monitoring the application of the regulation. However, greater harmonisation in the implementation of the regulation within the European Union would be desirable. In addition, it will be necessary to ensure that the designated authorities are capable of carrying out these tasks. Such harmonisation problems have already been observed since the entry into force of the General Data Protection Regulation (GDPR).

The respective roles of the European Artificial Intelligence Board (EAIB), Member States and the Commission should also be specified. To date, the text seems to introduce several parallel governance systems, without a mechanism for clear communication between them. Finally, it will be necessary to give more autonomy to the EAIB in order to ensure its independence from the European Commission.

The AI Act poses implementation challenges

There are still many unknowns regarding the assessment of these technologies. Given the diversity of AI systems and of their uses, determining the right scope of analysis is not always easy. At this stage, some of the concepts proposed in the draft regulation have neither an evaluation methodology nor a settled definition. These difficulties of interpretation will have to be resolved so that the actors concerned are able to apply these concepts in their evaluation processes.

In view of these interpretation challenges, and in line with the necessity to strengthen its role in terms of governance, the EAIB should be tasked with drawing up concrete recommendations in consultation with expert groups, relevant stakeholders, and the actors of the European artificial intelligence ecosystem.

Regulatory sandboxes: a lever for innovation for which the Union should show strong ambition

The European Commission’s proposal relies on the assumption that a stable and clear regulatory framework would enable the development of the AI market in the European Union. However, the framework remains complex and will probably not be sufficient in itself to provide the incentives needed to create such a market.

Regulatory sandboxes thus represent the “innovation” aspect of this text. However, for them to function and serve as a real lever for innovation, it seems essential to build a harmonised approach between the competent national authorities, and to ensure that these authorities have the human, technical, and financial means to put them into action.

In this perspective, the functioning of regulatory sandboxes should be discussed collegially between the European Commission, the EAIB, national competent authorities, the AI Expert Group and relevant industry and civil society representatives. For now, the functioning of regulatory sandboxes varies from one Member State to another. Moreover, regulatory sandboxes are most often an opportunity for data protection authorities (DPAs) to assess compliance issues, rather than genuine experimental frameworks designed to open the way to innovation.

“Such a regulation will not be implemented correctly without agile governance, one that is open to relevant stakeholders and expertise. Just as for regulatory sandboxes, this will require the development of ambitious tools that do not set regulation and innovation against each other, and which will allow the European Union to establish itself durably as a territory of excellence and trust when it comes to artificial intelligence.”
