Published: 12 September 2023

Economic and ethical perspectives on the future of artificial intelligence in the European Union

Summary of the debate organised on 21 June 2023 by Renaissance Numérique and Confrontations Europe, in partnership with Microsoft France

AUTHORS

  • Inès Michel, Policy Officer, Confrontations Europe

  • Brivaela Renaud, Project Assistant, Renaissance Numérique

On Wednesday 21 June, think tanks Confrontations Europe and Renaissance Numérique, in partnership with Microsoft France, organised a conference at the Maison de l’Europe in Paris, entitled “Economic and ethical perspectives on the future of artificial intelligence in the European Union”. The first round table, aimed at examining the impact of the increasing use of artificial intelligence (AI) on the European economy and job market, brought together Michel COMBOT, Managing Director of Numeum, Agata HIDALGO, Head of European Affairs at France Digitale, and Audrey PLONK, Head of the Digital Economy Policy Division at the OECD. The second round table, entitled “How to ensure that AI is developed and used in compliance with European ethical values”, brought together Brunessen BERTRAND, Professor at the Jean Monnet Chair in Data Governance at the University of Rennes, Constance BOMMELAER DE LEUSSE, Executive Director of the McCourt Institute, and Mariagrazia SQUICCIARINI, Head of the Executive Office for Social and Human Sciences at UNESCO. The first round table was moderated by Michel DERDEVET, President of Confrontations Europe, and the second by Nicolas VANBREMEERSCH, President of Renaissance Numérique. This summary outlines their discussions.

Introduction

AI is now part of the daily lives of billions of people and is transforming our societies and economies, sometimes imperceptibly, but often with far-reaching consequences. Beyond the potential benefits of these technologies, the rapid growth of AI raises questions that are at the heart of the discussions led by Renaissance Numérique and Confrontations Europe.

Because of the potential bias and discrimination it may generate, and the security, data protection, and mass manipulation or surveillance issues that may be associated with it, the large-scale adoption of AI raises ethical questions. AI can lead to a major reorganisation of the workplace, including job changes or redundancies, as well as a distortion of competition on the job market. Thus, it also raises economic issues. These concerns underline the need to assess these phenomena objectively, in order to determine whether or not AI developments require regulation, and to identify possible levers for action. In this respect, several non-binding texts have been adopted at an international level in recent years, such as the OECD Framework for the Classification of AI Systems, the OECD Principles on Artificial Intelligence, and the UNESCO Recommendation on the Ethics of Artificial Intelligence.

Over the past few months, two legislative proposals have brought AI to the forefront of the European public debate: the proposal for a regulation establishing harmonised rules on artificial intelligence (the “AI Act”) and the proposed AI liability directive. At the plenary session in June 2023, MEPs adopted their negotiating position on the AI Act. By approving new transparency and risk management rules for AI systems, the European Parliament has paved the way for what could become the world’s first comprehensive AI legislation. At the time of publication of this summary, “trilogues” between the Parliament, the Council, and the Commission on the final form of the text are underway.

In addition to an overall assessment of the future of AI in the EU, the conference examined two aspects of AI: the opportunities created by its growing use in the economy and the job market, and the question of its responsibility and ethical dimension.

Conference on "Economic and ethical perspectives on the future of artificial intelligence in the European Union"

The impact of the growing use of artificial intelligence on the European economy and job market

From an economy and employment point of view, there are many questions that need to be answered. How can the entire production and service chain be adapted? How can we effectively support this societal revolution? What professional skills do we need to develop to ensure that everyone benefits as much as possible from this revolution? What kind of new professions are likely to emerge?

Until now, automation by AI has mainly affected middle-income jobs, but in the future it could affect all types of jobs, including the most highly skilled. The fear that this technology will profoundly transform the European job market, in particular by eliminating certain jobs, seems justified to some speakers. The personal services sector, for example in the fields of health, well-being, or care for the elderly, is likely to be spared. What’s more, the impact of this digital revolution cannot be measured simply by the idea that machines are replacing people. It must also be considered in terms of what it generates, particularly in terms of job creation within the EU. A report by the American firm McKinsey estimates that artificial intelligence could lead to additional global GDP growth of 1.2% per year until 2030.

As well as creating jobs, AI could help to improve productivity in the workplace. Research into conversational assistants based on generative AI, such as ChatGPT, has shown significant effects on productivity, with reductions in working time and drudgery, and improvements in production quality. The European Parliament’s research department estimates an increase in AI-related work productivity of between 11% and 37% by 2035.

Nevertheless, AI is still not widely deployed in the European job market, as Eurostat figures show. In 2021, 28% of large European companies were using AI, but this statistic varies according to the size of the organisation: if we look at all European companies and not just the largest, the figure falls to just 8%. In France, 6% of companies have adopted AI on a daily basis. Germany is at roughly the same level, with 7%, close to the European average. By way of comparison, this proportion rises to 23% in Ireland and 19% in Cyprus. The main obstacles to the use of AI by European companies appear to be a shortage of talent and a lack of skills and innovation.


In most EU Member States, there is still limited training in digital professions, such as data science or AI, compared to the growing supply of jobs in this sector. The OECD now recognises the importance of developing skills through initial and continuing training, as well as through career changes. The skills challenge is twofold: not only do we need to train AI specialists, but we also need to make AI more accessible, by teaching the people whose work will be transformed by this tool how to integrate it and use it on a daily basis.

When it comes to AI research and development, one of the EU’s budget objectives for 2021-2027 is to focus on industrial innovation, digitisation, and support for SMEs. The US government, meanwhile, invested more than $1.7 billion in AI research and development in 2022, according to Stanford University’s AI Index Report 2023. The US advantage lies in its private sector, which is a solid source of funding for AI research and venture capital. Without stronger public and private investment, the journey towards the creation of European champions in artificial intelligence still seems a long one, even if, in France today, nearly 600 start-ups are dedicated to AI or use it in part of their business.

Considering the success of tools such as ChatGPT with a wide audience, the rise of generative AI in the European job market raises new issues. The question that now arises is: how can we create a framework of trust at EU level, to support businesses in the ethical and responsible use of AI?

The AI Act passed by the European Parliament is an initial response, which should have a major and direct impact on start-ups, the general public, and investors alike. The aim of this regulation is to ensure that AI developed and used in Europe respects the rights and values of the EU: democracy, fundamental rights, and the rule of law. On the one hand, it aims to support SMEs in their use of AI and compliance with European regulations, the objective being to benefit the entire sector and not just the big digital companies. On the other hand, it aims to ensure human supervision of processes based on AI technologies.

AI should therefore not replace people in the workplace but allow them to concentrate on tasks with greater added value. Legislators and private-sector players are highlighting the need to understand how AI can be used by businesses, in order to provide the best possible support and framework for the innovation, use, and development of these technologies. Far from being confined to the “digital technologies” sector, AI is expected to bring about a far-reaching transformation of society.

How can we ensure that AI is developed and used in a way that respects fundamental European values?

AI, and in particular generative AI, has recently seen a strong “socialisation” of its applications: it is now used by many non-professionals on a daily basis, as illustrated by the boom in the conversational agent ChatGPT, which exceeded 200 million users in April 2023. As pointed out in the first round table, democracy, fundamental rights, and the rule of law form the bedrock of European values. Faced with the socialisation of AI uses, we need to identify the various levers that can enable AI to be developed and used in a way that is inclusive, respects privacy, and does not undermine human dignity.

In this respect, the challenge is to provide a framework for the deployment of AI in order to protect users and citizens, without hampering innovation. The growing adoption of AI-based tools thus requires a balance between innovation and responsible growth.

As the United Nations (UN) pointed out in a recent position statement, “the data used to inform and guide AI systems may be inaccurate, discriminatory, outdated, or irrelevant”. As a result, “AI systems can lead to discriminatory decisions”, resulting in unfair treatment, particularly for groups that are already discriminated against.

Some AI systems, such as facial recognition technologies, may also pose risks to the protection of personal data, security, and fundamental rights and freedoms in general. These fundamental values therefore deserve particular attention when it comes to the use of AI systems. Certain guarantees already exist to this effect at European level. This is the case of the General Data Protection Regulation (GDPR), which was adopted in 2016 and came into force in 2018, and which regulates the processing of data uniformly throughout the European Union. Following on from the French Data Protection Act of 1978, amended by the Personal Data Protection Act of 2018, the GDPR aims to strengthen the rights of individuals while making those responsible for processing personal data more accountable.


The AI Act, approved by the European Parliament on 14 June 2023, and the draft directive on liability in the field of AI also address the need to strike a balance between protection and innovation, and to harmonise the rules between Member States. The aim of the AI Act is thus “to establish a European vision of AI based on ethics, by preventing the risks that are inherent to these technologies through a common set of rules to avoid certain deviations”. As some of the speakers pointed out, AI requires a proactive approach, of which the AI Act is merely the first step. Favouring a “test and learn” approach, the Act is bound to evolve. In this sense, it is not intended to deal with all the risks associated with AI. Moreover, it will have to be coordinated with other European texts, such as the GDPR and the Copyright Directive.

When the European executive decided to launch a legislative initiative to regulate AI, the question arose of whether to opt for general regulation, i.e. across the board and horizontal (the path chosen with the AI Act), or to legislate via different sectoral texts (an approach more often adopted in the United States). For some speakers, the current European approach to digital regulation (the proliferation of major texts with evocative acronyms: GDPR, AVMS Directive, DSA, DMA, AI Act, DGA, etc.) reflects the EU’s political will to show the world that it intends to enforce its values through these texts. This intention lends a geopolitical aspect to the subject, to be understood alongside its legal dimension.

In addition to the law’s agility, its technological neutrality appears to be a central challenge for the appropriate regulation of AI. The risks and benefits that AI can bring lie in the uses to which it is put. For this reason, several participants stressed the importance of regulating not the technology as such, but its uses.

In addition to legislative initiatives, initial training is another essential pillar for the responsible development of AI. In the case of engineering students, for example, who will be called upon to develop tomorrow’s AI systems, their training must go beyond science and technology. It must emphasise the ethical issues involved in these technologies. This will make it easier to foresee potential risks and avoid them, by adopting an approach based on “ethics by design”.


Finally, several participants advocated the idea that the European Union should take part in the construction of a global approach to AI, the challenge being not to lock itself into a “European ethics bubble”. As some of the major players in AI are Asian, it is crucial that the language chosen to regulate AI is understandable from their point of view and does not isolate the EU from other powers. While the approach put forward in the GDPR has benefited from a “Brussels effect” and has been replicated all around the world, the speakers did not predict the same fate for the AI Act. Europe therefore needs to maintain a proactive approach, so that its framework can be aligned as closely as possible with those emerging at a global level. This brings us back to the importance of cross-border collaboration to ensure the effectiveness of the various frameworks that are emerging. As the main AI tools have not been developed within the European Union, it is all the more necessary for the EU to take these foreign elements into account.

However, as one speaker pointed out, the word “ethics” has as many definitions as there are players, hence the difficulty for European institutions to regulate without sidelining themselves. China’s conception of ethics, for instance, clearly does not reflect European values. For some speakers, it is therefore unlikely that an “International Convention on Digitalisation” will come into being, as countries (including within the European Union itself) disagree on key concepts. For others, it should not be forgotten that similar concerns existed a decade ago about the lack of control over connected objects. Yet, thanks to a long and patient process of labelling, and the collaboration of public authorities and the EU, it was possible to achieve a virtuous balance.

The balance to be struck between respecting European ethical values and the need to include external players in the discussions also raises questions of governance and capacity to govern. A genuine investment aimed at increasing the skills of those in power on all issues relating to the digital transformation of society therefore seems essential and urgent.
