
EESC 2021/02482

OJ C 517, 22.12.2021, p. 61–66 (BG, ES, CS, DA, DE, ET, EL, EN, FR, HR, IT, LV, LT, HU, MT, NL, PL, PT, RO, SK, SL, FI, SV)



Opinion of the European Economic and Social Committee on Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts

(COM(2021) 206 final — 2021/0106 (COD))

(2021/C 517/09)

Rapporteur: Catelijne MULLER

Referral: European Parliament, 7.6.2021; Council, 15.6.2021

Legal basis: Article 114 of the Treaty on the Functioning of the European Union

Section responsible: Section for the Single Market, Production and Consumption

Adopted in section: 2.9.2021

Adopted at plenary: 22.9.2021

Plenary session No: 563

Outcome of vote (for/against/abstentions): 225/03/06

1.   Conclusions and recommendations

1.1.

The European Economic and Social Committee (EESC) welcomes the fact that the Commission proposal for the Artificial Intelligence Act (the AIA) not only addresses the risks associated with AI, but also raises the bar substantially as regards the quality, performance and trustworthiness of AI that the EU is willing to allow. The EESC is particularly pleased that the AIA puts health, safety and fundamental rights at its centre and is global in scope.

1.2.

The EESC sees areas for improvement regarding the scope, definition and clarity of the prohibited AI practices, the implications of the categorisation choices made in relation to the ‘risk pyramid’, the risk-mitigating effect of the requirements for high-risk AI, the enforceability of the AIA and the relation to existing regulation and other recent regulatory proposals.

1.3.

The EESC stresses that AI has never operated in a lawless world. Because of its wide scope as well as its primacy as an EU Regulation, the AIA could create tension with existing national and EU laws and related regulatory proposals. The EESC recommends amending recital 41 to duly reflect on and clarify the relations between the AIA and existing and upcoming legislation.

1.4.

The EESC recommends clarifying the definition of AI by removing Annex I and slightly amending Article 3, and widening the scope of the AIA so as to include ‘legacy AI systems’ and the AI components of large-scale IT systems in the area of freedom, security and justice as listed in Annex IX.

1.5.

The EESC recommends clarifying the prohibitions regarding ‘subliminal techniques’ and ‘exploitation of vulnerabilities’ so as to reflect the prohibition of harmful manipulation, and also adding ‘harm to fundamental rights, democracy and the rule of law’ as conditions for these prohibitions.

1.6.

The EESC sees no place in the EU for the scoring of the trustworthiness of EU citizens based on their social behaviour or personality characteristics, irrespective of the actor performing the scoring. The EESC recommends broadening the scope of this ban so as to include social scoring by private organisations and semi-public authorities.

1.7.

The EESC calls for a ban on use of AI for automated biometric recognition in publicly and privately accessible spaces, except for authentication purposes in specific circumstances, as well as for automated recognition of human behavioural signals in publicly and privately accessible spaces, except for very specific cases, such as some health purposes, where patient emotion recognition can be valuable.

1.8.

The ‘list-based’ approach for high-risk AI runs the risk of normalising and mainstreaming a number of AI systems and uses that are still heavily criticised. The EESC warns that compliance with the requirements set for medium- and high-risk AI does not necessarily mitigate the risks of harm to health, safety and fundamental rights for all high-risk AI. The EESC recommends that the AIA provide for this situation. At the very least, the requirements of (i) human agency, (ii) privacy, (iii) diversity, non-discrimination and fairness, (iv) explainability and (v) environmental and social wellbeing of the Ethics guidelines for trustworthy AI should be added.

1.9.

In line with its long-advocated ‘human-in-command’ approach to AI, the EESC strongly recommends that the AIA provide for certain decisions to remain the prerogative of humans, particularly in domains where these decisions have a moral component and legal implications or a societal impact such as in the judiciary, law enforcement, social services, healthcare, housing, financial services, labour relations and education.

1.10.

The EESC recommends making third party conformity assessments obligatory for all high-risk AI.

1.11.

The EESC recommends including a complaints and redress mechanism for organisations and citizens that have suffered harm from any AI system, practice or use that falls within the scope of the AIA.

2.   Regulatory proposal on Artificial Intelligence — AI Act

2.1.

The EESC welcomes the fact that the Commission proposal for the Artificial Intelligence Act not only addresses the risks associated with AI, but also raises the bar substantially as regards the quality, performance and trustworthiness of AI that the EU is willing to allow.

3.   Overarching comments — AIA

Objective and scope

3.1.

The EESC welcomes both the objective and the scope of the AIA. The EESC particularly welcomes the fact that the Commission puts health, safety and fundamental rights at the centre of the AIA. The EESC also welcomes the external effect of the AIA, ensuring that AI that is developed outside of the EU has to meet the same legal standards if deployed or having an impact within the EU.

Definition of AI

3.2.

The definition of AI (Article 3(1) in conjunction with Annex I AIA) has given rise to discussion among AI scientists, as a number of the examples given in Annex I are not considered to be AI by AI scientists, while a number of important AI techniques are missing. The EESC sees no added value in Annex I and recommends removing it entirely from the AIA. The EESC also recommends amending the definition in Article 3(1) as follows:

‘“Artificial intelligence system” (AI system) means software that can, in an automated manner, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions, influencing the environment it interacts with.’

Health, safety and fundamental rights — the risk pyramid

3.3.

The escalating ‘risk pyramid’ (from low/medium risk, to high risk, to unacceptable risk) used to categorise a number of general AI practices and domain-specific AI use cases acknowledges that not all AI poses risks and not all risks are equal or require the same mitigating measures.

3.4.

The chosen approach presents us with two important questions. First, do the mitigating measures (for high-risk and low/medium-risk AI) really sufficiently mitigate the risks of harm to health, safety and fundamental rights? Second, are we ready to allow AI to largely replace human decision making, even in critical processes such as law enforcement and the judiciary?

3.5.

As to the first question, the EESC warns that compliance with the requirements set for medium- and high-risk AI does not necessarily mitigate the risks of harm to health, safety and fundamental rights in all instances. This will be further elaborated on in Section 4.

3.6.

As to the second question, what is missing from the AIA is the notion that the promise of AI lies in augmenting human decision making and human intelligence, rather than replacing it. The AIA works on the premise that, once the requirements for medium- and high-risk AI are met, AI can largely replace human decision making.

3.7.

The EESC strongly recommends that the AIA provide for certain decisions to remain the prerogative of humans, particularly in domains where these decisions have a moral component and legal implications or a societal impact such as in the judiciary, law enforcement, social services, healthcare, housing, financial services, labour relations and education.

3.8.

AI systems do not operate in a lawless world. A number of legally binding rules at European, national and international level already apply or are relevant to AI systems today. Legal sources include, but are not limited to: EU primary law (the Treaties of the European Union and its Charter of Fundamental Rights), EU secondary law (such as the General Data Protection Regulation, the Product Liability Directive, the Regulation on the Free Flow of Non-Personal Data, anti-discrimination Directives, consumer law and Safety and Health at Work Directives), the UN Human Rights treaties and the Council of Europe conventions (such as the European Convention on Human Rights), and numerous EU Member State laws. Besides horizontally applicable rules, various domain-specific rules exist that apply to particular AI applications (such as for instance the Medical Device Regulation in the healthcare sector). The EESC recommends amending recital 41 to duly reflect this.

4.   Specific comments and recommendations — AIA

Prohibited AI practices

4.1.

The EESC agrees that the AI practices in Article 5 do indeed have no social benefit and should be prohibited. It finds some of the wording unclear, however, which could make some prohibitions difficult to interpret and easy to circumvent.

4.2.

Evidence exists that subliminal techniques can not only lead to physical or psychological harm (the current conditions for this particular prohibition to kick in), but can, given the environment they are deployed in, also lead to other adverse personal, societal or democratic effects, such as altered voting behaviour. Moreover, it is often not the subliminal technique itself, but rather the decision as to whom to target with a subliminal technique, that is AI-driven.

4.3.

In order to capture what the AIA aims to prohibit in Article 5(1)(a), which is manipulating people into harmful behaviour, the EESC recommends amending the paragraph as follows: ‘(…) an AI system deployed, aimed at or used to materially distort a person’s behaviour in a manner that causes or is likely to cause harm to that person’s, another person’s or group of persons’ fundamental rights, including their physical or psychological health and safety, or to democracy and the rule of law’.

4.4.

The EESC recommends amending the prohibited practice of exploitation of vulnerabilities in Article 5(1)(b) in the same manner, so as to include harm to fundamental rights, including physical or psychological harm.

4.5.

The EESC welcomes the prohibition of ‘social scoring’ in Article 5(1)(c). The EESC recommends that the prohibition of social scoring also apply to private organisations and semi-public authorities, rather than just to public authorities. There is no place in the EU for scoring the trustworthiness of EU citizens based on their social behaviour or personality characteristics, irrespective of the actor performing the scoring. If there were, the EU would open the door to multiple areas where social scoring would be allowed, such as at the workplace. The conditions under subparagraphs (i) and (ii) should be clarified so as to draw a clear line between what is considered ‘social scoring’ and what can be considered an acceptable form of evaluation for a certain purpose, i.e. at the point where the information used for the evaluation should no longer be deemed relevant or reasonably related to the goal of the evaluation.

4.6.

The AIA aims to ban real-time remote biometric identification (with facial recognition, for example) for law enforcement and to categorise it as ‘high risk’ when used for other purposes. This leaves ‘post’ and ‘near’ biometric recognition permitted. It also leaves permitted biometric recognition that is not aimed at identifying a person, but rather at assessing a person’s behaviour from their biometric features (micro-expressions, gait, temperature, heart rate, etc.). The limitation to ‘law enforcement’ allows biometric identification, as well as all other forms of biometric recognition not aimed at identifying an individual, including all the mentioned forms of ‘emotion recognition’, for all other purposes, by all other actors, in all public and private places, including the workplace, shops, stadiums, theatres, etc. This leaves the door wide open to a world in which we are constantly being ‘emotionally assessed’ for whatever purpose the actor assessing us deems necessary.

4.7.

The AIA categorises ‘emotion recognition’ generally as low risk, with the exception of a few user domains where it is categorised as high risk. This type of biometric recognition is also known as ‘affect recognition’ and sometimes ‘behaviour recognition’. All of these AI practices are extremely invasive, lack any sound scientific basis and pose substantial risks of harm to a number of fundamental rights enshrined in the EU Charter, such as the right to human dignity, the right to the integrity of the person (which includes mental integrity) and the right to a private life.

4.8.

Broadly in line with the call of the EDPS and EDPB of 21 June 2021 for a ban on the use of AI for automated recognition of human features in publicly accessible spaces, and some other uses of AI that can lead to unfair discrimination, the EESC calls for:

- a ban on use of AI for automated biometric recognition in publicly and privately accessible spaces (such as recognition of faces, gait, voice and other biometric features), except for authentication purposes in specific circumstances (for example to provide access to security-sensitive spaces),

- a ban on use of AI for automated recognition of human behavioural signals in publicly and privately accessible spaces,

- a ban on AI systems using biometrics to categorise individuals into clusters based on ethnicity, gender, political or sexual orientation or other grounds on which discrimination is prohibited under Article 21 of the Charter,

- a ban on the use of AI to infer emotions, behaviour, intent or traits of a natural person, except for very specific cases, such as some health purposes, where patient emotion recognition is important.

High-risk AI

4.9.

In deciding whether an AI practice or use that poses a risk to health, safety or fundamental rights should nevertheless be allowed under strict conditions, the Commission looked at two elements: (i) whether the AI practice or use can have social benefits and (ii) whether the risk of harm to health, safety and fundamental rights this use nevertheless poses can be mitigated by meeting a number of requirements.

4.10.

The EESC welcomes the alignment of these requirements with elements of the Ethics guidelines for trustworthy AI (‘EGTAI’). However, five important EGTAI requirements are not specifically dealt with in the requirements for high-risk AI in the AIA, namely: (i) human agency, (ii) privacy, (iii) diversity, non-discrimination and fairness, (iv) explainability and (v) environmental and social wellbeing. The EESC feels that this is a missed opportunity, because many of the risks that AI poses concern privacy, bias, exclusion, the inexplicability of the outcomes of AI decisions, the undermining of human agency and harm to the environment, all of which are reflected in our fundamental rights.

4.11.

The EESC recommends adding these requirements to those of Chapter 2 of Title III of the AIA, to improve the ability of the AIA to effectively protect our health, safety and fundamental rights from adverse impact of AI, used by public authorities and private organisations alike.

4.12.

The EESC welcomes the ‘interwoven system’ between the AIA and the Union harmonisation legislation. It recommends extending the scope of the AIA and the requirements for high-risk AI beyond ‘AI safety components’ and beyond the situation where the AI system is itself a product covered by the Union harmonisation legislation listed in Annex II, because AI can pose risks not only when used as a safety component of these products and because the AI system itself is not always a product; for example, when it is used as part of a diagnostic or prognostic tool in the medical field or as an AI-driven thermostat that regulates a boiler.

4.13.

The EESC warns, however, that the chosen ‘list-based’ approach for high-risk AI in Annex III can lead to the legitimation, normalisation and mainstreaming of quite a number of AI practices that are still heavily criticised and for which the societal benefits are questionable or lacking.

4.14.

Moreover, the risks of harm to health, safety and fundamental rights cannot necessarily always be mitigated by compliance with the 5 requirements for high-risk AI, in particular when it comes to less mentioned fundamental rights that could be impacted by AI, such as the right to human dignity, the presumption of innocence, the right to fair and just working conditions, the freedom of association and assembly, the right to strike, to name a few.

4.15.

The EESC strongly recommends adding the management and operation of the telecom and internet infrastructure to point 2 of Annex III. The EESC also recommends extending the scope of this point beyond AI safety components.

4.16.

AI systems used to determine access to education and to evaluate students pose a number of risks of harm to students’ health, safety and fundamental rights. Online proctoring tools, for example, which supposedly flag up ‘suspicious behaviour’ and ‘indications of cheating’ during online exams by using all kinds of biometrics and behaviour tracking, are truly invasive and lack scientific evidence.

4.17.

The use of AI systems for monitoring, tracking and evaluation of workers causes serious concerns as regards workers’ fundamental rights to fair and just working conditions, to information and consultation and to justified dismissal. The addition of these AI systems to the high-risk list is likely to cause conflicts with national labour laws and collective labour agreements for (un)fair dismissal, healthy and safe working conditions and worker information. The EESC calls for a guarantee of the full involvement and informing of workers and the social partners in the decision making process on the use of AI in the workplace, and on its development, procurement and deployment.

4.18.

The requirement of ‘human oversight’ is particularly relevant in labour relations, because the oversight will be done by a worker or a group of workers. The EESC stresses that these workers should receive training on how to perform this task. Moreover, given the fact that these workers are expected to be allowed to disregard the output of the AI system or even decide not to use it, there should be measures in place to avoid the fear of negative consequences (such as demotion or dismissal) if such a decision is taken.

4.19.

The use of AI systems in relation to access to and enjoyment of public services is broader than the use of AI systems in relation to access to and enjoyment of essential private services, where for the latter only credit (worthiness) scoring by AI is considered high risk. The EESC recommends broadening the scope of point 5(b) of Annex III to include AI systems intended to evaluate eligibility for essential private services.

4.20.

AI used by law enforcement authorities and in migration, asylum and border control management for making individual (criminal or security) risk assessments poses a risk of harm to the presumption of innocence, the right of defence and the right to asylum of the EU Charter. AI systems in general merely seek correlations that are based on characteristics found in other ‘cases’. Suspicion in these instances is not based on actual suspicion of a crime or misdemeanour by the particular person, but merely on characteristics that that person happens to share with convicted criminals (such as address, income, nationality, debts, employment, behaviour, behaviour of friends and family members and so on).

4.21.

The use of AI in the administration of justice and democratic processes is particularly sensitive and should be approached with more nuance and scrutiny than is now the case. Merely putting systems to use to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts overlooks the fact that judging is so much more than finding patterns in historical data (which is in essence what current AI systems do). The text also assumes that these types of AI will only assist the judiciary, while leaving fully automated judicial decision making out of scope. The EESC also regrets that there is no mention of AI systems or uses in the realm of democratic processes, such as elections.

4.22.

The EESC recommends adding a provision that provides for the situation where it is either obvious or became clear during the prior conformity assessment that the 6 requirements will not sufficiently mitigate the risk of harm to health, safety and human rights (for example by amending Article 16(g) AIA).

Governance and enforceability

4.23.

The EESC welcomes the governance structure set up by the AIA. It recommends that the AI Board hold regular, obligatory exchanges of views with wider society, including the social partners and NGOs.

4.24.

The EESC strongly recommends widening the scope of the AIA so as to include ‘legacy AI systems’, i.e. systems that are already in use or deployed prior to the entry into force of the AIA, in order to prevent deployers from fast-tracking any prohibited, high-risk or medium-risk AI so as to avoid compliance requirements. Moreover, the EESC strongly recommends not excluding from the scope of the AIA the AI components of large-scale IT systems in the area of freedom, security and justice as listed in Annex IX.

4.25.

The complexity of the requirements and accountability activities, plus the self-assessment, runs the risk of reducing this process to checklists where a simple ‘yes’ or ‘no’ could suffice to meet the requirements. The EESC recommends making third party assessments obligatory for all high-risk AI.

4.26.

The EESC recommends having appropriate (financial) support measures and simple and accessible tools in place for micro and small organisations, as well as civil society organisations, to enable them to understand the purpose and meaning of the AIA and to meet its requirements. These measures should go beyond supporting Digital Innovation Hubs and should consist in facilitating access to high-level expertise regarding the AIA, its requirements, its obligations and, particularly, the reasoning behind them.

4.27.

The EESC recommends including a complaints and redress mechanism for organisations and citizens that have suffered harm from any AI system, practice or use that falls within the scope of the AIA.

Brussels, 22 September 2021.

The President of the European Economic and Social Committee

Christa SCHWENG

