
E. European Union

II. The Work of the Expert Group on Artificial Intelligence

1. Ethics guidelines

In June 2018, the European Commission appointed 52 experts from academia, civil society and industry to a High-Level Expert Group on AI.214 The "AI HLEG" presented a first draft of guidelines on the ethics of implementing AI in December 2018. This draft was revised following a consultation period, and the amended guidelines were published in April 2019.215 In them the group elaborates four "ethical imperatives" that must be adhered to: respect for human autonomy; prevention of harm; fairness; and explicability.216 The authors admonish AI practitioners to "[a]cknowledge and address the potential tensions between these principles".217 The expert group formulates the following specific requirements: 1) human agency218 and oversight,219 2) technical robustness220 and safety, 3) privacy and data governance, 4) transparency,221 5) diversity,222 non-discrimination and fairness, 6) societal and environmental well-being, and 7) accountability.223 One demand made by the AI HLEG regarding transparency – under the aspect of "communication" – is that "AI systems should not represent themselves as humans to users". Instead, "humans have the right to be informed that they are interacting with an AI system. This entails that AI systems must be identifiable as such".224

212 Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee on the safety and liability implications of artificial intelligence, the internet of things and robotics of 19 Feb 2020, COM(2020) 64 final.

213 Communication from the Commission to the European Parliament, the Council and the European Economic and Social Committee - A European Data Strategy of 19 Feb 2020, COM(2020) 66 final.

214 This High-Level Expert Group also formed the steering group for the European AI Alliance, a forum for broad public discussion of all aspects of AI development and its impact on the economy and society: https://ec.europa.eu/digital-single-market/en/european-ai-alliance.

215 High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI, 2019.

216 Guidelines, p. 12.

217 Guidelines, p. 13.

218 Guidelines, p. 16: "The overall principle of user autonomy must be central to the system’s functionality."

219 Guidelines, p. 16: "Human oversight helps ensuring that an AI system does not undermine human autonomy or causes other adverse effects. Oversight may be achieved through governance mechanisms such as a human-in-the-loop (HITL), human-on-the-loop (HOTL), or human-in-command (HIC) approach".

220 Guidelines, p. 16.

221 Guidelines, p. 18.

222 Guidelines, p. 18 ("consideration and involvement of all affected stakeholders throughout the process" and "ensuring equal access through inclusive design processes as well as equal treatment").

223 Guidelines, p. 17 f.



To ensure the fulfilment of these requirements, the expert group essentially recommends focusing on technical methods: "architectures for trustworthy AI", conceptually integrated "ethics and rule of law by design (X-by-design)", "explanation methods", "testing and validating"225 and "quality-of-service parameters".226 Concerning possible architectures, the authors consider that "[r]equirements for Trustworthy AI should be 'translated' into procedures and/or constraints on procedures, which should be anchored in the AI system’s architecture".227 The group’s take on requirements of "ethics and rule of law by design" is that companies are responsible for "identifying the impact of their AI systems from the very start, as well as the norms their AI system ought to comply with to avert negative impacts".228 In the context of "explanation methods", the AI HLEG states, "we must be able to understand why [the system] behaved a certain way and why it provided a given interpretation", given that "sometimes small changes in data values might result in dramatic changes in interpretation".229 The enumerated "non-technical methods" include: regulation, codes of conduct, standardisation, certification, "accountability via governance frameworks", "education and awareness to foster an ethical mind-set", "stakeholder participation and social dialogue", and "diversity and inclusive design teams". Regarding "accountability" specifically, the AI HLEG concludes that companies "should set up governance frameworks, both internal and external, ensuring accountability for the ethical dimensions of decisions associated with the development, deployment and use of AI systems". This could "include the appointment of a person in charge of ethics issues relating to AI systems, or an internal/external ethics panel or board".230

There is no specific consideration of workers' interests in the guidelines. In fact, workers are only mentioned in passing, as when "asymmetries of power or information, such as between employers and workers" are acknowledged231 and reference is made to "potentially vulnerable persons and groups, such as workers, women, persons with disabilities, ethnic minorities, children, consumers or others at risk of exclusion".232

224 Guidelines, p. 18.

225 See Guidelines, p. 21-22. In this respect, it is primarily a matter of using "sufficiently realistic data" and monitoring "throughout the entire life cycle".

226 Cf. Guidelines, p. 22 (definition of "appropriate quality of service indicators").

227 Guidelines, p. 21.

228 Guidelines, p. 21.

229 Guidelines, p. 21 with the example of confusing a school bus with an ostrich.

230 Guidelines, p. 23.

231 Guidelines, p. 2 (at footnote 2) with a reference to "articles 24 to 27 of the Charter of Fundamental Rights of the EU (EU Charter) dealing with the rights of the child and the elderly, the integration of persons with disabilities and workers' rights".


There is also the requirement that AI systems "should support humans in the working environment, and aim for the creation of meaningful work".233 In relation to the possibility to contest and seek redress against decisions made by AI systems, reference is made to the "right of association and to join a trade union in a working environment, as provided for by Article 12 of the EU Charter of Fundamental Rights".234 Under requirement 5 (diversity, non-discrimination and fairness), in the context of "stakeholder participation", the AI HLEG advises: “In order to develop AI systems that are trustworthy, it is advisable to consult stakeholders who may directly or indirectly be affected by the system throughout its life cycle. It is beneficial to solicit regular feedback even after deployment and set up longer term mechanisms for stakeholder participation, for example by ensuring workers information, consultation and participation throughout the whole process of implementing AI systems at organisations”.235 Finally, it points out that AI "could help governments, unions and industry with planning the (re)skilling of workers [and] could also give citizens who may fear redundancy a path of development into a new role".236

In April 2019, the European Commission adopted a Communication in which it explicitly welcomed the seven core demands of the AI HLEG (human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability).237

2. Policy recommendations

Also in 2019, the policy recommendations of the AI High-Level Expert Group were published.238 Here the group speaks, among other things, of the need to "enable workers made redundant or faced with the threat of redundancy due to automation and increased AI take-up" to "seek new forms of employment as the structure of the labour market is reshaped in response to the turn to increased reliance on digital services and processes".239

232 Guidelines, p. 11.

233 Guidelines, p. 12.

234 Guidelines, p. 15 (at footnote 32).

235 Guidelines, p. 19.

236 Guidelines, p. 33.

237 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, Building Trust in Human-Centric Artificial Intelligence of 8 Apr 2019, COM(2019) 168 final.

238 High-Level Expert Group on Artificial Intelligence, Policy and Investment Recommendations for Trustworthy AI, 2019.



Among other things, the recommendations of the expert group – complementing the ethics guidelines developed by the group – focus on the legal framework for AI.240 The group advocates for a "risk-based and multi-stakeholder approach". The paper states: “The character, intensity and timing of regulatory intervention should be a function of the type of risk created by an AI system. In line with an approach based on the proportionality and precautionary principle, various risk classes should be distinguished as not all risks are equal. The higher the impact and/or probability of an AI-created risk, the stronger the appropriate regulatory response should be. 'Risk' for this purpose is broadly defined to encompass adverse impacts of all kinds, both individual and societal. For specific AI applications that generate 'unacceptable' risks or pose threats of harm that are substantial, a precautionary principle-based approach should be adopted instead”.241 Furthermore, the AI HLEG suggests an evaluation and, if necessary, a modification of the current EU law,242 without, however, addressing labour (protection) law regulations.
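The proportionality logic of this passage can be restated as a small decision rule. The following Python sketch is purely illustrative – the AI HLEG prescribes no algorithm, and every name and threshold in it (RegulatoryResponse, regulatory_response, the 0.3/0.7 cut-offs) is an assumption introduced here to make concrete the idea that the higher the impact and/or probability of an AI-created risk, the stronger the regulatory response should be, with a precautionary override for "unacceptable" risks:

```python
# Illustrative only: the AI HLEG states a principle, not an algorithm.
# All names and thresholds below are assumptions made for this sketch.
from enum import Enum


class RegulatoryResponse(Enum):
    NONE = "no specific intervention"
    LIGHT = "voluntary measures, codes of conduct"
    STRICT = "binding regulatory requirements"
    PRECAUTIONARY = "precautionary principle-based approach"


def regulatory_response(impact: float, probability: float,
                        unacceptable: bool = False) -> RegulatoryResponse:
    """Map an AI-created risk to a regulatory response tier.

    'impact' and 'probability' are normalised to [0, 1]. Taking the
    maximum of the two reflects the "and/or" in the HLEG formulation;
    the 0.3/0.7 cut-offs are arbitrary placeholders for risk classes.
    """
    if unacceptable:  # 'unacceptable' risks trigger the precautionary approach
        return RegulatoryResponse.PRECAUTIONARY
    score = max(impact, probability)
    if score >= 0.7:
        return RegulatoryResponse.STRICT
    if score >= 0.3:
        return RegulatoryResponse.LIGHT
    return RegulatoryResponse.NONE
```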

Fundamental to the recommendations is a so-called human-centric approach. In this sense, the AI HLEG also calls for “a process of representation, consultation and, where possible, co-creation, where workers are involved in the discussion around AI production, deployment or procurement process in order to ensure that the systems are usable and that the worker still has sufficient autonomy and control, fulfilment and job satisfaction. This implies informing and consulting workers when developing or deploying AI, as set out in the existing texts adopted by the European institutions and the social partners”.243 The recommendations continue: “Workers (not only employees but also independent contractors) should be involved in discussions around the development, deployment or procurement of algorithmic scheduling and work distribution systems, to ensure compliance with health and safety legislation, data policy, working time legislation and work-life balance legislation. Social dialogue plays a key role to enable this”.244

239 Recommendations, p. 36.

240 For example, Recommendations, p. 37: "This section complements the Guidelines by providing guidance on appropriate governance and regulatory approaches beyond voluntary guidance".

241 Recommendations, p. 37 f.; fn. 53 goes on to state (with reference to Council of Europe, Revised draft study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework, 2019): “This includes not only tangible risks to human health or the environment, but also includes intangible risks to fundamental rights, democracy and the rule of law, and other potential threats to the cultural and socio-technical foundations of democratic, rights-respecting, societies.”

242 Recommendations, p. 38 f.

243 Recommendations, p. 13.

244 Ibid.

54

HSI-Working Paper No. 17 December 2022

3. White Paper on artificial intelligence

Building on the work of the AI HLEG, the European Commission in February 2020 presented a White Paper on AI to initiate a broad public consultation.245

a) Basic contents

In it, the Commission once again forcefully describes the risks associated with the use of AI.246 On the issue of non-discrimination, for example, the Communication states: “Bias and discrimination are inherent risks of any societal or economic activity. Human decision-making is not immune to mistakes and biases. However, the same bias when present in AI could have a much larger effect, affecting and discriminating many people without the social control mechanisms that govern human behaviour. This can also happen when the AI system ‘learns’ while in operation. In such cases, where the outcome could not have been prevented or anticipated at the design phase, the risks will not stem from a flaw in the original design of the system but rather from the practical impacts of the correlations or patterns that the system identifies in a large dataset”.247

At the same time, the Commission identifies specificities of AI systems that make the enforcement of fundamental rights more difficult: “The specific characteristics of many AI technologies, including opacity ('black box-effect'), complexity, unpredictability and partially autonomous behaviour, may make it hard to verify compliance with, and may hamper the effective enforcement of, rules of existing EU law meant to protect fundamental rights. Enforcement authorities and affected persons might lack the means to verify how a given decision made with the involvement of AI was taken and, therefore, whether the relevant rules were respected. Individuals and legal entities may face difficulties with effective access to justice in situations where such decisions may negatively affect them”.248

245 White Paper on Artificial Intelligence – A European Approach to Excellence and Trust, COM(2020) 65 final of 19 Feb 2020; cf. also Unger, ZRP 2020, 234; Jüngling, MMR 2020, 440; Gasparotti/Harta, European Strategy on Artificial Intelligence: An Assessment of the EU Commission's Draft White Paper on AI, 2020. On basic questions of regulation at national and European level, see Hacker, NJW 2020, 2142.

246 Elsewhere, the Commission explicitly recognises that "workers and employers are directly affected by the design and use of AI systems in the workplace." The involvement of the social partners will therefore "be a crucial factor in ensuring a human-centred approach to AI at work"; White Paper, p. 7.

247 White Paper, p. 11 f.



Like the AI HLEG, the Commission advocates a "risk-based approach". This is "important to help ensure that the regulatory intervention is proportionate". However, there is a need for "clear criteria to differentiate between the different AI applications, in particular in relation to the question whether or not they are 'high-risk'".249 The Commission advocates a two-step assessment. First, it should be determined whether AI is used in an area in which significant risks are to be expected due to the nature of the activities typically undertaken.250 The second criterion is whether the "AI application in the sector in question is, in addition, used in such a manner that significant risks are likely to arise". The latter reflects the understanding that "not every use of AI in the selected sectors necessarily involves significant risks".251 However, there could be "exceptional instances where, due to the risks at stake, the use of AI applications for certain purposes is to be considered as high-risk as such – that is, irrespective of the sector concerned and where the below requirements would still apply". In this respect, the Commission specifically mentions the area of anti-discrimination law: “In light of its significance for individuals and of the EU acquis addressing employment equality, the use of AI applications for recruitment processes as well as in situations impacting workers’ rights would always be considered "high-risk" and therefore the below requirements would at all times apply. Further specific applications affecting consumer rights could be considered”.252
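The Commission's two-step test, together with the sector-independent exception for recruitment and situations impacting workers' rights, amounts to a simple decision procedure. The following sketch is again merely illustrative: the sector examples are those mentioned in the White Paper (healthcare, transport, energy, parts of the public sector), while the function and set names are assumptions introduced here:

```python
# Illustrative only: a restatement of the White Paper's two-step test;
# the data structures and names are assumptions of this sketch.

# Step 1 material: sectors where significant risks are typically expected
# (examples mentioned in the White Paper).
HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public sector"}

# The "exceptional instances" treated as high-risk irrespective of sector.
ALWAYS_HIGH_RISK_PURPOSES = {
    "recruitment",
    "situations impacting workers' rights",
    "remote biometric identification",
}


def is_high_risk(sector: str, purpose: str,
                 use_creates_significant_risks: bool) -> bool:
    """Two-step assessment sketched from the White Paper.

    Step 1: is the AI application used in a sector where significant
    risks are to be expected? Step 2: is it, in addition, used in such
    a manner that significant risks are likely to arise? Certain
    purposes are high-risk as such, regardless of sector.
    """
    if purpose in ALWAYS_HIGH_RISK_PURPOSES:
        return True
    return sector in HIGH_RISK_SECTORS and use_creates_significant_risks
```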

However, the "risk-based approach" favoured by the Commission proved to be controversial from the beginning. Some criticise the definition of "high-risk" as not clear enough to create legal certainty. Clarification is required as to when the risks associated with the use of an AI application are to be considered "significant".253 Others question whether a meaningful distinction can be made between low-risk and high-risk applications and suggest instead that a risk management approach

248 White Paper, p. 12.

249 White Paper, p. 17.

250 White Paper, p. 17.

251 White Paper, p. 18.

252 White Paper, p. 18. Another example cited is the use of AI applications for the purposes of remote biometric identification.

253 Cf. EU White Paper on Artificial Intelligence, cepAnalysis No. 4/2020, p. 4.



b) Results of the consultation

The consultation opened by the Commission with the White Paper, which ran from 19 February to 14 June 2020, drew a large number of comments. Overall, the need for action was almost universally affirmed. A large majority of respondents felt that there were gaps in the legislation or that new legislation was needed.255 As a result, the Commission announced plans for a regulatory proposal.256
