
G. Problem areas under labour law

IV. Anti-discrimination law

3. Fundamental deficits of the current anti-discrimination law

After what has just been said, an adjustment of the concept of "indirect discrimination" seems urgently required. Likewise, the demands for easing the evidentiary burden on potentially aggrieved parties seem plausible.572 In considering how far changes to the AGG should go and what concrete form they should take, however, one must not neglect the fundamental question of whether the current anti-discrimination law is still structurally capable of guaranteeing sufficient protection against discrimination emanating from AI systems.

572 On this cf. also e.g. De Stefano, Valerio/Wouters, Mathias: AI and digital tools in workplace management and evaluation – An assessment of the EU's legal framework, May 2022, p. 66 f. with further reform proposals.

a) Identifiability of acts of discrimination

In particular, there are doubts as to whether individual legal protection directed at claims for damages and compensation is effective enough with regard to the use of AI. The concerns in this context stem not only from the difficulties of proof outlined above, which a victim of discrimination will regularly face; the problems go deeper. The literature has rightly pointed out, for example, that discrimination will often be difficult to detect at all under the conditions of AI use:

“Humans discriminate due to negative attitudes (e.g. stereotypes, prejudice) and unintentional biases (e.g. organisational practices or internalised stereotypes) which can act as a signal to victims that discrimination has occurred. Equivalent mechanisms and agency do not exist in algorithmic systems. … Compared to traditional forms of discrimination, automated discrimination is more abstract and unintuitive, subtle, and intangible”.573

Against this background, there is good reason to doubt whether the "traditional legal remedies and procedures for detection, investigation, prevention, and correction of discrimination which have predominantly relied upon intuition" are still fit for purpose.574




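Purely by way of illustration (the following sketch, its group labels and its threshold are not taken from the paper), "detection" under these conditions typically means an aggregate, statistical audit of decision outcomes rather than an individual victim's intuition. A minimal outcome audit might look as follows:

```python
# Hypothetical sketch, not from the paper: a minimal statistical audit of
# automated decisions. It compares selection rates across groups and computes
# a disparate-impact ratio; only such aggregate checks (rather than a victim's
# intuition) tend to make "abstract and unintuitive" disparities visible.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest one.
    Values well below 1.0 (e.g. below 0.8, the US 'four-fifths rule')
    flag a disparity that warrants closer examination."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Toy decision log of a screening algorithm: (group, shortlisted?) pairs.
    log = ([("A", True)] * 40 + [("A", False)] * 60
           + [("B", True)] * 20 + [("B", False)] * 80)
    rates = selection_rates(log)
    print(rates)                          # {'A': 0.4, 'B': 0.2}
    print(disparate_impact_ratio(rates))  # 0.5, i.e. well below 0.8
```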

b) Collective legal protection

The AGG is to a large extent geared towards individual legal protection, which provides victims of discrimination with rights, in particular to damages and compensation. In addition, Section 23 AGG opens up the possibility of support by anti-discrimination associations, and with the Federal Anti-Discrimination Agency the Act establishes an institution intended to support persons in asserting their rights. However, this does not change the fact that a person protected by anti-discrimination law generally has to face the user of the algorithm alone, clearly in an inferior position vis-à-vis both the user and the machine, and also under considerable pressure to assert their rights in a timely manner.575 There are indeed ways and means of strengthening the position of the protected party if they try to enforce their rights by taking legal action. However, this alone does little to change the considerable knowledge asymmetry576 and, most importantly, the fact that the potentially injured party is procedurally cast in the role of the attacker, who must take the initiative. Taking on an algorithm in a court battle appears downright overwhelming for the party discriminated against.577

573 Thus Wachter/Mittelstadt/Russell, Why Fairness Cannot Be Automated: Bridging the Gap between EU Non-Discrimination Law and AI, 1 (2), p. 10, p. 67.

574 See Wachter/Mittelstadt/Russell, Why Fairness Cannot Be Automated: Bridging the Gap between EU Non-Discrimination Law and AI, 1 (2).

575 In this respect, cf. in particular also the two-month period of Sec. 21(5), first sentence AGG.

576 From a US perspective, Kim, Data-Driven Discrimination at Work, William & Mary Law Review 2017, 857 (921): "The claimants would have to trace how the data miners collected the data, determine what populations were sampled, and audit the records for errors. Conducting these types of checks for a dataset created by aggregating multiple, unrelated data sources containing hundreds of thousands of bits of information would be a daunting task for even the best-resourced plaintiffs. In addition, the algorithm's creators are likely to claim that both the training data and the algorithm itself are proprietary information. Thus, if the law required complainants to prove the source of bias, they would face insurmountable obstacles".

577 Cf. Orwat, Diskriminierungsrisiken durch Verwendung von Algorithmen, 2020, p. 137 f.: "Both the right to informational self-determination and anti-discrimination place burdens of responsibility on the individual concerned to identify and take action against unlawful data processing and unjustified unequal treatment. However, questions arise as to whether these basic legal concepts are still appropriate at all, given an increasing amount of data and algorithm-based as well as automated decision-making processes and their specific characteristics. This is because such burdens of responsibility require very high professional, cognitive and temporal prerequisites on the part of the individuals concerned in order to (a) perceive the many situations with data processing and differentiations at all, (b) process the information resulting from the information duties under data protection law if necessary [...] as well as to enforce rights of access, correction or deletion and, above all, to (c) assess for themselves the individual consequences resulting from data processing and diverse (potential) differentiation decisions and to recognise the risk of possible discrimination for themselves".


In view of this situation, it is advisable to supplement individual legal protection with collective legal protection.578 Quite apart from the weakness of the former, there are two reasons for this: On the one hand, collective protection could, at least to some extent, offset the knowledge asymmetry that characterises the problem as a whole, since the collective, at least potentially, "knows more" than the individual (and is also far more likely to be able to shoulder the costs of legal proceedings).

On the other hand, and above all, discrimination when using algorithms is precisely not an individual "outlier", but is inherent in the underlying technology, which is why a "bundling" of interests seems the obvious choice from the outset. A right of action by associations could (at least partially) remedy this.579

c) The idea of prevention

Quite independently of this, however, the question arises whether the issue of "discriminatory algorithms" can be addressed at all with legal remedies that primarily aim to grant the affected party claims for damages and/or compensation. Incidentally, the "backward-looking, liability-focused model of legal regulation" is also subject to criticism in the USA (and in Europe as well).580 Instead, greater efforts should be made to counter discrimination preventively.581 Accordingly, many call for comprehensive operator obligations, compliance with which would have to be monitored, most likely by state authorities.582 What is remarkable in all of this, however, is the scepticism that exists in many places towards an approach that relies solely on the transparency and explainability of AI. According to a widespread assessment, the reliability, security and fairness of AI can ultimately only be achieved through measures such as algorithm impact assessments, auditing and certification.583

578 In this sense, e.g. also Grünberger, ZRP 2021, 231 (235) with further references: "Es ist daher dringend an der Zeit, über ein intelligentes Design kollektiver Rechtsschutzinstrumente nachzudenken und zu überlegen, wie man private und public enforcement auch im Nichtdiskriminierungsrecht sinnvoll kombiniert" ("It is therefore high time to think about an intelligent design of collective legal protection instruments and to consider how to combine private and public enforcement in a meaningful way in non-discrimination law as well").

579 See also Orwat, Diskriminierungsrisiken durch Verwendung von Algorithmen, 2020, p. 135: "Der Ansatz des lediglich punktuellen Vorgehens im Einzelfall erscheint mit Blick auf die gegebenenfalls systematische Schlechterbehandlung von vielen Betroffenen durch algorithmenbasierte Differenzierungen nicht sachgerecht." ("The approach of merely proceeding selectively in individual cases does not appear appropriate in view of the possibly systematic worse treatment of many affected persons through algorithm-based differentiations").

580 See Kim, Data-Driven Discrimination at Work, William & Mary Law Review 2017, 857 (867 f.).

581 In favour of linking to Section 12 AGG, most recently Sesing/Tschech, AGG und KI-VO-Entwurf beim Einsatz von Künstlicher Intelligenz, MMR 2022, 24 (26); but cf. also, for example, the Third Equality Report of the Federal Government, BT-Drs. 19/30750 of 10.06.2021, p. 168 f. with the demand for the specification of preventive organisational duties.

582 In this respect, the proposals range from the introduction of official controls of results to detect potential discrimination, if necessary using so-called control algorithms, to the establishment of official rights of information and inspection to control the processing mechanisms; cf. only Martini, Blackbox Algorithmus - Grundfragen einer Regulierung Künstlicher Intelligenz, 2019, p. 342 and 365 et seq.


It is at least encouraging that there are apparently increasing attempts to design algorithms in the sense of "built-in fairness".584 In fact, in view of the above, it seems urgent, not to say inevitable, to take appropriate technical precautions against indirect discrimination.585 From a German perspective, the so-called Hambach Declaration of the Data Protection Conference (DSK), the body of independent German data protection supervisory authorities of the Federation and the Länder, is also worth mentioning in this respect. The declaration contains the demand that "before AI systems are implemented the risks to the rights and freedoms of individuals shall be assessed with the aim inter alia of reliably excluding covert discrimination through countermeasures". Furthermore, "appropriate risk monitoring must also be carried out during the use of AI systems".586

583 For example, Castelluccia/Le Métayer, Understanding algorithmic decision-making: Opportunities and challenges, 2019, p. 78; see also Koene/Clifton/Webb/Patel/Machado/LaViolette/Richardson/Reisman, A governance framework for algorithmic accountability and transparency, 2019.

584 See only Zehlike/Hacker/Wiedemann, Matching code and law: achieving algorithmic fairness with optimal transport, in: Data Mining and Knowledge Discovery, 2020, p. 163; cf. on the whole also Barocas/Hardt/Narayanan, Fairness and Machine Learning: Limitations and Opportunities, 2021: https://fairmlbook.org/pdf/fairmlbook.pdf.

585 Cf. Martini, Blackbox Algorithmus - Grundfragen einer Regulierung Künstlicher Intelligenz, 2019, p. 353 ff. This could be achieved in particular by making algorithm-based decision-making "blind" to specific factors susceptible to discrimination; ibid., p. 357.

586 Declaration, p. 3 f. This was concretised in the position paper of the Conference of Independent Data Protection Authorities of the Federation and the Länder on recommended technical and organisational measures in the development and operation of AI systems of 6 November 2019; cf. on the whole also Martini, Blackbox Algorithmus - Grundfragen einer Regulierung Künstlicher Intelligenz, 2019, p. 360, who wants to impose an obligation on the operator to take technical precautions against indirect discrimination; similarly Wachter/Mittelstadt/Russell, Bias Preservation in Machine Learning: The Legality of Fairness Metrics Under EU Non-Discrimination Law, https://ssrn.com/abstract=3792772.
