G. Problem areas under labour law

III. AI and exercise of the right to issue instructions

2. Inadmissibility of "machine decisions"

Discretionary decisions cannot be left to machines because a machine's decision can never be based on the exercise of discretion in the legal sense:

"Individual case justice" cannot be forced into an automation system, both conceptually and by its very nature,456 since this is necessarily based on schematisation and therefore the relevant circumstances can only be anticipated to a limited extent.457 Another factor is that decisions are always based on facts.

In addition to "hard" facts, however, there are also "soft" facts that cannot be quantified, or only with difficulty, and that therefore cannot be imported into an automatic decision-making system.458

450 Nink, Justiz und Algorithmen, 2021, p. 53. These and other defects must also be taken into account when it comes to detecting errors in algorithmic decisions; cf. Rhue, Affectively Mistaken? How Human Augmentation and Information Transparency Offset Algorithmic Failures in Emotion Recognition AI, November 22, 2019: https://ssrn.com/abstract=3492129.

451 Nink, Justiz und Algorithmen, 2021, p. 61 ff.

452 Nink, Justiz und Algorithmen, 2021, p. 63 ff.

453 See for example Lambrecht/Sen/Tucker/Wiertz, Algorithmic Recommendations and Earned Media: Investigating Product Echo Chambers on YouTube, 27 Oct 2021: https://ssrn.com/abstract=3951425.

454 Cf. on this Nink, Justiz und Algorithmen, 2021, p. 76 ff.

455 Cf. on this Nink, Justiz und Algorithmen, 2021, p. 66 ff.; foundational: Esser, Vorverständnis und Methodenwahl in der Rechtsfindung: Rationalitätsgarantien der richterlichen Entscheidungspraxis, 1970; cf. on the whole topic also Möllers, Juristische Methodenlehre, 3rd ed. 2020, p. 24 ff.

456 Nink, Justiz und Algorithmen, 2021, p. 196: "It remains indispensable for judicial decisions that the decision-maker can understand and evaluate the individual case and all its aspects".

457 Cf. again Nink, Justiz und Algorithmen, 2021, p. 198, who at the same time draws attention to the danger "that algorithmic forecasts and decisions reduce them [the individual affected by the decision] to belonging to certain groups".

458 Similarly Nink, Justiz und Algorithmen, 2021, p. 179 f.


For decisions, the time of the decision also regularly plays an essential role. This, too, sets limits to the "pre-programming" of decisions.459 In this context, it is important to realise that AI is necessarily "backward-looking", so that there is always a danger that the past will simply be carried forward in AI decisions. Moreover, according to case law, the exercise of the right to issue instructions pursuant to Section 106, first sentence GewO requires "a weighing of the respective interests according to constitutional and legal value decisions, the general principles of proportionality and appropriateness as well as custom and reasonableness", and "all circumstances of the individual case (must) be included" in that weighing.460 No one, however, will claim that AI systems can make such "value decisions".461 Closely related to this is the fact that Section 106, first sentence GewO obliges the employer to weigh up, that is, to "evaluate legal positions from the perspective of priority" with the aim of striking a balance between conflicting interests and concerns.462 AI systems are not capable of this either, at least not at present.463

A parallel to administrative discretion464 may clarify the foregoing: discretionary decisions by an administrative authority are intended to allow decisions that remain close to the facts, with the aim of doing justice to the individual case. Discretion is granted precisely because the legislature cannot assess the interests and concerns of the parties involved in advance and, moreover, cannot take account of the particularities of the individual case.465 None of this can be fed into the "decision-making" process of a machine: here one cannot speak of "discretion", much less of "equitable discretion".466


459 See also Nink, Justiz und Algorithmen, 2021, p. 194.

460 BAG, NZA-RR 2018, 568 (at para. 39).

461 Cf. also Rollberg, Algorithmen in der Justiz - Rechtsfragen zum Einsatz von Legal Tech im Zivilprozess, 2020, pp. 69 ff, 128 ff.

462 Thus (on consideration in company law) Freund, Die Abwägung im Gesellschaftsrecht, NZG 2020, 1328 (1328).

463 Which is why they can fail at the simplest tasks; cf. only Pavlus, The Easy Questions That Stump Computers – What happens when you stack logs in a fireplace and drop a match? Some of the smartest machines have no idea, 2 May 2020: https://www.theatlantic.com/technology/archive/2020/05/computers-common-sense/611050/; cf. also Choi, The Curious Case of Commonsense Intelligence, Daedalus 2022, 139: https://doi.org/10.1162/DAED_a_01906; and Hutson, Can Computers Learn Common Sense? A.I. researchers are making progress on a long-term goal: giving their programs the kind of knowledge we take for granted, 5 Apr 2022: https://www.newyorker.com/tech/annals-of-technology/can-computers-learn-common-sense.

464 On AI in administrative practice, most recently Tischbirek, Zeitschrift für Digitalisierung und Recht (ZfDR) 2021, 307.

465 Similarly Höpfner/Daum, ZfA 2021, 467 (477), who emphasise that there are "some similarities" between equitable discretion within the meaning of Section 106, first sentence GewO and administrative discretion.

466 It is controversial whether "equity" is a uniform standard; cf. only Völzmann-Stickelbrock, in: Herberger/Martinek/Rüßmann/Weth/Würdinger, jurisPK-BGB Vol. 2, 2020, § 315 marginal no. 16 et seq. However, this should not be relevant in the present context.


However, there are those who point to a difference between "employer's discretion" and administrative discretion and would derive from it that decisions taken without (sufficient) weighing also meet the requirements of Section 106, first sentence GewO and are thus generally permissible.467 This, they claim, follows from the fact that the administrative courts' review of discretionary decisions of the administration - for reasons of the separation of powers468 - is limited to the process of weighing (non-use or misuse of discretion), whereas the fairness of an instruction issued by the employer depends only on "whether the result, i.e. the content of the instruction, meets the legal requirements". Since the principle of separation of powers does not apply in the relationship between employer and employee, it is argued, "a limitation of judicial review to the process of weighing [...] is not appropriate". On this view, whether the employer has carried out a comprehensive weighing of interests or whether "the instruction merely happened to be in accordance with equity" is irrelevant to the lawfulness of the instruction.469 With regard to the latter, it is argued, the wording of Section 315(3), first sentence BGB, according to which the provision is binding on the other party only "if it is equitable", already indicates a mere review of the result.470

However, this view cannot be followed. First of all, it should be noted that if judicial review were limited to the result of the weighing process, possible impairments of the employee's interests and concerns would be deliberately ignored merely because the result might "happen to be fair". In other words, an instruction would be valid because it is not inequitable, even though a different instruction might have better served the interests of the parties involved. Once this is realised, it immediately becomes clear that it is misleading, in connection with Section 106, first sentence GewO, to speak of "a restriction of judicial review to the weighing process [...] being inappropriate". In reality, the question is not whether judicial review is limited to the weighing process, but whether it is limited to the result of the weighing process. In this respect, however, everything argues in favour of understanding the "(more detailed) provisions (of the performance)" referred to in Section 106, first sentence GewO and Section 315 BGB as only those that are not merely attributable to people as declarations of intent - which is not problematic - but are also the responsibility of people. One may still be prepared to accept certain impairments of workers' interests where the weighing is deficient but at least "the result" is right.

467 Cf. Höpfner/Daum, ZfA 2021, 467 (480), who only recommend "equipping instruction-issuing systems with a remonstration function and instructing employees to make use of this should an AI instruction be inequitable in their view"; as already Göpfert/Brune, NZA-Beil. 2018, 87 (90).

468 For more details, see Höpfner/Daum, ZfA 2021, 467 (479).

469 Thus Höpfner/Daum, ZfA 2021, 467 (478).

470 Cf. Höpfner/Daum, ZfA 2021, 467 (478 f.).


and are therefore "structurally determined". To put it another way: While it would be one thing to hold back on the control of human decisions to a certain extent, if necessary, it is quite another to do so even if the decision is not the responsibility of humans in the first place. To decide otherwise would indeed be to open the door to chance. But there is all the less reason to do so, as it hardly seems justifiable for the employer's interest in using AI to prevail over the employee's interest in the

"best possible" decision by the employer.

It can only be noted in passing here that the personal character of the employment relationship, but above all the protection of human dignity under Article 1(1) of the Basic Law, also speak in favour of excluding automatic decisions and reserving such decisions to humans, who, unlike AI systems, are capable of "empathy and the assessment of the social consequences of their decisions".471 Incidentally, demands by British trade union lawyers run along the same lines, aiming to establish a legal right to personal (analogue) participation in decisions of considerable importance to the employee. One of the justifications for this reads:

"Machines and technology are not human, and we cannot have a personal relationship with them in the same way that we can and do with other humans. [...] They can only be an aid to human interaction if the employment relationship is to remain personal and built on mutual trust and confidence. Employees are entitled to more than just a 'relationship' with a machine."472

Also mentioned here only in passing is the need to counter the danger that too much openness to the possibility of "machine decisions" will suppress human judgements - which rest on constant learning in, and adaptation to, complex socio-technological environments - a suppression that in the long run would be paid for with a weakening of the human capacity for judgement.473

471 Cf. Nink, Justiz und Algorithmen, 2021, p. 463: "Eine vollständig automatisierte Rechtsprechung, die den Einzelnen nur mehr als Input und Output einer formalisierten Zahlenlogik und damit als Objekt, aber nicht mehr als autonomes Individuum behandelt, ist auch mit Art. 1 Abs. 1 GG nicht in Einklang zu bringen." ("A completely automated administration of justice that treats the individual only as input and output of a formalised numerical logic and thus as an object, but no longer as an autonomous individual, cannot be reconciled with Article 1(1) of the Basic Law"); cf. also Martini, Blackbox Algorithmus - Grundfragen einer Regulierung Künstlicher Intelligenz, 2019, p. 96 ff. on "technikimmanenten Erkenntnisgrenzen" (limits of knowledge inherent in the technology), among which the author counts, for example, social and emotional intelligence as well as common sense.

472 Thus Allen/Master, Technology Managing People - the legal implications, 2021, p. 107.

473 Cf. Moser/den Hond/Lindebaum, Morality in the Age of Artificially Intelligent Algorithms, 7 Apr 2021: https://doi.org/10.5465/amle.2020.0287: "[...] we offer the strong thesis that we are at risk, now, that these algorithms change, perhaps irreversibly so, our morality in fundamental ways by suppressing judgment in decision-making"; cf. also Moser/den Hond/Lindebaum, What Humans Lose When We Let AI Decide - Why you should start worrying about artificial intelligence now, MIT Sloan, Feb 07, 2022: https://sloanreview.mit.edu/article/what-humans-lose-when-we-let-ai-decide/.


It is important to keep in mind the fundamental difference between human judgements and "machine judgements": human judgement is about what is "appropriate, right, good, fair or just to do in an ambiguous, troubled, problematic or puzzling situation, having explored and considered the various characteristics of that situation and having (creatively) developed and (carefully) evaluated multiple options in their respective potential to 'better' that situation. Judgment, therefore, requires imagination, reflection, empathy, and valuation. In judgment, it is acknowledged that data are value-laden, and that the identification of which values are relevant for decision-making is an inherent part of the process (…). Moral considerations thus inescapably come into play when developing judgment because they cannot be excluded or separated from the very situation that demands judgment."474 In contrast, "'reckoning' is the processing of data through calculation and formal rationality. It relies on data as correct representations of reality ('facts'), and values can only find their place in reckoning as stable ex ante givens, indeed a form of 'data'. Driven by predefined rules and goals, reckoning is insensitive to context and time. [...]. In this view, the world is understood in terms of logical and 'objective' relationships that are fully and unambiguously defined. [...] Data and information are seen as unproblematic representations of the world, rather than – from a pragmatist viewpoint – as discriminatively selected, assembled and created with the purpose of "affording signs or evidence to define and locate a problem, and thus give a clew [sic] to its resolution".475 Unsurprisingly, AI research is increasingly calling for collaboration with social scientists to lift the gaze of AI engineers beyond the realm of mere metrics.476

Due to the inability of machines to exercise discretion477 and to the strongly personal character of the employment relationship, discretionary decisions by machines are ruled out under Section 106, first sentence GewO; from the perspective of labour law, machines therefore cannot effectively issue instructions to people.478

474 Moser/den Hond/Lindebaum, Morality in the Age of Artificially Intelligent Algorithms, p. 9.

475 Moser/den Hond/Lindebaum, Morality in the Age of Artificially Intelligent Algorithms, p. 10, with reference to Dewey, The quest for certainty, 1929, p. 178.

476 Cf. only Bartolo/Thomas, Qualitative humanities research is crucial to AI: https://www.fast.ai/2022/06/01/qualitative/.

477 Cf. in this respect also Alkhatib/Bernstein, Street-Level Algorithms: A Theory at the Gaps Between Policy and Decisions, CHI 2019: https://doi.org/10.1145/3290605.3300760, referring to differences between human and algorithmic decision-makers: "Street-level bureaucrats are capable of making sense of new situations and then construct rationales that fill in the gaps. […] Street-level algorithms, by contrast, can be reflexive only after a decision is made, and often only when a decision has been made incorrectly. Even reinforcement learning systems, which require tight loops of feedback, receive feedback only after they take an action. Often, these algorithms only ever receive feedback after a wrong decision is made, as a corrective measure. Sometimes, algorithmic systems don't receive corrective input at all. Algorithmic systems don't make in-the-moment considerations about the decision boundary that has been formed by training data or explicit policies encoded into the program. Instead, the decision boundaries are effectively established beforehand, and street-level algorithms classify their test data without consideration of each case they encounter, and how it might influence the system to reconsider its decision boundary."

478 Likewise Klebe, Soziales Recht 2019, 128 (134). In view of this, it is at most conceivable to stratify the degree of human "responsibility" with regard to the potentially impaired employee interests. However, this will not be discussed further here. Knitter, Digitale Weisungen – Arbeitgeberentscheidungen aufgrund algorithmischer Berechnung, 2022, p. 194 considers "uniform equity judgments taken within a narrow margin" to be capable of automation.

