
G. Problem areas under labour law

II. Possible legal capacity of AI

A question of obvious fundamental importance is whether a robot or an AI application should be endowed with its own legal personality. The discussion on this was triggered by the European Parliament's 2017 resolution on the question of civil liability of robots, where the Parliament had indeed considered creating a specific legal status for robots in the long term "so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause", while at the same time considering "applying electronic personality" to cases "where robots make autonomous decisions or otherwise interact with third parties independently".421 In the labour law literature on the subject, the discussion on the "legal capacity of AI" reverberates in contributions that invoke the "employer status of algorithms" in their titles.422 Some authors may be motivated by the desire to cleverly "market" their products. But as the resolution of the European Parliament shows, the idea of a "robo boss"423 is by no means pure science fiction.

However, the ensuing discussion promptly revealed that the granting of legal capacity considered by Parliament has hardly any supporters. This was already made clear in an open letter written by a number of AI and robotics experts, leading figures in business, and legal, medical and ethical experts, who rejected the idea of legal capacity for AI under all relevant aspects: an analogy with natural persons, an analogy with legal persons and the use of the trust model.424 In particular, the analogy to the recognition of legal persons, which has been invoked on various occasions in favour of granting legal capacity, does not in fact hold water. For while legal persons are given "capacity to act" by natural persons, this is precisely not the case with robots.425 Above all, however, machines are not (yet) capable of the kind of autonomous decision-making that could justify putting them on an equal footing with natural persons in terms of liability law.426 Also, the recognition of legal persons is intended to enable individuals to pursue objectives that they could not pursue alone, or could pursue only over the long term or through a division of labour.427 Nothing comparable applies to AI.428

421 European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), paragraph 59.

422 See for example Aloisi/De Stefano, Introducing the Algorithmic Boss, April 20, 2021: https://www.ie.edu/insights/articles/introducing-the-algorithmic-boss/.

423 Cf. Höpfner/Daum, ZfA 2021, 467.

424 http://www.robotics-openletter.eu/.

425 Cf. the Open Letter of 29.06.2020 (under 2b)): "The legal status for a robot can't derive from the Legal Entity model, since it implies the existence of human persons behind the legal person to represent and direct it. And this is not the case for a robot": http://www.robotics-openletter.eu/.

426 Cf. Bertolini, Artificial Intelligence and Civil Liability - Study requested by the JURI Committee, 2020, p. 36: "In particular, machines are things, products and artefacts of human intellect, and there are no ontological grounds to justify their equation to humans, so long as they do not display such a form of strong autonomy that amounts to freedom of self-determination in the outcomes the system pursues and in the ways it chooses to accomplish them. Currently there is no machine that would be able to display such a level of autonomy, and there is no reason to desire the development of such a system that being more intelligent and capable than any human life form, and being also independent, could pursue its own intended ends. Technological development does not justify acknowledging such a level of autonomy on the side of any AI application existing or being developed".

427 https://www.staatslexikon-online.de/Lexikon/Juristische_Person.

428 In conclusion, Haagen, Verantwortung für Künstliche Intelligenz - Ethische Aspekte und zivilrechtliche Anforderungen bei der Herstellung von KI-Systemen, 2021, p. 184; also Banteka, Artificially Intelligent Persons, Houston Law Review 2020: https://ssrn.com/abstract=3552269.


However, caution is also required with regard to the analogy to the legal person, because not only does the term consolidate very different organisational forms "under one roof", but the legal person also in some respects enjoys a more advantageous position than natural persons (which is by no means unobjectionable).429 Whether it is advisable to apply to AI a blanket solution that at the same time confers privileged status is anything but settled.430

This leaves the (albeit rather pragmatic) aim of closing possible gaps in liability by granting a kind of electronic legal capacity. It should not, however, be taken as a foregone conclusion that such gaps exist, since both the development and the use of AI systems involve people who can, as a rule, be held accountable. In the current situation, then, it is more a matter of "clearing the way" to imposing liability on these persons (for instance by establishing rules of presumption or easing the burden of proof), not of replacing it with liability for machines.431 In all of this, it must also be considered that granting legal capacity to machines only makes sense if they are also allocated recoverable assets that can be accessed if necessary. This, though, would ultimately lead to a limitation of liability for "damage by machine", which can hardly be the intended outcome. It would still be possible to create a duty to take out liability insurance with a certain minimum coverage, but even then liability would in every case remain limited, namely to the insured sum. Considering, furthermore, that the risk of a claim cannot have a deterrent effect on a robot, so that the imposition of liability fails to have a behaviour-controlling effect, the idea of making the machine itself liable loses all of its attraction.432

The European Parliament itself has also departed from its earlier position in a recent resolution on civil liability, stating that "all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are nearly always the result of someone building, deploying or interfering with the systems". Accordingly, it has also stated that "it is not necessary to give legal personality to AI systems".433

429 https://www.staatslexikon-online.de/Lexikon/Juristische_Person.

430 Negri, Robot as Legal Person: Electronic Personhood in Robotics and Artificial Intelligence, Frontiers in Robotics and AI, Hypothesis and Theory: 10.3389/frobt.2021.789327.

431 See Expert Group on Liability and New Technologies - New Technologies Formation, Liability for Artificial Intelligence and other emerging digital technologies, 2019, p. 38: "Harm caused by even fully autonomous technologies is generally reducible to risks attributable to natural persons or existing categories of legal persons, and where this is not the case, new laws directed at individuals are a better response than creating a new category of legal person".

432 See also Wagner, VersR 2020, 717 (739).

433 European Parliament Resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for the use of artificial intelligence, 2020/2014(INL) at 7, where it further states that "opacity, connectivity and autonomy of AI-systems could make it in practice very difficult or even impossible to trace back specific harmful actions of AI-systems to specific human input or to decisions in the design", but "one is nevertheless able to circumvent this obstacle by making the different persons in the whole value chain who create, maintain or control the risk associated with the AI-system liable".


Furthermore, the European Parliament even considers that "Any required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity".434 However, this argument does not quite seem to serve its purpose: the fact that AI systems do not have a "human conscience" is reason enough not to grant them decision-making powers over humans, but it is not enough to stand in the way of granting them legal capacity. And the fact that AI systems are intended to serve humanity is likewise not an argument against granting legal capacity if, for example, it should turn out that compensation for damage that has occurred would otherwise be endangered or even (practically) impossible.

However, the discussion on legal capacity does not seem likely to fall silent, if only because technical development is continuing and the "autonomy capacity" of AI systems will in all likelihood increase. There is also no getting around the fact that the proponents of giving AI systems legal capacity rate the chances of injured parties always being able to identify a respective injuring party considerably less favourably than, for example, the European Parliament does,435 while considering access to a machine endowed with legal capacity to be comparatively straightforward.436 Moreover, flexible solutions could be developed, possibly also differentiating between individual areas of law.437 Nevertheless, with the current state of the art, the granting of legal capacity seems neither necessary nor sensible.438

434 Annex (6). The European Parliament took the same position in its resolution of 20 October 2020 on intellectual property rights in the development of AI technologies, but justified this (under 13.) with the protection of human creators: "[...] the autonomisation of the creative process of generating content of an artistic nature can raise issues relating to the ownership of IPRs covering that content [...] in this connection, [...] it would not be appropriate to seek to impart legal personality to AI technologies and points out the negative impact of such a possibility on incentives for human creators"; also instructive on the problem Chesterman, Artificial Intelligence and the Limits of Legal Personality, in: International and Comparative Law Quarterly 2020, 819 (834 et seq.).

435 However, there are also sceptical voices in this respect; cf. only Papakonstantinou/de Hert: Refusing to award legal personality to AI: Why the European Parliament got it wrong - European Law Blog: "Exactly because AI will infiltrate all of human activity, indistinguishable from any technology and embedded in all of our daily decision-making systems, it will be impossible to "trace back specific harmful actions of AI" to a particular "someone". Any AI setup will most likely involve a number of (cross-border) complex agreements between many developers, deployers and users before it reaches an end-user. Identifying the "someone" liable within this international scheme will be extremely difficult for such end-users without the (expensive) assistance of legal and technical experts. On the contrary, end-users would be better served through a one-on-one relationship, whereby the AI effect that affects them is visibly caused by a specific entity; only by granting legal personality to AI may warrant that this will be an identifiable entity, rather than a string of opaque multinational organisations hiding behind complex licensing and development agreements".

436 See again Papakonstantinou/de Hert: Refusing to award legal personality to AI: Why the European Parliament got it wrong - European Law Blog: "Legal personality to AI will mean that each individual affected by it will have a specific legal entity facing him or her locally, in the same manner as is the case with legal persons today".

437 See also Papakonstantinou/de Hert: "Legal personality will mean that each field of law (civil law, tax law, employment law, penal law, competition law) will be allowed with the freedom to assess the legal issues posed by AI within its own boundaries and under its own rules and principles".

438 Chesterman, Artificial Intelligence and the Limits of Legal Personality, in: International and Comparative Law Quarterly 2020, 819 (843): "At least for the foreseeable future, the better solution is to rely on existing categories, with responsibility for wrongdoing tied to users, owners, or manufacturers rather than the AI systems themselves".
