
3.1 From deus ex machina to the deus est machina: overcoming the Rule of Men

3.1.1. The inadequacy of jurists and Rule by Code

The tension between the formulation and application of rules which, as illustrated above, runs through Western legal history and the Rule of Law doctrine provides the starting point for the Rule of Machines narrative. The theme of the arbitrariness of the interpreter is appropriated together with the picture of rules advanced by the formalistic conceptions of law discussed in the previous chapters. In the perspective advanced by the Rule of Machines, the aporias resulting from such a picture do not affect the desirability and feasibility of the project of formalism:

what is wrong with such a project is that it has backed the wrong rule followers, i.e., humans. Enriched by the perspectives of behavioural economics, Law and Economics and the “New Legal Realism”615, the Rule of Machines rearticulates the pessimistic anthropological picture of the formalist framework into a computational-cognitivist version of the doctrine of the “digestive jurisprudence”616. Framed in a cognitivist-behaviourist perspective, the critical aspects of human rule following are explained in terms of the cognitive limitations which determine a systematic – and therefore predictable – deviant and inconsistent

614 Herbert L.A. Hart, The Concept of Law, cit., p. 139

615 Thomas J. Miles, Cass R. Sunstein, The New Legal Realism, in University of Chicago Law Review, 2008, 75, p. 831; Christine Jolls, Cass R. Sunstein, Richard Thaler, A Behavioral Approach to Law and Economics, in Stanford Law Review, 1998, 50, p. 1471; Jeffrey J. Rachlinski, A Positive Psychological Theory of Judging in Hindsight, in University of Chicago Law Review, 1998, 65, p. 571; Jeffrey J. Rachlinski, Sheri Lynn Johnson, Andrew J. Wistrich, Chris Guthrie, Does Unconscious Racial Bias Affect Trial Judges?, in Notre Dame Law Review, 2009, 84, p. 1195; Christine Jolls, Cass R. Sunstein, The Law of Implicit Bias, in California Law Review, 2006, 94, p. 969; Ozkan Eren, Naci Mocan, Emotional Judges and Unlucky Juveniles, in American Economic Journal: Applied Economics, 2018, 10, 3, p. 171

616 Benjamin Alarie, The Path of the Law: Towards Legal Singularity, cit., p. 450

behaviour617. The cognitive bias framework provides the Rule of Machines narrative with a vocabulary capable of bridging the gaps between law, human rule followers and the computational perspective. The most relevant pathological elements of law, i.e., “arbitrariness, political favouritism, covert influence, inconsistency, and discretionary justice”, are reformulated in terms of cognitive biases618 and then formalized and analysed in computational terms. Once legal phenomena are represented through code and data, it is possible to identify and characterize patterns, as well as the deviations from such patterns, as manifestations of bias or lack of consistency, and to adopt forms of measurement directed at weighting the factors determining the outcomes of decision-making processes619. On this basis, the Rule of Machines perspective turns its object of criticism from arbitrary human rule followers into defective carbon-based biological machines, and redescribes their deviation from the rules-rails as a malfunctioning of internal cognitive mechanisms. By framing the problem of incorrect rule following in such terms, its solution can be envisaged in the substitution of legal rules and humans’ untrustworthy cognitive mechanisms with code and code-driven machines620. Drawing on the picture of machines as “rule following beasts”621, i.e., exact and incorruptible executors of the rules-instructions received, the Rule of Machines can claim the opportunity to substitute biased human rule followers with a computational version of Dworkin’s Hercules622. As Casey and Niblett point out, “[t]he biases and inconsistencies found in individual judgments can largely be washed away using advanced data analytics”623. Automation is the means to both “increase legal certainty and facilitate the neutral application of law”624.
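By way of illustration only, the following minimal sketch shows the kind of analysis this passage describes: a simple statistical model that weights the factors determining decision outcomes, so that weight placed on a legally irrelevant factor can be read as a measurable trace of bias. The data, the feature names and the choice of model are hypothetical, and the sketch does not reconstruct any of the studies cited above.

```python
# A minimal, hypothetical sketch of "judicial analytics": weighting the
# factors that determine decision outcomes. All data and feature names
# are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Encoded case records: each row is one decision; the first two columns
# are legally relevant factors (offence severity, prior record), the
# third a legally irrelevant one (hour of the hearing).
X = np.array([
    [3, 1, 9], [1, 0, 14], [4, 2, 16], [2, 0, 10],
    [3, 2, 17], [1, 1, 9], [4, 0, 11], [2, 1, 15],
])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = adverse outcome

model = LogisticRegression().fit(X, y)

# The learned coefficients "weight the factors determining outcomes":
# in this framework, a large weight on the irrelevant factor would be
# read as a manifestation of bias or inconsistency.
for name, coef in zip(["severity", "priors", "hearing_hour"], model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```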

In this perspective, the Rule of Machines can claim to answer the demands of the normativist-decisionist paradigm of the Rule of Law and, at the same time, to

617 Daniel Kahneman, Thinking, Fast and Slow, Penguin Books, London, 2011; Daniel Kahneman, Olivier Sibony, Cass R. Sunstein, Noise: A Flaw in Human Judgment, Little, Brown Spark, New York, 2021

618 Anthony Casey, Anthony Niblett, The Death of Rules and Standards, cit., p. 1408

619 Daniel L. Chen, Judicial analytics and the great transformation of American Law, in Artificial Intelligence and Law, 2019, 27, 1, p. 15; Anthony Casey, Anthony Niblett, The Death of Rules and Standards, cit., p. 1428

620 In this perspective, one can read Lord Sales’s position according to which – even if “we are not there yet” – the “[a]pplication of rules of equity or recognition of hard cases, where different moral and legal considerations clash, is ultimately dependent on pattern recognition, which AI is likely to be able to handle”, Lord Sales, Justice of the UK Supreme Court, Algorithms, Artificial Intelligence and the Law, The Sir Henry Brooke Lecture for BAILII, Freshfields Bruckhaus Deringer, London, 12 November 2019, p. 7; see, also, Anthony Casey, Anthony Niblett, Self-Driving Laws, cit., p. 436; Iid., The Death of Rules and Standards, cit., pp. 1429-30

621 Douglas Hofstadter, Gödel, Escher, Bach. An Eternal Golden Braid, Penguin Books, Middlesex, 1980, p. 26

622 Michael A. Livermore (eds.), Law as Data, cit., p. xiii; Daniel Goldsworthy, Dworkin’s dream: Towards a singularity of law, in Alternative Law Journal, 2019, 44, 4, p. 289

623 Anthony Casey, Anthony Niblett, Self-Driving Laws, cit., p. 437

624 Michael A. Livermore, Rule by Rules, cit., p. 238

overcome its aporias625. The features of computer code, indeed, seem, on the one hand, to afford the exact, impartial and predictable application of the rules formulated by the programmer-Legislator and, on the other, to make possible the implementation of a legal system which is capable of “running by itself”626. As discussed above, the formalist account of rules encountered a limit in the infinite regress problem627. The rules formulated into code achieve what legal rules formulated in natural language can never obtain, i.e., the bridging of any gap and the elimination of any medium between the formulation of the rule and its application628. This results from the reformulation of the rule into a different language, that is, a formal calculus which, in the last instance, corresponds to causal relations at the level of the hardware629. Precisely by virtue of this simultaneous nature of language-calculus and causal mechanism, the “interpretation-reformulation” of rules into code does not “hang in the air”630: the hardness of the rule-calculus merges with the hardness of a material.
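To make the point concrete, the following toy example (a hypothetical eligibility rule, not drawn from any source discussed here) shows what it means for a rule “formulated into code” to leave no medium between formulation and application: executing the formulation is its application, and the same inputs always yield the same output.

```python
# A toy legal rule formulated as code. The thresholds are hypothetical.
def benefit_eligible(age: int, annual_income: float) -> bool:
    """Eligible if aged 65 or over with annual income below 20,000."""
    return age >= 65 and annual_income < 20_000.0

# Applying the rule is nothing over and above executing its formulation:
# no interpreter mediates between the rule and its outcome.
assert benefit_eligible(70, 15_000.0) is True
assert benefit_eligible(70, 25_000.0) is False
assert benefit_eligible(60, 15_000.0) is False
```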

625 Indeed, as Wiese Schartum points out, code seems to offer the perfect implementation of the Rule of Law intended in a formal perspective, see Dag Wiese Schartum, From Legal Sources to Programming Code, cit., p. 327. While acknowledging that, on the one hand, “effective human design and implementation” are not sufficient to ensure respect of the Rule of Law, and that, on the other, “some forms of technology raise intractable problems”, Zalnieriute, Bennett Moses and Williams maintain that, once a set of “questions of design” – i.e., transparency, accuracy, relevancy, significant control by a human in the loop – is appropriately addressed, “automation can improve the predictability and consistency of decision-making by removing the arbitrariness for which humans are well known”. This is possible in that “[a]utomation according to human-crafted rules (derived from statute or judge-made law) can ensure that the correct decision is made every time and can overcome issues with human error and corruption. […] A system with pre-programmed rules can ensure that decisions are made based on factors recognised as legally relevant and hence avoid or minimise the risk of corruption or favouritism by officials”. Moreover, not only can such systems enhance consistency by giving the “same answer when presented with the same inputs”, but they can also eliminate “both conscious and unconscious bias by only applying criteria that are truly relevant to making the decision”, see Monika Zalnieriute, Lyria Bennett Moses, George Williams, The Rule of Law and Automation of Government Decision-Making, cit., pp. 20, 25

626 Cfr. supra, § 2.3.1.4.; Carl Schmitt, Political Theology, cit., p. 48. For the debate on the regulative character of code, see Lawrence Lessig, Code, Version 2.0, Basic Books, New York, 2006; Joel R. Reidenberg, Lex Informatica: The Formulation of Information Policy Rules Through Technology, in Texas Law Review, 1998, 76, 3, p. 553; Polk Wagner, On Software Regulation, in Southern California Law Review, 2005, 78, p. 457; James Grimmelmann, Regulation by Software, in The Yale Law Journal, 2005, 114, p. 1719

627 Supra, § 2.4. See, also, Robert Brandom, Making It Explicit. Reasoning, Representing, and Discursive Commitment, Harvard University Press, Cambridge, 1994; José Medina, The Unity of Wittgenstein’s Philosophy, cit., pp. 104 ff.

628 Laurence Diver, Digisprudence: Code as Law Rebooted, Edinburgh University Press, Edinburgh, 2021; Id., Computational legalism and the affordance of delay in law, in Journal of Cross-disciplinary Research in Computational Law, 2020, 1, 1, p. 6

629 Stuart Shanker, The Decline and Fall of the Mechanist Metaphor, in Rainer Born (ed.), Artificial Intelligence: The Case Against, Routledge, London, 1987, pp. 81 ff.; Id., Wittgenstein’s Remarks on the Foundations of AI, cit., p. 30; cfr., also, supra, Jon Bing, Sistemi deontici: un tentativo di introduzione, in Antonio A. Martino, Enrico Maretti, Costantino Ciampi (eds.), Logica, Informatica e Diritto, Le Monnier, Firenze, 1978, p. 123

630 This aspect is in part concealed in the passage from low-level to high-level programming languages, and such concealment is precisely what makes it possible for machines to work, that is, what facilitates making sense of computer outputs as meaningful signs and interacting with them in a meaningful way, ascribing to them the performance of correct inferences.

As illustrated in the previous chapter, the research conducted by the AI and Law community has attempted to elaborate an ever better formalization of legal knowledge and of the processes of legal reasoning, that is, more efficient methods to represent and process legal information and more accurate simulations of what jurists do. The difficulties encountered in such an endeavour have resulted in the acknowledgement of the limitations of code and have motivated an ongoing redefinition of the scope of the research program631. The hardest challenges have been identified as resulting from specific features of the “nature of the law” – i.e., among others, the syntax required to represent law, the open texture of law, the dynamic character of legal systems – and from the obstacles posed by the knowledge representation bottleneck, i.e., the difficulty of both identifying and formalizing the common-sense knowledge required for legal reasoning632. The way in which the obstacles encountered in the computational formalization of law are framed affords the elaboration of different positions with respect to the possibility of overcoming them. In this light, in § 4.2. I will address the limitations related to the possibility of understanding rules as code. For the moment, I will examine the different path which can be taken, and is indeed taken, by the Rule of Machines narrative. As long as the limitations of the rules of code are understood in terms of knowledge representation, i.e., of which and how many codified rules are required633, the solution to the limits of GOFAIL can be identified in the

631 In this respect, Ashley maintains that even the most promising advances in argumentation logics do not go beyond the level of toy examples: being dependent on the manual representation and input of all the relevant elements, such computational models perform well only insofar as they are ad hoc applications, see Kevin D. Ashley, Artificial Intelligence and Legal Analytics, cit., p. 144

632 As shown by the discussion on the kind of logic required to formalize legal rules, Helmut Schreiner, Information Systems and Artificial Intelligence in Law. Logical Procedures for the Application of Technical Intelligence in Juridical Decisions, in Costantino Ciampi (ed.), Artificial Intelligence and Legal Information Systems, cit., p. 165. As McCarty emphasized at the turn of the Nineties, “The advancement of both rule-based expert systems and theories of legal reasoning are bounded to the ‘solution’ of the knowledge representation problem”, L. Thorne McCarty, Artificial Intelligence and Law: How to Get There from Here, cit., pp. 196-197. Another obstacle relates to the problem of legal qualification, i.e., the gap between “world and regulation knowledge”. See, moreover, Trevor Bench-Capon, Michał Araszkiewicz, Kevin Ashley, Katie Atkinson, Floris Bex, Filipe Borges, Daniele Bourcier, Paul Bourgine, Jack G. Conrad, Enrico Francesconi, Thomas F. Gordon, Guido Governatori, Jochen L. Leidner, David D. Lewis, Ronald P. Loui, L. Thorne McCarty, Henry Prakken, Frank Schilder, Erich Schweighofer, Paul Thompson, Alex Tyrrell, Bart Verheij, Douglas N. Walton, Adam Z. Wyner, A history of AI and Law in 50 papers: 25 years of the international conference on AI and Law, in Artificial Intelligence and Law, 2012, 20, p. 21; Joost Breuker, Nienke den Haan, Separating World and Regulation Knowledge: Where is the Logic?, in Proceedings of the Third International Conference on Artificial Intelligence and Law, ACM Press, New York, 1991, p. 92. For what concerns the limitations of Legal Expert Systems, see the discussion between Philip Leith and Marek Sergot: Marek Sergot, The Representation of Law in Computer Programs, in Trevor J. M. Bench-Capon (ed.), Knowledge-Based Systems and Legal Applications, Academic Press, London, 1991, p. 27; Philip Leith, Clear rules and legal expert systems, cit., p. 661; Id., Rise and fall of expert systems, cit.

633 McCarty has maintained that the development of a “complete” computational model of reasoning would require the formalization of “an ontology of all human (and animal!) activities and interactions”, see Trevor Bench-Capon et al., A history of AI and Law in 50 papers: 25 years of the international conference on AI and Law, cit.

possibility, offered by machine learning and deep learning, of automating the formalization of rules.
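The shift can be pictured with a deliberately simple sketch: instead of hand-coding the eligibility rule given above, a model induces an approximation of it from examples of decided cases. Data, features and model are hypothetical; the point is only that the formalization step itself is automated.

```python
# A minimal, hypothetical sketch of automating the formalization of a
# rule: a decision tree induces thresholds from past decisions instead
# of a programmer writing them down.
from sklearn.tree import DecisionTreeClassifier, export_text

# Past decisions: [age, annual_income] -> eligible (1) or not (0)
cases = [[70, 15000], [70, 25000], [60, 15000], [66, 19000],
         [64, 10000], [80, 21000], [67, 5000], [50, 30000]]
outcomes = [1, 0, 0, 1, 0, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2).fit(cases, outcomes)

# The learned tree approximates the thresholds that a hand-written
# rule would have made explicit.
print(export_text(tree, feature_names=["age", "annual_income"]))
```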
