Dear Author,

Here are the proofs of your article.

• You can submit your corrections online or by fax.
• For online submission, please insert your corrections in the online correction form. Always indicate the line number to which the correction refers.
• Please return your proof together with the permission to publish confirmation.
• For fax submission, please ensure that your corrections are clearly legible. Use a fine black pen and write the correction in the margin, not too close to the edge of the page.
• Remember to note the journal title, article number, and your name when sending your response via e-mail, fax, or regular mail.
• Check the metadata sheet to make sure that the header information, especially author names and the corresponding affiliations, is correctly shown.
• Check the questions that may have arisen during copy editing and insert your answers/corrections.
• Check that the text is complete and that all figures, tables, and their legends are included. Also check the accuracy of special characters, equations, and electronic supplementary material, if applicable. If necessary, refer to the edited manuscript.
• The publication of inaccurate data such as dosages and units can have serious consequences. Please take particular care that all such details are correct.
• Please do not make changes that involve only matters of style. We have generally introduced forms that follow the journal's style. Substantial changes in content, e.g., new results, corrected values, title, and authorship, are not allowed without the approval of the responsible editor. In such a case, please contact the Editorial Office and return the editor's consent together with the proof.
• If we do not receive your corrections within 48 hours, we will send you a reminder.
Please note

Your article will be published Online First approximately one week after receipt of your corrected proofs. This is the official first publication citable with the DOI. Further changes are, therefore, not possible.

After online publication, subscribers (personal/institutional) to this journal will have access to the complete article via the DOI using the URL: http://dx.doi.org/10.1007/s43154-020-00024-3

If you would like to know when your article has been published online, take advantage of our free alert service. For registration and further information, go to: http://www.springerlink.com.

Due to the electronic nature of the procedure, the manuscript and the original figures will only be returned to you on special request. When you return your corrections, please inform us if you would like to have these documents returned.

The printed version will follow in a forthcoming issue.

Metadata of the article that will be visualized in OnlineFirst
Article Title: Autonomous Weapons Systems and Meaningful Human Control: Ethical and Legal Issues
Article Sub-Title:
Article Copyright-Year: The Author(s) 2020 (This will be the copyright line in the final PDF)
Journal Name: Current Robotics Reports

Corresponding Author
  Family Name: Amoroso
  Particle:
  Given Name: Daniele
  Suffix:
  Organization: University of Cagliari
  Division: Department of Law
  Address: Cagliari, Italy
  e-mail: daniele.amoroso@unica.it

Author
  Family Name: Tamburrini
  Particle:
  Given Name: Guglielmo
  Suffix:
  Organization: University of Naples Federico II
  Division: Department of Electrical Engineering and Information Technology
  Address: Naples, Italy
  e-mail: tamburrini@unina.it

Schedule: Received / Revised / Accepted

Abstract:
Purpose of Review: To provide readers with a compact account of ongoing academic and diplomatic debates about autonomy in weapons systems, that is, about the moral and legal acceptability of letting a robotic system unleash destructive force in warfare and take attendant life-or-death decisions without any human intervention.
Recent Findings: A précis of current debates is provided, which focuses on the requirement that all weapons systems, including autonomous ones, should remain under meaningful human control (MHC) in order to be ethically acceptable and lawfully employed. Main approaches to MHC are described and briefly analyzed, distinguishing between uniform, differentiated, and prudential policies for human control on weapons systems.
Summary: The review highlights the crucial role played by the robotics research community in starting ethical and legal debates about autonomy in weapons systems. A concise overview is provided of the main concerns emerging in those early debates: respect of the laws of war, responsibility ascription issues, violation of the human dignity of potential victims of autonomous weapons systems, and increased risks for global stability. It is pointed out that these various concerns have been jointly taken to support the idea that all weapons systems, including autonomous ones, should remain under meaningful human control (MHC). Main approaches to MHC are described and briefly analyzed. Finally, it is emphasized that the MHC idea looms large on shared control policies to adopt in other ethically and legally sensitive application domains for robotics and artificial intelligence.

Keywords (separated by ' - '): Autonomous weapons systems - Roboethics - International humanitarian law - Human-robot shared control - Meaningful human control

Footnote information:
This article is part of the Topical Collection on Roboethics
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
ROBOETHICS (G VERUGGIO, SECTION EDITOR)
Autonomous Weapons Systems and Meaningful Human Control: Ethical and Legal Issues

Daniele Amoroso1 & Guglielmo Tamburrini2

© The Author(s) 2020

Abstract
Purpose of Review To provide readers with a compact account of ongoing academic and diplomatic debates about autonomy in weapons systems, that is, about the moral and legal acceptability of letting a robotic system unleash destructive force in warfare and take attendant life-or-death decisions without any human intervention.

Recent Findings A précis of current debates is provided, which focuses on the requirement that all weapons systems, including autonomous ones, should remain under meaningful human control (MHC) in order to be ethically acceptable and lawfully employed. Main approaches to MHC are described and briefly analyzed, distinguishing between uniform, differentiated, and prudential policies for human control on weapons systems.

Summary The review highlights the crucial role played by the robotics research community in starting ethical and legal debates about autonomy in weapons systems. A concise overview is provided of the main concerns emerging in those early debates: respect of the laws of war, responsibility ascription issues, violation of the human dignity of potential victims of autonomous weapons systems, and increased risks for global stability. It is pointed out that these various concerns have been jointly taken to support the idea that all weapons systems, including autonomous ones, should remain under meaningful human control (MHC). Main approaches to MHC are described and briefly analyzed. Finally, it is emphasized that the MHC idea looms large on shared control policies to adopt in other ethically and legally sensitive application domains for robotics and artificial intelligence.

Keywords Autonomous weapons systems · Roboethics · International humanitarian law · Human-robot shared control · Meaningful human control
This article is part of the Topical Collection on Roboethics

* Daniele Amoroso
  daniele.amoroso@unica.it

  Guglielmo Tamburrini
  tamburrini@unina.it

1 Department of Law Q1, University of Cagliari, Cagliari, Italy
2 Department of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy

Introduction

Robotics has contributed extensively to modifying defense systems. Significant examples from the recent past include teleoperated robots detecting and defusing explosive devices (e.g., PackBot) [1], in addition to unmanned vehicles for reconnaissance and combat missions, operating on the ground (e.g., Guardium [2] or TALON [3]) or in the air (e.g., MQ-1 Predator [4]). The deployment of these military robots has seldom been objected to on ethical or legal grounds, with the notable exception of extraterritorial targeted killings accomplished by means of unmanned aerial vehicles. These targeted killings have raised concerns about the infringement of other States’ sovereignty and the overly permissive application of lethal force in counter-terrorism operations [5–7].

One should carefully note that the release of destructive force by any weaponized robot in the above list is firmly in the hands of human operators. Accordingly, ethical and legal controversies about these systems were confined to a handful of their specific uses, and their overall acceptability as weapons systems was never questioned. However, the entrance on the scene of autonomous weapons systems (AWS from now on) has profoundly altered this ethical and legal landscape.

To count as autonomous, a weapons system must be able to select and engage targets without any human intervention after its activation [8••, 9, 10]. Starting from this basic and quite inclusive condition, the Stockholm International Peace Research Institute (SIPRI) [11] introduced additional distinctions between types of existing AWS: (i) air defense systems (e.g., Phalanx [12], MANTIS [13], Iron Dome [14], Goalkeeper [15]); (ii) active protection systems, which shield armored vehicles by identifying and intercepting anti-tank missiles and rockets (e.g., LEDS-150 [16] and Trophy [17]); (iii) robotic sentries, like the Super aEgis II stationary robotic platform tasked with the surveillance of the demilitarized zone between North and South Korea [18]; (iv) guided munitions, which autonomously identify and engage targets that are not in sight of the attacking aircraft (e.g., the Dual-Mode Brimstone [19]); and (v) loitering munitions, such as the Harpy NG [20], which overfly an assigned area in search of targets to dive-bomb and destroy.
This classification stands in need of continual expansion on account of ongoing military research projects on unmanned ground, aerial, and marine vehicles that are capable of autonomously performing targeting decisions. Notably, research work based on swarm intelligence technologies is paving the way to swarms of small-size and low-cost unmanned weapons systems. These are expected to overwhelm enemy defenses by their numbers and may additionally perform targeting functions autonomously [21–24].

The technological realities and prospects of AWS raise a major ethical and legal issue: Is it permissible to let a robotic system unleash destructive force and take attendant life-or-death decisions without any human intervention? This issue has prompted intense and ongoing debates, at both academic and diplomatic levels, on the legality of AWS under international law [25]. An idea that has rapidly gained ground across the opinion spectrum in this debate is that all weapons systems, including autonomous ones, should remain under meaningful human control (MHC) in order to be ethically acceptable and lawfully employed (see the reports by the UK-based NGO Article 36 [26, 27]). Nevertheless, the precise normative content of this requirement is still far from being spelled out and agreed upon.

This review provides a general survey of the AWS debate, focusing on the MHC turning point and its ethical and legal underpinnings. After recalling the initial stages of the debate, a schematic account is provided of the chief ethical and legal concerns about autonomy in weapons systems. Then, the main proposals regarding the content of MHC are introduced and analyzed, including our own proposal of a “differentiated and prudential” human control policy on AWS. Finally, it is pointed out how our proposal may help overcome the hurdles that are currently preventing the international community from adopting a legal regulation on the matter.
Highlights from the AWS Ethical and Legal Debate
Members of the robotics community, notably Ronald C. Arkin and Noel Sharkey, were chief protagonists of early discussions about the ethical and legal acceptability of AWS. Arkin emphasized some ethical pros of autonomy in weapons systems. He was concerned about the poor record of human compliance with international norms governing the conduct of belligerent parties in warfare (the Laws of War, or international humanitarian law (IHL)). In his view, this state of affairs ultimately depends on human self-preservation needs and emotional reactions in the battlefield—fear, anger, frustration, and so on—that a robot is immune to. Arkin’s own research on military applications of robotics was inspired by a vision of “ethically restrained” autonomous weapons systems that are capable of abiding “by the internationally agreed upon Laws of War” better than human warfighters. He presented this vision and its ethical motivations in an invited talk at the First International Symposium on Roboethics, organized by Scuola di Robotica, chaired by Gianmarco Veruggio, and held in 2004 at Villa Alfred Nobel in Sanremo, Italy. Arkin later described this meeting as “a watershed event in robot ethics” [28••, 29, 30].
In contrast with Arkin’s views, Sharkey emphasized various ethical cons of autonomy in weapons systems. He argued that foreseeable technological developments of robotics and artificial intelligence (AI) offer no support for the idea of autonomous robots ensuring a better-than-human application of the IHL principles. He emphasized that interactions among AWS in unstructured warfare scenarios would be hardly predictable and fast enough to push the pace of war beyond human control. And he additionally warned that AWS threaten peace at both regional and global levels by making wars easier to wage [31–34]. Sharkey co-founded the International Committee for Robot Arms Control (ICRAC) in 2009 and played a central role in creating the conditions for launching the Campaign to Stop Killer Robots. This initiative is driven by an international coalition of non-governmental organizations (NGOs), formed in 2012 with the goal of “preemptively ban[ning] lethal robot weapons that would be able to select and attack targets without any human intervention.”

A similar call against “offensive autonomous weapons beyond meaningful human control” was made in the “Open Letter from AI & Robotics Researchers,” released in 2015 by the Future of Life Institute and signed by about 4500 AI/robotics researchers and more than 26,000 other persons, including many prominent scientists and entrepreneurs. Quite remarkably, the Open Letter urges AI and robotics researchers to follow in the footsteps of those scientists working in biology and chemistry who actively contributed to the initiatives that eventually led to international treaties prohibiting biological and chemical weapons [35].
Worldwide pressures from civil society prompted States to initiate discussion of normative frameworks to govern the design, development, deployment, and use of AWS. Diplomatic dialogs on this topic have been conducted since 2014 at the United Nations in Geneva, within the institutional framework of the Convention on Certain Conventional Weapons (CCW). The CCW’s main purpose is to restrict and possibly ban the use of weapons that are deemed to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately. Informal Meetings of Experts on lethal autonomous weapons systems were held on an annual basis at the CCW in Geneva from 2014 to 2016. Subsequently, the CCW created a Group of Governmental Experts (GGE) on lethal autonomous weapons systems (LAWS), which still remains (as of 2020) the main institutional forum where the issue of autonomy in weapons systems is annually debated at an international level. Various members of the robotics research community take part in the GGE’s meetings. So far, the main outcome of the GGE’s work is the adoption by consensus of a non-binding instrument, that is, the 11 Guiding Principles on LAWS, which include broad recommendations on human responsibility (Principles (b) and (d)) and human-machine interaction (Principle (c)) [36].
A clear outline of the main ethical and legal concerns raised by AWS is found already in a 2013 report, significantly devoted to “lethal autonomous robotics and the protection of life,” by the UN Special Rapporteur on extrajudicial, summary, or arbitrary executions, Christof Heyns [37••]. These concerns are profitably grouped under four headings: (i) compliance with IHL, (ii) responsibility ascription problems, (iii) violations of human dignity, and (iv) increased risk for peace and international stability. Let us briefly expand on each one of them, by reference to the relevant sections of Heyns’ report.

(i) Compliance with IHL would require capabilities that are presently possessed by humans only and that no robot is likely to possess in the near future, i.e., to achieve situational awareness in unstructured warfare scenarios and to formulate appropriate judgments there (paras. 63–74) (in the literature, see [38–40] for a critique of this argument and [41–43] for a convincing rejoinder).

(ii) Autonomy in weapons systems would hinder responsibility ascriptions in case of wrongdoings, by removing human operators from the decision-making process (paras. 75–81) (for further discussion, see [44–46]).

(iii) The deployment of lethal AWS would be an affront to human dignity, which dictates that decisions entailing human life deprivation should be reserved to humans (paras. 89–97) (see [47–49] for more in-depth analysis, as well as [50] for a critical perspective).

(iv) Autonomy in weapons systems would threaten international peace and stability in special ways, by making wars easier to wage on account of the reduced numbers of soldiers involved, by laying the conditions for unpredictable interactions between AWS and their harmful outcomes, and by accelerating the pace of war beyond human reactive abilities (paras. 57–62) (this point has been further elaborated in [51]).
These sources of concern jointly make the case for claiming that meaningful human control (MHC) over weapons systems should be retained precisely with respect to their critical functions of target selection and engagement. Accordingly, the notion of MHC enters the debate on AWS as an ethically and legally motivated constraint on the use of any weapons system, including autonomous ones. The issue of human-robot shared control in warfare is thereby addressed from a distinctive humanitarian perspective, insofar as autonomous targeting may impinge, and deeply so, upon the interests of persons and groups of persons that are worthy of protection from ethical or legal standpoints.

But what does MHC more precisely entail? What is normatively demanded to make human control over weapons systems truly “meaningful”? The current debate about AWS, which we now turn to consider, is chiefly aimed at providing an answer to these questions.
Uniform Policies for Meaningful Human Control
The foregoing ethical and legal reasons go a long way towards shaping the content of MHC, by pinpointing general functions that should be prescriptively assigned to humans in shared control regimes and by providing general criteria to distinguish perfunctory from truly meaningful human control. More specifically, the ethical and legal reasons for MHC suggest a threefold role for human control on weapons systems to be “meaningful.” First, the obligation to comply with IHL entails that human control must play the role of a fail-safe actor, helping to prevent a malfunction of the weapon from resulting in a direct attack against the civilian population or in excessive collateral damage [52••]. Second, in order to avoid accountability gaps, human control is required to function as an accountability attractor, i.e., to secure the legal conditions for responsibility ascription in case a weapon follows a course of action that is in breach of international law. Third and finally, from the principle of respect for human dignity, it follows that human control should operate as a moral agency enactor, by ensuring that decisions affecting the life, physical integrity, and property of people (including combatants) involved in armed conflicts are not taken by non-moral artificial agents [53].

But how are human-weapon partnerships to be more precisely shaped on the basis of these broad constraints? Several attempts to answer this question have been made by parties involved in the AWS ethical and legal debate. The answers that we turn to examine now outline uniform human control policies, whereby one size of human control is claimed to fit all AWS and each one of their possible uses. These are the “boxed autonomy,” “denied autonomy,” and “supervised autonomy” control policies.
The boxed autonomy policy assigns to humans the role of constraining the autonomy of a weapons system within an operational box, constituted by “predefined [target] parameters, a fixed time period and geographical borders” [54]. Accordingly, the weapons system would be enabled to autonomously perform the critical functions of selecting and engaging targets, but only within the boundaries set forth by the human operator or the commander at the planning and activation stages [55–57].

The boxed autonomy policy seems to befit a variety of deliberate targeting situations, which involve military objectives that human operators know in advance and can map with high confidence within a defined operational theater. It seems, however, unsuitable to govern a variety of dynamic targeting situations. These require one to make changes on the fly to planned objectives and to pursue targets of opportunity. The latter are either unknown to exist in advance (unanticipated targets) or not localizable in advance with sufficient precision in the operational area (unplanned targets). Under these conditions, boxed autonomy appears to be problematic from a normative perspective, insofar as issues of distinction and proportionality that one cannot foresee at the activation stage may arise during mission execution.

By the same token, a boxed autonomy policy may not even suffice to govern deliberate targeting of military objectives placed in unstructured warfare scenarios. To illustrate, consider the loitering munition Harpy NG, endowed with the capability of patrolling a predefined box for several hours in search of enemy targets satisfying given parameters. The conditions licensing the activation of this loitering munition may become superseded if civilians enter the boxed area, erratic changes occur, or surprise-seeking intentional behaviors are enacted [58]. Under these various circumstances, there is “fail-safe” work for human control to do at the mission execution stage too.
In sharp contrast with the boxed autonomy policy, the denied autonomy policy rules out any autonomy whatsoever for weapons systems in the critical targeting functions and therefore embodies the most restrictive interpretation of MHC [59]. Denied autonomy undoubtedly fulfills the threefold normative role of human control as fail-safe actor, accountability attractor, and moral agency enactor. However, this policy has been sensibly criticized for setting too high a threshold for machine autonomy, in ways that are divorced from “the reality of warfare and the weapons that have long been considered acceptable in conducting it” [60]. To illustrate this criticism, consider air defense systems, which autonomously detect, track, and target incoming projectiles. These systems have been aptly classified as SARMO weapons, where SARMO stands for “Sense and React to Military Objects.” SARMO systems are hardly problematic from ethical and legal perspectives, in that “they are programmed to automatically perform a small set of defined actions repeatedly. They are used in highly structured and predictable environments that are relatively uncluttered with a very low risk of civilian harm. They are fixed base, even on Naval vessels, and have constant vigilant human evaluation and monitoring for rapid shutdown” [61].

SARMO systems expose the overly restrictive character of a denied autonomy policy. Thus, one wonders whether milder forms of human control might be equally able to strip the autonomy of weapons systems of its ethically and legally troubling implications. This is indeed the aim of the supervised autonomy policy, which occupies a middle ground between boxed and denied autonomy, insofar as it requires humans to be on the loop of AWS missions.
As defined in the US DoD Directive 3000.09 on “Autonomy in Weapons Systems,” human-supervised AWS are designed “to provide human operators with the ability to intervene and terminate engagements, including in the event of a weapon system failure, before unacceptable levels of damage occur” (p. 13). Notably, human-supervised AWS may be used for defending manned installations and platforms from “attempted time-critical or saturation attacks,” provided that they do not select “humans as targets” (p. 3, para. 4(c)(2); see, e.g., the Phalanx Close-In Weapons System in use on US surface combat ships). While undoubtedly effective for these and other warfare scenarios, supervised autonomy is not the silver bullet for every ethical and legal concern raised by AWS. To begin with, by keeping humans on the loop, one would not prevent faster and faster offensive AWS from being developed, eventually reducing the role of human operators to a perfunctory supervision of decisions taken at superhuman speed, while leaving the illusion that the human control requirement is still complied with [62]. Moreover, the automation bias—the human propensity to overtrust machine decision-making processes and outcomes—is demonstrably exacerbated by a distribution of control privileges that entrusts humans solely with the power of overriding decisions autonomously taken by the machines [63].

To sum up, each one of the boxed, denied, and supervised autonomy policies provides useful hints towards a normatively adequate human-machine shared control policy for military target selection and engagement. However, the complementary defects of these uniform control policies suggest the implausibility of solving the MHC problem with one formula to be applied to all kinds of weapons systems and to each one of their possible uses. This point was consistently made by the US delegation at GGE meetings in Geneva: “there is not a fixed, one-size-fits-all level of human judgment that should be applied to every context” [64].
Differentiated Policies for Meaningful Human Control
Other approaches to MHC aim to reconcile the need for differentiated policies with the above ethical and legal constraints on human control. Differentiated policies modulate human control along various autonomy levels for weapons systems. Autonomy levels have been introduced in connection with, say, automated driving, surgical robots, and unmanned commercial ships to discuss technological roadmaps or ethical and legal issues [65–67]. A taxonomy of increasing autonomy levels concerning the AWS critical target selection and engagement functions was proposed by Noel Sharkey (and only slightly modified here, with regard to levels 4 and 5) [68••]:

L1. A human engages with and selects targets and initiates any attack.

L2. A program suggests alternative targets, and a human chooses which to attack.

L3. A program selects targets, and a human must approve before the attack.

L4. A program selects and engages targets but is supervised by a human who retains the power to override its choices and abort the attack.

L5. A program selects targets and initiates attack on the basis of the mission goals as defined at the planning/activation stage, without further human involvement.
The main uniform control policies, including those examined in the previous section, are readily mapped onto one of these levels.

L5 basically corresponds to the boxed autonomy policy, whereby MHC is exerted by human commanders at the planning stage of the targeting process only. As noted above, boxed autonomy does not constitute a sufficiently comprehensive and normatively acceptable form of human-machine shared control policy.

L4 basically corresponds to the supervised autonomy policy. The uniform adoption of this level of human control must also be advised against in the light of automation bias risks and the increasing marginalization of human oversight. In certain operational conditions, however, it may constitute a normatively acceptable level of human control.

L3 has been seldom discussed in the MHC debate. At this level, control privileges on critical targeting functions are equally distributed between the weapons system (target selection) and the human operator (target engagement). To the extent that the human deliberative role is limited to approving or rejecting targeting decisions suggested by the machine, this level of human control does not provide adequate bulwarks against the risk of automation bias [69]. In the same way as L4, therefore, it should not be adopted as a general policy.

L1 and L2 correspond to shared control policies where the weapons system’s autonomy is either totally absent (L1) or limited to the role of adviser and decision support system for human deliberation (L2). The adoption of these pervasive forms of human control must also be advised against, insofar as some weapons (notably SARMO systems) have long been considered acceptable in warfare operations.
416
In the light of these difficulties, one might be tempted to
417
conclude that the search for a comprehensive and normatively
418
binding MHC policy should be given up and that the best one
419
can hope for is the exchange of good practices between States
420
about AWS control, in addition to the proper application of
421
national mechanisms to review the legality of weapons
422
[70–72]. But alternatives are possible, which salvage the idea
423
of a comprehensive MHC policy, without neglecting the need
424
for differentiated levels of AWS autonomy in special cases.
425
Indeed, the authors of this review have advanced the proposal
426
of a comprehensive MHC policy, which is jointly
differenti-427
ated and prudential [73,74].
The prudential character of this policy is embodied in the following default rule: the low autonomy levels L1-L2 should be exerted on all weapons systems and uses thereof, unless the latter are included in a list of exceptions agreed on by the international community of States. The prudential imposition by default of L1 and L2 is aimed at minimizing the risk of breaches of IHL, accountability gaps, or affronts to human dignity, should international consensus be lacking on whether, in relation to certain classes of weapons systems or uses thereof, higher levels of machine autonomy are equally able to ensure the fulfillment of genuinely meaningful human control. The differentiated character of this policy is embodied in the possibility of introducing internationally agreed exceptions to the default rule. These exceptions, however, should come with an indication of the level of control required to ensure that the threefold role of MHC (fail-safe actor, accountability attractor, moral agency enactor) is adequately performed.

In the light of the above analysis, this should be done by taking into account at least the following observations:
1. The L4 human supervision and veto level might be deemed an acceptable level of control only in the case of anti-materiel AWS with exclusively defensive functions (e.g., Phalanx or Iron Dome). In this case, ensuring that human operators have full control over every single targeting decision would pose a serious security risk, which makes the application of L1, L2, and L3 problematic from both military and humanitarian perspectives. The same applies to active protection systems, like Trophy, provided that their use in supervised-autonomy mode is excluded in operational environments involving a high concentration of civilians.

2. L1 and L2 could also be impracticable in relation to certain missions because communication constraints would allow only limited bandwidth. In this case, military considerations should be balanced against humanitarian ones. One might allow for less bandwidth-heavy (L3) control in two cases: deliberate targeting, and dynamic targeting in fully structured scenarios, e.g., on the high seas. In both hypotheses, indeed, the core targeting decisions have actually been taken by humans at the planning/activation stage. Unlike L4, however, L3 ensures that there is a human on the attacking end who can verify, in order to deny or grant approval, whether there have been changes in the battlespace which may affect the lawfulness of the operation. Looking at existing technologies, L3 might be applied to sentry robots deployed in a fully structured environment, like the South Korean Super aEgis II.

3. The L5 boxed autonomy level should be considered incompatible with the MHC requirement, unless operational space and time frames are so strictly circumscribed as to make targeting decisions entirely and reliably traceable to human operators.
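The differentiated, prudential policy just outlined can be read as a simple decision procedure: a prudential default of L1-L2 control, overridden only when a weapon class and context of use appear on an internationally agreed exception list. The following minimal sketch illustrates that logic; the function name, the table structure, and all entries are hypothetical and merely mirror the examples discussed above, since an actual list of exceptions would have to be negotiated by States.

```python
# Illustrative sketch of the differentiated, prudential MHC policy:
# default rule (L1-L2) plus an exception table. All names and entries
# are hypothetical illustrations drawn from the examples in the text.

DEFAULT_LEVEL = "L1-L2"  # prudential default for all weapons systems

# Hypothetical internationally agreed exceptions:
# (weapon class, context of use) -> highest admissible autonomy level
EXCEPTIONS = {
    ("anti-materiel defensive", "any"): "L4",              # e.g., Phalanx, Iron Dome
    ("active protection", "low civilian density"): "L4",   # e.g., Trophy
    ("sentry", "fully structured environment"): "L3",      # e.g., Super aEgis II
}

def required_control_level(weapon_class: str, context: str) -> str:
    """Return the highest admissible autonomy level for a weapon/context pair."""
    for (w, c), level in EXCEPTIONS.items():
        if w == weapon_class and c in ("any", context):
            return level
    # Default rule: absent an agreed exception, L1-L2 human control applies.
    return DEFAULT_LEVEL

print(required_control_level("anti-materiel defensive", "high seas"))  # L4
print(required_control_level("loitering munition", "urban"))           # L1-L2
```

The point of the sketch is only that the burden of argument lies with the exception: any pair not explicitly listed falls back to pervasive human control.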
Concluding Remarks

Recent advances in autonomous military robotics have raised unprecedented ethical and legal issues. Regrettably, diplomatic discussions at the GGE in Geneva have not only fallen short, so far, of working out a veritable legal regime on meaningful human control over AWS but, what is worse, are currently facing a stalemate, mainly determined by the opposition of major military powers, including the US and the Russian Federation, to the adoption of any kind of international regulation on the matter.

Our proposal of relinquishing the quest for a one-size-fits-all solution to the MHC issue in favor of a suitably differentiated approach may help sidestep current stumbling blocks. Diplomatic and political discontent about an MHC requirement that is overly restrictive with respect to the limited autonomy of some weapons systems might indeed be mitigated by recognizing the possibility of negotiating exceptions to L1-L2 human control, that is, by identifying weapons systems and contexts of use where milder forms of human control will suffice to ensure the fulfillment of the fail-safe, accountability, and moral agency properties whose preservation generally underpins the normative concerns about weapons' autonomy in critical targeting functions.

In a broader perspective, a differentiated approach to MHC may be of some avail as regards the general issue of human control over intelligent machines operating in ethically and legally sensitive domains, insofar as the MHC language has recently been used in connection with autonomous vehicles [75, 76] and surgical robots [77].
Funding Information Open access funding provided by Università degli Studi di Cagliari within the CRUI-CARE Agreement.

Compliance with Ethical Standards

Conflict of Interest The authors declare that they have no conflict of interest.

Human and Animal Rights and Informed Consent This article does not contain any studies with human or animal subjects performed by any of the authors.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References

Papers of particular interest, published recently, have been highlighted as:
• Of importance
•• Of major importance

1. Rudakevych P, Ciholas M. PackBot EOD firing system. SPIE Proceedings, Volume 5804, Unmanned Ground Vehicle Technology VII. 2005.
2. Egozi A. Robotics: Israel Aerospace Industries (IAI) ground robotic systems. Asia-Pac Defence Rep. 2016;42(7):46–7.
3. Wells P, Deguire D. TALON: a universal unmanned ground vehicle platform, enabling the mission to be the focus. SPIE Proceedings, Volume 5804, Unmanned Ground Vehicle Technology VII. 2005.
4. Visnevski NA, Castillo-Effen M. A UAS capability description framework: reactive, adaptive, and cognitive capabilities in robotics. 2009 IEEE Aerospace Conference, Big Sky, Montana. 2009.
5. Alston P. Report of the United Nations Special Rapporteur on extrajudicial, summary or arbitrary executions. Addendum. Study on targeted killings. UN Doc. A/HRC/14/24/Add.6. 28 May 2010.
6. O'Connell ME. The choice of law against terrorism. J Natl Sec Law Policy. 2009;4:343–68.
7. Melzer N. Human rights implications of the usage of drones and unmanned robots in warfare. Report requested by the European Parliament's Subcommittee on Human Rights. EXPO/B/DROI/2012/12. May 2013.
8.•• US Department of Defense. Directive 3000.09 "Autonomy in Weapons Systems". 21 November 2012. The Directive crucially contributed to the AWS debate by introducing a workable definition of AWS and by establishing the principle that "appropriate levels of human judgment over the use of force" should always be ensured over weapons systems.
9. International Committee of the Red Cross (ICRC). Views on autonomous weapon systems. Paper submitted to the Informal Meeting of Experts on lethal autonomous weapons systems of the Convention on Certain Conventional Weapons (CCW), Geneva. 11 April 2016.
10. Campaign to Stop Killer Robots. Urgent action needed to ban fully autonomous weapons. Press release. 23 April 2013.
11. Boulanin V, Verbruggen M. Mapping the development of autonomy in weapon systems. Solna: SIPRI; 2017.
12. Stoner RH. R2D2 with attitude: the story of the Phalanx close-in weapons. 2009. Available at: www.navweaps.com/index_tech/tech-103.htm.
13. NBS MANTIS Air Defence Protection System. Available at: https://www.army-technology.com/projects/mantis/.
14. Landau EB, Bermant A. Iron Dome protection: missile defense in Israel's security concept. In: Kurz A, Brom S, editors. The lessons of Operation Protective Edge. Tel Aviv: Institute for National Security Studies; 2014. p. 37–42.
15. 30 mm (1.2″) Goalkeeper SGE-30. Available at: http://www.navweaps.com/Weapons/WNNeth_30mm_Goalkeeper.php.
16. SAAB Group. LEDS full spectrum active protection for land vehicles. Available at: https://saab.com/globalassets/commercial/land/force-protection/active-protection/leds/leds-product-sheet.pdf.
17. Trophy Active Protection System; 10 April 2007. Available at: https://defense-update.com/20070410_trophy-2.html.
18. Parkin S. Killer robots: the soldiers that never sleep. BBC Future; 16 July 2015. Available at: www.bbc.com/future/story/20150715-killer-robots-the-soldiers-that-never-sleep.
19. UK Royal Air Force. Aircraft & weapons. 2007: p. 87.
20. Gettinger D, Michel AH. Loitering munitions. Center for the Study of the Drone; 2017.
21. Scharre P. Robotics on the battlefield part II: the coming swarm. Center for a New American Security; October 2014.
22. Brehm M, De Courcy Wheele A. Swarms. Article36 discussion paper for the Convention on Certain Conventional Weapons (CCW), Geneva. March 2019.
23. Verbruggen M. The question of swarms control: challenges to ensuring human control over military swarms. Non-Proliferation and Disarmament Paper No. 65; December 2019.
24. Ekelhof MAC, Persi Paoli G. Swarm robotics: technical and operational overview of the next generation of autonomous systems. United Nations Institute for Disarmament Research (UNIDIR); 2020.
25. Amoroso D. Autonomous weapons systems and international law. A study on human-machine interactions in ethically and legally sensitive domains. Naples/Munich: ESI/Nomos; 2020.
26. Roff HM, Moyes R. Meaningful human control, artificial intelligence and autonomous weapons. Article36 briefing paper prepared for the CCW Informal Meeting of Experts on lethal autonomous weapons systems. April 2016.
27. Moyes R. Key elements of meaningful human control. Article 36 background paper to comments prepared for the CCW Informal Meeting of Experts on Lethal Autonomous Weapons Systems. April 2016.
28.•• Arkin RC. Governing lethal behavior in autonomous robots. Boca Raton: CRC Press; 2009. Arkin's book represents the first, and so far the best articulated, attempt by a roboticist to argue for the desirability of AWS from an ethical and legal perspective. It deeply influenced the subsequent debate by providing major military powers with a convenient argument to assert the legality of AWS.
29. Arkin RC. Lethal autonomous systems and the plight of the non-combatant. AISB Quarterly. 2013;137:1–9.
30. Arkin RC. A roboticist's perspective on lethal autonomous weapon systems. In: Perspectives on lethal autonomous weapon systems. UNODA Occasional Papers No. 30; November 2017: p. 35–47.
31. Sharkey NE. Automated killers and the computing profession. Computer. 2007;40(11):122–3.
32. Sharkey NE. Cassandra or false prophet of doom: AI robots and war. IEEE Intell Syst. 2008;23(4):14–7.
33. Sharkey NE. Saying 'no!' to lethal autonomous targeting. J Military Ethics. 2010;9(4):369–83.
34.• Sharkey NE. The evitability of autonomous robot warfare. Int Rev Red Cross. 2012;94:787–99. This article provides the most comprehensive outline, from a roboticist's perspective, of the ethical and legal concerns raised by autonomy in weapons systems.
35. Autonomous weapons: an open letter from AI & robotics researchers. 28 July 2015. Available at: https://futureoflife.org/open-letter-autonomous-weapons/?cn-reloaded=1.
36. Final report of the 2019 Meeting of the High Contracting Parties to the CCW. UN Doc. CCW/MSP/2019/CRP.2/Rev.1, Annex III. 15 November 2019.
37.•• Heyns C. Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions. UN Doc. A/HRC/23/47. 9 April 2013. Heyns' report is a milestone in the AWS debate: it raised awareness within the United Nations as to the ethical and legal implications of autonomy in weapons systems, by illustrating, concisely but also comprehensively, the main normative issues at stake.
38. Schmitt MN, Thurnher JS. "Out of the loop": autonomous weapon systems and the law of armed conflict. Harvard Natl Secur J. 2013;4:231–81.
39. Sassòli M. Autonomous weapons and international humanitarian law: advantages, open technical questions and legal issues to be clarified. Int Law Stud. 2014;90:308–40.
40. Anderson K, Waxman M. Debating autonomous weapon systems, their ethics, and their regulation under international law. In: Brownsword R, Scotford E, Yeung K, editors. The Oxford handbook of law, regulation, and technology. New York: Oxford University Press; 2017. p. 1097–117.
41. Krupiy T. Of souls, spirits and ghosts: transposing the application of the rules of targeting to lethal autonomous robots. Melbourne J Int Law. 2015;16(1):145–202.
42. Brehm M. Defending the boundary: constraints and requirements on the use of autonomous weapon systems under international humanitarian and human rights law. Geneva Academy of International Humanitarian Law and Human Rights, Academy Briefing No. 9; May 2017.
43. Geiss R, Lahmann H. Autonomous weapons systems: a paradigm shift for the law of armed conflict? In: Ohlin JD, editor. Research handbook on remote warfare. Cheltenham: Edward Elgar Publishing; 2017. p. 371–404.
44.• Sparrow R. Killer robots. J Appl Philos. 2007;24(1):62–77. Interested readers find here a first, ground-breaking illustration of the risk that increasing autonomy in weapons systems creates "accountability gaps" in case of unlawful targeting decisions.
45. Amoroso D, Giordano B. Who is to blame for autonomous weapons systems' misdoings? In: Lazzerini N, Carpanelli E, editors. Use and misuse of new technologies. Contemporary challenges in international and European law. The Hague: Springer; 2019. p. 211–32.
46. McDougall C. Autonomous weapon systems and accountability: putting the cart before the horse. Melbourne J Int Law. 2019;20(1):58–87.
47. Asaro P. On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making. Int Rev Red Cross. 2012;94:687–709.
48. Leveringhaus A. Ethics and autonomous weapons. London: Palgrave; 2016.
49. Heyns C. Autonomous weapons in armed conflict and the right to a dignified life: an African perspective. South African J Hum Rights. 2017;33(1):46–71.
50. Birnbacher D. Are autonomous weapons systems a threat to human dignity? In: Bhuta N, et al., editors. Autonomous weapons systems. Law, ethics, policy. Cambridge: Cambridge University Press; 2016. p. 105–21.
51. Altmann J, Sauer F. Autonomous weapon systems and strategic stability. Survival. 2017;59(5):117–42.
52.•• Scharre P. Army of none. Autonomous weapons and the future of war. New York/London: W.W. Norton; 2018. This volume represents one of the few book-size studies taking stock of the discussion and attempting to delineate workable solutions to the problems raised by autonomy in weapons systems. As an added value, it has been written by a key actor in the AWS debate, since Scharre coordinated the drafting process that eventually brought about the US DoD Directive 3000.09.
53. ICRC. Ethics and autonomous weapon systems: an ethical basis for human control? Working paper submitted to the Group of Governmental Experts on lethal autonomous weapons of the CCW, Geneva. UN Doc. CCW/GGE.1/2018/WP.5. 29 March 2018.
54. International Panel on the Regulation of Autonomous Weapons (iPRAW). Focus on human control. August 2019.
55. Dutch Advisory Council on International Affairs (AIV) and Advisory Committee on Issues of Public International Law (CAVV). Report on autonomous weapon systems: the need for meaningful human control. No. 97 AIV / No. 26 CAVV. 2015.
56. Roorda M. NATO's targeting process: ensuring human control over and lawful use of 'autonomous' weapons. In: Williams A, Scharre P, editors. Autonomous systems: issues for defence policymakers. The Hague: NATO Communications and Information Agency; 2015. p. 152–68.
57. Ekelhof MAC. Moving beyond semantics on autonomous weapons systems: meaningful human control in operation. Glob Policy. 2019;10(3):343–8.
58. Akerson D. The illegality of offensive lethal autonomy. In: Saxon D, editor. International humanitarian law and the changing technology of war. Leiden/Boston: Brill/Nijhoff; 2013. p. 65–98.
59. Chengeta T. Defining the emerging notion of 'meaningful human control' in autonomous weapon systems. New York J Int Law Politics. 2017;49:833–90.
60. Horowitz MC, Scharre P. Meaningful human control in weapon systems: a primer. CNAS Working Paper; March 2015.
61. International Committee for Robot Arms Control. Statement on technical issues. Informal Meeting of Experts on lethal autonomous weapons, Geneva. 14 May 2014.
62. Schwarz E. The (im)possibility of meaningful human control for lethal autonomous weapon systems. Humanitarian Law & Policy (blog of the ICRC). 29 August 2018.
63. Skitka LJ, Mosier KL, Burdick M. Does automation bias decision-making? Int J Hum-Comput Stud. 1999;51(5):991–1006.
64. United States. Human-machine interaction in the development, deployment and use of emerging technologies in the area of lethal autonomous weapons systems. Working paper submitted to the Group of Governmental Experts on lethal autonomous weapons of the CCW, Geneva. UN Doc. CCW/GGE.2/2018/WP.4. 28 August 2018.
65. SAE International. Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. 15 June 2018. Available at: https://www.sae.org/standards/content/j3016_201806/.
66. Yang G-Z, et al. Medical robotics: regulatory, ethical, and legal considerations for increasing levels of autonomy. Sci Robot. 2017;2(4).
67. DNV GL. Autonomous and remotely operated ships. September 2018.
68.•• Sharkey NE. Staying in the loop: human supervisory control of weapons. In: Bhuta N, et al., editors. Autonomous weapons systems. Law, ethics, policy. Cambridge: Cambridge University Press; 2016. p. 23–38. Sharkey's chapter offers a valuable comparison of the strengths and weaknesses of human and machine decision-making processes, paying particular attention to the implications for autonomous robotics in military applications. Also, by providing a clear taxonomy of possible forms of human-weapon partnership, it laid the groundwork for subsequent research on MHC.
69. Cummings ML. Automation and accountability in decision support system interface design. J Technol Stud. 2006:23–31.
70. Switzerland. A "compliance-based" approach to autonomous weapon systems. Working paper submitted to the Group of Governmental Experts on lethal autonomous weapons of the CCW, Geneva. UN Doc. CCW/GGE.1/2017/WP.9. 10 November 2017.
71. United States. Statement for the general exchange of views. Group of Governmental Experts on lethal autonomous weapons of the CCW, Geneva. 9 April 2018.
72. Israel. Statement on agenda item 6(d). Group of Governmental Experts on lethal autonomous weapons of the CCW, Geneva. 29 August 2018.
73. Amoroso D, Tamburrini G. What makes human control over weapon systems "meaningful"? ICRAC Working Paper #4. August 2019.
74. Amoroso D, Tamburrini G. Filling the empty box: a principled approach to meaningful human control over weapons systems. ESIL Reflections. 2019;8(5).
75. Santoni de Sio F, van den Hoven J. Meaningful human control over autonomous systems: a philosophical account. Front Robot AI. 2018.
76. Mecacci G, Santoni de Sio F. Meaningful human control as reason-responsiveness: the case of dual-mode vehicles. Ethics Inf Technol. 2020;22:103–15.
77. Ficuciello F, et al. Autonomy in surgical robots and its meaningful human control. Paladyn J Behav Robot. 2019;10(1):30–43.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Curr Robot Rep