
Master in Human Rights and Conflict Management

XVI Edition

Humanised Weapons or Killer Robots?

Brief analysis of unmanned and autonomous weapons compliance with International Humanitarian Law

Student: Marina Del Greco

Professor: Emanuele Sommario


Statement of Authenticity

I certify that the intellectual content of this thesis is the product of my own work and that all the information contained herein is correct and accurate, to the best of my knowledge.

I am aware of the University's policy on plagiarism, and all sources and quoted materials are identified and duly acknowledged in the bibliography.


Abstract

Ever since wars were first fought, humans have tried to fight each other from as far away as possible, building weapons capable of hitting targets from afar. The idea of reducing risks for soldiers, keeping them away from battlefields, lay behind the invention of bows, catapults, rifles and missiles. Today, the very same imperative pushes governments to invest in the development of unmanned and autonomous weapon systems. The peculiarity of both these kinds of robotic weapons is that they do not feature the presence of human pilots on board. As a consequence, they can get extremely close to enemy facilities and gather large amounts of high-quality intelligence, or perform particularly dangerous offensive operations, without putting soldiers' lives at risk. Furthermore, autonomous machines, i.e. weapons capable of using lethal force without remote human input, have an additional ability: as they cannot be influenced by emotions, they cannot commit human errors, nor suffer post-traumatic psychological damage, nor perpetrate war crimes in revenge against enemy troops.

Notwithstanding this, it is fair to say that unmanned and autonomous weapons also present several critical issues. In particular, before an attack, international humanitarian law (IHL) prescribes the need to carefully evaluate several factors to determine whether the use of force would be legitimate or not. In theory, robotic weapons, whether remotely-controlled or fully autonomous, could comply with these legal requirements and, consequently, existing treaties do not prohibit the use of such weapon systems per se. Nevertheless, in reality, operational circumstances and current technological developments cannot ensure that these machines would actually be capable of correctly taking into account variable factors such as the military advantage that a precise strike could entail, the risk of collateral damage and other complex proportionality evaluations. In addition, IHL obliges every State to take full responsibility for the acts committed by its armed forces.


However, in cases of malfunctioning or breaches of the law committed by autonomous weapon systems, there would be an accountability gap, as it would be almost impossible to identify a person responsible for such incidents.

Following these considerations, this paper will assess whether unmanned and autonomous weapon systems ultimately comply with the requirements set by the law of armed conflict. To answer this question, there will first be a preliminary clarification of the differences between these two kinds of robotic systems, explaining the different categories to which they might belong. Then, IHL's most relevant principles will be recalled, trying to understand whether, and why, they might represent an issue for unmanned or autonomous weapons. Subsequently, there will be a more in-depth description of the functioning of some of these systems, to better understand what their characteristics are and how they work. Finally, some examples of operations featuring unmanned and autonomous weapons will be provided, underlining the aspects that raise concerns.


Index

Introduction and Methodology
1 Preliminary Considerations on Weapon Systems Categorisation
1.1 Automaticity vs autonomy
1.2 The concept of man "in", "on" and "out" of the loop
1.3 Means and methods of warfare
2 Legal Framework
2.1 Principle of distinction
2.2 Principle of proportionality
2.3 Prohibition of unnecessary suffering
2.4 Principle of necessity
2.5 Accountability gap
2.6 Moral and ethical considerations
3 Different Kinds of Weapon Systems
3.1 Weapon systems employed in the air
3.2 Weapon systems employed on the land
3.3 Weapon systems employed in the sea
4 Operations Featuring Unmanned and/or Autonomous Weapons
4.1 Signature strikes and SIM card-based attacks
4.2 Double-tap strikes
5 Conclusion


Introduction and Methodology

Ever since primitive humans started hunting, or fighting the first battles against each other, weapons capable of hitting a target from afar have represented an immeasurable advantage, as they expose their bearer to a lower risk. For this reason, humankind has always invested its time and resources in the development of spears, bows, catapults, rifles, missiles and so on. Today, this trend has not changed and, considering the amount of money spent by every State to feed, train and remunerate its soldiers, governments have started trying to keep their militaries away from danger as much as possible. On top of that, the need to preserve soldiers' lives was emphasised by the Gulf War which, during the nineties, spread among Westerners an expectation of short and almost casualty-free wars.1

1 Birkeland J, 'The Concept of Autonomy and the Changing Character of War' (2018), Oslo Law Review.

Having said so, obtaining such a result in modern conflicts is anything but easy. While keeping soldiers away from danger sounds almost paradoxical, defeating an elusive enemy, such as a terrorist organisation, in a short amount of time can at best be described as "hard". Nevertheless, to succeed in both these objectives, being able to rely on a large quantity of high-quality intelligence data is fundamental. Such information, however, can be acquired only by getting very close to the enemy, after long and risky monitoring operations. So, how can this vicious circle be broken? Thanks to the latest developments in the field of weapon systems: unmanned and autonomous weapons.

As will be explained, these weapons are robots capable of performing reconnaissance missions, defensive tasks and even offensive operations, without the presence of a human being on board. Unmanned and autonomous systems share several similarities, but are also characterised by important differences, the main one being that autonomous weapons do not require human input to carry out any of their tasks, including the use of lethal force. While it is self-evident that the employment of these weapons is linked with many advantages, it is also undeniable that unmanned, and especially autonomous, weapons raise concerns regarding their compliance with International Humanitarian Law (IHL). As a consequence, a heated debate on the legality of "killer robots" has started and, today, it is more controversial than ever.2

The aim of this study is to establish whether the employment of unmanned and autonomous weapons is legal under IHL. Notwithstanding the fact that, from a legal perspective, the former appear significantly less problematic than the latter, this paper will take both robotic systems into consideration. The reason for this choice lies in the fact that some of the issues raised by autonomous weapons are actually also common to unmanned ones (e.g. the perplexities related to targeting techniques); hence the appropriateness of addressing the issues raised by the two systems jointly. Additionally, considering that both unmanned and autonomous systems are currently in the spotlight, this discussion would be somewhat incomplete if it focused on only one of the two robotic systems.

After a preliminary explanation of the various categories into which these weapons can be divided, the study will proceed to describe the similarities and differences between them. Subsequently, IHL's main principles will be recalled, trying to highlight which aspects might be endangered by the employment of unmanned and autonomous systems. Then, in order to understand how these weapons function in practice, some of the most famous types will be described, categorised by their operational environment. In addition, to reflect the difference between means and methods of war, some tactics that feature robots' presence will be analysed. Eventually, it will be possible to draw some conclusions and understand which aspects of these weapon systems present critical issues and whether there is a way to overcome them, rendering the employment of unmanned and autonomous weapons legal under IHL.

Finally, regarding the methodology applied to realise this study, the information contained in this paper was acquired mostly through desk research, using a wide variety of sources. For instance, to establish the legal framework relevant to unmanned and autonomous weapons, treaties, case law and academic literature on the interpretation of the Geneva Conventions and Additional Protocols were examined. For data on civilian casualties caused by unmanned and autonomous weapons, the study relies mostly on the reports of independent fact-finding missions and NGOs. For information on recent events and the latest developments in the technological field, several newspaper articles were compared and cross-verified. The technical specifications of the weapons, instead, were found on the websites of the producing companies or in renowned military magazines. In addition, a long series of UN documents and ICRC papers, together with the findings of several journal articles on the topic, were examined and taken into consideration.


1 Preliminary Considerations on Weapon Systems Categorisation

1.1 Automaticity vs autonomy
1.2 The concept of man "in", "on" and "out" of the loop
1.3 Means and methods of warfare

Colloquially, the words "unmanned" and "autonomous" are sometimes used as synonyms, although the two concepts present important differences. Similarly, the entire category of unmanned weapons is often thought to comprise only drones, such as the famous Predator or Reaper. As a consequence, before diving into the legal and ethical considerations related to the employment of unmanned and autonomous systems, it appears fundamental to proceed with a preliminary clarification of the terminology that will be used in this paper.

1.1 Automaticity vs autonomy

The first distinction that must be made regards the concepts of "automatic" and "autonomous", as the two are often confused. The notion of "automaticity" consists in the ability to perceive the surrounding environment and carry out specific tasks, as predetermined responses to the inputs received from the setting, without the need for any action from another entity.3 The notion of autonomy, instead, comes from two ancient Greek words: "auto", i.e. "self", and "nomos", which means "law"; together they entail the ability of someone or something to create its own rules and act according to them, without any kind of supervision. Accordingly, it is fair to say that all autonomous entities are also automatic, but not vice versa, as the idea of autonomy encompasses that of automaticity, requiring in addition the ability to respond to unforeseen situations.

3 Florian R, 'Autonomous artificial intelligent agents', Centre for Cognitive and Neural Studies.

Based on these reflections, throughout the years there have been several attempts to define autonomous weapons. One of the most relevant is the United States Department of Defense's Directive 3000.09, published in November 2012, which was the first governmental policy on autonomous weapon systems. According to this directive, an autonomous weapon, "once activated, can select and engage targets without further intervention by a human operator".4 The same definition was also adopted by Christof Heyns, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, in his report.5 Similarly, the Campaign to Stop Killer Robots, a coalition of NGOs that aims at banning fully autonomous weapons, defined this kind of armament as systems with "significant autonomy in the critical functions of selecting and attacking targets", and as weapons capable of deciding "who lives and dies, without further human intervention".6 The ICRC, too, developed an analogous definition, stating that all systems that have autonomy in their critical functions, namely the ability to choose and engage targets, must be considered autonomous weapons.7 As a consequence, weapons capable only of moving autonomously (e.g. taking off, landing or flying without the need for human intervention) should not be considered "autonomous" for the purpose of this discussion, as these functions are not particularly relevant for IHL. Likewise, weapons capable of scanning a certain area to identify a person or a signal cannot be considered autonomous, but only automatic. In fact, these robots merely look for a pre-programmed target, something specific that they expect to find, because someone selected that target in advance and programmed them to identify it.8

4 United States Department of Defense, Directive No. 3000.09, 21 November 2012.

5 UNGA, 'Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Christof Heyns' (2013), UN Doc A/HRC/23/47.

6 Campaign to Stop Killer Robots website (https://www.stopkillerrobots.org/learn/).

7 ICRC, 'Views of the ICRC on autonomous weapon systems', paper submitted to the Convention on Certain Conventional Weapons Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), 11 April 2016.

Moving on from these reflections, it is possible to define automatic weapons as machines that "repeatedly perform pre-programmed actions and operate only within tightly set parameters and time frames in comparably structured and controlled environments".9 In other words, although automatic weapons do not need constant human supervision to move, select and attack targets, they must at least have been pre-programmed to carry out these tasks. On the contrary, autonomous robots can also adapt to unexpected situations, readjusting their goals without human input.10 Consequently, an autonomous weapon can be described as "a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future".11

8 Roff H, 'Distinguishing Autonomous from Automatic Weapons' (2016), Bulletin of the Atomic Scientists.

9 Sauer F, 'Stopping "Killer Robots": Why Now Is the Time to Ban Autonomous Weapons Systems' (2016), Arms Control Association.

10 Boothby W, 'Some Legal Challenges Posed by Remote Attack' (2012), International Review of the Red Cross, Vol. 94, No. 886, 579–595.

11 Franklin S, Graesser A, 'Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents', Institute for Intelligent Systems, University of Memphis, Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages, Springer-Verlag, 1996.


1.2 The concept of man “in”, “on” and “out” of the loop

Under the umbrella of the definitions outlined above, it is possible to include a wide range of robotic systems capable of operating with different degrees of human supervision. For example, it is possible to have machines that do not need human intervention to perform their tasks, but that can be overridden if need be; or machines that, once activated, operate on their own, without any kind of communication with their base.

This distinction introduces the concept of man "in", "on" and "out" of the loop of the decisions regarding targeting and attacking: a notion that makes it possible to categorise robotic weapons into three different groups, depending on their degree of independence from human control.

The first category is the one featuring the presence of a man in the loop; it comprises human-controlled weapons, i.e. armaments that need some kind of input or authorisation from a human being before performing all, or some, of their tasks. Drones, and unmanned aerial vehicles in general, usually constitute an example of this first group. In fact, they can be controlled remotely, or pre-programmed, to perform different activities (e.g. locating a target, flying over it, acquiring images, etc.); however, they cannot fire a missile without a real-time input from a human, who "pushes the button", authorising the attack.

The situation is slightly different in relation to the robots that belong to the "man on the loop" class. These weapons can theoretically perform all their tasks independently, including the engagement of targets, without needing to receive authorisation from a human operator. Nevertheless, they can be overridden at any moment; hence, they are also referred to as "human-supervised" weapons.


Finally, those armed systems designed to execute all their tasks without human input, in contexts where even simple communications are not possible, hence unsupervised, compose the "man out of the loop" category.12 This class can be further subdivided into two groups: automated weapons and fully autonomous ones. Both can move, identify targets and attack without the need for authorisation from a human; however, as previously explained, while automated weapons can carry out these tasks only in restricted and somewhat predetermined environments, fully autonomous weapons can act everywhere, in uncircumscribed areas and in unpredictable situations, as they are capable of "learning" and adapting their behaviour to new circumstances.13

According to this categorisation, autonomous systems can be seen as an evolution of "simple" unmanned weapons. While the latter are essentially regular arms, merely controlled from afar, fully autonomous weapons represent a totally new concept of armament. Undoubtedly, this is exactly why their compatibility with IHL entails several problems.14

12 Press M, 'Of Robots and Rules: Autonomous Weapon Systems in the Law of Armed Conflict' (2017) 48 Georgetown Journal of International Law, 3.

13 European Union Directorate-General for External Policies, 'Human Rights Implications of the Usage of Drones and Unmanned Robots in Warfare' (2013).

14 Birkeland J, 'The Concept of Autonomy and the Changing Character of War' (2018), Oslo Law Review.

1.3 Means and methods of warfare


The last point that must be addressed, before moving on to the main issues, regards the difference between means and methods of warfare. Indeed, IHL regulates both these aspects, and Article 35 of Additional Protocol I states that "in any armed conflict, the right of the Parties to the conflict to choose methods or means of warfare is not unlimited".15 This means that belligerents must refrain from using means (i.e. weapons) and methods (i.e. tactics) that do not comply with IHL's principles.

Among the strategies (i.e. methods of war) considered forbidden, it is possible to mention acts of perfidy, starvation, actions intended to spread terror among the civilian population, pillage, reprisals against civilians, etc.16 Similarly, means of warfare such as chemical or biological weapons, cluster bombs and booby-traps are prohibited; consequently, their use must always be avoided, irrespective of the context. The reasons behind these prohibitions lie in the fact that such tactics and weapons cause unnecessary suffering or are, by nature, indiscriminate, meaning that their impact cannot be limited or that an attack featuring them cannot be directed exclusively against a military objective.17 Having said so, the majority of weapons are not incompatible with IHL per se, and the admissibility of their employment depends on the circumstances of their use. This aspect is relevant for the discussion because, according to some, unmanned and autonomous weapons are "mala in se", as they would simply not be capable of complying with IHL's fundamental principles; as a consequence, they should be banned completely. According to others, instead, these weapon systems do not present particular issues and could be legitimately used, provided that the specific circumstances of the attacks do not render their employment incompatible with IHL.

15 Additional Protocol I to the Geneva Conventions of 12 August 1949, relating to the Protection of Victims of International Armed Conflicts (1977).

16 Sassoli M, Bouvier A, Quintin A, 'How Does Law Protect in War? Cases, Documents and Teaching Materials on Contemporary Practice in International Humanitarian Law', ICRC, 2014.

17 ICRC, 'Rule 71. Weapons That Are by Nature Indiscriminate', IHL database.


2 Legal Framework

2.1 Principle of distinction
2.2 Principle of proportionality
2.3 Prohibition of unnecessary suffering
2.4 Principle of necessity
2.5 Accountability gap
2.6 Moral and ethical considerations

After having made some preliminary clarifications, it is necessary to briefly analyse the legal framework applicable to all the different kinds of weapons. Obviously, the most relevant branch of law is IHL: the part of international law that regulates armed conflicts. The primary purpose of this set of norms is the protection of civilians, prisoners of war, soldiers hors de combat and, in general, of all the people who do not, or no longer, take part in hostilities. IHL's most important principles are contained in the four Geneva Conventions of 1949 and in the two Additional Protocols of 1977, relating to the protection of victims of armed conflicts. The first protocol focuses on victims of international armed conflicts, i.e. conflicts that involve at least two States; the second, instead, on victims of non-international (i.e. internal) armed conflicts, which take place in the territory of only one country, between that State's armed forces and irregular groups, or exclusively between such groups. In addition to these norms, IHL is also composed of customary law: a series of international obligations that are not based on written treaties, but that originate from "a general and consistent practice of states that they follow from a sense of legal obligation".18

18 Cornell Law School, 'Definition of Customary International Law', Legal Information Institute.


The peculiarity of this body of law is that its norms are binding on all belligerent parties, independently of the treaties they have ratified.19

19 European Union Directorate-General for External Policies, 'Human Rights Implications of the Usage of Drones and Unmanned Robots in Warfare' (2013), 19.

Having said so, it is important to specify that IHL's field of application is limited to armed conflicts. Situations of mere tension are instead considered peacetime and, as such, the applicable law is human rights law. IHL's prescriptions are usually less strict than those of human rights law, in the sense that some conducts that are allowed during the conduct of hostilities would not be permitted in peacetime. However, even if human rights law grants a wider variety of rights and to a greater extent, it is important to underline that its prescriptions can be derogated from, while IHL's must always be respected in their entirety. Since the subject of this paper is the compatibility of different kinds of weapons with IHL, it is superfluous to specify that the analysis will mostly focus on this branch of law and not on human rights law. Nevertheless, occasional references to human rights law norms might also be made, when particularly relevant for the topic addressed.

2.1 The principle of distinction

IHL has several important principles, but the fundamental ones are distinction, proportionality, the prohibition of unnecessary suffering and necessity.20 The first of these principles is stated in Article 48 of the first Additional Protocol; it obliges combatants to always distinguish between those who are taking part in hostilities and those who are not.

20 Sehrawat V, 'Legal Status of Drones Under Law of Armed Conflicts and International Law' (2017).


The reason is that the former constitute legitimate military targets; hence, if all other legal requirements are respected, they can be lawfully attacked. This category comprises soldiers, military bases, military vehicles, etc. On the contrary, non-combatants such as civilians, medical and religious personnel and people hors de combat cannot be attacked, as they enjoy the protection granted by international humanitarian law. The same protection extends also to civilian infrastructure such as houses, schools, hospitals, churches, etc. Pursuant to the principle of distinction, it is forbidden to use weapons that cannot distinguish between legitimate military targets and protected subjects, as offensive power may be directed only against the former. Similarly, weapons capable of discriminating between civilians and combatants and, hence, of directing their deadly potential only against the latter, are still prohibited if their effects cannot be limited exclusively to the targets of the attack, affecting protected people with collateral damage. Attacks of this kind are defined in Article 51 of Additional Protocol I and are called "indiscriminate attacks" because, despite being launched against lawful targets, they eventually hit both military and civilian subjects and infrastructure, indiscriminately.21

In order to establish whether a weapon is by its nature indiscriminate, it is necessary to understand whether that weapon has the capacity to target a specific objective and strike it accurately or not. This criterion has been cited several times: for example, in the advisory opinion on the legality of the threat or use of nuclear weapons, it was stated that "a weapon will be unlawful per se if it is incapable of being targeted at a military objective only",22 concluding that nuclear weapons are "generally contrary to the rules of international law applicable in armed conflict, and in particular to the principles and rules of humanitarian law".23

21 "Those which employ a method or means of combat the effects of which cannot be limited as required by this Protocol; and consequently, in each such case, are of a nature to strike military objectives and civilians or civilian objects without distinction", Additional Protocol I, Article 51.

22 Advisory opinion on 'The Legality of the Threat or Use of Nuclear Weapons', 8 July 1996, Judge Higgins dissenting opinion.

The second element that needs to be assessed regards the range of the weapon's effects, considering also the type of munitions it uses. In fact, as explained before, weapons capable of being directed exclusively against their targets could still be prohibited if their effects could potentially spread to civilians and other protected subjects, or if the destruction they cause would be too widespread in space and/or time. Biological and chemical weapons were prohibited precisely because their effects are "uncontrollable and unpredictable",24 as there is no effective way to limit the diffusion of their virulence. In this regard, it is necessary to underline that some weapons might lack the ability to discriminate only in certain situations, meaning that they are not indiscriminate per se. Consequently, those weapons could legitimately be employed in all those contexts, such as uninhabited zones or the high seas, where they could not provoke excessive collateral damage.25 Based on the two criteria illustrated above, several weapons, such as incendiary and nuclear ones, cluster bombs, anti-personnel landmines, booby traps, etc., have been identified as indiscriminate and, consequently, banned.26

23 Advisory opinion on 'The Legality of the Threat or Use of Nuclear Weapons', 8 July 1996, Judge Higgins dissenting opinion, dispositive para 2 E, 44.

24 UN General Assembly, Resolution on 'Question of Chemical and Bacteriological (Biological) Weapons', 16 December 1969, UN Doc A/RES/2603(XXIV).

25 Press M, 'Of Robots and Rules: Autonomous Weapon Systems in the Law of Armed Conflict' (2017) 48 Georgetown Journal of International Law, 5.

26 ICRC, 'Rule 71. Weapons That Are by Nature Indiscriminate', IHL database.


Nowadays, however, no treaty states that autonomous or remotely-controlled weapons have an indiscriminate nature.27 Nevertheless, it is important to highlight that, for a weapon to be prohibited, it is not necessary to have a specific treaty that explicitly bans it; indeed, if a weapon does not respect IHL principles, it is automatically unlawful.28

In order to understand whether unmanned and autonomous weapons abide by the principle of distinction, it is necessary to take into account a number of factors. First of all, it is difficult to deny that these systems are more precise in targeting than any other weapon. Secondly, as they do not need to rest, they can collect greater amounts of data, providing better intelligence which, in turn, diminishes the probability of mistakes. Additionally, remotely-piloted and autonomous weapons do not risk soldiers' lives; therefore, they can get very close to targets and, thanks to sophisticated cameras and/or infra-red technology, they can confirm the identity of their objectives before attacking them. As a consequence, the remote operator can abort the mission at the very last moment if he spots civilians approaching the targeted area. On the contrary, traditional missiles, fired from a faraway military base, hit the designated area only several hours after their launch, with the hope of still finding there the subjects expected. In addition, once a missile is launched, there is no way to stop it in case protected people are seen on the spot that is to be struck.29

27 European Union Directorate-General for External Policies, 'Human Rights Implications of the Usage of Drones and Unmanned Robots in Warfare' (2013), 27.

28 Egeland K, 'Lethal Autonomous Weapon Systems under International Humanitarian Law' (2016) 85 Nordic Journal of International Law, 96.

29 Zenko M, 'Reforming U.S. Drone Strike Policies', Council on Foreign Relations, Special Report 65.


Lastly, while missile strikes usually affect a rather wide area, modern unmanned aerial vehicles are equipped with small bombs capable of hitting precise spots without provoking extensive damage.30

30 Brooks R, The Constitutional and Counterterrorism Implications of Targeted Killing: Hearing Before the S. Judiciary Subcomm. on the Constitution, Civil Rights, and Human Rights, 113th Cong., 23 April 2013 (Statement by Professor Rosa Brooks, Geo. U. L. Center), 2.

On the other hand, some experts believe that unmanned (and especially autonomous) weapons are not technologically advanced enough to effectively distinguish between combatants and protected subjects. Modern conflict scenarios do indeed present complex contexts, where combatants purposely try to mix with civilians to benefit from their protected status. Guerrillas seldom wear uniforms and often they are not the only ones who carry arms openly, as in certain areas this is common behaviour also among civilians. Additionally, even without picturing such a complex scenario, many doubt that machines would be able to tell the difference between an active combatant and a soldier who wants to surrender and who should, therefore, be granted protection. Furthermore, even remotely-piloted weapons could not be a hundred percent accurate in targeting and striking, for two main reasons. The first is "latency": an unavoidable delay between what is happening on the spot and the moment in which the corresponding images arrive to the pilot, in the headquarters, via satellite.31 The second is the fact that, although unmanned weapons could theoretically be sent extremely close to targets in order to check their identity, this is seldom done, to avoid spoiling the element of surprise.

31 International Human Rights and Conflict Resolution Clinic (Stanford Law School) and Global Justice Clinic (NYU School of Law), 'Living Under Drones: Death, Injury and Trauma to Civilians from US Drone Practices in Pakistan' (2012).


In conclusion, unmanned and autonomous weapons present some features that make them the best option for minimising collateral damage; however, other features render them unreliable in certain contexts. Consequently, it is not possible to state generically whether these weapons fully comply with the principle of distinction.

2.2 The principle of proportionality

The application of the proportionality principle entails the execution of a prognostic judgement, aimed at weighing the benefits and the harms that would probably follow an attack. In other words, this precept requires commanders to try to foresee the extent of a strike's possible collateral damage and then to compare it with the relevance of the military advantage they expect to gain: the greater the presumed utility of an attack, the more tolerance towards collateral civilian casualties.32 On this point, it is important to specify that the incidental harm must not be "excessive in relation to the concrete and direct military advantage" predicted for that specific attack.33 This means that, when balancing the positive and negative outcomes of a strike, only the military advantage that would be obtained thanks to that single attack should be weighed, without considering its impact on the winning of the whole war. Additionally, this principle states that lethal force can only be used as extrema ratio, when no other measure can effectively be employed to counter an equally dangerous and serious threat. These requirements entail that, in planning every strike, commanders must evaluate all the available options, in terms of modalities, choice of weapons and timing of the offensive, and take all possible precautions, considering, for example, the possibility of issuing evacuation warnings for the civilian population.

32 Additional Protocol I, Article 51(5)(b).

33 Sehrawat V, 'Legal Status of Drones Under Law of Armed Conflicts and International Law' (2017).


Outside the conduct of hostilities, i.e. when human rights law is applicable, the principle of proportionality sets even stricter requirements. In such a case, in fact, the principle of proportionality does not apply with regard to civilians only, but also to the individuals who are about to be hit. In other words, in peacetime the use of force (especially potentially lethal force) should always be kept to a minimum level, to avoid causing excessive damage to both innocent bystanders and criminals. Having said so, this does not mean that, outside wartime, incidental harm is never admitted. The matter is indeed very problematic and, according to some scholars, in extreme situations it would even be possible to intentionally provoke the certain death of innocent people in order to protect several others.34 According to another school of thought, instead, the possibility of killing bystanders on purpose, or with a very high probability, even with the aim of protecting several other innocents, must be excluded, as it would be a violation of the human right to life. In particular, in 2006 the German Federal Constitutional Court was asked whether it would be lawful to shoot down a hijacked plane directed by terrorists towards a civilian target. The Bundesverfassungsgericht stated that a law authorising such behaviour would be incompatible with the German constitution, which protects the right to life.35

34 European Union Directorate-General for External Policies, 'Human Rights Implications of the Usage of Drones and Unmanned Robots in Warfare' (2013), 32.

35 Youngs R, 'Germany: Shooting down aircraft and analyzing computer data', International Journal of Constitutional Law, Volume 6, Issue 2, April 2008, 331–348. See: German Federal Constitutional Court, "Luftsicherheitsgesetz" case.


In the light of these reflections, do unmanned and autonomous weapons comply with the principle of proportionality? While the former do not present particular issues, because their remote pilots can assess the proportionality of attacks from afar, the latter appear problematic. The reason is that it seems unlikely that robots would be capable of carrying out complex judgements, such as the ones required by the proportionality principle. In addition, outside of warlike contexts, it would be quite difficult to prove that the individuals targeted by these weapon systems were posing an immediate threat to human life, such as to require the use of lethal force, considering that they would be threatening inanimate machines and not soldiers. Nevertheless, these considerations do not imply that these weapons are per se incompatible with the principle of proportionality. Indeed, autonomous weapons could be employed in contexts where the probability of causing incidental harm to civilians is minimal; this way, the proportionality judgement would be carried out by human commanders before deciding to use autonomous machines in that area, avoiding the need for the weapon systems to make the judgement themselves.

2.3 Prohibition of unnecessary suffering

This principle, already stated in the Preamble of the 1868 St. Petersburg Declaration, in the Hague Regulations of 1899 and in Article 35(2) of Additional Protocol I, is now considered customary law.36 It restricts adversaries' freedom of choice of means and methods of warfare. In fact, due to the prohibition of unnecessary suffering, combatants are not entitled to employ weapons, or tactics, designed to inflict superfluous or unnecessary suffering on their enemies; nevertheless, they can still lawfully kill adversary soldiers. In other words, although it is almost inevitable to somehow injure adversaries taking part in hostilities, IHL prohibits making them suffer beyond what is strictly necessary to put them hors de combat.

36 ICRC, 'Rule 70. Weapons of a Nature to Cause Superfluous Injury or Unnecessary Suffering', IHL database.


This principle, therefore, provides some kind of protection to all those individuals who are identified as legitimate military targets.

The problem with this precept is that it is quite difficult to establish whether a means or method of warfare causes unnecessary suffering, as the discriminating factor resides in its ability to serve a specific military purpose, rather than simply injuring the enemy. The International Court of Justice simply defined unnecessary suffering as "harm greater than that unavoidable to achieve legitimate military objectives".37 Consequently, some States deduced that this prohibition requires carrying out a judgement similar to the one prescribed by the proportionality principle. In particular, a weapon or a tactic would not violate the prohibition of unnecessary suffering if its use led to a legitimate military advantage that outweighs the suffering that the attack is expected to cause to enemy fighters. The intensity of suffering should be calculated on the basis of the level of pain, the eventual permanent health problems, and the likelihood of death.38 Additionally, according to some, the balancing judgement should also take into account the potential availability of alternative means or methods of warfare.39

37 International Court of Justice, 'Legality of the Threat or Use of Nuclear Weapons', Advisory Opinion of 8 July 1996, para 78.

38 Weapons Law Encyclopedia, 'Superfluous Injury or Unnecessary Suffering' (http://www.weaponslaw.org/glossary/superfluous-injury-or-unnecessary-suffering).

39 These tests are contained in the military manuals of Australia, Canada, Ecuador, France, Germany, New Zealand, South Africa, the United States and Yugoslavia, and in statements of Belarus, India, the Netherlands, the United Kingdom and the United States. ICRC, 'Rule 70. Weapons of a Nature to Cause Superfluous Injury or Unnecessary Suffering', IHL database.


Due to the prohibition of causing unnecessary suffering, the use of several weapons has been prohibited or restricted. Although there is no unanimous consensus, among forbidden weapons it is possible to find lances or spears with a barbed head, explosive and expanding (also known as "dum-dum") bullets, poison and poisoned weapons, biological and chemical weapons, bullets with fragments that cannot be detected by X-rays, incendiary weapons, blinding lasers and nuclear weapons.40 Similarly, prohibited methods of warfare include attacking combatants treacherously, ordering that no quarter be given, killing or wounding an enemy who has surrendered and making improper use of the flag of truce, of Geneva Conventions badges or of military or national symbols.41

Nowadays, there are no explicit references to unmanned or autonomous weapons' compliance with the prohibition of causing unnecessary suffering. This uncertainty is most likely due to the difficulties linked with the identification of a precise and stable threshold. According to the legal test described above, to establish whether these weapons could legally be employed it would be necessary, for example, to calculate the likelihood of death for the victims of their attacks. However, it would be extremely difficult to predict such a thing, as the simple advance of medical techniques constantly increases victims' survival chances, making formerly banned weapons acceptable. Similarly, it is extremely difficult to rate the intensity of the pain suffered by someone. Additionally, assessing the compliance of autonomous weapons with the prohibition of unnecessary suffering is more complicated than assessing the lawfulness of any other weapon. This is due to the fact that a commander has to make all the calculations illustrated above as soon as he decides to employ autonomous weapons, i.e. long before the actual strike.

40 Henckaerts J, Doswald-Beck L, 'Customary International Humanitarian Law. Volume I: Rules', ICRC, Cambridge (2005), 243.

41 Hague Regulations 1907, Convention (IV) respecting the Laws and Customs of War on Land.


This means that, right before the attack, the situation could radically change, without the possibility for the commander to modify the autonomous weapons' directives; consequently, to be lawfully employed, autonomous weapons should feature extremely reliable and predictable operation, even in unexpected situations.42

2.4 Principle of necessity

The prohibition of unnecessary suffering examined above is strictly related to the idea of military necessity, as IHL essentially tries to balance the opposing pulls of these two concepts. In this regard, the principle of necessity states that combatants can employ whatever means and methods they choose, provided that they are not otherwise prohibited by IHL, to accomplish a legitimate military objective. In armed conflicts, the only legitimate purpose is to weaken the military power of the enemy, not to completely defeat it.43

The concept of military necessity is significantly more relaxed than that of "absolute necessity" related to the use of lethal force; in fact, it does not require the presence of an imminent threat for an attack to be lawful. Nevertheless, even the notion of military necessity has limits: an attack against protected subjects could never be justified by claiming that it was useful to weaken the enemy.

42 Davison N, 'A Legal Perspective: Autonomous Weapon Systems Under International Humanitarian Law', International Committee of the Red Cross, 31 January 2018, 10.


In this respect, Article 52(2) of Additional Protocol I specifies that "Attacks shall be limited strictly to military objectives. In so far as objects are concerned, military objectives are limited to those objects which by their nature, location, purpose or use make an effective contribution to military action and whose total or partial destruction, capture or neutralization, in the circumstances ruling at the time, offers a definite military advantage".44

Consequently, autonomous weapons should be programmed in a way that enables them not only to distinguish between civilians and combatants, but also to understand the difference between an improvised military camp and a shelter for displaced civilians. Additionally, autonomous weapons should obey the rule set forth in Article 52(3) and, in case of doubt as to whether an object usually dedicated to civilian purposes is actually being used for military ones, they should always presume that it is not, and refrain from attacking it.45 Once again, unmanned weapons do not present particular problems with respect to this principle, as their compliance depends on how they are employed by their remote pilots. Autonomous weapons, instead, should have software sophisticated enough to allow them to respect the rules described above.

2.5 Accountability gap

After having examined some of the most important IHL principles, one could wonder what would happen if it were proven that a robot had violated one of those principles. Clearly, the fact that unmanned and autonomous weapons can perform several tasks without human input does not imply the ability to bear responsibility for their own actions. Consequently, the employment of such weapons proves particularly complicated in relation to the question of accountability for IHL violations.

44 Additional Protocol I to the Geneva Conventions of 12 August 1949, relating to the Protection of Victims of International Armed Conflicts (1977), art 52, para 2.

45 Additional Protocol I to the Geneva Conventions of 12 August 1949, relating to the Protection of Victims of International Armed Conflicts (1977), art 52, para 3.


This is so for two main reasons. The first relates to the general possibility of holding a State accountable for the IHL violations it committed through the employment of unmanned or autonomous weapon systems. The second relates to the difficulty of identifying the person actually responsible for the unlawful acts.

In relation to the first issue, Additional Protocol I, in Article 91, states that "A Party to the conflict which violates the provisions of the Conventions or of this Protocol [...] shall be responsible for all acts committed by persons forming part of its armed forces".46 Clearly, in 1977 the construction of robotic weapons was considered science fiction, making it difficult to imagine that phrasing the norm this way would originate an accountability gap. Indeed, a literal interpretation of the article significantly restricts the scope of the rule, excluding all the violations committed through the employment of autonomous (and perhaps also unmanned) weapons from those that can be invoked to hold a State accountable for its crimes. However, to fully understand the meaning and the scope of the quoted article, it is necessary to conduct a systematic interpretation of the rule, examining also the content of two other provisions: Articles 50 and 51 of the First Geneva Convention.

Article 50 lists a number of IHL violations, such as "causing great suffering or serious injury to body or health, and extensive destruction and appropriation of property, not justified by military necessity and carried out unlawfully and wantonly". According to Article 50, such behaviours are considered grave breaches of IHL when committed against protected subjects.47

46 Additional Protocol I to the Geneva Conventions of 12 August 1949, relating to the Protection of Victims of International Armed Conflicts (1977), art 91 (2).

47 First Geneva Convention for the Amelioration of the Condition of the Wounded and Sick in Armed Forces in the Field (1949), article 50.


Undoubtedly, a weapon that does not respect IHL's fundamental principles would fall within the scope of this provision, as it would surely inflict unlawful damage on protected people. However, if such a weapon were a fully autonomous one, it would be extremely difficult to hold a State accountable for its violations, due to the wording of Article 91 of Additional Protocol I. Luckily, Article 51 of the First Geneva Convention widens States' accountability, specifying that "No High Contracting Party shall be allowed to absolve itself [...] of any liability incurred [...] in respect of breaches referred to in the preceding Article".48 Consequently, no matter how a grave breach of IHL is realised, a State will be responsible for it, even if the violation was not committed "by persons forming part of its armed forces".

What does this imply? When a State violates international law, it must cease the actions or omissions that caused the violation and offer appropriate reparation to the victims.49 Additionally, it must investigate and prosecute the people who materially committed the IHL breaches, leading to individual criminal responsibility.50 In fact, as prescribed in the Rome Statute, the authors of serious IHL breaches are subject to the jurisdiction of the International Criminal Court, in complement to national courts.51 Having established that, however, it is still necessary to identify who should actually be considered responsible for the aforementioned violations.

48 First Geneva Convention 1949, article 51.

49 Diakonia, 'Legal Accountability for Violations of International Humanitarian Law: An introduction to the legal consequences stemming from violations of international humanitarian law' (2013), chapter 2, para a.

50 International Human Rights and Conflict Resolution Clinic (Stanford Law School) and Global Justice Clinic (NYU School of Law), 'Living Under Drones: Death, Injury and Trauma to Civilians from US Drone Practices in Pakistan' (2012), 122.


For this purpose, while unmanned weapons could be considered mere tools in the hands of mindful pilots, who would hence retain responsibility for their actions, the issue is, once again, much more complicated when it comes to autonomous weapons. Indeed, to close the accountability gap, it is possible to hypothesise and consider several potential solutions, each with its strengths and weaknesses.

The first possibility would be to consider the State that employed the weapons responsible for the violations. In cases of malfunction, instead, it could be argued that the producing company should be held accountable. The advantage of this option is that both the State and the producing company would probably be wealthy enough to grant appropriate reparation to potential victims. However, the State or the company could only be obliged to pay compensation; no action for individual criminal responsibility could be brought against them, as they are not individuals. Hence, this option must be rejected.

To avoid impunity, a second option could consist in establishing that the engineers who programmed the weapons are personally responsible in cases of IHL violations committed due to malfunctions. While this might prima facie appear an acceptable solution, it is necessary to consider that both unmanned and autonomous weapon systems operate thanks to software based on complex lines of code. This code is written by teams of people, and no single programmer has a clear overview of the whole software or of how it will operate in every context. This circumstance represents an insurmountable obstacle to the attribution of criminal responsibility, as the latter only comes into play when there is something an individual can be blamed for. In other words, if nobody retains a meaningful degree of control over these weapons' software, it is quite difficult to demonstrate that someone should be held accountable for the violations committed by a machine he/she could not fully control.52 In reality, every engineer would have contributed to the weapon's behaviour (and perhaps malfunctioning) only to a small degree, and such a minimal contribution would not be enough to hold him/her responsible. Clearly, if it were possible to demonstrate that a programmer intentionally inserted malicious code into the software, the situation would be completely different, and that person could be prosecuted.53

Another possible solution to the accountability issue is represented by command responsibility. Indeed, a commander bears criminal responsibility for the crimes committed by his subordinates if he knew, or should have known, that such crimes were being committed and did not take all feasible measures to prevent or repress such conduct.54 Consequently, if it were established that autonomous weapons must be treated like human soldiers, commanders would bear full responsibility for all the breaches committed by the machines they decided to employ. Having said so, if the violations were committed due to malfunctioning, it would be unreasonable to punish the commanders, as they would have had a reasonable expectation of having received weapons without defects and no way to foresee that an autonomous weapon was about to commit a crime. Similarly, it would be unfair to hold commanders responsible in cases of IHL breaches caused by enemy cyber-attacks. Indeed, according to some scholars, in this case it would also be possible to invoke the force majeure clause to avoid liability, considering that the malfunctioning would have originated from external factors, outside the commander's control.55

52 Heyns C, 'Human Rights and the Use of Autonomous Weapons Systems (AWS) during Domestic Law Enforcement', 38 Human Rights Quarterly, 350–378 (2016), 373.

53 Egeland K, 'Lethal Autonomous Weapon Systems under International Humanitarian Law' (2016) 85 Nordic Journal of International Law, 111.


However, if, on the contrary, an autonomous weapon committed a long series of violations over a prolonged period of time, a commander could actually be considered responsible for those violations. In fact, he should have noticed that something was wrong with that machine and, consequently, should have taken measures to stop the violations (e.g. by repairing it or, at least, by ceasing to employ that autonomous weapon).56

The last solution to the accountability gap worth considering is the introduction of a specialised figure, whose precise task would be to constantly monitor the behaviour of autonomous weapons. Such a person would have the duty to override the weapon systems if and when they were about to commit a violation. In case of failure to do so, the "guardian" could be held responsible for the violations. However, it is important to highlight that, in this case, the weapons would not be fully autonomous, as there would be a person on the loop.

2.6 Moral and ethical considerations

From the brief analysis of IHL principles conducted above, it is clear that the fundamental purpose of this branch of law is to limit the negative effects of armed conflicts on both combatants and non-combatants. To protect the former, IHL restricts the means and methods that adversaries can employ against each other, trying to limit the amount of suffering that soldiers might end up bearing.

55 European Union Directorate-General for External Policies, 'Human Rights Implications of the Usage of Drones and Unmanned Robots in Warfare' (2013), 39.

56 Egeland K, 'Lethal Autonomous Weapon Systems under International Humanitarian Law' (2016) 85 Nordic Journal of International Law.


Similarly, to protect the latter, IHL introduces several other limitations, aimed at keeping people hors de combat and civilians safe during armed conflicts by reducing the risk of collateral casualties. These goals can ultimately be summed up in the so-called "principle of humanity". This precept is not precisely defined but, in essence, it forbids the infliction of all kinds of superfluous suffering, requiring combatants and non-combatants to be treated humanely, preserving the "inherent worth and dignity of the person"57 at all times.58

Moving on from this reflection: is it possible for a weapon system to treat individuals "humanely"? According to many, unmanned weapons, piloted from afar by soldiers as if they were videogames, deprive victims of their dignity. The reason is that killing people in this way is equivalent to telling them that their lives are not worth putting even a single soldier at risk.59 Obviously, the case of autonomous weapons represents an even worse scenario. In such a situation, in fact, the decision to use lethal force against a human being is taken by a computer algorithm which, by its nature, has neither morality nor a sense of what dying means. As a consequence, the life of the targeted person "is literally reduced to numbers: the zeros and the ones of bits".60

57 Fast L, 'Unpacking the principle of humanity: Tensions and implications', International Review of the Red Cross (2016), 97 (897/898), 111–131.

58 ICRC, 'International Humanitarian Law: Answers to Your Questions', ICRC (2015), 6.

59 Roff H, 'The Strategic Robot Problem: Lethal Autonomous Weapons in War', Journal of Military Ethics, Volume 13, Issue 3, 214.

60 Heyns C, 'Human Rights and the Use of Autonomous Weapons Systems (AWS) during Domestic Law Enforcement', 38 Human Rights Quarterly (2016).

(35)

30

relating to him”.61 Clearly, if such a right is granted in relation to creditworthiness or

work performances, under no doubt the same right should a fortiori be guaranteed in relation to life and death matters.

According to another school of thought, however, no weapon system would harm human dignity and, actually, autonomous machines might end up acting more “humanely” than real human beings. The first reason behind this opinion lies in the fact that robots experience neither fear nor stress; hence, their decision-making processes are not hurried by the survival instinct that urges humans to do whatever they can to preserve themselves. This implies that autonomous weapons can collect more data on which to base their decisions and, if correctly programmed, could actually make more accurate distinctions between combatants and civilians than real soldiers. Furthermore, fully autonomous robots cannot feel hate or rage; hence, they do not seek revenge on the enemy they are fighting. Certainly, these weapon systems would not torture prisoners, rape civilians, or pillage. In other words, their employment might actually contribute to the reduction of war crimes, by ensuring better protection for both combatants and people who do not take part in hostilities.62

62 Lark M, ‘The Future of Killing: Ethical and Legal Implications of Fully Autonomous Weapon Systems’, Salus Journal, Volume 5, Issue 1 (2017).

3 Different Kinds of Weapon Systems

3.1 Weapon systems employed in the air
3.2 Weapon systems employed on the land
3.3 Weapon systems employed in the sea

After having briefly recalled IHL’s key provisions, and before examining how unmanned and autonomous weapons are employed in practice during military operations, it is appropriate to form a more precise idea of what these systems actually look like. Although all these weapons share many similar characteristics, for the sake of clarity this chapter will be divided into three sections, each of them dealing with weapon systems employed in a different environment: air, land and sea.

3.1 Weapon systems employed in the air

The first category of weapon systems that will be taken into consideration is perhaps the most famous one. Indeed, in the collective imagination, unmanned and autonomous weapons are immediately, and almost exclusively, associated with drones. The two most famous kinds of drones are the RQ-1 “Predator” (redesignated MQ-1 once armed) and the MQ-9 “Reaper”, both produced by the American company General Atomics. The two models are respectively 8 and 11 meters long, have wingspans of 15 and 20 meters and, to take off, both need a runway at least 600 meters long. These unmanned aerial vehicles are equipped with high-resolution cameras and infrared technology and can transmit the images they are acquiring, in real time and anywhere in the world, via satellite connection. Both models can fly for a maximum of 24-28 hours (depending on the load); however, the Predator can reach a maximum speed of 217 km/h, while the Reaper can reach up to 482 km/h. The two weapons are armed with Hellfire missiles63 and Paveway bombs64 and can be employed to destroy both buildings and moving vehicles, as well as to keep a certain area under surveillance, or to go on patrol and offer support to ground forces.65

63 “Each missile is a miniature aircraft, complete with its own guidance computer, steering control and propulsion system. The payload is a high-explosive, copper-lined-charge warhead powerful enough to burn through the heaviest tank armor in existence.” Harris T, ‘How Apache Helicopters Work’, How Stuff Works, 2nd April 2002. Available at: https://science.howstuffworks.com/apache-helicopter.htm

64 “Laser Guided Bombs (LGB) were introduced in 1968 to meet the requirements for precision guided bombs of the US military. The semi-active LGBs home on reflected laser beam energy directed on the target. The target illumination can be done by the launching aircraft, by a third aircraft or by ground-based troops operating a laser designator. The LGBs are in fact a laser guidance kit applicable to current conventional unguided bombs. The Laser Guided Bombs have reduced the number of weapons requested to destroy a single target while enhancing accuracy, reliability and cost-effectiveness in strike missions”. The latest model of Paveway bombs “is suitable against small, hardened targets such as battle tanks and other armored vehicles. This bomb also features a reduced collateral damage probability due to its lightweight warhead.” Deagel, ‘GBU-49 Paveway II’, Deagel.com, 3rd May 2019.

Most Hellfire missiles and Paveway bombs are guided by a laser system, meaning that the drone, or the ground forces, must continuously illuminate the designated target with a high-intensity laser beam (configured with a particular pulse pattern, which is also set on the missile or bomb) until the strike has been completed. This aiming system is very precise and accurate; however, if the laser beam is suddenly blocked by an obstacle, or if the weapon loses sight of the reflected laser light, it has no way to find its target. Additionally, since the laser beam must keep illuminating the target until the strike has been performed, the laser operator is forced to maintain an unobstructed line of sight to the target, without any cover in between, exposed to enemy fire. To fix these issues, more recent models of bombs and missiles have been equipped with radar or GPS guidance to identify their targets.66

66 Harris T, ‘How Apache Helicopters Work’, How Stuff Works, 2nd April 2002.
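To make the pulse-pattern mechanism described above concrete: a semi-active seeker accepts only laser returns whose pulse timing matches the code it shares with the designator, which is also why the weapon goes blind the moment the coded reflection disappears. The short Python sketch below is a simplified illustration, not any real seeker’s logic; the repetition interval, tolerance and function name are invented for the example:

```python
# Simplified illustration of pulse-code matching in a semi-active laser
# seeker. Real designators use dedicated pulse codes; the values and
# tolerances below are invented for the example.

EXPECTED_INTERVAL_S = 0.050   # hypothetical pulse repetition interval (20 Hz)
TOLERANCE_S = 0.002           # how much timing jitter the seeker accepts


def matches_code(pulse_times: list[float]) -> bool:
    """Return True if the intervals between detected laser pulses match
    the programmed pulse repetition interval within tolerance."""
    if len(pulse_times) < 2:
        return False  # not enough pulses to measure an interval
    intervals = [b - a for a, b in zip(pulse_times, pulse_times[1:])]
    return all(abs(i - EXPECTED_INTERVAL_S) <= TOLERANCE_S for i in intervals)


# Returns from the friendly designator (a steady 20 Hz train) are accepted...
print(matches_code([0.000, 0.050, 0.100, 0.151]))   # True
# ...while stray or hostile laser energy at the wrong rate is rejected.
print(matches_code([0.000, 0.031, 0.093, 0.120]))   # False
```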

Both the Predator and the Reaper are able to navigate within a pre-determined area, where they can autonomously look for specific targets. Once the drones identify the targets, however, they need an input from a remote pilot to engage their objectives.67 In general, their performance, in terms of speed and evasive action, is not comparable with that of traditional military aircraft. However, considering that most modern wars are asymmetrical, drones have proved ideal for fighting armed groups devoid of proper anti-aircraft artillery.68 Precisely for this reason, the whole field of unmanned aerial vehicle technology has undergone exponential development in recent years. Compared to the Predator, which was first used in 1995, during the NATO operation “Deliberate Force” in Bosnia, and to the Reaper, which flew for the first time in 2005, in the Balkans, modern drones present significant improvements from many different perspectives.

67 Lark M, ‘The Future of Killing: Ethical and Legal Implications of Fully Autonomous Weapon Systems’, Salus Journal, Volume 5, Issue 1 (2017).

68 European Union Directorate-General for External Policies, ‘Human Rights Implications of the Usage of Drones and Unmanned Robots in Warfare’ (2013).
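The division of labour just described, autonomous search but human-authorised engagement, can be sketched as a small state machine. The following Python fragment is a hypothetical illustration of that control flow, not actual Predator or Reaper software; the state names and the authorisation flag are invented:

```python
from enum import Enum, auto


class State(Enum):
    PATROL = auto()             # navigating the pre-determined area
    TARGET_IDENTIFIED = auto()  # candidate target found autonomously
    AWAITING_AUTH = auto()      # waiting for the remote pilot's input
    ENGAGING = auto()


def step(state: State, target_found: bool, pilot_authorises: bool) -> State:
    """One tick of the (hypothetical) control loop: the drone searches on
    its own, but the transition to ENGAGING always requires a human input."""
    if state is State.PATROL:
        return State.TARGET_IDENTIFIED if target_found else State.PATROL
    if state is State.TARGET_IDENTIFIED:
        return State.AWAITING_AUTH
    if state is State.AWAITING_AUTH:
        # The human-in-the-loop gate: without authorisation, no engagement.
        return State.ENGAGING if pilot_authorises else State.AWAITING_AUTH
    return state


s = State.PATROL
s = step(s, target_found=True, pilot_authorises=False)   # TARGET_IDENTIFIED
s = step(s, target_found=True, pilot_authorises=False)   # AWAITING_AUTH
s = step(s, target_found=True, pilot_authorises=True)    # ENGAGING
print(s)
```

Note how this differs from the “guardian” sketch in chapter 2: there a human could merely veto an otherwise autonomous engagement (a person on the loop), whereas here no engagement happens at all without an affirmative human input (a person in the loop).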

One of the newest models is the Russian “OWL”, presented just a few months ago at the “Army 2019” defence exhibition held in Kubinka (Moscow). The robot can fly for up to 40 minutes and features a laser beam capable of guiding artillery. It weighs only 5 kilograms and can easily be transported, launched and then

Table 6.3 Clinical Features of Viral Hemorrhagic Fevers Caused by Agents with a High Risk of Being Used as Bioweapons BiologicalIncubationMortality AgentDistinguishing