Rise of the Machines?

The U.S. Approach to Autonomous Weapons Systems.
Matt Donnelly

The rapid advancement of autonomous weapon systems (AWS) will have a significant geopolitical impact, one that requires policy balancing the technology's moral and tactical advantages against the ethical and strategic dangers of militarizing artificial intelligence. How AWS are defined shapes both how policies are written and what role AI and robotics will play in future militaries. When does a weapon become autonomous? When does an aim-assistance computer become the weapon itself? Debate over these questions has produced three categories of machine autonomy in weapons: in the loop, on the loop, and out of the loop. [1] With in-the-loop systems, "a human is still 'in the loop,' making the pivotal decisions about which target to engage and whether to authorize the attack." [1] The machine is only semiautonomous, and the human remains responsible for authorizing its actions. On-the-loop systems are "capable of detecting, selecting and engaging targets on their own, but are supervised by a human operator, who retains the ability to intervene." [1] Finally, out-of-the-loop systems are "robots capable of selecting targets and delivering force without any human input or interaction." [2] This third category is the one of greatest concern to advocates of treaties and bans on AI warfare.
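The distinction among the three categories can be sketched as a toy decision rule. This is a hypothetical illustration only, not the logic of any fielded system; the names `ControlMode` and `may_engage` are invented for this sketch.

```python
from enum import Enum

class ControlMode(Enum):
    IN_THE_LOOP = "in the loop"          # human authorizes every engagement
    ON_THE_LOOP = "on the loop"          # machine acts; human supervises and may veto
    OUT_OF_THE_LOOP = "out of the loop"  # machine selects and engages unaided

def may_engage(mode, human_authorized=False, human_vetoed=False):
    """Return True if an engagement may proceed under the given control mode."""
    if mode is ControlMode.IN_THE_LOOP:
        return human_authorized      # nothing happens without explicit approval
    if mode is ControlMode.ON_THE_LOOP:
        return not human_vetoed      # proceeds unless the supervisor intervenes
    return True                      # out of the loop: no human input at all
```

The sketch makes the policy stakes concrete: only in the third branch does the machine's decision depend on no human variable at all.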


One of the chief criticisms of AWS is that their tactical drawbacks outweigh their tactical advantages; an examination of these systems, however, suggests otherwise. First, it is important to establish what advantages AWS offer militaries. Doug Livermore observes that "The Russian Federation is enthusiastically pursuing fully autonomous systems that can identify and engage targets completely independent of human control." [3] Robots also make excellent patrol and border-security assets because their vigilance never flags. [4] Out-of-the-loop machines are well suited to creating an uncrossable red line between forces, such as the border between North Korea and South Korea. [5] However, there are legitimate concerns about the tactical dangers of AWS that policymakers and military officials should acknowledge. For example, AWS make decisions based only on the set of rules or instructions they are given. [2] Unpredictable and unusual situations could therefore cause an AWS to act harmfully when the circumstances did not call for lethal force. Developers should be aware of these dangers and take precautions that preclude the unnecessary use of lethal force.


Another critical element of the debate over AWS is whether the use of autonomous weapons in warfare is ethical. The issue is more complex than the justification that replacing human lives at risk with machinery makes the technology moral; some argue that this lower risk of human casualties will lower the threshold required to enter battle. [6] There are several arguments against the ethical validity of autonomous weapons. First, many of the most proficient scientists in the field of robotics have publicly decried the dangers of developing artificial intelligence that can not only make independent decisions but also exercise lethal force. [7] In "Autonomous Weapons: An Open Letter from AI & Robotics Researchers," signed by 3,978 AI and robotics researchers and 22,539 others, the signatories warn: "Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people." [6] Another argument for the immorality of AWS is that "the alleged moral disengagement by remote pilots will only be exacerbated by the use of autonomous robots." [8] There are, however, strong counterarguments that legitimize enlisting AWS in the military. "Military ethicist George Lucas Jr. points out, for example, that robots cannot feel anger or a desire to 'get even' by seeking retaliation for harm done to their compatriots." [5] This clarity of judgment outweighs the machines' potential for improper reactions to complex situations, especially since human agents suffer from that same problem, compounded by blinding emotions.
Both sides considered, AWS likely offer a greater degree of tactical efficiency, although, like all software, they run the risk of a costly malfunction.


The current policies regarding AWS may suffice for the technology developed so far, but the near-limitless potential of autonomous weapons means it may not be long before the technology outpaces the policy. In November 2017, the United Nations Group of Governmental Experts discussed the best policy approach to AWS. Three approaches currently dominate. First, some countries, including the United States, Russia, and China, reject the creation of binding treaties or legal codes regarding AWS. The second group, which includes France and Germany, advocates reasonable political control over autonomous weapons. The last and largest group, with over 100 of the 125 contracting nations in agreement, argues "that the use of LAWS will fundamentally change the nature of relations in war and peace." [9] Policymakers must strike a reasonable balance between allowing the development of more efficient and reliable AWS and implementing safeguards against their misuse.
Considering these viewpoints, the best recommendation for U.S. policy would be to agree to a ban on out-of-the-loop AWS.

While it may seem counterintuitive for the U.S. to willingly limit a technology in which it leads, there are two reasons why this is the best policy. First, out-of-the-loop AWS are dangerous: the technology is too new, and a malfunction carries a grave threat to property and life. Second, the U.S. is already the dominant global military force, and the further development of AWS, especially out-of-the-loop systems, could level the combat field and erode that dominance. For these two reasons, it would be reasonable for the U.S. to adopt a policy that supports autonomous weapons systems, with the exception of out-of-the-loop mechanisms.


[1] Natalie Salmanowitz, "Explainable AI and the Legality of Autonomous Weapon Systems," Lawfare, 21 November 2018. https://www.lawfareblog.com/explainable-ai-and-legality-autonomous-weapon-systems
[2] Amitai Etzioni and Oren Etzioni, "Pros and Cons of Autonomous Weapons Systems," Military Review, Army University Press, May-June 2017. https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems
[3] Doug Livermore, "Balancing Effectiveness and Ethics in Future Autonomous Weapons," Small Wars Journal. http://smallwarsjournal.com/jrnl/art/balancing-effectiveness-and-ethics-future-autonomous-weapons#_edn3
[4] Kelsey D. Atherton, "Are Killer Robots the Future of War? Parsing the Facts on Autonomous Weapons," New York Times, 15 November 2018. https://www.nytimes.com/2018/11/15/magazine/autonomous-robots-weapons.html
[5] James Foy, "Autonomous Weapons Systems: Taking the Human Out of International Humanitarian Law," Dalhousie Journal of Legal Studies, 2014.
[6] Future of Life Institute, "Autonomous Weapons: An Open Letter from AI & Robotics Researchers," 28 July 2015. https://futureoflife.org/open-letter-autonomous-weapons/
[7] Ariel Conn, "The Risks Posed by Lethal Autonomous Weapons," Future of Life Institute, 4 September 2018. https://futureoflife.org/2018/09/04/the-risks-posed-by-lethal-autonomous-weapons/
[8] Noel Sharkey, "Saying 'No!' to Lethal Autonomous Targeting," Journal of Military Ethics, 16 December 2010. https://www.tandfonline.com/doi/abs/10.1080/15027570.2010.537903
[9] Denise Garcia, "Governing Lethal Autonomous Weapon Systems," Ethics & International Affairs, December 2017. https://www.ethicsandinternationalaffairs.org/2017/governing-lethal-autonomous-weapon-systems/
