Pandora’s Bots: AI Technology in Warfare

By: Oliver Greive, HCI Foundations, Indiana University M.S. HCI/d Fall 2019

"We knew the world would not be the same. A few people laughed, a few people cried, most people were silent. I remembered the line from the scripture, the Hindu Bhagavad-Gita…‘Now I am become death, the destroyer of worlds.’ I suppose we all thought that, one way or another."

-Robert Oppenheimer, The Decision to Drop the Bomb (1965)

Introduction

World War I began with infantry standing shoulder to shoulder on the battlefield, cavalry on horseback with swords sheathed at the hip. The year was 1914, and for centuries European warfare had been a gentlemanly affair. Families would gather on hilltops to watch the spectacle of battle below like fans at a football stadium. The Great War concluded in 1918 with the tank, the machine gun, trench warfare, and mustard gas. No families would attend these battles. This conflict of the early 20th century is said to have marked the transition into “modern warfare,” as the trends of large-scale industrialization and mass production were applied to the battlefield for the first time. Technology and warfare have developed side by side for millennia, and have mutually shaped one another as a result. As warfare is arguably the most extreme and consequential example of human-human interaction imaginable, the design and implementation of technology in the context of war provides an interesting case to examine from the perspective of Human-Computer Interaction.

Today, world governments allocate extraordinary sums of money to the development of military technologies. As such, the artifacts and systems produced by these states are inherently political in nature, fulfilling the needs to defend one’s own troops and to attack, injure, or intimidate enemy forces. As computers have become increasingly prevalent in warfare since the 1950s, HCI plays a crucial role in the design and implementation of military technologies today. In particular, the increasing adoption of artificial intelligence (AI) in warfare has generated significant controversy worldwide, and is especially important to consider for the field of HCI. This paper will explore the topic of technology in warfare from an HCI perspective, with an emphasis on agency, technological politics, and ethical considerations. This is neither a critique nor an endorsement of war, but rather a broader examination of technology within this context.

A Technological Survey of Warfare

Warfare can be defined as an armed conflict between two or more opposing parties. For the purposes of this paper, the semantic distinctions of an “armed conflict” and “opposing parties” will be avoided by directly referring to current and historical examples of warfare. To begin, it is helpful to view the phenomenon of warfare through the lens of Actor-Network Theory as presented by Cila et al., which posits an “ontological symmetry of humans and nonhumans in networks of relations” (Cila et al., 2017, p. 449). Using this theoretical perspective, human combatants on either side of a conflict are put on equal footing with the nonhuman technologies, or weapons, that each side employs in said conflict. In this way, a battle between two armed forces is viewed as:

Network A (Human Actors A + Nonhuman Actors A)
vs.
Network B (Human Actors B + Nonhuman Actors B)


Using ANT to view an armed conflict, one can see the battlefield as a struggle between opposing networks, each strategically vying for a dominant position over the other. In the words of Cila et al., “Each human or nonhuman actor in the system exerts impact on others.” In the case of an armed conflict, the nonhuman actors of weaponry exert a dramatic and often fatal impact.

With this theoretical framework in mind, the historical development of military technology has followed a trend of both augmenting and offloading the agency of individual combatants onto weapons of war. Using the above example of opposing Networks A and B, as military technology advances, the impact and agency of the human actor decrease while those of the nonhuman actor increase. As a concrete example, the degree of impact afforded by hand-to-hand (Human Actor to Human Actor) combat is relatively low compared to that of combat with automatic rifles (Human+Nonhuman Actors to Human+Nonhuman Actors). In other words, two human actors punching and kicking one another doesn’t amount to much in terms of impact, in this case fatal damage. However, two humans with assault weapons have drastically more impact, and are therefore capable of much more damage. In terms of agency, “the ability to act for producing effects” (Cila et al., 2017, p. 449), nonhuman actors cannot employ agency of their own, as they aren’t conscious beings; however, they can be used to augment the agency of the subject. As Cila et al. state on the topic of subject and object agency: “Subjects have needs that drive them for acting in the world, which turns the agency manifested by the subject into a special character: it is the ability and also the “need” to act. For this reason, only living things can be subjects. Nonhuman have the ability to act but not the need to act, which makes the relationship between the subject and the object asymmetrical.” (pp. 449-450). With this in mind, an important question remains: what happens if AI-enhanced weaponry develops to the point of shifting from object to subject in the context of warfare? In this scenario, several complicating factors come into play, such as attributing responsibility for the AI actor’s actions, placing limitations on how much agency and impact an AI actor can have in warfare, and the legal regulation or banning of AI in warfare altogether.
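To make this framing concrete, the toy sketch below (in Python) models a network as a collection of human and nonhuman actors. The class names, the `impact` values, and the rule that nonhuman actors only contribute impact when a subject is present to wield them are illustrative assumptions introduced here, not anything specified by Cila et al.; the sketch simply restates the asymmetry described above in executable form.

```python
from dataclasses import dataclass

@dataclass
class Actor:
    name: str
    impact: float      # capacity to produce effects ("ability to act")
    is_subject: bool   # only living actors also have the *need* to act

@dataclass
class Network:
    actors: list

    def total_impact(self) -> float:
        # Nonhuman actors contribute impact only insofar as a subject is
        # present to employ them (the subject/object asymmetry above).
        if not any(a.is_subject for a in self.actors):
            return 0.0
        return sum(a.impact for a in self.actors)

# Hand-to-hand combat: a human actor with no augmentation.
network_a = Network([Actor("soldier", impact=1.0, is_subject=True)])

# The same human actor augmented by a nonhuman actor (illustrative numbers).
network_b = Network([
    Actor("soldier", impact=1.0, is_subject=True),
    Actor("rifle", impact=9.0, is_subject=False),
])

print(network_a.total_impact())  # 1.0
print(network_b.total_impact())  # 10.0 -- the subject's agency is augmented
```

Note that in this sketch the nonhuman actor never acts on its own; removing the subject drops the network’s impact to zero. The open question posed above is precisely what happens when that assumption no longer holds.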

Technological Politics

The design, development, and implementation of military technologies are inherently political in nature. The desired result of these technologies is to further the agenda of the government that has funded their development. More concretely, such technologies are intended to effectively defend their own side of a conflict and to harm the opposing side. As Winner (1986) states in Do Artifacts Have Politics?: “The theory of technological politics draws attention to the momentum of large-scale socio-technical systems, to the response of modern societies to certain technological imperatives, and to the ways human ends are powerfully transformed as they are adapted to technical means.” (p. 21). In this way, the military technologies funded by major world governments carry a great deal of political weight.

Perhaps the most politically fraught military technology is the Atomic Bomb. In Winner’s words, “…the atom bomb is an inherently political artifact. As long as it exists at all, its lethal properties demand that it be controlled by a centralized, rigidly hierarchical chain of command closed to all influences that might make its workings unpredictable.” (Winner, 1986, p. 34). Specifically relating to Winner’s “…response of modern societies to certain technological imperatives,” the Atomic Bomb has had a marked impact on the way societies worldwide view the moral limitations of technology in warfare. Despite many countries having their own stockpiles of nuclear weapons, these military technologies have not been used in an armed conflict since America’s decision to bomb Hiroshima and Nagasaki in August 1945. The global hesitance to use these weapons in war hints at the immense political and ethical weight that such weapons carry.

On a strictly practical level, a common understanding of Mutually Assured Destruction (MAD) between two nuclear-capable states may partly explain this era of nuclear reluctance in the decades following 1945. However, a deeper association with cultural norms and taboos may explain this phenomenon as well. In his 2012 work The Better Angels of Our Nature: Why Violence Has Declined, Steven Pinker states on the topic of the nuclear taboo: “The use of a single tactical nuclear weapon, even one comparable in damage to conventional weaponry, would be seen as a breach in history, a passage into a new world with unimaginable consequences.” (pp. 269-270). In this way, the adoption of a cultural taboo may be interpreted as a culturally shared recognition of an act of war’s ethical and political consequences, or an acknowledgement of a given artifact’s political weight. Referring back to the First World War, a similar cultural taboo formed around the use of poison gas. In this instance, the entire world saw the widespread devastation that these weapons were capable of. Not long after, these weapons were banned, their use constituting an international war crime. Currently, due to their capacity for widespread and indiscriminate harm extending beyond the battlefield, nuclear and chemical weapons occupy their own classification in international law: Weapons of Mass Destruction.

The political weight of AI military technology is difficult to determine. However, the term “AI Arms Race” has frequently been applied to the competing development of AI military systems by the United States, the United Kingdom, Russia, and China, among others. In this way, political associations with AI military technology are still developing as these technologies become ingrained in both military protocol and popular discourse. If these technologies become standard practice in warfare, they will likely develop their own array of political and moral connotations as well.

Artificial Intelligence and LAWs

The history of computing technology in wartime can be traced back to Alan Turing’s role in deciphering encrypted Nazi transmissions during WWII. Since then, computers have played a pivotal strategic role for militaries around the world. On the battlefield, the use of Lethal Autonomous Weapons (LAWs) demonstrates a key transition in both computer and military science. However, due to the recency of these technologies, the term LAWs carries several competing definitions, as both “autonomous” and “weapon” may be construed differently depending on the context.

For example, Rebecca Crootof of Yale Law School defines a LAW as “a weapon system that, based on conclusions derived from gathered information and preprogrammed constraints, is capable of independently selecting and engaging targets.” (Crootof, 2014). In this instance, LAWs are defined as following a series of deductive steps to perform their primary function. Meanwhile, the United Kingdom’s Ministry of Defence defines LAWs as “Systems that are capable of understanding higher level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control - such human engagement with the system may still be present, though.” (Ministry of Defence, 2012). In this example, the bar of ‘autonomy’ for LAWs is set significantly higher, as the systems described are said to possess a broader degree of “understanding” of their surroundings, as well as higher-order objectives. Finally, from a historical perspective, LAWs were described by Dr. Stuart Russell as “[T]he third revolution in warfare, after gunpowder and nuclear arms.” (Russell, 2015).

With these varying definitions in mind, it is also helpful to recall how non-AI computing technologies have been employed by military forces in recent decades. Computing technologies have provided militaries with enhanced translation capabilities, access to real-time satellite imagery, instant communication with leadership, and more efficient supply chains, among many other applications. With the emergence of Artificial Intelligence for use in wartime, a significantly advanced AI system may be able to collect and interpret data from any of these sources, resulting in an all-encompassing strategic view of military resources and engagements. In terms of LAWs, the systems in use today appear to be single-purpose attack or defense systems that do not incorporate data from other sources. It remains to be seen, however, what LAWs may be capable of if and when the AI underlying these weapons matures to the degree of full autonomy.

Legal and Ethical Considerations

There are several ongoing attempts to regulate or ban the use of LAWs. A recent example from the Institution of Engineering and Technology’s E&T Magazine stated in 2017 that “116 founders of robotics and artificial intelligence (AI) companies have signed an open letter to the United Nations calling for a ban on the development of autonomous weapons.” Among the signatories were Elon Musk and Mustafa Suleyman, co-founder of the deep learning company DeepMind. This letter proposes a ban on LAWs in similar fashion to the existing international ban on landmines, which was enacted in order to protect civilians. The letter states on the topic of LAWs: “Once developed, [LAWs] will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.” (Future of Life Institute). As of Spring 2018, no ban or regulation had officially been released, although several initiatives are currently underway (Moyes; Future of Life Institute). There are also initiatives underway to classify LAWs as a violation of the 1949 Geneva Conventions, which require “[A]ny attack to satisfy three criteria: military necessity; discrimination between combatants and non-combatants; and proportionality between the value of the military objective and the potential for collateral damage.” (Russell, 2015).

The use of LAWs in warfare presents a particular type of ethical dilemma for military leadership considering their adoption. In particular, this situation presents a version of the Trolley Problem, wherein a participant weighs the ethical cost of directly killing one person by pulling a lever to divert a trolley against indirectly allowing several people to die by not pulling the lever. On the surface, this problem appears as a simple exercise in balancing utilitarianism with accountability for one’s actions. When applied to the problem of LAWs in warfare, however, several factors make the situation much more consequential and worthy of discussion. On one hand, a primary interest of the military is minimizing the human cost of achieving strategic goals, in which case the adoption of LAWs may provide major benefits. For example, Vincent C. Müller states in Autonomous Killer Robots Are Probably Good News: “They reduce the risk to one’s own soldiers, reducing the human and political costs of war. They can be cheaper than human soldiers in the long run, not needing a salary, pension, housing, food or hospitals, etc. They can also outperform humans and human-controlled systems, especially in terms of speed, accuracy and ability to function without rest.” (Müller, 2016). Regarding the short-term benefits of these weapons to those employing them, this stance appears to satisfy several key requirements of military leadership. However, given the trend of nonlinear growth in computing power, coupled with the unpredictable nature of warfare itself, granting relatively early AI systems the capacity and liberty to take human lives without oversight or intervention makes for a disturbing medium- to long-term future.

With this in mind, the original Trolley Problem becomes much more complex as scenarios of future development, abuse, and collateral damage begin to surface. For example, LAWs may be stolen, hacked, or mass-produced by malicious states or non-state actors. Civilian populations of specific ethnic, religious, political, or socio-economic groups may be systematically targeted and killed if these weapons are misused. Given the largely speculative nature of AI futures in general, there is a nearly endless number of potential disaster scenarios regarding LAWs. In this way, the negative downstream effects of these weapons on civilian populations may outweigh their proposed strategic benefits to military leadership.

Implications for HCI

Artificial Intelligence may present a paradigm shift for the field of Human-Computer Interaction as a whole. For example, new classifications and theoretical models will need to be developed within HCI if AI systems become able to express genuine agency. For AI systems in the context of war, additional ethical and design considerations will need to enter the discussion among HCI practitioners, as those in the field play a critical role in how technology is adopted and implemented in society at large. In terms of the responsible design of AI weaponry, the United States Department of Defense stated in 2012: “Autonomous … weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” (US Department of Defense, 2012). While appearing practical from a policy standpoint, the statement never defines “autonomous,” leaving it open to interpretation.

Additionally, Prof. Stuart Russell has called for more outspoken action on these weapons by those in the technology sector, stating: “The AI and robotics science communities, represented by their professional societies, are obliged to take a position, just as physicists have done on the use of nuclear weapons, chemists on the use of chemical agents and biologists on the use of disease agents in warfare.” (Russell, 2015). In this case, the HCI community can contribute a great deal to this discussion. Researchers in HCI may view the issue from philosophical, societal, and historical perspectives while maintaining an understanding of the technology itself. If discussion and debate of this issue are of primary importance, HCI practitioners may serve the vital function of understanding and communicating the desires and rights of the people to those with the technical capabilities of creating autonomous weapons, promoting dialogue among political leaders, regulatory bodies, engineers, and citizens alike.



References:

Pinker, Steven. The Better Angels of Our Nature: Why Violence Has Declined. Penguin Books, 2012, pp. 269–270.

Crootof, Rebecca. “The Killer Robots Are Here: Legal and Policy Implications.” SSRN, 8 Dec. 2014, poseidon01.ssrn.com/delivery.php.

Matthewman, Steve. “Theorizing Technology.” Technology and Social Theory, 2011, pp. 8–28, doi:10.1007/978-0-230-34395-5_2.

Moyes, Richard. “Article 36 Target Profiles.” Article 36, www.article36.org/wp-content/uploads/2019/08/Target-profiles.pdf.

Müller, Vincent C. “Autonomous Killer Robots Are Probably Good News.” Drones and Responsibility, 2016, pp. 67–81, doi:10.4324/9781315578187-4.

Russell, Stuart. “Robotics: Ethics of Artificial Intelligence.” Nature News, Nature Publishing Group, 27 May 2015, www.nature.com/articles/521415a.

E&T editorial staff. “Elon Musk and Other Experts Call for UN Ban on Autonomous Weapons.” E&T Magazine, 21 Aug. 2017, eandt.theiet.org/content/articles/2017/08/elon-musk-and-other-experts-call-for-un-ban-on-autonomous-weapons/.

Winner, Langdon. “Do Artifacts Have Politics?” Computer Ethics, 2017, pp. 177–192, doi:10.4324/9781315259697-21.



