Prosecuting Asimov’s Nightmare: Killer Robots and the Law of War
The United States and China are in a breathless race to master artificial intelligence (AI), accrue its benefits, and, in so doing, gain an advantage over the other. They are sprinting to rub a proverbial magic lamp, release its genie, and have their technological wishes granted. And, why not? AI seems likely to be the ultimate dual-use technology and its military potential is obvious—not only will sophisticated AI enable its masters to create new weapon systems, but our already powerful weapons will be made more powerful by the incorporation of AI. But, like many fairy tales involving mysterious power, our characters are enamored of the benefits, while failing to appreciate important ethical and legal complexities. Unleashing AI on the battlefield may be inevitable, but will its use comport with our present conceptions of law and international treaty?
Science fiction author Isaac Asimov famously wrote three “laws” of robotics in his 1942 short story “Runaround,” later adding a fourth. These “laws” were not laws in the legal sense, but parameters that he hoped would prevent robots from harming humans and humanity. They may have read as curious musings over 80 years ago, but now they seem prescient. Today, AI increasingly powers autonomous weapon systems (AWS) and, in the future, could drive lethal autonomous weapon systems (LAWS)—systems that could, with little-to-no direction from human operators, apply deadly force. But as the U.S. Department of Defense (DoD) pushes forward with AI and autonomy, including AWS and LAWS, it is increasingly apparent that the Law of War is outdated. Asimov’s nightmare, killer robots, may soon be ready to take human life—and when an AI-powered system kills innocent people, the U.S. government is not ready for what comes next. Updating our legal frameworks is imperative to prepare for the future of warfare.
Insufficient Legal Frameworks
The insufficiency of the Law of War to govern tomorrow’s AI-driven wars does not stem from doubts about AI’s ability to execute an operator’s orders or to perform above the cognitive level of an operator. AI has proven its ability, under certain circumstances, to make decisions faster and with more accuracy than humans. However, AI’s increasing ability to make independent decisions means DoD’s efforts to incorporate it into its processes cannot proceed without sufficient review of the legal terminology and frameworks governing its use. The U.S. government is a rule-bound, law-abiding organization that bases its decisions on established authorities handed down to DoD by law (Congress) and executive order (the President). As a result, AI’s ability to “decide” with any degree of independence from a human operator presents unique challenges to these codified systems of law, executive order, and agreement, all of which presume a cause-and-effect relationship traceable to human decision-making. This, in turn, challenges interpretations of law surrounding the use of military force, particularly our understanding of “inherently governmental functions,” like the conduct of war.
DoD’s Law of War Manual explains in its foreword that the “law of war is of fundamental importance to the Armed Forces of the United States” and is “part of who we are.” It further states that during the American War of Independence, the Continental Army and the British government mutually prosecuted a war “carried on agreeable to the rules which humanity formed.” Later, in the first sentence of its preface, as well as in section 1.1.1 of its “Purpose and Scope,” the manual prescribes its use for “DoD personnel.” Similarly, under section 1.3.4, “Purposes of the Law of War,” the first purpose listed is “protecting combatants, noncombatants, and civilians from unnecessary suffering.” Despite DoD’s admission that it cannot stop suffering in war, the Law of War seeks to limit that suffering by identifying certain acts of war as inconsistent with the principles of humanity. These definitions are permeated by human-specific concepts: humanity, mutual agreement, hope, moral categorization (i.e., sorting people into categories like combatants and noncombatants), an understanding of violence, and the experience of suffering. In other words, our current conceptions of war’s reasonable limits are human-centric. The Law of War does not sufficiently address AI’s role on the battlefield, because it addresses what humans can or cannot do in the course of war.
Furthermore, the Federal Activities Inventory Reform (FAIR) Act of 1998 (P.L. 105-270) makes explicit what the Law of War implies—war must be conducted by employees of the federal government (i.e., humans). According to the FAIR Act, war is an “inherently governmental function,” defined as a “function so intimately related to the public interest as to require performance by Federal Government employees.” These functions include “interpretation and execution of the laws of the United States so as… to determine, protect, and advance United States economic, political, territorial, property, and other interests by military or diplomatic action,” as well as “to significantly affect the life, liberty, or property of private persons.” These definitions are clear: the conduct of war and the taking of life must be done by humans directly employed by the U.S. government. It is here that AI-powered AWS and LAWS generate uncertainty about who the real actor is in human-machine teaming scenarios in combat, since AWS are increasingly capable of making iterative, independent, and sequential decisions without human oversight. This raises questions about whether AI-driven AWS or LAWS can be compliant with our conception of an inherently governmental function and, more broadly, with the Law of War, U.S. law, executive orders, and treaties or agreements (like International Human Rights Law).
The Pentagon’s Insufficient Response
To address these issues, DoD has made two essential claims: 1) in use-of-lethal-force scenarios involving AWS/LAWS, there will be a human “in the kill chain,” a reference to the phased cycle of military strikes; and 2) a human being will always be held responsible for the use of an AI-assisted weapon system, including that system’s subsequent decisions.
For the first claim, DoD argues that human involvement will make AWS more responsible or, at least, their users accountable. However, the ability of human involvement to increase responsibility or accountability is predicated upon an AI system’s reliability. In other words, AI reliability can be framed as a simple question: does the AI do exactly what its operator asks it to do, every time the operator asks it to? Where this reliability is high, AI becomes a trusted tool that allows post hoc adjudication of the human’s decisions, not the AI’s performance: if a soldier takes X action, then Y result can be ascribed to the soldier. Put simply, when AI works properly and executes the clearly identified goals of the human operator, the cause-and-effect chain is clear and unbroken. This allows human operators of AI to predict AI-assisted outcomes, enabling the process of accountability, legal or otherwise. On the other hand, where AI reliability is inadequate, outcomes are unpredictable because the AI makes choices inconsistent with the human operator’s demands.
This disconnect undermines the predictability of human decision-making and, as a result, the very accountability that human presence “in the kill chain” provides. With AI-driven LAWS, the machine will have carried out an act it determines to be consistent with a human’s input—an imperfect expression of the operator’s intention—and the unresolvable difference between the two will challenge our legal understanding of both cause-and-effect and negligence. This holds for war crimes as well, where intentions and “recklessness” (also referred to as “gross negligence”) are used to determine whether a war crime has been committed. Therefore, the moral and legal benefits accrued by the first claim are specious because the reliability of AI systems in LAWS cannot be perfect, and human accountability “in the kill chain” requires perfect AI reliability to be covered by current domestic and international legal frameworks.
DoD’s second argument states that human operators of AI will be held responsible for AI’s decisions. Others posit that military commanders will be held ultimately responsible, even if AWS operators are not. As discussed above, ownership of outcomes in human-machine teaming may not always be clear due to inevitable AI reliability challenges. Nonetheless, DoD would assert that a human is still ultimately responsible because someone must be. This is a laudable position; however, it does not acknowledge the inescapable reality of AI-powered AWS/LAWS support to soldiers and commanders. Because nothing in war is 100% reliable due to the Clausewitzian fog and friction of war, AI’s imperfect actions will introduce doubt into another legal concept: “duty of care.” Duty of care can be understood, simply, as whether a person exercised reasonable care and judgment in their actions, based on the totality of the circumstances and the information available to them at the time.
When AI-driven AWS or LAWS interrupt the cause-and-effect chain of human action (through error or miscalculation), the outcomes of human-machine teaming will be less predictable than our legal and moral systems demand. In other words, if AI is making decisions for the operator, based on its interpretations of the operator’s orders, can the operator be held legally responsible for those decisions? This question highlights the difference between: 1) DoD’s claims, which reflect the morally defensible position that a human “should” be responsible for loss of life; and 2) the practical, legal reality that a human cannot be held responsible for the decisions of AI in such scenarios. Neither AWS operators nor military commanders can be found at fault or reckless for an outcome they could not reasonably understand or predict.
For example: if a soldier fires an AI-assisted weapon system at an area reasonably believed to contain an enemy vehicle, but the AI-assisted system makes a subsequent and erroneous decision to engage a civilian vehicle full of noncombatants instead of an enemy vehicle full of combatants, then legal fault for the unintentional loss of civilian life is unclear. In this scenario, the soldier did not decide to engage the civilian vehicle, nor did the soldier mistakenly select the civilian vehicle as a lawful target. Does the soldier’s outsourcing of target identification represent irresponsibility and legal liability? Did the soldier behave reasonably, given the information available? More questions emerge: how many decisions must an AWS make, following the human’s decision to unleash it, before we cannot reasonably hold the human responsible for the outcome? What must the human know about the onboard AI system and its reliability before making decisions? And, in these examples, what fault can we reasonably lay at the feet of military commanders who can neither predict nor explain these outcomes any better than the AWS operator?
A Path Forward
It seems almost certain that we will encounter scenarios where we cannot reasonably hold either AI or the AI’s operator responsible for their partnered actions. Here, it is tempting to consider heavily restricting or banning AI-enabled AWS/LAWS, but doing so is neither realistic nor would it position the United States to win the wars of the future. Instead, DoD must bring its expert lawyers, military AI/autonomy experts, and AI vendors into a room together to create “AI-enabled kill chains” that will survive legal scrutiny when AI-enabled AWS/LAWS fail and innocent life is taken. These new kill chains must ensure, to the greatest extent possible, that the taking of human life can be attributed to humans. In other words, human fingers must stay on the proverbial triggers of the AI-enabled weapons we field in war.
Consistent with the above, DoD must evolve the Law of War itself, in accordance with applicable treaties and U.S. law, to clearly separate human and AI roles and to ensure technology is developed to meet those standards. It is not enough to evolve DoD’s tradecraft—the U.S. military must also evolve its understanding of the law and the standards to which it will be held.
Similarly, the U.S. Congress, in consultation with relevant affected agencies of government, must expand the definition of an inherently governmental function to distinguish “inherently human functions” from those that AI may be permitted to perform. In its wisdom, Congress knew it must create meaningful separation between commercial entities and government officials in the conduct of a great many activities. It must now take up the task of ensuring machine intelligence does not supplant human intelligence in the execution of inherently governmental duties—like war.
Such changes will also de-risk U.S. government processes outside of the military, where agencies will rely upon AI provided by the private sector (the original point of the FAIR Act of 1998 being to separate government and commercial functions). Changes to this law, and those above, will position DoD to develop or acquire AWS appropriately, train for war alongside AI responsibly, and hold itself truly accountable on tomorrow’s battlefields.
Congress and DoD must be proactive in addressing this gap. Our laws were written for mankind, not for our machine partners. Asimov may have been wrong to proscribe robots’ involvement in war, but his concern that killer robots present a complicated problem is just as timely today as when he wrote. Wars of the future will be fought by humans and machines, partnered together. Innocent people will someday die at the hands of these human-machine teams, and the United States must be ready to hold itself and its soldiers accountable. Congress and DoD must act now to resolve this challenge. If we cannot hold ourselves accountable in war and we cannot prosecute AI for its decisions, the United States will have sacrificed one of its most effective weapons in any war: justice.
Views expressed are the author’s own and do not represent the views of GSSR, Georgetown University, or any other entity.