
From Sling and Stone to Autonomous Drone? Key Questions for Determining Whether Autonomy Favors Davids or Goliaths

Suppose that the year is 2035 and America is engaged in counterinsurgency operations in the Middle East: would autonomous drones favor the insurgents or counterinsurgents? Scholars such as T.X. Hammes, Paul Scharre, and Sarah Kreps have argued that autonomous military systems (AMS) will tend to favor conventionally “weaker” or poorer actors, including insurgents or terrorists. Rather than framing the question purely in terms of “offense vs. defense”—which can be hard to delineate when territorial control is ambiguous and combatants employ irregular tactics (e.g., ambushes)—these scholars point to asymmetries in target identification challenges and moral restraints, the potential for swarms to overwhelm quality with quantity, and the benefits of cheaper destructive capabilities for poorer actors. These arguments may seem facially convincing and likely do have some merit. However, one can also make facially compelling arguments for why AMS could benefit counterinsurgents or wealthier militaries: they may allow rich, casualty-averse governments to deploy fewer troops abroad, conduct widespread surveillance of conflict zones, reduce collateral damage among civilians, or otherwise better leverage their economic advantages. Additionally, the pace and impact of technology developments are often hard to predict with high confidence, as illustrated by numerous inaccurate predictions or surprises over the past 150 years regarding torpedo boats, machine guns, cavalry, strategic bombing, nuclear weapons, and more.

The potential impact of artificial intelligence (AI) on warfare has received much more attention recently. For example, the U.S. Department of Defense’s Replicator initiative seeks to field autonomous, attritable drones as a response to “China’s advantage in mass.” However, given the many surprises in AI’s progress in the past decade (alongside some cases of overoptimism), analysts should more clearly acknowledge and address the critical points of technological uncertainty/dispute before concluding that AMS will favor one side or the other (as opposed to leaning on heuristics like “cheaper technology favors the poorer actor”).

Although this article’s analysis cannot provide robust forecasts about AMS’ effects, it does lay out a series of key questions that analysts should grapple with before asserting that AMS will have asymmetric effects. Does the technology have diminishing marginal returns or some form of asymmetric usefulness? How reliable are the systems’ autonomous mobility and targeting (especially if one side is highly reliant on “open-source” models)? What will the impact of the systems be on non-lethal military activities, such as logistics and surveillance? How accessible are drone components (e.g., can insurgents easily manufacture or smuggle them en masse)? How effective and accessible are countermeasures?

Ultimately, accurately forecasting these effects will be very important in the near term as the United States and other governments decide how to regulate AI development and where to invest in AMS—or countermeasures to such systems—that might undermine their own military advantage if the technology diffuses to other states or non-state actors.

Why Expect Disparities? Marginal Effects and Asymmetric Value

If both sides can access or use a new technology, why might it disproportionately benefit weaker or poorer forces, given that the stronger or richer side “should” (naïvely speaking) be able to employ the technology at a greater scale or afford higher-quality versions of the technology? Why would it not simply maintain or even magnify existing disparities? 

There are several patterns of explanation. For example, the technology might be useful only for “countering” a capability held by one side (e.g., Stinger missiles vs. helicopters). Relatedly, some technologies may enhance the effectiveness of tactics that mitigate the importance of conventional disparities (e.g., ambushes). Another reason to expect that a new technology will favor weaker actors is if it has diminishing marginal returns (especially when the original technology had larger “barriers to entry” or fixed costs), such that spending 10x as much as an opponent on the technology does not yield 10x the benefits. The deterrence effects of small nuclear arsenals partially illustrate this. It seems quite unlikely that AMS’ marginal value would diminish as rapidly as nuclear weapons’, but it is unclear what their value curve looks like: are 20 lethal autonomous drones at least 10x as valuable as one such drone in a firefight?
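The diminishing-marginal-returns logic above can be made concrete with a toy model. The square-root curve below is a purely illustrative stand-in for a concave value function, not an empirical estimate of drone effectiveness:

```python
# Toy illustration of how a concave "value curve" compresses the advantage
# conferred by a 10x spending (quantity) edge. The sqrt curve is an arbitrary
# hypothetical, chosen only to show the shape of the argument.

def linear_value(n: int) -> float:
    """Constant marginal returns: each additional drone adds equal value."""
    return float(n)

def diminishing_value(n: int) -> float:
    """Concave value curve: marginal value shrinks as the arsenal grows."""
    return n ** 0.5

rich, poor = 200, 20  # the richer side fields 10x as many drones

# Under constant returns, a 10x quantity edge yields a 10x value edge.
print(linear_value(rich) / linear_value(poor))         # 10.0
# Under the concave curve, the same edge yields only ~3.16x the value.
print(diminishing_value(rich) / diminishing_value(poor))
```

If AMS value curves turn out to be even mildly concave, quantity advantages would translate into smaller-than-proportional battlefield advantages, which is the mechanism behind the "equalizer" claim.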

Dependability and Access to Autonomy: Mobility, Targeting, and Beyond

It is easy to imagine scenarios such as those depicted in the activist video “Slaughterbots,” where swarms of small drones navigate urban environments while tracking and killing targets via facial recognition. However, this scenario assumes a variety of AI capabilities that may prove more difficult to achieve than some expect. Even if personal identification systems prove capable in complex/adversarial settings (or combatants are less discriminate with targeting), this does not solve other challenges, such as rapidly navigating urban or contested environments while planning actions to engage targets. The necessary models (algorithms) may well be dual-purpose and publicly available, such that non-state actors can access them. However, performance in combat conditions may require specialized, non-public training data for fine-tuning—not least given the potential for countermeasures, as discussed later.

Still, as long as the technology enables insurgents to inflict more casualties on their targets, they may be willing to accept collateral damage from error rates that would be unacceptable for normal commercial tasks (e.g., autonomous driving) and might slow adoption by liberal democracies (notwithstanding some countries’ demonstrated willingness to incur civilian casualties via drone strikes). Also favoring insurgents, there will likely be stark asymmetries in targets: states have more infrastructure, public officials, and expensive military platforms, and many installations or platforms (e.g., tanks) should prove easier to identify and discriminate. Nonetheless, analysts need to carefully consider whether future AI models will actually be capable of the envisioned image recognition and planning tasks in realistic (adversarial) settings—and whether both sides will have access to those models.

Potential Impacts on Intelligence, Surveillance, and Reconnaissance

Even if autonomous systems could not achieve high reliability for lethal action in the coming decade, errors are much more tolerable for Intelligence, Surveillance, and Reconnaissance (ISR) activities (especially where the alternative would be nothing), and the United States is already using at least weakly-autonomous systems for ISR. If autonomous platforms become very cheap to produce and automated video analysis capabilities improve, AMS could enable widespread monitoring for contraband smuggling and military equipment movement, observing IED/AMS placement or ambush preparations, locating and tracking persons of interest, drawing fire to reveal enemy positions, etc. Insurgents may also be able to access and utilize such technology, but AMS seem more likely to benefit regular forces given the asymmetries in operational openness (e.g., government buildings vs. mountain caves or urban tunnels) and possibly also the need for large data storage and analysis infrastructure. Additionally, foreign occupying militaries are typically far more casualty averse and thus less likely to send troops into dangerous areas. Liberal democracies might face some political pushback for employing widespread surveillance, but this seems unlikely to prevail if AMS clearly save soldiers’ lives. In summary, if commentators assume that autonomous weapons are possible, they need to grapple with the non-lethal applications that would likely also be possible—and which plausibly could favor counterinsurgents that have less local knowledge and want to stifle insurgent production/importation of drones.

Access to (High-Quality) Drone Components

Even if cheap, hobbyist-made commercial drones exist, this does not guarantee that resource-poor insurgents in contested environments can acquire drones in large numbers or of high quality, especially without also jeopardizing their operations (via exposure). Hammes has repeatedly insisted that insurgents could utilize emerging technologies such as 3D printing to meet their needs. Although clear evidence of demonstrated capabilities is lacking, past experience with IEDs suggests determined insurgencies could produce or import some drones (unless AMS or other innovations radically improve interdiction and related efforts). However, without deeper technical analysis, it seems premature to claim that insurgents could produce thousands of “slaughterbots” every day—especially with high-quality components that are resistant to countermeasures.

Efficacy of and Access to Countermeasures

History is filled with military technologies touted as revolutionary that were hindered by countermeasures. Since Hammes and other scholars have emphasized the impact of swarm tactics, one particularly relevant example is the development of small, fast, and comparatively cheap torpedo boats in the late 19th and early 20th centuries. Some observers (especially the Jeune École in France) claimed that this development would undermine the survivability of the large ships of naval superpowers (especially Britain). In reality, the threat of torpedo boats was mitigated in most contexts by countermeasures such as torpedo nets and anti-torpedo bulges, fast-firing close-range guns, and even new ships—destroyers—which co-opted the “small and fast” principle against the new threats. Around the same time, submarines began to pose similar threats to large ships (based upon stealth rather than quantity and speed), but even they eventually faced countermeasures that preserved the viability of Allied surface fleets through WWII. In the early 20th century, Giulio Douhet proclaimed that strategic bombing would be devastating and quickly decisive, but these expectations failed to account for the possibility of radar and better anti-aircraft weaponry (along with erroneous beliefs about the impact of bombing on defender morale).

Of course, some technologies have outpaced countermeasures, such as advanced ballistic missiles and American nuclear missile submarines (SSBNs). The previous examples are not meant to claim that countermeasures always prevail but rather to highlight that any confident prediction of one-sided benefits that fails to seriously address countermeasures borders on negligence.

There is ongoing debate regarding the potential for countermeasures against autonomous drones. Some of the major categories of potential countermeasures for destroying/disabling drones include directed energy, electronic disruption, barriers or tangles, and kinetic projectiles. One may also employ deception or other countermeasures against the AI models or sensors on drones, including target decoys, obscurants (e.g., smoke), camouflage, and potentially data poisoning.

Ultimately, there are a few key questions for AMS countermeasures: Who will have access to which countermeasures, and how effective will each be? For example, it could be that the only highly-effective countermeasures against large swarms will be sophisticated and conspicuous platforms (e.g., stationary or vehicle-mounted directed energy systems) that insurgents generally cannot field. Alternatively, it may be that no countermeasures are thoroughly effective, which could deter conspicuous foreign troop presence but allow extensive drone-based operations.

Autonomy, Asymmetry, and Uncertainty 

Will AMS be equalizers or separators? This question is especially important in light of the recent progress in AI capabilities and the U.S. push for the Replicator initiative to counter China’s mass. Although some readers may be disappointed that this article does not offer a forecast of the effects, it does highlight critical questions that analysts should address before confidently asserting that the effects will benefit one side disproportionately. Multiple scholars have suggested that the technology will benefit the weaker side, but these assessments were not accompanied by rigorous analyses that address the key points of uncertainty highlighted above. 

Historical technologies such as IEDs and AK-47s do support the “cheaper capabilities will benefit poorer actors” argument, and my prior research suggested that machine guns likely benefited insurgents more than counterinsurgents; a research project that estimates how frequently military technologies have had asymmetric impacts might be quite informative. However, the factors in the sections above could largely negate or even reverse the impacts of AMS, such as if poorer/irregular forces are unable to reliably field AMS or countermeasures, or if AMS allow casualty-averse counterinsurgents to put significantly fewer troops at risk while supporting governmental function, delivering aid, interdicting smuggling, or neutralizing enemy combatants/equipment. Ultimately, arguments about insurmountable technical barriers or countermeasures could show that AMS will not have a significant effect for either side, but appeals to individual examples of past technologies or dramatic narratives of AMS’ potential usefulness for one side are a flimsy basis for concluding that the technology will have an asymmetric benefit.


Views expressed are the author’s own and do not represent the views of GSSR, Georgetown University, or any other entity. Image Credit: Wikimedia Commons