
The Dehumanization of ISR: Israel’s Use of Artificial Intelligence In Warfare

“At 5 a.m., [the air force] would come and bomb all the houses that we had marked,” said B., an anonymous IDF soldier. “We took out thousands of people. We didn’t go through them one by one—we put everything into automated systems, and as soon as one of [the marked individuals] was at home, he immediately became a target. We bombed him and his house.” This is the reality of the war in Gaza. Israel employs sophisticated artificial intelligence (AI) tools to enhance its Intelligence, Surveillance, and Reconnaissance (ISR), which in turn allows it to strike targets across Gaza. This augmentation of military capability, however, raises profound ethical concerns and may carry geopolitical consequences. AI-assisted airstrikes, which in some cases rest almost solely on the assessment of an algorithm, risk disregarding basic rules of war such as discrimination and proportionality. They raise issues of error, cognitive bias, overreliance on imperfect assessments, and the proper role of human supervision. Further, as these algorithms give Israel an edge in its warfighting capabilities, they may contribute to the further unraveling of the balance of power in the Middle East.

Israel’s Covert Arsenal

Israel’s affinity for AI began long before the current conflicts. Israel has used AI-driven facial recognition software at checkpoints in the West Bank for several years and deployed AI to surveil Palestinian residents of Gaza well before the war. Israel was also no stranger to AI in combat: Operation Guardian of the Walls in 2021 was Israel’s “first AI war,” in which the IDF leaned heavily on machine learning to identify and strike Hamas and PIJ targets. After October 7th, new algorithms and updates were developed to expand Israel’s arsenal. Several algorithms have been identified as the main tools Israel uses in the war in Gaza. Naturally, as these tools are highly sensitive and remain classified, their true nature and capabilities remain opaque. Some things, however, are known. ‘Gospel,’ ‘Where’s Daddy?,’ and ‘Lavender’ are three algorithms that have seen significant use in the war in Gaza to support targeting and decision-making. Together they create a honed and effective kill chain. They simultaneously analyze data from across Israel’s ISR sources: satellite imagery, drone footage, signals intelligence (SIGINT), and information gathered from monitoring individuals and groups. This information is then synthesized to identify potential targets for the IDF. The algorithms work similarly but are tailored to their tasks. Gospel focuses on infrastructure, identifying suspected rocket launch sites, tunnel entrances, and militant hideouts. Lavender focuses on human targets, flagging thousands of individuals it suspects to be Hamas or PIJ operatives. Where’s Daddy? tracks targeted individuals and alerts the IDF when they enter their homes, allowing special forces to make arrests, or drones to liquidate them.

The Muddied Reality of AI-Driven Warfare

In principle, AI-enabled weapons promise efficient warfighting and the minimization of civilian casualties. Israel has repeatedly insisted that it conducts ‘precision’ airstrikes and operates with surgical accuracy. In practice, the opposite has occurred in Gaza. The aforementioned algorithms have generated thousands of targets, dramatically expanding Israel’s kill lists: at one point in the early stages of the war, Lavender had marked 37,000 Palestinians as targets. This in and of itself is not the problem; taken at face value, it would simply mean that Israel’s ISR has gotten better at identifying militants. However, Lavender is estimated to have an error rate of 10%. Many of the people who land on AI-generated kill lists are therefore not combatants, and this has led to the deaths of numerous civilians. A common source of error is communication patterns: Lavender has identified Gazan civilians as militants because their communication patterns resembled those of previously identified militants; sometimes membership in a flagged group chat is enough for Lavender to make a fatal judgment. Misidentified targets have included police, aid workers, militants’ relatives, and people who merely share a name or nickname with an operative. Where’s Daddy? specifically flags individuals once they enter their family homes, so that the ensuing strike kills the target’s family and destroys the home in the process.

This practice raises serious questions about proportionality and the discrimination of targets. AI-driven ISR may lead the IDF to inflate the damage it does, all under the guise of seemingly increased combat efficiency. Reliance on AI-driven ISR may also reinforce confirmation bias: when an AI tool marks a person or structure as a target, the decision-makers responsible for authorizing strikes tend to take mental shortcuts and simply trust the algorithm because it is believed to be accurate. An NIH study similarly found that drone operators, removed from the battlefield, may experience moral disengagement because they do not fully grasp the level of destruction they cause. Indeed, IDF soldiers have often served as “rubber stampers” for the machine’s decisions, typically devoting approximately 20 seconds to each target before authorizing an airstrike. This cognitive bias, authorizing airstrikes on the machine’s word without questioning their validity, is central to the problematic nature of AI-driven warfare. And increasingly, warfare is no longer merely AI-driven, but AI-controlled. Unidentified IDF officials have claimed that some strikes were conducted without human supervision, or that they missed the identified militant because the strike was not verified in real time. Rather than supporting decision-making, these algorithms are increasingly becoming the decision-makers.

The Implications of AI Proliferation

War is not without error, and civilian casualties are a harsh reality of warfighting. With its AI-driven ISR, however, Israel is challenging international norms of war. By relying on AI, in some cases all but granting it executive authority, Israel is undermining the principles of proportionality, distinction, and precaution. To put the boundary Israel is pushing in perspective, consider the acceptable casualty thresholds of the IDF and the U.S. military. During the U.S. campaign against ISIS, any strike projected to cause more than fifteen civilian casualties deviated from standard operating procedure and required special permission from the head of U.S. Central Command. In contrast, Israeli targeting procedures generally deem 15 to 20 civilian casualties acceptable for a planned strike on a low-ranking militant, and up to one hundred for a senior commander; such strikes are routinely authorized without much oversight. Despite these issues, AI-driven warfare is here to stay. AI algorithms and their integration with weapon systems are unlikely to face a comprehensive global ban; they are simply too brutally efficient and militarily useful. No state wants to find itself in a conflict facing an opponent capable of making decisions at machine speed, far faster than humans can react. But Israel’s use of AI in warfighting demonstrates the perils of this revolution in military affairs, perils that should be addressed by U.S. decision-makers and the international community at large.

AI-driven warfighting contributes to violations of Just War Theory and its principles of proportionality and discrimination. The 42,000 Palestinian casualties, a significant share of them civilians and many of them children, are a dire testament to this ethical failure. Additionally, 59% of buildings, 87% of schools, and 68% of roads in the Gaza Strip have been damaged or destroyed. Given that Israeli algorithms have been employed continuously since the beginning of the war, it is not a stretch to suggest that these systems have been instrumental in producing these staggering statistics. Because of their seemingly unbiased nature and brutal efficiency, AI algorithms contribute to the intensification, and to some extent the dehumanization, of war. Even the IDF has admitted that “proportionality did not exist,” and that in many cases it does not consider collateral damage during targeted strikes, thereby normalizing substantial civilian casualties.

Ethics in war is a murky subject. But because military AI is still relatively nascent, there remains an opportunity to set stronger ethical standards for the technology. Beyond the moral imperative to respect the laws of war and avoid unnecessary harm, the United States has an interest at stake: setting and enforcing norms such as those detailed in the State Department’s “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” would lower the likelihood of such AI-driven capabilities proliferating and eventually being used against the United States. Israel may be the place for the U.S. to start actively enforcing such standards. If U.S. allies do not sign the declaration, the United States should ensure compliance by withholding military support from those states. Israel has not signed the declaration, though as of November 27, 2024, it has at least endorsed it. Nonetheless, Israel’s AI-driven ISR and AI-based warfighting continue. Should the U.S. decide that the time is right to begin building international governance of AI-driven warfare, reining in Israeli practice may be a good place to start.

Further, AI-enabled warfare may destabilize power dynamics in the Middle East and beyond. These systems are already being used in Gaza and likely in Lebanon. Given that Iran is Israel’s principal enemy and has waged proxy war against it, Israel might also turn its existing and emerging AI capabilities against the Iranian leadership. Iran does not stand a chance against Israel’s arsenal, nor do its proxies have the capabilities to effectively counter AI-enabled weapons. Coupled with Israel’s superior conventional military capabilities and the prowess of its intelligence services, this further shifts the regional balance of power in Israel’s favor. Taken at face value, this simply means that Israel is strengthening its security and deterring adversaries from attacking it. However, given the volatility of the region and the players within it, overly aggressive Israeli conduct may trigger unpredictable reactions from its regional foes. If the Iranians are pushed too far, they may choose to pursue nuclear armament simply to even the scales. It is all but impossible to isolate the direct effect AI-driven ISR has on the regional balance of power. What can be said, however, is that it gives Israel a tremendous edge, one it may seek to use to advance its position in the Middle East. AI may thus accelerate existing dynamics of geopolitical upheaval.

The reality is that these weapons are here to stay. They are extremely potent, reduce the risk to ground troops, and can win wars. Nor is the Israeli case the only conflict in which such weapons will be prevalent: similar algorithms are being used in the Russia-Ukraine war and will likely be employed in future conflicts around the world. No country can afford to be left behind and forgo the advantages AI brings to combat. The face of warfare is changing rapidly, and without enforced global governance there will be a stark asymmetry between states. As the war in Gaza shows, such asymmetry can be devastating to civilians and destabilizing to regional dynamics. It is critical that the United States and its partners and allies enforce a global governance framework along the lines of the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.” The United States should investigate and implement enforcement mechanisms, such as withholding military aid and the critical technologies these AI systems depend on from states that are out of compliance. These measures could be coupled with diplomatic and economic pressure, particularly in wartime, to ensure that even in conflict states adhere to an ethical framework. Gaza offers a glimpse of a dark future of AI-enabled warfare. As B. stated, the fate of thousands of lives was left to the discretion of an automated system. Warfare is not pretty, but it is deeply personal, and to remove even the slightest shred of humanity from it leaves us all as potential targets.


Views expressed are the author’s own and do not represent the views of GSSR, Georgetown University, or any other entity. Image Credit: CNN