Viewed against the history of military imaging, this self-referentiality appears as the defining feature of knowledge production in the context of warfare: a closed circuit that pictures the enemy, perceived as other, as a potential target, thereby justifying its destruction, and pictures the “self” as sovereign self, that is, as an enlightened, surveillant self at war.
Since the turn of the century, unmanned aerial systems have generated high-resolution images covering large areas, enriched with other data such as metadata from mobile phone networks and general population data. This drone surveillance takes place mainly outside US-American territory and often in continuity with former colonial settings. The flood of data-enriched images increasingly overwhelms the human eye and human cognition. Networked algorithms and, more recently, machine-learning applications are therefore used to interpret images automatically and to assist in the identification of military targets. Peter Asaro has described the workplace environment of a drone operator as the “bureaucratizing of killing.” The automatisation of targeting inevitably means that large parts of the decision-making process are outsourced to private corporations. The “self-referentiality,” the closed circuit of target production, is made even tighter when the production of targets follows the logic of those private stakeholders’ interests rather than that of governments and the people they represent.
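A minimal sketch may make the data enrichment described above concrete. The following Python fragment is purely schematic, not any real system: all class names, identifiers, and the proximity threshold are invented, and the “fusion” is reduced to a toy spatial join between image detections and phone-network records.

```python
# Schematic sketch only: fusing toy image "detections" with phone-network
# metadata. Every name, value, and threshold here is hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:          # an object found in an aerial frame
    frame_id: str
    lat: float
    lon: float
    label: str            # e.g. "vehicle"
    confidence: float     # detector score, 0..1

@dataclass
class PhoneEvent:         # a metadata record from a mobile network
    device_id: str
    lat: float
    lon: float

def enrich(detections, events, radius_deg=0.001):
    """Pair each detection with phone events observed nearby (toy join)."""
    enriched = []
    for d in detections:
        nearby = [e.device_id for e in events
                  if abs(e.lat - d.lat) < radius_deg
                  and abs(e.lon - d.lon) < radius_deg]
        enriched.append((d, nearby))
    return enriched

frames = [Detection("f001", 33.30, 44.40, "vehicle", 0.91)]
events = [PhoneEvent("dev-17", 33.3002, 44.4001),
          PhoneEvent("dev-42", 35.0, 40.0)]
for det, devices in enrich(frames, events):
    print(det.frame_id, det.label, "co-located devices:", devices)
```

Even in this reduced form, the output is no longer an image to be looked at but a list of candidates, which is precisely what makes the step toward automated interpretation, and its outsourcing, so seamless.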
A recent report by the National Security Commission on Artificial Intelligence was produced under the chairmanship of the former CEO of Google, Eric Schmidt. The reason why Google has been so successful as an advertising platform, and not only as a search engine, is the company’s effective extraction of networked user data, harvested from its users’ search histories, to create highly effective targeted ads. Targeted advertising not only paved the way for Google’s enormous financial success; it also laid the cornerstone for the methods of what Shoshana Zuboff has called “surveillance capitalism,” including its “economic imperatives defined by extraction and prediction” of behavior. As Eric Schmidt’s involvement in the commission’s report already suggests, the way targeted ads work can provide insights into the ways that military AI is used for targeting individuals, groups, or even whole populations: if it is technically feasible to predict the behavior of individuals and populations, the “fog of war” is lifted and those individuals and populations become nearly ideal military targets.
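The structural identity Zuboff describes can be illustrated with a few lines of Python. The sketch below is not Google’s ad stack; the features, weights, and user records are invented. It shows only the bare mechanism: a behavioral history goes in, a prediction comes out, and the population is ranked by it, whether the “action” being predicted is a purchase or something else entirely.

```python
# Illustrative sketch, assuming invented features and hand-set weights:
# a logistic score that turns a behavioral history into a prediction.
import math

WEIGHTS = {"searches_for_product": 0.8,
           "visits_to_vendor": 1.2,
           "past_purchases": 1.5}
BIAS = -2.0

def predict(behavior: dict) -> float:
    """Logistic model: behavioral features in, probability of acting out."""
    z = BIAS + sum(WEIGHTS[k] * behavior.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

users = {
    "user_a": {"searches_for_product": 4, "visits_to_vendor": 2,
               "past_purchases": 1},
    "user_b": {},
}
# Rank the population by predicted behavior and address the top of the list.
for uid, prob in sorted(((u, predict(b)) for u, b in users.items()),
                        key=lambda x: -x[1]):
    print(f"{uid}: {prob:.2f}")
```

Nothing in the scoring function knows what it is scoring; the same ranking machinery serves an advertiser choosing recipients or a military apparatus choosing targets.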
In March 2025, the security company Palantir Technologies delivered the first two prototypes of the “Tactical Intelligence Targeting Access Node” (TITAN) system to the U.S. Army. Palantir, co-founded by Peter Thiel, is known for its security software products, which build on pattern-analysis technologies originally developed for fraud detection at PayPal. In Ukraine, Palantir has been embedded in the war effort since mid-2022 and, according to CEO Alex Karp, has been involved in identifying a large proportion of the military targets for the Ukrainian military in its attempts to fend off the Russian attacks. Palantir’s software platform Gotham is now used not only by police, military, and intelligence services in the US and around the world, but also, for instance, by the Hessian police in Germany under the name Hessendata. Gotham collects a wide variety of personal and population data and compiles it in a user-friendly interface in order to visualise “criminogenic” patterns and connections, such as the social environment of criminal suspects, in the form of graphs.
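How contact records become a “social environment” can be shown in miniature. The sketch below is emphatically not Gotham’s implementation; the contact list and hop threshold are invented. It only illustrates the underlying graph logic, in which mere proximity to a suspect is itself rendered as a pattern.

```python
# Hypothetical sketch of graph-based "social environment" analysis,
# assuming invented contact records; not Palantir's implementation.
import networkx as nx

# Toy contact records (e.g. from call logs): each pair is an observed link.
contacts = [("suspect_1", "person_a"), ("person_a", "person_b"),
            ("person_b", "person_c"), ("person_d", "person_e")]

G = nx.Graph()
G.add_edges_from(contacts)

# Everyone within two hops of the suspect falls into the "environment".
environment = nx.single_source_shortest_path_length(G, "suspect_1", cutoff=2)
for person, hops in sorted(environment.items(), key=lambda x: x[1]):
    if person != "suspect_1":
        print(f"{person}: {hops} hop(s) from suspect")
```

Note that person_b appears in the result without any direct contact with the suspect at all; the graph structure alone produces the suspicion.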
Targeting today no longer means only picturing a potential military target, representing it in an image in order to render it destructible; by operating on the level of data structures, within the behavioral data of whole populations, targeting goes far beyond the “world picture” in Rey Chow’s sense. Some of the technologies that the Israel Defense Forces have used in their assault on the Gaza Strip since October 2023 seem like the realisation of fantasies about the autonomy of “military AI” that have so far dominated military discourse, mainly from a US-American perspective. In April 2024, an investigative report by Yuval Abraham revealed that the IDF used an AI-supported system called Lavender to identify many of their targets for aerial bombardment. The magazine Foreign Policy described the system as a “mass assassination program of unprecedented size.”
The Lavender software analyzes information collected on most of the 2.3 million residents of the Gaza Strip through a system of mass surveillance, then assesses and ranks the likelihood that each particular person is active in the military wing of Hamas or PIJ. According to sources, the machine gives almost every single person in Gaza a rating from 1 to 100, expressing how likely it is that they are a militant.
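To make the reported logic concrete, consider the following deliberately schematic rendering. It is not the Lavender system; the features, weights, and threshold are invented for illustration, and only echo the anomalies named in the investigative report (such as SIM card changes or contacts with listed persons). What it shows is the bare shape of the mechanism: weighted anomaly counts clamped into a 1-to-100 rating, with everyone above a cutoff emitted as a target.

```python
# Schematic illustration of the logic described in the report, assuming
# invented features, weights, and thresholds; not the actual system.
FEATURES = {"sim_card_changes": 25,
            "contacts_with_listed_persons": 40,
            "address_changes": 10}

def rating(person: dict) -> int:
    """Clamp a weighted anomaly sum into the 1-100 range."""
    score = sum(FEATURES[f] * person.get(f, 0) for f in FEATURES)
    return max(1, min(100, score))

population = {
    "resident_a": {"sim_card_changes": 2, "contacts_with_listed_persons": 1},
    "resident_b": {"address_changes": 1},
    "resident_c": {},
}
THRESHOLD = 80
for name, data in population.items():
    r = rating(data)
    print(f"{name}: {r:3d}", "FLAGGED" if r >= THRESHOLD else "")
```

The decisive point is that every resident receives a rating, including resident_c, about whom nothing is known; being scored at all is the condition no one in the surveilled population can escape.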
Lavender apparently relies on large-scale surveillance of population data, in which machine-learning applications are used to identify anomalies (such as frequent SIM card changes or any contact with members of militant groups) and to visualise them as graphs. As the investigative research claims, the results produced by Lavender were treated as “orders” to be followed without question. The use of an advanced military AI system to target, in effect, an enclosed population as a whole, a population that has absolutely nowhere to go, presents a situation in which the closed circuit of military targeting has become complete. The knowledge produced through the extensive, if not total, surveillance of a population will never reveal anything other than its legitimacy as a target. This is the self-referentiality that Rey Chow has described as a “circuit of targeting […] that ultimately consolidates the omnipotence and omnipresence of the sovereign ‘self’” and that renders the other as “a target whose existence justifies only one thing, its destruction by the bomber.”