The New Battlespace: How Geospatial AI, Outdated Intelligence, and the Illusion of Oversight Are Reshaping Military Targeting

The Anatomy of a Data-Driven Tragedy and GeoAI Ethics

Twenty years ago, when I served as an Air Force analyst supporting U-2, Global Hawk, and Predator missions, the Law of Armed Conflict (LOAC) and Intelligence Oversight governed our every move. We did not measure target validation in milliseconds; we measured it in days. We would relentlessly monitor a single facility, building exhaustive "pattern of life" assessments to validate activities and guarantee our actions would not violate ethical and legal boundaries. The gravity of those lethal decisions is profound; even today, decades later, a handful of missions remain difficult for me to recall without feeling their weight.

Because I know the intense scrutiny and moral burden that goes into this work, I cannot imagine the sickening realization experienced by the fire team or the intelligence analysts who discovered their target in Minab, Iran, was not a military compound but the Shajareh Tayyebeh girls' school. Bearing the responsibility for getting a target that tragically wrong is an analyst's darkest nightmare, but the true, irreversible catastrophe was borne by the children and educators who lost their lives. They are the ultimate victims of a systemic failure we can no longer ignore.

Of course, I must acknowledge a stark logistical imbalance between my experience and the opening weeks of Operation Epic Fury. I operated during sustained, stable operations: the long tail of a conflict. Modern U.S. strategy, conversely, dictates a contemporary Blitzkrieg: strike fast with overwhelming force, knocking out the enemy's ability to retaliate so that subsequent operations are survivable. This opening stage creates an insatiable, immediate demand for massive, scaled target decks. When the political gears of war move faster than the human intelligence cycle can adapt, as they arguably did here, the research and analysis conducted before the conflict becomes the single point of failure. The military is forced to rely on whatever data is already sitting in the system.

In our previous Project Geospatial exploration, The New Battlespace: How Geospatial AI is Reshaping Military Intelligence, we mapped the profound technological shifts altering the modern intelligence landscape. This follow-on goes deeper to dissect the practical mechanics and ethical boundaries of a modern targeting failure. The theater of modern warfare has undergone a violent and irreversible paradigm shift, trading the deliberate, human-paced intelligence cycles of counterinsurgency for the algorithmic velocity of large-scale combat operations.

However, this acceleration has exposed a critical, systemic vulnerability within the defense intelligence apparatus: the dangerous and growing friction between hyper-fast target generation algorithms and stagnant, outdated foundational databases. This systemic dissonance was tragically illuminated by the recent bombing of the Shajareh Tayyebeh girls' school in Minab, Iran, during the opening weeks of Operation Epic Fury. This was a catastrophic failure born not from a misfired weapon, a rogue commander, or a malfunctioning targeting pod, but from an administrative scaling failure fueled by obsolete data and automated hubris.

From the perspective of GeoAI ethics, the incident underscores a terrifying reality about the modernization of warfare. The military has not completely handed over the keys to autonomous, AI-run targeting; rather, it is operating in a precarious hybrid state. It is currently superimposing twenty-first-century artificial intelligence, systems capable of generating thousands of targets in mere hours, over twentieth-century database architectures that rely on overwhelmed human analysts for manual updates.

Within this hybrid environment, the foundational checks and balances, the integrity of the human-in-the-loop, and the strict requirement to vet targets based on the currency of data are more vital than ever. In the case of the Minab school, the geographic coordinates within the United States military's targeting catalogs remained permanently tethered to an old classification as an active military compound. When the algorithmic kill chain was activated, the target was selected and prioritized by the machine, then rapidly rubber-stamped by human operators who simply lacked the time and resources to dispute the system's output. The human authorized the strike, but the algorithm dictated the reality. Within the classified enclaves of military intelligence and the halls of Congress, this tragedy has triggered a profound reckoning regarding how targets are maintained and how artificial intelligence is warping the very fabric of both the Law of Armed Conflict (LOAC) and foundational Intelligence Oversight.

The Traditional Architecture of Target Intelligence

To understand how a ten-year-old data error survives to become a fatal targeting directive, one must examine the fragmented, highly specialized ecosystem of the defense intelligence community. In times of relative peace, the Department of Defense (DoD) traditionally divides the responsibility of updating military targets across multiple agencies, categorized strictly by intelligence disciplines. This division of labor is intended to create a holistic, multi-spectral view of an adversary's capabilities, but it inherently creates silos that are difficult to synchronize.

The collection, analysis, and maintenance of foundational military intelligence are primarily divided among three major combat support agencies (a gross oversimplification, but one that highlights the overall concept):

  • Defense Intelligence Agency (DIA): Aggregates GEOINT, SIGINT, and human intelligence into a unified order of battle. Crucially, as the custodian of the DoD's primary intelligence repositories, the DIA holds the ultimate administrative responsibility for synthesizing this multi-source data and definitively assigning the functional purpose (and therefore the legal status) of a facility.

  • National Geospatial-Intelligence Agency (NGA): Exploits overhead imagery to verify the physical presence, structural integrity, and layout of facilities. Identifies new construction or physical alterations to fixed sites.

  • National Security Agency (NSA): Intercepts electronic communications and telemetry. Updates targets by correlating electromagnetic activity with specific geographic locations.

In an ideal, resource-infinite environment, these agencies operate in continuous synergy. However, this system requires continuous input and active, human-driven synthesis. When the Iranian facility ceased to be a military compound a decade ago, the NSA likely observed a total cessation of military signals intelligence. The site went "dark." Meanwhile, the NGA's satellite imagery would have shown the physical buildings remaining largely intact. Unless a facility is physically demolished, civilian modifications to a former military base do not immediately register as a demographic shift from low-earth orbit.

Because the facility was no longer an active threat and no longer broadcasting military signals, it dropped to the absolute bottom of the intelligence community's priority queue. Without a dedicated analyst to actively fuse the imagery with the absence of signals and manually alter the facility's metadata in the central database, the location retained its original, lethal designation.

The Modernized Integrated Database (MIDB) and the Cataloging of War

The repository where these lethal designations live is the Modernized Integrated Database (MIDB). Built in the 1980s and designed for a vastly different era of data consumption, the MIDB is the DoD's authoritative, all-source repository of worldwide general military and targeting intelligence. It logs every known piece of adversarial equipment, geographic facility, and command hierarchy across the globe. Crucially, however, strict domestic intelligence oversight policies dictate that it intentionally omits the locations of friendly U.S. forces and equipment. This creates structural blind spots: blank spaces on the map that human analysts learn to navigate through tribal experience, but which represent dangerous voids for automated systems. While real-time systems like Blue Force Trackers (BFT) are designed to de-conflict friendly positions, relying on active transponders as the only failsafe against an AI generating mass targets from a fundamentally blind database creates a catastrophic liability.

The MIDB relies on classical database structures reminiscent of complex spreadsheets rather than dynamic, relational knowledge graphs that can automatically update based on new data ingestion. Entries are categorized using specific functional Category Codes (CATCODEs). The fatal flaw of the MIDB architecture is that it lacks an automated mechanism to decay or flag the confidence level of a target based solely on the passage of time. A CATCODE for a military base entered and vetted in 2014 carries the exact same structural weight and authority in the database as a CATCODE vetted in 2024, unless a human explicitly reviews and updates it.
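
To make that structural flaw tangible, here is a minimal sketch in Python (the schema, field names, and coordinates are all invented for illustration; the real MIDB is classified and vastly more complex) of how a flat record carries a decade-old CATCODE with undiminished authority:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FacilityRecord:
    """Hypothetical stand-in for a flat, MIDB-style entry (invented schema)."""
    be_number: str      # facility identifier
    catcode: str        # functional category code
    lat: float
    lon: float
    last_vetted: date   # when a human last confirmed the CATCODE

# A record vetted in 2014 and one vetted in 2024 are structurally identical:
# nothing in the schema distinguishes fresh intelligence from decade-old data.
stale = FacilityRecord("IR-0001", "MILITARY_COMPOUND", 27.15, 57.08, date(2014, 3, 1))
fresh = FacilityRecord("IR-0002", "MILITARY_COMPOUND", 27.20, 57.10, date(2024, 3, 1))

# Any query that filters only on CATCODE returns both with equal authority
# and no staleness warning.
candidates = [r for r in (stale, fresh) if r.catcode == "MILITARY_COMPOUND"]
print(len(candidates))  # 2
```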

The Prioritization Paradox, Database Rot, and Institutional Cuts

To understand how such an error persists, one must examine how the military triages its data within the "Analysis and Production" phase of the intelligence cycle. The military intelligence community does not possess an infinite number of analytical cells. Consequently, target maintenance is ruthlessly prioritized based on the immediate threat an entity poses to national security.

Target updating frequencies generally follow a triage matrix based on operational necessity:

  • Primary Targets: Strategic assets capable of inflicting massive retaliation, such as nuclear facilities, active airfields, or command-and-control bunkers. These demand extremely high maintenance.

  • Secondary Targets: Critical infrastructure supporting an adversary's war machine without posing an existential threat, such as power plants or logistical hubs.

  • Lower-Priority/Administrative Targets: Entities that represent a negligible threat in peacetime, including lower-level administrative buildings, disused motor pools, and former training grounds.

The Shajareh Tayyebeh school occupied a footprint that had once been a military compound. It is crucial to understand that this facility was not actively being tracked as a priority target for ten years. Rather, it was such a low priority that it simply sat dormant in the database, untouched and un-updated, while the civilian school was built on top of the old base infrastructure.

In the zero-sum game of intelligence resource allocation, verifying the status of an abandoned facility is viewed as a misallocation of highly specialized labor. Compounding this issue, the DIA analytical cells and supporting organizations responsible for this specific aspect of the intelligence cycle (routine database maintenance and entity-level target updating) have faced budget and personnel cuts over the last year. This has significantly reduced analytical oversight, making the routine auditing of lower-tier targets nearly impossible. These cuts push dedicated intelligence professionals into an impossible operational triage: they bear the moral responsibility of maintaining target accuracy while being structurally denied the time and personnel required to actually do so.

Consequently, these targets fall into a state of deep database rot. Traditionally, the military accepts the risk of this rot because these targets are so low-priority they would never be manually nominated to a Candidate Target List (CTL) for a strike. However, when an algorithmic targeting system unexpectedly resurrects these entries to meet a quota for mass strikes, the rot becomes weaponized.
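
As a back-of-the-envelope illustration of how this triage produces rot (every number below is invented for the example), a review budget allocated strictly by threat tier mathematically guarantees that the bottom of the queue is never audited:

```python
# Invented figures: annual analyst-review budgets and queue sizes per tier.
REVIEW_BUDGET = {"primary": 5000, "secondary": 1500, "low_priority": 200}
QUEUE_SIZE = {"primary": 4000, "secondary": 6000, "low_priority": 250000}

def average_review_gap_years(queue: int, annual_budget: int) -> float:
    """Average years between reviews when a tier shares one annual budget."""
    return queue / max(annual_budget, 1)

for tier in REVIEW_BUDGET:
    gap = average_review_gap_years(QUEUE_SIZE[tier], REVIEW_BUDGET[tier])
    print(f"{tier}: one review every {gap:,.1f} years on average")
# primary: one review every 0.8 years on average
# secondary: one review every 4.0 years on average
# low_priority: one review every 1,250.0 years on average -- effectively never
```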

The Vetting Bottleneck and the LOAC Filter

In traditional military doctrine, the presence of outdated, lower-priority targets in the MIDB was mitigated by a rigorous human vetting process. Before a geographic coordinate could be transitioned from a database entry to a smoking crater, it had to survive the Joint Targeting Cycle. During Phase 2 (Target Development) and Phase 3 (Capabilities Analysis), proposed targets run a multi-phase gauntlet designed to weed out errors and ensure strict compliance with the Law of Armed Conflict (LOAC).

LOAC is governed by four core principles: Military Necessity, Distinction (separating combatants from non-combatants), Proportionality, and Humanity. Target selection typically screens for LOAC violations based on the assigned category of the target.

This is where outdated data structurally breaks the LOAC filter. If the database classifies a facility as an abandoned, lower-priority military administrative building, the Collateral Damage Estimation (CDE) software calculates the strike as a low-risk CDE Level 1 or Level 2. Because it is categorized as low risk, the strike authority remains at a lower tactical level, completely bypassing the rigorous legal scrutiny required for a high-risk CDE Level 5 strike. The system mathematically assumes Distinction has been met because of the old CATCODE.

Had the database accurately reflected that the facility was a civilian school, the CDE process would have immediately identified it as a protected entity, placing it permanently on the command's No-Strike List (NSL). Even if occupied by enemy forces, striking it would register as a CDE Level 5, requiring explicit intervention and proportionality approval from the Combatant Commander or Secretary of Defense. By relying on a ten-year-old category, the LOAC filter became blind to reality.

CDE levels, risk profiles, and decision-making authorities:

  • CDE Level 1: Low risk. Target is isolated; minimal exposure for civilian populations. Authority: lower-level tactical commanders.

  • CDE Level 2: Minor risk. Precision weapons required to mitigate secondary effects. Authority: tactical commanders.

  • CDE Level 3: Moderate risk. Blast radius encroaches on non-combatant structures. Authority: operational commanders (e.g., brigade/division).

  • CDE Level 4: High risk. Significant chance of collateral damage requiring specific weapon fusing. Authority: Joint Task Force Commander.

  • CDE Level 5: Extreme risk. Significant potential risk of civilian casualties; dual-use facilities. Authority: high-level hierarchical authority (General Officer / SECDEF).
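
Read as software, the list above is a gating function. The sketch below (category names and mappings are simplified inventions; real CDE methodology weighs weaponeering, population density, and far more) shows how the stored category alone can determine the approval path:

```python
NO_STRIKE_CATEGORIES = {"SCHOOL", "HOSPITAL", "RELIGIOUS_SITE"}

def estimate_cde_level(catcode: str) -> int:
    """Toy mapping from functional category to a CDE level."""
    if catcode in NO_STRIKE_CATEGORIES:
        return 5                 # protected entity: extreme risk, NSL review
    if catcode == "MILITARY_COMPOUND":
        return 1                 # presumed isolated military target
    return 3

def approval_authority(cde_level: int) -> str:
    """Who must sign off, keyed to the levels above."""
    return {1: "lower-level tactical commander",
            2: "tactical commander",
            3: "operational commander",
            4: "Joint Task Force Commander",
            5: "General Officer / SECDEF"}[cde_level]

# The same physical building under two database states:
print(approval_authority(estimate_cde_level("MILITARY_COMPOUND")))
# -> lower-level tactical commander (the strike never sees senior review)
print(approval_authority(estimate_cde_level("SCHOOL")))
# -> General Officer / SECDEF
```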

AI Saturation, the Scaling Problem, and the Hybrid Reality

To bridge the massive chasm between the demand for mass targets in Large-Scale Combat Operations (LSCO) and the severe human vetting bottleneck, the Department of Defense turned to Silicon Valley. The solution was the aggressive integration of Artificial Intelligence (AI) and Machine Learning (ML) into the targeting workflow, a movement pioneered by Project Maven and commercial defense contractors like Palantir.

The military is undoubtedly operating in a hybrid state, heavily reliant on both human analysts and algorithmic assistance. However, because credible reports suggest that AI models, specifically Anthropic’s Claude embedded within the Maven Smart System, were directly involved in proposing targets for Operation Epic Fury, we must take the ethical implications seriously.

This capability allowed the U.S. military to bypass the traditional 20-target/10-day human bottleneck. As recently noted by the Pentagon’s Chief Digital and AI Officer, Cameron Stanley, Maven has successfully consolidated "eight or nine" legacy intelligence systems into a single visualization tool, collapsing the kill chain from hours to mere minutes.

The Acceleration of Error: Efficiency vs. Oversight

Because the operational demand for targets in LSCO vastly outstrips the human workforce's ability to update them, a dangerous cascading effect occurs. When the Maven Smart System was tasked with generating mass targets, it inevitably pulled from the MIDB. Because the system required immense volume, it did not stop at primary targets. The target selection algorithms dug deep into historically neglected target lists, essentially scraping the bottom of the database to find command nodes, logistical hubs, and administrative centers. As the AI pushed more and more of these scraped, unverified coordinates into the active strike queue, the errors cascaded.

When the algorithm encountered the coordinates for the school in Minab, it did not see a classroom; it simply matched the query to the MIDB's outdated classification: Military Compound.

AI systems are fundamentally trained to be highly efficient at the specific tasks they are given; in this case, matching query parameters to database entries to generate volume. They do not possess inherent ethical policy management. An algorithm designed to maximize target output does not naturally screen those targets for LOAC compliance or Intelligence Oversight restrictions, whether that means recognizing a ten-year-old civilian demographic shift or understanding that undocumented military equipment might actually belong to our own forces. In reality, applying these legal and ethical screens is a tough logistical challenge even in traditional manual workflows; expecting an efficiency-driven AI to organically navigate these nuances without explicit, hardcoded guardrails is a recipe for disaster.
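
A hedged sketch of that gap (all names and structures are invented): an efficiency-driven selector ranks on match confidence and nothing else, so any data-currency screen must be bolted on as a separate, explicit filter:

```python
from datetime import date

REFERENCE_DATE = date(2025, 6, 1)   # fixed "today" for reproducibility

def select_targets(records: list[dict], quota: int) -> list[dict]:
    """Efficiency-driven selector: ranks purely on match confidence.
    Nothing in this function asks how old the underlying data is."""
    ranked = sorted(records, key=lambda r: r["match_score"], reverse=True)
    return ranked[:quota]

def currency_guardrail(records: list[dict], max_age_days: int = 365) -> list[dict]:
    """Explicit, hardcoded policy filter: drop anything not recently verified."""
    return [r for r in records
            if (REFERENCE_DATE - r["last_vetted"]).days <= max_age_days]

records = [
    {"id": "IR-0001", "match_score": 0.95, "last_vetted": date(2014, 3, 1)},
    {"id": "IR-0002", "match_score": 0.90, "last_vetted": date(2024, 11, 1)},
]

# Without the guardrail, the stale 2014 record wins on confidence alone;
# with it, that record never reaches the strike queue.
print([r["id"] for r in select_targets(records, quota=2)])                      # ['IR-0001', 'IR-0002']
print([r["id"] for r in select_targets(currency_guardrail(records), quota=2)])  # ['IR-0002']
```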

This is the crux of the scaling problem: AI does not merely speed up target discovery; it violently accelerates the manifestation of historical data errors. In a hybrid LSCO scenario, where thousands of targets are processed in a matter of days, the sheer volume flattens the distinction between a freshly vetted target and a ten-year-old database ghost.

Cognitive Offloading and the Myth of the Human-in-the-Loop

The defense establishment consistently defends the integration of AI in targeting by citing the doctrine of the "human-in-the-loop." Proponents argue that AI merely recommends targets, while human operators retain meaningful control.

In the context of the Minab school bombing, this defense collapses under the weight of volume. The incident exposes the phenomenon of "cognitive offloading": the psychological reality in which humans, when overwhelmed by data and operating under extreme time constraints, defer to the judgment of an automated system.

This reality has prompted severe congressional scrutiny. Over 120 House Democrats recently demanded answers from the Pentagon regarding Operation Epic Fury, asking specifically if Maven was used to identify the Shajareh Tayyebeh school and "did a human verify the accuracy of this target?"

In an LSCO environment, where an AI system collapses the kill chain to minutes, the human-in-the-loop is reduced to a bureaucratic bottleneck. If an operator is presented with a target coordinate and a system-generated confidence score of 95%, accompanied by a metadata tag reading "Military Compound," the operator simply does not have the time, bandwidth, or alternate intelligence feeds to verify the data's currency. Operators are no longer making deliberate decisions; they are merely managing the AI's output queue. This offloading is not a failure of individual diligence or a lack of care by the warfighter; it is an involuntary, psychological survival mechanism triggered by being buried under an avalanche of algorithmic output.

It is highly unlikely that the military will formally eliminate traditional target vetting or the requirement for human authorization within the next five years. The strict legal frameworks of LOAC and domestic Intelligence Oversight demand a human-in-the-loop. However, maintaining the policy of human vetting while aggressively scaling AI target generation creates a dangerous illusion of safety. When an analyst is tasked with reviewing hundreds of machine-generated targets in a single shift during Large-Scale Combat Operations (LSCO), the rigorous, multi-day intelligence synthesis of the past is mathematically impossible. The vetting process devolves into an administrative rubber stamp. Analysts are forced to implicitly trust the AI's high confidence scores and outdated foundational data, transforming the crucial human safeguard into a mere procedural checkpoint.
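
The arithmetic makes the impossibility concrete. Using assumed but plausible figures (a 400-target queue in an eight-hour shift) against the 20-target/10-day legacy baseline cited earlier:

```python
# Assumed workload: one analyst, one eight-hour shift, 400 machine-generated targets.
targets_per_shift = 400
shift_seconds = 8 * 3600
print(f"{shift_seconds / targets_per_shift:.0f} seconds per target")  # 72

# Contrast with the pre-AI baseline: roughly 20 targets vetted in 10 days.
legacy_hours_per_target = 10 * 24 / 20
print(f"{legacy_hours_per_target:.0f} hours per target under the legacy cycle")  # 12
```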

It is crucial to emphasize that in this hybrid era, human commanders still hold the ultimate authority to "press the red button." AI systems are not yet independently selecting targets and automatically firing weapons without human authorization. However, this technical distinction offers a false sense of security. Indirectly, through our inability to update and verify the pedigree of target information at the immense scale and velocity that AI generates it, the targeting process might as well be fully autonomous. The human finger may authorize the strike, but an algorithmic engine, fed by outdated data, has already predetermined the casualty.

The Ethics of Algorithmic Lethality and Parallel Oversight Failures

The use of commercial AI in lethal targeting has exposed a deep, ideological rift between the technology sector and the Pentagon. Viewed through the lens of GeoAI ethics, the clash centers entirely on the ethical responsibility of maintaining accurate spatial data.

While much of the focus rests on LOAC, the scaling of AI targeting also undermines parallel Intelligence Oversight policies. Frameworks like Executive Order 12333 and DoD Manual 5240.01 dictate how intelligence is collected, retained, and strictly verified to prevent the misuse of data. These rules create intentional bureaucratic friction. AI scaling inherently bypasses this friction, pulling vast amounts of aged data without the institutional capability to verify it against current oversight standards.

Anthropic, a company founded on principles of AI safety, maintains strict usage restrictions against its models being used for autonomous lethal targeting or domestic mass surveillance. When reports surfaced that Claude was instrumental in generating the Iranian target lists, Anthropic objected, leading to a spectacular breakdown in relations with the defense establishment.

The fallout was rapid and public. The dispute reached the highest levels of government, starkly illustrating the military's prioritization of algorithmic velocity over civilian-sector ethical constraints regarding spatial fairness and data provenance. This schism highlights a profound misunderstanding of risk: the technology sector understands that spatial data biases can result in catastrophic outcomes, while the Pentagon views such ethical restrictions as an existential threat to its operational speed.

The Transition from MIDB to MARS: Building on Sand

The defense intelligence community is not blind to the inadequacies of the MIDB. Recognizing that a text-heavy, 1980s-era database cannot sustain the data demands of AI warfare, the DIA initiated the development of the Machine-Assisted Analytic Rapid-Repository System (MARS).

However, the transition has created a dangerous liminal space. The most perilous aspect is the process of data migration. Because it is impossible for humans to manually re-vet millions of database entries before the transfer, MARS will inevitably ingest the exact same outdated, unverified, historically lower-priority target data. If MARS ingests bad data, and combat AI systems plug directly into it for target generation, the speed of error is simply magnified.
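
If the pipeline cannot re-vet millions of entries, it can at least refuse to pass them through silently. A minimal sketch, assuming an invented schema and an invented three-year trust horizon, of an ingest gate that tags unverified legacy entries instead of granting them fresh authority:

```python
from datetime import date

VERIFICATION_HORIZON_DAYS = 3 * 365   # assumed cutoff for "trusted" legacy data

def migrate_record(record: dict, migration_date: date) -> dict:
    """Copy a legacy entry into the new store without silently granting it
    fresh authority: stale records arrive pre-flagged as strike-ineligible."""
    migrated = dict(record)
    age_days = (migration_date - record["last_vetted"]).days
    migrated["provenance"] = "legacy_migration"
    migrated["strike_eligible"] = age_days <= VERIFICATION_HORIZON_DAYS
    return migrated

legacy = {"id": "IR-0001", "catcode": "MILITARY_COMPOUND",
          "last_vetted": date(2014, 3, 1)}
print(migrate_record(legacy, date(2025, 6, 1))["strike_eligible"])  # False
```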

The Architecture of Future Targeting

The bombing of the girls' school in Iran represents a watershed moment in the evolution of algorithmic warfare. It is a stark, empirical demonstration that the integration of artificial intelligence into the sensor-to-shooter kill chain is not a panacea for the scaling problems of modern combat. Rather, it is a magnifying glass that exponentially enlarges the latent flaws, bureaucratic latencies, and data decay inherent in legacy military intelligence systems.

As the Department of Defense continues its pivot toward Large-Scale Combat Operations, the demand for target volume will only increase. While the military will undoubtedly keep humans in the loop to satisfy legal doctrines, the intelligence community's inability to meaningfully investigate more than a fraction of AI-generated targets ensures that rubber-stamping will become standard operating procedure. We are not facing a future of rogue, autonomous weapons; we are facing a future of automated bureaucracy, where human operators are systematically overwhelmed into authorizing lethal errors they cannot see.

To prevent further tragedies and uphold the core principles of GeoAI ethics, the defense establishment must recognize that target saturation is a spatial data management problem. Fixing this architecture is not just about operational efficiency or legal compliance; it is a profound ethical imperative: to protect our own analysts and commanders from the devastating moral injury of executing a flawed algorithmic command and, above all, to ensure that the catastrophic price of our administrative failures is never again paid in innocent civilian blood. The transition to cloud-native architectures must be aggressively accelerated, with an intense focus on the algorithmic purging and re-vetting of historically neglected target lists.

Furthermore, the concept of the "human-in-the-loop" must be radically redefined. If a meaningful human-in-the-loop is truly unattainable at the sheer velocity required for LSCO, the military must substitute that human friction with strict, automated policy management. The system architecture itself must be mandated to identify targets relying on aged, unverified data and automatically flag or quarantine them from the active strike queue. Ideally, targeting databases should employ automated tagging that visibly degrades a target's confidence score if it has not been verified within specific, time-based thresholds, such as 3 months, 6 months, or 1 year or more, depending on the target category.
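
One way to express that mandate in code, as a sketch only (the thresholds mirror the 3-month/6-month/1-year figures above; the half-life decay curve itself is my own assumption):

```python
from datetime import date

# Verification thresholds in days, per the time-based tiers suggested above.
VERIFY_WITHIN_DAYS = {"primary": 90, "secondary": 180, "low_priority": 365}

def decayed_confidence(base: float, last_vetted: date,
                       category: str, today: date) -> float:
    """Halve the displayed confidence for every threshold period that
    elapses without human re-verification (the half-life is an assumption)."""
    periods_elapsed = (today - last_vetted).days / VERIFY_WITHIN_DAYS[category]
    return base * 0.5 ** periods_elapsed

today = date(2025, 6, 1)
print(f"{decayed_confidence(0.95, date(2014, 3, 1), 'low_priority', today):.4f}")
# ~0.0004 -- a decade-old entry can no longer present itself as 95% confident
```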

Until the velocity of intelligence validation matches the velocity of target generation, the military will continue to fight at a dangerous deficit of truth. Outdated data is not merely an administrative error; it is a weapon of mass tragedy, fired blindly into the fog of an algorithmic war.

Footnote: It is a fascinating parallel that Anthropic's two primary restrictions inadvertently serve as commercial analogs for the military's two most critical compliance frameworks. The prohibition on autonomous lethal targeting acts as a safeguard for the Law of Armed Conflict (LOAC), while the prohibition on domestic mass surveillance directly reinforces traditional Intelligence Oversight policies designed to protect civil liberties. By enforcing these corporate ethical boundaries, the tech sector is effectively attempting to function as an external backstop for established military and intelligence legal doctrines. It calls into question why the administration treats these safeguards as blockers at all.
