A school was bombed in the southern Iranian town of Minab on the first day of US and Israeli strikes against Iran. More than 175 people died, including schoolchildren. Neither Washington nor Tel Aviv has said what happened. Nobody has taken responsibility. One question keeps coming up: Did an AI system hit the wrong target?
In the first 24 hours, US forces hit around 1,000 sites, about 42 per hour. A CSET report on the 18th Airborne Corps found that, with the Maven Smart System, 20 people could do work that once needed a 2,000-person team at the Combined Air Operations Center in Iraq. By late 2024, the US had put a large language model, the same type of software behind consumer AI chatbots, inside Maven. This was among the first wars to use that technology in targeting.
The US military says it is still looking into the Minab strike. It has not said what part, if any, the AI system had in sending a missile at that building.
The New York Times reported early on that the system may have been running on old data. Foreign correspondent Louisa Loveluck wrote on X that the strike may have been “based on intel that is a decade old,” and that anyone checking recent satellite images freely online would have seen “a school with a sports field” at one of the sites marked for attack.
Computer scientist Anh Totti Nguyen has researched where AI vision systems go wrong. A paper he co-authored, “Vision language models are blind: Failing to translate detailed visual features into words,” found that these systems often fail when two structures sit close together and the model has to work out which is which.
Satellite images from the New York Times show the Shajarah Tayyebeh elementary school sitting right next to an IRGC compound in Minab, the exact setup Nguyen’s research flagged.
Emilia Probasco, a former Navy officer and senior fellow at the Center for Security and Emerging Technology, said on The Four Cast podcast that the responsibility falls on the commander who gave the order. That is how the military works. She said the black box problem, not being able to see how an AI system reached its answer, is “an ongoing area of research, not a solved one.”
Before the war, Anthropic, the company whose technology sits inside Maven, got into a contract dispute with the Defense Department over two things: whether AI is reliable enough for life-or-death calls, and whether using AI to connect scattered data points turns it into a mass surveillance tool.
Probasco said both concerns hold up, but noted “the awkwardness of a private company drawing lines around how a military conducts its operations.”
Holland Michel said the conversation keeps drifting toward the worst-case picture: a machine that picks targets and fires with no human involved. That risk is real, he said, but it is not what is happening now.
“The harder, more immediate work,” he said, “is making AI systems more transparent and ensuring that humans who rely on their outputs are making genuinely informed decisions, not simply deferring to whatever the machine suggests.”
BBC Verify tracked AI-made videos and doctored satellite images about the conflict that together pulled in hundreds of millions of views.
Timothy Graham, a digital media researcher at the Queensland University of Technology, said: “The scale is truly alarming and this war has made it impossible to ignore now.” He added, “What used to require professional video production can now be done in minutes with AI tools. The barrier to creating convincing synthetic conflict footage has essentially collapsed.”
X said it would cut creators from its payment scheme if they posted AI-made war footage without a label. Mahsa Alimardani, a researcher at the Oxford Internet Institute who covers Iran, called it “a notable signal that they’ve noticed that this is a big problem.” Meta and TikTok did not reply when asked if they planned to do the same.