The deaths of over 165 schoolgirls in Iran's Minab have sparked a debate over AI's deployment in warfare. AI can identify targets faster than humans, but can it truly be trusted in war? Even a small error can cost hundreds of civilian lives. We analysed the available facts to understand whether AI played a role in the Iran school bombing.

People carry coffins as they attend the funeral of the victims following a strike on a school in Iran's Minab. (Image: Reuters)
When the US and Israel launched joint strikes on Iran, missiles landed where they should never have — the Shajareh Tayyebeh girls' elementary school, in the city of Minab in southern Iran. Over 165 schoolgirls, aged seven to 12, were killed along with teachers and staff. The incident, one of the deadliest civilian tolls of the Iran war, has sparked global outrage. The US, which was engaging targets in southern Iran on February 28, is being blamed for the attack. Amid the accusations, there is scrutiny over whether advanced AI systems led to the wrong target being identified.
US President Donald Trump, in a recent interview, dismissed allegations that US forces were responsible for the killing of 165 schoolgirls. "We think it was done by Iran because they're very inaccurate with their munitions," he said. He said that the US had no intent to target civilians.
However, evidence from independent analyses and expert assessments points to US involvement.
The strike has raised questions about the integration of generative AI like Anthropic's Claude into military operations. In the fog of war, information is often incomplete or contested. So we will examine only the established facts to assess whether AI contributed to the tragedy.
The US used AI to conduct 900 strikes on Iran in 12 hours by shortening the "kill chain".
THE MINAB SCHOOL STRIKE: US ROLE LIKELY, SAY REPORTS
The attack on the school came shortly after the US initiated Operation Epic Fury, with Israel targeting Iran's top leadership, military bases, and nuclear sites.
Satellite imagery and geolocated videos analysed by CNN suggest the school in Minab was hit around the same time as strikes on a nearby Islamic Revolutionary Guard Corps (IRGC) naval base.
Sam Lair, a research associate at the James Martin Center for Nonproliferation Studies, reviewed the munitions visible in the footage and said they were consistent with the US Tomahawk Land Attack Missile, which is used exclusively by American forces in the region.
CNN's investigation concluded that the US military was "likely responsible" for the strike that killed the Iranian schoolgirls.
Then, there are claims of a "double-tap" attack.
Iran's Mehr News, a semi-official Iranian news agency, and independent outlets like Middle East Eye, reported that the school was bombed twice, with the second strike occurring approximately 40 minutes after the first, targeting survivors who had gathered in the school's prayer hall.
Reuters Connect also documented the site as "bombed twice, 40 minutes apart", based on local reports and imagery.
Many have argued that the "double tap" pattern suggests intent rather than error. Such strikes are a tactic used to maximise casualties among first responders.
In active conflict, verifying such details is challenging, and the Pentagon has declined to comment on specifics.
The White House has not outright denied US involvement but has reiterated that any civilian deaths are regrettable and unintended.
Defence Secretary Pete Hegseth, a vocal proponent of AI integration in warfare, emphasised the military's commitment to precision, yet his office has provided no details on the Minab killings.
WHAT ROLE DID AI PLAY IN THE STRIKES ON IRAN?
The integration of AI into the US operation against Iran has come under question.
Anthropic's Claude AI was "embedded" in the Iran operation from the start, assisting with intelligence assessments, target identification, and battle simulations, reported The Wall Street Journal.
This occurred mere hours after Trump banned federal use of Anthropic's tools, labelling the company a "Radical Left AI company" for refusing to remove safeguards against autonomous weapons and mass surveillance. Despite the ban, Claude remained operational through partnerships like Palantir's Maven Smart System, which processed over 1,000 targets in the first 24 hours.
Claude suggested hundreds of targets in Iran, provided coordinates, and even flagged priority targets based on real-time data from satellites and surveillance, The Washington Post reported.
According to an analysis by the NYT, "The school at one point was part of the Revolutionary Guards' naval base, according to satellite images from 2013". However, by September 2016, the same building was partitioned off and was no longer connected to the IRGC base.
It is possible the AI system did not factor in this development.
Another analysis, by the New York-based news website Quartz, said: "Targeting errors aren't new, but the introduction of generative AI into the targeting chain is. This is technology that still hallucinates facts, misreads images, and stumbles over reasoning in low-stakes commercial settings."
CLAUDE AI, VIA PALANTIR, USED TO SHORTEN THE KILL CHAIN
Israel has previously used AI in the Gaza conflict. An article in the British newspaper The Guardian, headlined "The machine did it coldly", described AI systems identifying 37,000 targets with minimal human oversight, leading to civilian casualties.
The US used AI to strike 1,000 targets in 24 hours in Iran, a process experts suggest could otherwise have taken weeks. Anthropic's Claude helped shorten the "kill chain": the process of locating a target, obtaining approval, and launching the strike.
The US military uses Claude via Anthropic's partnership with war-tech firm Palantir. The AI tool is embedded in Palantir's Maven Smart System.
In the run-up to the first strikes on Iran, The Washington Post reported that Maven, powered by Claude, generated a list of "hundreds" of potential targets for the US military. The targets were ranked by priority and included precise location coordinates. The system also recommended specific weapons for each site, factoring in available stockpiles and how those weapons had performed against similar targets in previous operations.
Artificial intelligence's role on the battlefield is still too novel for its reliability in high-stakes scenarios to have been properly studied. Yet AI is already pushing the decision to strike towards the "speed of thought".
Even a small error rate means hundreds of casualties: at 1,000 targets in 24 hours, an error rate of just one per cent would mean 10 wrongly struck sites.
Hegseth has pushed for "aggressive AI adoption" in US military operations. The technology is clearly being rushed: there have been very few real-world battlefield deployments of AI from which to learn.
There is too much opacity to say definitively whether an AI blunder caused the deaths of the schoolgirls in Minab. But on the available evidence, AI, despite its rapid evolution over the past decade, cannot yet be fully trusted in a war zone, whatever its speed and near accuracy. The mass graves of schoolgirls raise that question.
- Ends
Published By:
Anand Singh
Published On:
Mar 10, 2026 08:39 IST
