Student's Death in US Airstrike Raises Questions About AI-Assisted Targeting
Written by Black Hot Fire Network Team on March 10, 2026
Abdul-Rahman al-Rawi, a 20-year-old construction student, was killed in February 2024 when a US missile struck in al-Qaim, a town in western Iraq near the Syrian border. His death came during a wave of US strikes on 85 targets linked to Iraqi government-aligned forces and Iranian-backed militias in Iraq and Syria.
The Attack and Its Aftermath
Al-Rawi was standing near a car when the missile struck. His brother, Anmar al-Rawi, described the devastating aftermath, saying it took two days to recover all of his brother's remains. The strikes were retaliation for an earlier drone attack on a US base in northern Jordan that killed three American soldiers. US officials later sent the family a letter of condolence, acknowledging that Abdul-Rahman's death was a mistake.
The Role of Artificial Intelligence
The US military initially boasted that the operation used state-of-the-art AI technology to pinpoint targets with precision, drawing on a program known as Project Maven. It has since emerged, however, that up to three innocent bystanders may have been killed in the attacks. Centcom stated it has "no way of knowing" whether the strike that killed Abdul-Rahman involved AI-assisted targeting, a claim experts find troubling. This investigation suggests Abdul-Rahman may be the first acknowledged civilian victim of an AI-assisted airstrike.
Project Maven and Concerns About Warfare
Project Maven, the US military's flagship program for integrating machine learning, has drawn scrutiny over the armed forces' growing reliance on AI. Experts warn of "automation bias," in which humans uncritically trust computer-generated outputs, and of potential de-skilling among military personnel. The use of AI in warfare also raises ethical questions about human judgment and accountability.
AI and Recent Operations in Iran
Since February 2024, AI-assisted targeting has reportedly been used widely in attacks across Iran, resulting in significant civilian casualties. Palantir's Maven Smart System (MSS) has been paired with Anthropic's Claude AI to help carry out these strikes. How much AI influenced the decision to launch the strike that killed Abdul-Rahman remains a key question.
Legal and Ethical Challenges
Anthropic recently rejected demands from the Department of Defense to allow its AI tools to be used for domestic surveillance and fully autonomous weapons. The company has launched legal action against the Trump administration, seeking to overturn a decision labeling it a “supply chain risk.” These developments highlight the growing legal and ethical challenges surrounding the use of AI in military applications.