Abdul-Rahman al-Rawi, a 20-year-old construction student, was killed in February 2024 in al-Qaim, western Iraq, when a US missile struck. His death came during a wave of 85 coordinated US airstrikes against Iraqi government-aligned forces and Iranian-backed militias in Iraq and Syria.
Al-Rawi was standing near a car when the missile hit. His brother, Anmar al-Rawi, described the devastating aftermath: it took two days to recover all of his brother’s remains. The strikes were launched in response to an earlier drone attack on a US base in northern Jordan. US officials later sent the family a letter of condolence, acknowledging that Abdul-Rahman’s death was a mistake.
The US military initially said the operation had used state-of-the-art AI technology, developed under a program known as Project Maven, to pinpoint targets with precision. It has since emerged that as many as three bystanders may have been killed in the attacks. Centcom stated it has “no way of knowing” whether the strike that killed Abdul-Rahman involved AI-assisted targeting, a claim experts find troubling. This investigation suggests Abdul-Rahman is the first acknowledged civilian victim of an AI-assisted airstrike.
Project Maven, the US military’s flagship effort to integrate machine learning into targeting, has heightened concerns about the growing reliance on AI in military operations. Experts warn of “automation bias”, the tendency for humans to uncritically trust computer-generated outputs, and of de-skilling among military personnel. The use of AI in warfare also raises ethical questions about human judgment and accountability.
Since February 2024, AI-assisted targeting has reportedly been widely used in strikes across the Middle East, resulting in significant civilian casualties. Palantir’s Maven Smart System (MSS) has been paired with Anthropic’s Claude AI as part of this targeting infrastructure. How far AI influenced the decision to carry out the strike that killed Abdul-Rahman remains a key question.
Anthropic recently rejected demands from the Department of Defense to allow its AI tools to be used for domestic surveillance and fully autonomous weapons. The company has launched legal action against the Trump administration, seeking to overturn a decision labeling it a “supply chain risk.” These developments highlight the growing legal and ethical challenges surrounding the use of AI in military applications.