AI Against Humanity
Safety · March 12, 2026

A defense official reveals how AI chatbots could be used for targeting decisions

A Defense Department official has revealed that the US military may use generative AI chatbots for targeting decisions, raising significant ethical concerns about the role of AI in warfare.

According to a Defense Department official, the US military is considering generative AI systems for making targeting decisions in combat. AI chatbots could be employed to rank targets and provide recommendations, with a human retaining final oversight. The disclosure comes amid scrutiny following a deadly strike on an Iranian school. The Pentagon's 'Maven' initiative has already been using older AI technologies for data analysis, but generative AI introduces new risks because its outputs are less reliable. OpenAI, Anthropic, and xAI are named as potential providers of the models under consideration. The article underscores the urgent need for accountability and ethical safeguards before these technologies are deployed in warfare, where faster decision-making could lead to catastrophic outcomes.

Why This Matters

Deploying AI in military targeting carries risks beyond ordinary software failure: an unreliable model output can directly contribute to the loss of innocent lives. Integrating generative AI into targeting processes raises hard questions about accountability and the trustworthiness of AI outputs. Understanding these risks is essential to ensuring that AI technologies are used responsibly and do not worsen existing problems in warfare.

Original Source

A defense official reveals how AI chatbots could be used for targeting decisions

Read the original source at technologyreview.com
