The U.S. military was able “to strike a blistering 1,000 targets in the first 24 hours of its attack on Iran” thanks in part to its use of artificial intelligence, according to The Washington Post. The military has used Claude, the AI tool from Anthropic, combined with Palantir’s Maven system, for real-time targeting and target prioritization in support of combat operations in Iran and Venezuela.
While Claude is only a few years old, the U.S. military’s ability to use it, or any other AI, did not emerge overnight. The effective use of automated systems depends on extensive infrastructure and skilled personnel. It is only thanks to many decades of investment and experience that the U.S. can use AI in war today.
In my experience as an international relations scholar studying strategic technology at Georgia Tech, and previously as an intelligence officer in the U.S. Navy, I find that digital systems are only as good as the organizations that use them. Some organizations squander the potential of advanced technologies, while others can compensate for technological weaknesses.
Myth and reality in military AI
Science fiction tales of military AI are often misleading. Popular ideas of killer robots and drone swarms tend to overstate the autonomy of AI systems and understate the role of human beings. Success or failure in war usually depends not on the machines but on the people who use them.
In the real world, military AI refers to a broad collection of different systems and tasks. The two main categories are automated weapons and decision support systems. Automated weapon systems have some ability to select or engage targets on their own. These weapons are more often the subject of science fiction and the focus of considerable debate.
Decision support systems, in contrast, are now at the heart of most modern militaries. These are software applications that provide intelligence and planning information to human personnel. Many military applications of AI, including in current and recent wars in the Middle East, are for decision support systems rather than weapons. Modern combat organizations rely on countless digital applications for intelligence analysis, campaign planning, battle management, communications, logistics, administration and cybersecurity.
Claude is an example of a decision support system, not a weapon. It is embedded in the Maven Smart System, which is used widely by military, intelligence and law enforcement organizations. Maven uses AI algorithms to identify potential targets from satellite imagery and other intelligence data, and Claude helps military planners sort the information and decide on targets and priorities.
The Israeli Lavender and Gospel systems used in the Gaza war and elsewhere are also decision support systems. These AI applications provide analytical and planning support, but human beings ultimately make the decisions.