Targeting Assistance and the Speed of War
OpenAI’s recent agreement to equip the Pentagon with its artificial intelligence in classified settings, signed just over two weeks ago, has ignited a firestorm of questions. While CEO Sam Altman insists the technology won’t be used for autonomous weapons, the fine print allows the military to operate within its own existing, and notably flexible, guidelines. Assurances that the AI won’t be employed for domestic surveillance rest on equally shaky ground.
The motivations behind this rapid pivot remain opaque. Is it purely financial, with OpenAI seeking new revenue streams to fuel its hefty AI development costs? Or does Altman genuinely believe in his frequently stated ideology: that liberal democracies, and their armed forces, must possess the most advanced AI to counter China’s growing influence? Regardless of the ‘why,’ the ‘what next’ carries significant weight. OpenAI has now positioned itself squarely within the complex realities of modern warfare, at a time when US strikes against Iran, increasingly reliant on AI, are escalating.
The integration of OpenAI’s technology into classified military systems is still in its early stages; it must first be woven into the existing technological fabric used by the Pentagon. The process is further complicated by past controversies, such as President Trump’s order that the military stop using Anthropic’s AI after the company refused to allow its models to be used for “any lawful purpose.” That refusal led to Anthropic being designated a supply chain risk, a classification the company is now challenging in court.
Should the Iran conflict persist by the time OpenAI’s technology is fully deployed, its applications could be profound. One defense official suggested a scenario in which human analysts feed potential target data into an AI model. The model would then analyze that information, factoring in logistical details such as aircraft and supply locations, and suggest strike priorities. It could process diverse inputs, including text, images, and video. Crucially, a human would still be tasked with final verification of the AI-generated recommendations. This raises a critical question: if human oversight is truly rigorous, how much does AI actually accelerate targeting and strike decisions?
Defending Against Drones and the Stakes Involved
For years, the military has utilized systems like Maven, which can automatically analyze drone footage to identify potential threats. OpenAI’s models, much like Anthropic’s Claude, are likely to offer a conversational interface, enabling users to request interpretations of intelligence and recommendations in natural language. This represents a significant shift; while AI has long analyzed vast datasets for military insights, the use of generative AI for recommending battlefield actions is being tested in earnest for the first time in the context of Iran.
The stakes are undeniably high. The tragic loss of six US service members in Kuwait on March 1, following an Iranian drone attack that evaded US air defenses, underscores the urgent need for advanced counter-drone capabilities. OpenAI announced a partnership with Anduril, a company specializing in drone and counter-drone technologies, at the close of 2024. This collaboration aims to expedite the analysis of drone attacks against US forces and assist in their neutralization. An OpenAI spokesperson clarified that this partnership aligns with company policy, as the technology is directed at drones, not people.
While Anduril offers a suite of counter-drone technologies to military bases globally, neither company has provided updates on the project’s progress. Anduril has historically focused on training its own AI models for threat identification through camera and sensor data analysis, rather than developing conversational AI systems that allow direct soldier interaction and natural language guidance. This is precisely where OpenAI’s models could offer a crucial enhancement.
These developments signal a new era in military AI, pushing the boundaries of what the technology can do on the battlefield. The coming months will reveal how extensively these cutting-edge tools are deployed, and what ethical considerations ultimately guide their use.
📰 Source: MIT Tech Review