Leading AI developers like OpenAI and Anthropic are threading a delicate needle to sell software to the U.S. military: make the Pentagon more efficient, without letting their AI kill people.
Today, their tools are not being used as weapons, but AI is helping the agency identify threats, the Pentagon's chief digital and AI officer, Dr. Radha Plumb, told TechCrunch in a phone interview.
“Obviously, we're increasingly working on ways to speed up the kill chain so we can respond in time to protect our troops,” Plumb said.
The “kill chain” refers to the military's process of identifying, tracking, and eliminating threats, and involves a complex web of sensors, platforms, and weapons. Generative AI is proving helpful during the planning and strategizing phases of the kill chain, according to Plumb.
The relationship between the Pentagon and AI developers is a relatively new one. OpenAI, Anthropic, and Meta revised their terms of use in 2024 to let U.S. intelligence and defense agencies use their AI systems. However, they still don't allow their AI to harm humans.
When asked how the Pentagon works with AI model providers, Plumb said, “We've been really clear about what we will and won't use their technologies for.”
Nevertheless, this has kicked off a round of speed dating between AI companies and defense contractors.
Meta partnered with Lockheed Martin and Booz Allen, among others, to bring its Llama AI models to defense agencies in November. That same month, Anthropic teamed up with Palantir. In December, OpenAI struck a similar deal with Anduril. More quietly, Cohere has also been deploying its models with Palantir.
As generative AI proves its usefulness at the Pentagon, Silicon Valley may relax its AI usage policies and allow more military applications.
“Playing through different scenarios is something that generative AI can be helpful with,” Plumb said. “It allows our commanders to take advantage of the full range of tools available, but also to think creatively about different response options and potential trade-offs in an environment where there is a threat, or series of threats, that needs to be prosecuted.”
It is unclear whose technology the Pentagon is using for this work; using generative AI in the kill chain, even in the early planning stages, does seem to violate the usage policies of several top model developers. Anthropic's policy, for example, prohibits using its models to manufacture or modify “systems designed to endanger or cause loss of human life.”
In response to our questions, Anthropic pointed TechCrunch to a recent Financial Times interview with its CEO, Dario Amodei, in which he defended his company's military work:
The position that we should never use AI in defense and intelligence settings doesn't make sense to me. The position that we should go gangbusters and use it to make anything we want, up to and including doomsday weapons, is obviously just as crazy. We're trying to seek the middle ground, to do things responsibly.
OpenAI, Meta, and Cohere did not respond to TechCrunch's request for comment.
Life and death and AI weapons
A debate over defense tech has flared up in recent months: should AI weapons really be allowed to make life-and-death decisions? Some argue the U.S. military already has weapons that do.
Anduril CEO Palmer Luckey recently noted on X that the U.S. military has a long history of purchasing and using autonomous weapons systems, such as a CIWS turret.
“The DoD has been purchasing and deploying autonomous weapon systems for decades. Their use (and export!) is well-understood and rigorously regulated by tightly defined rules that are not at all voluntary,” Luckey said.
But when TechCrunch asked whether the Pentagon buys and operates fully autonomous weapons, Plumb rejected the idea on principle.
“No, is the short answer,” Plumb said. “As a matter of both reliability and ethics, we'll always have humans involved in the decision to employ force, and that includes our weapon systems.”
The word “autonomy” is somewhat ambiguous and has sparked debates across the tech industry about when automated systems, such as AI coding agents, self-driving cars, or self-firing weapons, become truly independent.
Plumb said the idea of automated systems independently making life-and-death decisions was “too binary,” and the reality less “science fiction-y.” Rather, she suggested the Pentagon's use of AI systems is really a collaboration between humans and machines, where senior leaders are making active decisions throughout the entire process.
“People tend to think of it as if there are robots out there somewhere, and then the gonculator (a fictional autonomous machine) spits out a sheet of paper, and humans just check a box,” Plumb said. “That's not how human-machine teaming works, and that's not an effective way to use these types of AI systems.”
AI safety at the Pentagon
Military partnerships haven't always gone over well with Silicon Valley employees. Last year, dozens of Amazon and Google employees were fired and arrested after protesting their companies' military contracts with Israel, cloud deals that fell under the codename “Project Nimbus.”
By comparison, there has been a relatively muted response from the AI community. Some AI researchers, such as Anthropic's Evan Hubinger, say the use of AI in the military is inevitable, and it's important to work directly with the military to make sure they get it right.
“If we take the risks of AI seriously, the U.S. government is an extremely important actor to engage with, and trying to block the U.S. government from using AI is not a viable strategy,” Hubinger said in a November post to the online forum LessWrong. “It's not enough to just focus on catastrophic risks, you also have to prevent any way the government could misuse your models.”