The Pentagon Is Building an AI to Find Secret Nuclear Missiles

The Pentagon is working on a new AI initiative designed to find hidden nuclear missiles before they launch, Reuters reports. The effort is part of a program intended to determine how artificial intelligence might be used to protect the United States against the threat of nuclear attack. If successful, such a program could lead to significant changes in U.S. nuclear deterrence posture.

There are two basic ways to defend against a nuclear attack: launch a preemptive strike against the missile launchers before the rockets ever take flight, or wait and attempt to bring the missiles down once they're airborne. Intercepting in-flight ICBMs has proven extremely difficult; no nation, including the United States, currently fields a missile defense system capable of reliably stopping inbound ICBM warheads.

It's much easier to stop a missile from launching than it is to catch it once it's already in the air, provided you can find the launch site. For decades, countries like Russia and China have deployed road-mobile missile launchers that can be quickly assembled, prepped for launch, and then moved again. The speed with which these systems can be deployed makes them far harder to track via satellite imagery. An AI system that analyzes that imagery as it comes in, and that might theoretically be better than human analysts at spotting the telltale signs of launcher movement or deployment, could give the United States an enormous advantage in anticipating potential enemy attacks; a rough sketch of what that kind of automated scanning could look like follows below. This program is separate from Project Maven, though it shares some conceptual similarities with that effort.
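Nothing about the Pentagon's actual system is public, so the following is a minimal, purely hypothetical sketch of the general idea: a convolutional classifier scored tile by tile across a satellite scene, with high-scoring tiles flagged for human analysts. The `LauncherDetector` model, tile size, and threshold are all illustrative assumptions, written here in PyTorch.

```python
# Hypothetical sketch only: score satellite-image tiles for possible
# launcher activity and flag high-scoring tiles for human review.
import torch
import torch.nn as nn
from torchvision import models


class LauncherDetector(nn.Module):
    """Illustrative binary tile classifier: launcher activity vs. background."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # a real system would use a trained model
        backbone.fc = nn.Linear(backbone.fc.in_features, 2)
        self.backbone = backbone

    def forward(self, x):
        return self.backbone(x)


def score_tiles(model, scene, tile=224, stride=224):
    """Slide a window over a (3, H, W) scene tensor and return, for each
    tile position, the model's probability of launcher activity."""
    model.eval()
    _, h, w = scene.shape
    scores = []
    with torch.no_grad():
        for top in range(0, h - tile + 1, stride):
            for left in range(0, w - tile + 1, stride):
                patch = scene[:, top:top + tile, left:left + tile].unsqueeze(0)
                prob = torch.softmax(model(patch), dim=1)[0, 1].item()
                scores.append(((top, left), prob))
    return scores


if __name__ == "__main__":
    detector = LauncherDetector()
    scene = torch.rand(3, 896, 896)  # random stand-in for a satellite scene
    flagged = [(pos, p) for pos, p in score_tiles(detector, scene) if p > 0.9]
    print(f"{len(flagged)} tiles flagged for human review")
```

The point of the sketch is the workflow, not the model: the machine does the exhaustive scanning at a pace no team of human analysts could match, and humans review only what it flags.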

There are, of course, substantial concerns about integrating AI and machines too closely into the decision-making loop. We've talked before about Lieutenant Colonel Stanislav Petrov, the Soviet officer who in 1983 judged an early-warning report of inbound American ICBMs to be a false alarm and declined to pass it up the chain as a genuine attack. The fear here isn't that an AI will become conscious and decide to nuke the world; fears of Skynet (or A.L.I.E., for The 100 fans) remain the realm of science fiction. The worry is that computers might make decisions too quickly for humans to evaluate or counteract. Here's Reuters:

U.S. Air Force General John Hyten, the top commander of U.S. nuclear forces, said once AI-driven systems become fully operational, the Pentagon will need to think about creating safeguards to ensure humans — not machines — control the pace of nuclear decision-making, the “escalation ladder” in Pentagon speak.

“(Artificial intelligence) could force you onto that ladder if you don’t put the safeguards in,” Hyten, head of the U.S. Strategic Command, said in an interview. “Once you’re on it, then everything starts moving.”

Experts at RAND have raised the possibility that hostile nations will attempt to camouflage weapons systems as other types of equipment, and have pointed out that there are various ways to fool Google's image identification; a recent MIT project managed to confuse a neural network into classifying a 3D-printed turtle as a rifle. In short, any investment we make into the field will inevitably be met by moves and countermoves from adversaries; but the fact that our adversaries are already investing means, naturally, that we have to as well.
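To make the turtle-versus-rifle failure concrete, here is a minimal sketch of the underlying trick, known as an adversarial example. The MIT work used a more elaborate 3D technique, but the basic iterated fast gradient sign method (FGSM) shown below captures the same failure mode: tiny, deliberately chosen pixel changes flip a classifier's output. The toy model here is an untrained stand-in; against real, well-trained vision models the total perturbation required can be imperceptible to a human.

```python
# Sketch of an adversarial example via iterated FGSM: nudge every pixel
# a small step in the direction that increases the model's loss for its
# current prediction, until the predicted label flips.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in classifier; any differentiable image model works the same way.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32)        # random stand-in for a photograph
label = model(image).argmax(dim=1)      # whatever the model currently sees

adv = image.clone().requires_grad_(True)
for step in range(100):
    loss = F.cross_entropy(model(adv), label)
    grad, = torch.autograd.grad(loss, adv)
    # One small signed-gradient step per iteration, kept in valid pixel range.
    adv = (adv + 0.01 * grad.sign()).clamp(0, 1).detach().requires_grad_(True)
    if model(adv).argmax(dim=1).item() != label.item():
        print(f"label flipped after {step + 1} small steps")
        break

print("original prediction:   ", label.item())
print("adversarial prediction:", model(adv).argmax(dim=1).item())
```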
By Joel Hruska
