Should Algorithms Control Nuclear Launch Codes? The U.S. Says No

By Wired

February 23, 2023

Last Thursday, the U.S. State Department outlined a new vision for developing, testing, and verifying military systems—including weapons—that make use of artificial intelligence (AI).

The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy represents an attempt by the U.S. to guide the development of military AI at a crucial time for the technology. The document does not legally bind the U.S. military, but the hope is that allied nations will agree to its principles, creating a kind of global standard for building AI systems responsibly. 

Among other things, the declaration states that military AI must be developed in accordance with international law, that nations should be transparent about the principles underlying their technology, and that nations should implement high standards for verifying the performance of AI systems. It also says that humans alone should make decisions around the use of nuclear weapons.
