The Pentagon’s ‘Terminator Conundrum’: Robots That Could Kill on Their Own

In The News piece in The New York Times
Oct. 25, 2016

Peter Singer was quoted in The New York Times about artificially intelligent weapons:

A Pentagon directive says that autonomous weapons must employ “appropriate levels of human judgment.” Scientists and human rights experts say the standard is far too broad and have urged that such weapons be subject to “meaningful human control.”
But would any standard hold up if the United States was faced with an adversary of near or equal might that was using fully autonomous weapons? Peter Singer, a specialist on the future of war at New America, a think tank in Washington, suggested there was an instructive parallel in the history of submarine warfare.
Like autonomous weapons, submarines jumped from the pages of science fiction to reality. During World War I, Germany’s use of submarines to sink civilian ships without first ensuring the safety of the crew and passengers was seen as barbaric. The practice quickly became known as unrestricted submarine warfare, and it helped draw the United States into the war.
After the war, the United States helped negotiate an international treaty that sought to ban unrestricted submarine warfare.
Then came the Japanese attack on Pearl Harbor on Dec. 7, 1941. That day, it took just six hours for the United States military to disregard decades of legal and ethical norms and order unrestricted submarine warfare against Japan. American submarines went on to devastate Japan’s civilian merchant fleet during World War II, in a campaign that was later acknowledged to be tantamount to a war crime.
“The point is, what happens once submarines are no longer a new technology, and we’re losing?” Mr. Singer said. He added: “Think about robots, things we say we wouldn’t do now, in a different kind of war.”