The Questionable Ethics of Using Robotic Weapons

Wes O'Donnell
4 min read · Sep 17, 2019

The idea of a completely autonomous robot, one that can think and make decisions without human input, has fascinated humans since the Industrial Revolution.

In pop culture, we’ve been entertained by the likes of Johnny 5 from “Short Circuit,” Sonny from “I, Robot” and TARS from “Interstellar,” among many others. But robotics is rife with villains as well: Skynet from the “Terminator” series, HAL 9000 from “2001: A Space Odyssey” and the machines in “The Matrix” come to mind.

Our fascination with artificial intelligence (AI) is only gaining speed. Research into true AI has been valued at $1.2 trillion. To its credit, proponents claim that AI will solve human-caused problems like climate change and human-centric challenges like curing cancer. But just because humans can build something doesn’t mean we should.

AI has its detractors. SpaceX founder Elon Musk has called for regulation of the burgeoning AI industry and warned that AI poses an existential threat to our species. It’s not that researchers fear an omnipotent, omnipresent evil computer brain; their concern is a highly competent AI whose goals are misaligned with ours.

Robotic Weapons and Autonomous Weapons Systems (AWS)

In the argument for developing lethal military robotic weapons, the question of whether we should build them has often been sacrificed on the altar of a new technological arms race…
