The Questionable Ethics of Using Robotic Weapons

Wes O'Donnell
4 min read · Sep 17, 2019

The idea of a completely autonomous robot, one that can think and make decisions without human input, has fascinated humans since the Industrial Revolution.

In pop culture, we’ve been entertained by the likes of Johnny 5 from “Short Circuit,” Sonny from “I, Robot” and TARS from “Interstellar,” among many others. But the genre is rife with robotic villains as well: Skynet from the “Terminator” series, HAL 9000 from “2001” and the machines in “The Matrix” come to mind.

Our fascination with artificial intelligence (AI) is only gaining speed. By one estimate, the business value of AI research has reached $1.2 trillion. AI’s proponents claim that it will solve human-caused problems like climate change and deliver human-centric breakthroughs like a cure for cancer. But just because humans can build something doesn’t mean we should.

AI has its detractors. SpaceX founder Elon Musk has called for regulation of the burgeoning AI industry and warned that AI poses an existential threat to our species. It’s not necessarily that AI scientists are afraid of an omnipotent, omnipresent evil computer brain. Researchers’ fears are more along the lines of a competent AI whose goals are misaligned with ours.

Robotic Weapons and Autonomous Weapons Systems (AWS)

In the push to develop lethal military robotic weapons, the question of whether we should build them has often been sacrificed on the altar of a new technological arms race between nations. In other words, we must build them because our adversaries are building them.

At any given moment, the United States has dozens of Unmanned Aerial Vehicles (UAVs) in the air, performing missions that range from surveillance to missile strikes. But those drones are controlled by human operators, and executing a strike requires at least some degree of human command and control.

But autonomous technology is under development in several countries. What happens when we give the machines the decision-making authority to kill or spare people?

With or without AI, an autonomous weapon system could be programmed to fire on any lawful enemy combatant. But here’s where the ethical issues get murky.
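
To make the murkiness concrete, here is a purely hypothetical sketch of such a targeting rule. Nothing below reflects any real weapon system; every name (`Track`, `combatant_score`, the rule functions) is invented for illustration. The point is that a naive rule fires on anything labeled a combatant, while the legal tests it ignores, such as surrender, hors de combat status and proportionality, are exactly the parts that resist being reduced to code.

```python
# Hypothetical sketch only -- not the logic of any real system.
from dataclasses import dataclass


@dataclass
class Track:
    """A detected person, as a sensor pipeline might represent one."""
    combatant_score: float  # classifier confidence that this is a combatant
    is_surrendering: bool   # naive flag; real surrender cues are subtle
    is_wounded: bool        # hors de combat under the Geneva Conventions


def naive_engagement_rule(track: Track, threshold: float = 0.9) -> bool:
    """Fire on anything the classifier labels a combatant.

    This is the rule described above -- and the problem: it reduces a
    legal judgment (distinction, proportionality, doubt) to a single
    confidence score.
    """
    return track.combatant_score >= threshold


def less_naive_rule(track: Track, threshold: float = 0.9) -> bool:
    """Add explicit checks for surrender and hors de combat status.

    Even this version only handles the cases its programmers thought
    to encode; proportionality and military necessity require context
    that no boolean flag captures.
    """
    if track.is_surrendering or track.is_wounded:
        return False
    return track.combatant_score >= threshold


# A combatant who is actively surrendering: lawful to fire? Clearly
# not -- but the naive rule says yes.
print(naive_engagement_rule(Track(0.95, True, False)))  # True
print(less_naive_rule(Track(0.95, True, False)))        # False
```

Notice that the “improved” rule only covers the failure modes someone anticipated in advance. The hard cases on a battlefield are the unanticipated ones, and that is precisely where delegating the kill decision to a machine becomes ethically fraught.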
