Skynet Rising: Should Robots Have a License to Kill?
Way back in 1942, science fiction author Isaac Asimov proposed his famous Three Laws of Robotics in a short story entitled “Runaround”:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Despite the enduring influence of these tenets, there’s nonetheless a push underway to give robots what’s been termed “lethal autonomy” – that is, the ability to kill without direct human involvement. Killing by algorithm. That’s no longer science fiction. Not only has it become technologically possible, but it has also become increasingly likely to occur, if not here, then overseas. For some, the advantages of automation in human conflict are simply too great a temptation. That’s a fundamental shift that could very well reshape our geopolitical landscape.
Good times. Here in the US, ineptly executed no-knock raids have become so prevalent that it already seems like we have unthinking robots out there using lethal force.
But seriously now: since qualified immunity hinges on what a ‘reasonable person’ would have known, the doctrine wouldn’t even apply if American police departments ever got their hands on one of these ‘Terminators’. And you certainly can’t fire a robot, or suspend one without pay. No one would be accountable.