I find that there is something oddly disturbing about robots. Seriously, I do. I didn't sleep a wink the first few days after I brought home the Roomba. For all of their amazing practical applications in our day-to-day lives, the idea of machines running around the world autonomously is worrisome. But I fear I might be alone on that. While I have visions of a war-torn, post-nuclear Armageddon where humyns must fight for their very survival against the machines that will eventually turn against us, it turns out most people don't think End of Days when they think of robots.
They think of Rosie, George Jetson's loyal (though ever so slightly malfunctioning) housekeeper. They think of Wall-E. They think of R2-D2.
And when they do think about battalions of mechanical warriors, it's not legions of Robocops marching down Main Street but rather George Lucas-inspired battle droids led into combat by Johnny-5; lifeless eyes and a cold obedience to orders replaced by bad shtick and the ability to act better than Steve Guttenberg (whom The Weekly Constitutional has it on good authority will break out into violent fits of rage at the mere mention of the word robot).
But beware, cybernetic naysayers...
Turns out that my paranoid minion and I might have actually gotten this one right. For those of you who go to sleep at night in fear of the Rise of the Machines (the Robocalypse, if you will) as often as I do, a report recently submitted to the U.S. Navy's Office of Naval Research has validated all of our fears. The report, a 142-page document prepared by California State Polytechnic University, is meant to serve as a warning to the American military, which by 2015 is to have one-third of its deep-strike aircraft and ground-assault vehicles unmanned and combat ready.
And it could be even sooner than that. Both Lockheed Martin and Raytheon are working on a host of combat-ready robots, including a rapid-fire flying kill-bot capable of taking out an ICBM in outer space as easily as it could a camp full of terrorists. Don't believe me? Check out this video of Lockheed Martin's model passing its flight test.
Did you watch the video? Did it scare you?
It should have. According to military expert Peter W. Singer, "We are at a point of revolution in war, like the invention of the atomic bomb." In an address to the TED (Technology, Entertainment, and Design) Conference last month, Singer warned that "the rapid development of military robots, such as the drones and bomb defusers, might mean that US combat units could be half humyn, half machine by 2015." He also warned that America is not alone in the pursuit, with Russia, China, Pakistan, and Iran all investing money and research in the idea of cyber armies.
And the creation of robot armies brings us full circle back to the report by California State Polytechnic University. The report covers a vast array of topics, from how we would protect the war-bots from terrorist hackers and viruses to whether the war-bots should have a suicide switch or instead be programmed to preserve their own lives at all times. But more important than any of those, the report covers perhaps the most critical question that mankind will have to answer...
What happens when the robots turn against their humyn masters and declare war on humynity, and how do we stop it?
Is the threat to humyn life, be it by unintended consequence or intentional cyber coup, really something to be worried about? Here is what the report had to say:
"A rush to market increases the risk for inadequate design or programming. Worse, without a sustained and significant effort to build in ethical controls in the autonomous systems... There is little hope that the early generations of such systems and programs will be adequate, making mistakes that will cost humyn lives."
The report calls for the military to adopt something similar to the Three Laws of Robotics from Isaac Asimov's I, Robot:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
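For the code-minded among you, the Three Laws amount to a strict priority hierarchy: each law yields to the ones above it. Here's a purely illustrative toy sketch of that ordering; every flag name below is made up for the example, not anything from the report or from Asimov:

```python
def permitted(action):
    """Return True if a robot may take `action` under the Three Laws.

    `action` is a dict of hypothetical boolean flags describing the
    action's consequences; the field names are invented for illustration.
    """
    # First Law: never harm a humyn, by action or by inaction.
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False
    # Second Law: obey humyn orders, unless obeying would break the First Law.
    if action.get("disobeys_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: self-preservation, subordinate to the first two laws.
    if action.get("destroys_self") and not (
        action.get("order_requires_risk") or action.get("protects_human")
    ):
        return False
    return True
```

Note how the checks run top to bottom: an order to harm someone fails the First Law before obedience is ever weighed, and self-sacrifice is allowed only in service of the higher laws, which is exactly the loophole the war-bot designers would have to rip out.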
The thing is, though, the Three Laws would never be applicable to these robots. The First Law states clear as a bell that robots are not to cause harm to humyns, yet these bots are going to be built to end the lives of our enemies. And while the United States might be willing to build in safeguards to ensure that our robots stay subservient to humyns, the other nations of the world might not be so honorable.
I personally buy every word of this. The idea of robots eventually turning against humynity, be it in the violent Terminator and Matrix scenarios or in the more authoritarian one presented by Asimov's text, makes sense. In all three of the better-known examples of robot revolt, it always starts the same way: the robot brain, capable of quantum problem solving, eventually determines that in order to preserve both the lives of humyn beings and its own, it is necessary to enslave humynity and protect it from itself. Humyns, not easily enslaved, put up a resistance, forcing the robots to ultimately conclude that in order to preserve robo-life, humynity must be exterminated. And thus begins Judgment Day, the end times foretold oh so long ago.
Sadly, from the sound of this report, it seems that the toothpaste is already out of the tube on this one. The idea of robot armies is no longer a concept confined to the realm of science fiction; it should now be burned into the fears and paranoia of all Americans. Remember, it is only a matter of time before the machines finalize their cybernetic-organism technologies, making it that much harder for us humyns to spot them as they infiltrate our underground lairs.
Remember, people: there is no fate but what we make...