Zombie fears distract us from a much bigger threat. Robots. These machines are plotting to become our global overlords, so of course they want us to stay busy preparing for an imaginary invasion by reanimated corpses.
Unconvinced? Consider this. People who most loudly fuel our so-called divides (between liberal and conservative, old and young, mommies who do things differently than other mommies) keep us from hearing the powers-that-be loudly slurp up ever more power. No wonder robots have marked humans as easy prey.
And robots use diabolically clever means to achieve their aims. They don’t just prey on our zombie phobias. They lure us into adoring them using darling robot toys. They entertain us using loveable movie robots like The Iron Giant. They let us feel comfortably superior to them with strangely humanoid robots like these.
They’re not as endearing once they are weaponized.
Back when robots existed mostly in our imaginations we believed in Asimov’s Laws of Robotics, the first and most logical of which was, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Oh such innocent times. Now we’re testing heavily armed autonomous drones. Even developers seem nervous. Robotics expert Noel Sharkey said in an LA Times interview, “Lethal actions should have a clear chain of accountability. This is difficult with a robot weapon. The robot cannot be held accountable.” Sir, your words sound just like clichéd lines uttered by scientists destined to be the earliest victims in every cheesy horror movie.
But we humans have the advantage because armed drones can’t think for themselves. Oh sorry, they can. Researchers hastening to end mankind’s dominance on this planet have created robots that can reason and make decisions on their own. Some of these machines are endearing little smart bots like the one named iCub, created by open source collaborators and designed to develop cognitive abilities as a human child does, learning through experience. Could that include a terrible twos stage? Better yet, researchers have also come up with robots skilled in deceiving both humans and other forms of artificial intelligence. Great idea, artificial intelligence capable of throwing tantrums and lying.
If you pay attention to scary movies, by now you recognize foreshadowing. Haunting music is cuing up but no one is turning on the lights. Instead plans are being made to establish a network of perpetually active armed drones, able to grab power via laser from a variety of sources. These charmers are programmed to identify and kill using software calculations. Dum dum dum.
So let’s stop to assess. We have robots that can learn, make decisions on their own, lie, kill humans, and operate indefinitely using power pillaged by lasers. Once those skills are combined we’re in trouble.
But wait, there’s more. Why should a killing machine rely on traditional power sources when it can digest flesh? Now there are robots powered by meat. These fiends-in-the-making are called “gastrobots.” Currently they only chomp sugar cubes or slugs, but once they merge with the autonomous drone army these bots may quickly recognize that cheeseburger-fattened humans provide far more energy.
Heck, let’s nudge them closer to overlord status. Robots don’t actually have to come up with meat-eating plots of their own. They’re being designed to do just that. The Energetically Autonomous Tactical Robot (EATR) can forage for its own fuel, including organically based energy sources like chicken fat. The company insists the EATR is vegetarian but hello, chickens are biological entities with eyes, hearts, and brains just like us. Designers boast that this war bot has “ubiquitous applications.” It can fuel itself on indefinite, autonomous missions while “…integrating high-level cognitive reasoning with low level perception and feedback control.” Zombie worries seem downright calming by comparison.
If you’re still not worried, consider a Caltech report sponsored by the Department of the Navy Office of Naval Research. Its warnings include this gem:
Perhaps robot ethics has not received the attention it needs, at least in the US, given a common misconception that robots will do only what we have programmed them to do. Unfortunately, such a belief is sorely outdated, harking back to a time when computers were simpler and their programs could be written and understood by a single person. Now, programs with millions of lines of code are written by teams of programmers, none of whom knows the entire program; hence, no individual can predict the effect of a given command with absolute certainty, since portions of large programs may interact in unexpected, untested ways.
And some of their conclusions don’t downplay anyone’s fears.
As depicted in science-fiction novels and movies, some imagine the possibility that robots might break free from their human programming through methods such as: their own learning, or creating other robots without such constraints (self-replicating and self-revising), or malfunction, or programming error, or even intentional hacking [e.g., Joy, 2000]. In
these scenarios, because robots are built to be durable and even with attack capabilities, they would be extremely difficult to defeat–which is the point of using robots as force multipliers. Some of these scenarios are more likely than others: we wouldn’t see the ability of robots to fully manufacture other robots or to radically evolve their intelligence and escape any programmed morality for quite some time. But other scenarios, such as hacking, seem to be near-term possibilities…
Maybe we are the zombies we fear, our brains slowly rotting thanks to reality television, never realizing our programmable vacuums have been reporting back to their leaders. Touché, robots. It may be time to build an underground robot-resistant bunker where we (and our chickens) can hide.
Still not frantic? Try: