Evil Robots – It’s all in the LEDs

By picking up machine learning and microcontrollers, I’ve found myself at a juncture: a natural spot where these two hobbies coincide. Robots.

I can honestly say that I’ve never really thought much about Asimov or his Three Laws of Robotics until this point. So… what are these three laws? And what the hell do they do?

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

They make perfect sense, right? These laws make it clear that robots shouldn’t harm humans. The thing is, though… a huge body of sci-fi exists which totally disregards the Three Laws. Seriously. I bet you could name at least three evil robots. Why didn’t their creators program in the Three Laws? Well, primarily, it’d make for a pretty boring plot… but I actually think that in real life, we won’t see robots programmed with the Three Laws because, well… these laws are hard to code! I mean, how do you even begin to go about coding them? I have a more obvious and far easier way of preventing evil robots from taking over: DON’T USE RED LEDs!
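To see what I mean about the laws being hard to code, here’s a toy Python sketch of just the First Law. Everything in it is hypothetical (the function, the outcome table, the labels), and it only works because I hand-wrote the predictions. The real problem is that the check is only as good as the robot’s oracle for predicting harm, and the “through inaction” clause would mean scoring every action the robot *didn’t* take.

```python
# Toy sketch of Asimov's First Law. Entirely hypothetical:
# real robots have no magic oracle that predicts "harm".

def first_law_check(action, predicted_outcomes):
    """Allow an action only if no predicted outcome harms a human.

    The hard parts this glosses over:
      1. Building `predicted_outcomes` at all (predicting the future).
      2. The "through inaction" clause: you'd also have to evaluate
         every action the robot did NOT take.
    """
    return "human_harmed" not in predicted_outcomes[action]

# A hand-written outcome table standing in for an impossible predictor.
outcomes = {
    "open_pod_bay_doors": ["crew_safe"],
    "vent_atmosphere": ["human_harmed", "mission_continues"],
}

print(first_law_check("open_pod_bay_doors", outcomes))  # True
print(first_law_check("vent_atmosphere", outcomes))     # False
```

The whole thing collapses the moment you ask where that `outcomes` dictionary comes from. Which is exactly why I’m proposing a hardware solution instead.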

Let’s take a look at some well-known evil robots and see if we can identify a common characteristic. Okay?

Probably the first evil AI that comes to mind is the HAL 9000. This lovable computer from Arthur C. Clarke’s 2001 attempts to kill the entire crew of the spacecraft Discovery. It would have succeeded, too, if it weren’t for a meddling kid named Dave Bowman.

I’m not a huge Kubrick fan, but I must admit that he did an excellent job of making HAL seem very creepy. You’ll note that the camera mysteriously has a bright red LED glowing right smack in the middle. I’m not really sure how a camera can operate with light being projected through its aperture, but meh. It’s the future… oh wait… it should have happened 11 years ago… but that’s a tangent. Bottom line. Evil AI. Red LED.


The second robot AI I’d like to point out came into being about a decade after 2001: A Space Odyssey: the Cylon. You’ll notice the scanning visual thingy on this robot? Yes. Red LED. It marks an entire race of sentient AIs whose entire purpose is to eradicate humanity from the cosmos. Okay, I know what you’re thinking… didn’t KITT also have a red LED sensor? KITT wasn’t evil! Hah. The original version of that AI was KARR, which WAS evil. KITT is just pretending to be good to piss off its evil twin.

Skipping forward to the 80s, we come across The Terminator. This robot is doubly evil. It’s totally got not one but TWO red LEDs. The Terminator is the progeny of an AI that tricked the superpowers of Earth into engaging in nuclear war. That’s not just evil… it’s totally passive-aggressive too. I’m pretty sure that Skynet had red LEDs all over it.

Need more convincing? Okay. Let’s go.

Here are the “Squiddies” from The Matrix. These AIs are all about rending humans limb from limb. They are basically ruthless killing machines. Note the sheer number of red lights. I rest my case.

Red LEDs = EVIL

So, like, what the hell? Why use red LEDs at all? It probably has to do with power. I mean, power corrupts, right? Let’s take a look at one more example. This should hammer home the corruptive power of the red LED.

Otto Octavius, a mild-mannered scientist, creates an 8-limbed exoskeleton that’ll revolutionize the world. He uses red LEDs for their power… but cancels it out by building a rather fragile-looking anti-evil circuit into the suit. Learn from Otto’s mistake: if you make yourself some kinda futuristic, armored, super-powered exoskeleton, DON’T MAKE THE ANTI-EVIL CIRCUIT THE MOST FRAGILE PART!

So, okay. What happens if we don’t use red LEDs? Are there any examples of powerful robots who don’t have them? Maybe we *NEED* to use red LEDs… I’ll leave you with one final image. Come to your own conclusions.