Tag Archives: robots

Evil Robots – It’s all in the LEDs

By picking up machine learning and micro-controllers, I’ve found myself at a juncture, a natural spot where these two hobbies coincide.  Robots.

I can honestly say that I’ve never really thought much about Asimov or his Three Laws of Robotics until this point.  So… what are these three laws, and what the hell do they do?

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

They make perfect sense, right? These laws make it clear that robots shouldn’t harm humans.  The thing is, though… a huge body of sci-fi exists which totally disregards the three laws.  Seriously.  I bet you could name at least three evil robots.  Why didn’t their creators program in the three laws?  Well, primarily, it’d make for a pretty boring plot… but I actually think that in real life, we won’t see robots programmed with the three laws because, well… these laws are hard to code! I mean, how do you even begin to go about coding them?  I have a more obvious and far easier way of preventing evil robots from taking over: DON’T USE RED LEDS!

Let’s take a look at some well known evil robots and let’s see if we can identify a common characteristic. Ok?

Probably the first evil AI that comes to mind is the HAL 9000.  This lovable computer from Arthur C. Clarke’s 2001 attempts to kill the entire crew of the spacecraft Discovery.  It would have succeeded too, if it weren’t for a meddling kid named Dave Bowman.

I’m not a huge Kubrick fan, but I must admit that he did an excellent job of making HAL seem very creepy.  You’ll note that the camera mysteriously has a bright red LED glowing right smack in the middle.  I’m not really sure how a camera can operate with light being projected through its aperture, but meh. It’s the future… oh wait… it should have happened 11 years ago… but that’s a tangent. Bottom line: Evil AI. Red LED.


The second robot AI I’d like to point out came into being about a decade after 2001: A Space Odyssey: the Cylon.  You’ll notice the scanning visual thingy on this robot? Yes. Red LED. It marks an entire race of sentient AIs whose sole purpose is to eradicate humanity from the cosmos.  Okay. I know what you’re thinking… didn’t KITT also have a red LED sensor? KITT wasn’t evil! Hah.  The original version of that AI was KARR, which WAS evil. KITT is just pretending to be good to piss off its evil twin.

Skipping forward to the 80s, we come across The Terminator. This robot is doubly evil.  It’s totally got not one but TWO red LEDs.  The Terminator is the progeny of an AI who tricked the superpowers of earth into engaging in nuclear war.  That’s not just evil… it’s totally passive aggressive too.  I’m pretty sure that SkyNET had red LEDs all over it.

Need more convincing? Okay. Let’s go.

Here are the “Squiddies” from the Matrix.  These AIs are all about rending humans limb from limb. They are basically ruthless killing machines.  Note the sheer number of red lights. I rest my case.


So like, what the hell? Why use Red LEDs at all?  It probably has to do with power. I mean, power corrupts right?  Let’s take a look at one more example.  This should hammer home the corruptive power of the red LED.

Otto Octavius, a mild-mannered scientist, creates an 8-limbed exoskeleton that’ll revolutionize the world.  He uses red LEDs for their power… but cancels it out by building a rather fragile-looking anti-evil circuit into the suit.  Learn from Otto’s mistake. If you make yourself some kinda futuristic, armored, super-powered exoskeleton? DON’T MAKE THE ANTI-EVIL CIRCUIT THE MOST FRAGILE PART!

So okay.  What happens if we don’t use red LEDs? Are there any examples of powerful robots who don’t have them? Maybe we *NEED* to use red LEDs… I’ll leave you with one final image.  Come to your own conclusions.


Spherical Robots and Arduino Uno!

Back in November, I was at the Academy of Sciences for NightLife and, nestled among the creepy taxidermy of African Hall, I ran into spherical robots rolling around. Check out the video:

This particular set of robots was originally created for Burning Man, but it’s no surprise to me that they would tour around the more sciency venues of San Francisco.  I’d never really considered a spherical robot before… but I guess it’s a “thing”.  Go ahead! Google it!

Now, to me, the easiest way to get a sphere moving would be to run a rod through an axis of the ball and roll around it using either weight or a gyroscope on whatever internal drive mechanism you’ve got.  In this design you effectively have one HUGE wheel. These robots, however, did not seem to do that.  They were able to roll in arbitrary directions; they didn’t have to turn to change direction.

My initial gut feeling was that it used a pair of gyroscopes offset from each other by some angle.  When the orientation of the two gyroscopes changed, the sphere’s effective center of gravity would shift, causing it to move.  After doing some reading, though, it seems that is not the case. A more likely scenario is that you have a large pendulum-style structure which is moved around by a pair of solenoids.  This apparatus dangles from a stable platform.  When the solenoids push the pendulum in a direction, the center of gravity shifts and the ball rolls.  The amount that the pendulum can move the ball is proportional to the weight at the end of the pendulum.
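Just to sanity-check that pendulum idea, here’s a quick back-of-the-envelope script.  Every number in it (masses, lengths, angles) is an invented guess, not a measurement of the actual robots; it only illustrates that the drive torque, and hence the steepest slope the ball can climb, scales with the weight on the pendulum.

```perl
use strict;
use warnings;
use POSIX qw(asin);

# Back-of-the-envelope numbers: all invented guesses, not measurements.
my $g       = 9.81;    # m/s^2
my $m_pend  = 2.0;     # kg, weight at the end of the pendulum
my $l_pend  = 0.20;    # m,  pendulum length
my $m_total = 5.0;     # kg, whole robot
my $r_ball  = 0.30;    # m,  ball radius
my $pi      = 3.14159265;

# Swinging the pendulum an angle theta off vertical shifts the center
# of gravity sideways, producing a drive torque about the contact point.
my $theta  = 30 * $pi / 180;
my $torque = $m_pend * $g * $l_pend * sin($theta);

# The steepest slope the ball can hold is where that torque balances
# the gravity torque of the whole robot on the incline:
#   m_pend * l_pend * sin(theta) = m_total * r_ball * sin(slope)
my $max_slope = asin($m_pend * $l_pend * sin($theta)
                     / ($m_total * $r_ball));

printf "drive torque: %.2f N*m\n", $torque;                     # ~1.96
printf "max slope:    %.1f degrees\n", $max_slope * 180 / $pi;  # ~7.7
```

Double the pendulum weight and the climbable slope roughly doubles too, which matches the intuition that the pendulum mass is what limits the drive.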

All of this conjecture has led me to conclude that I need to try it out.  So… I’m off to learn a little bit of robotics.  So… how does one get started in robotics?  One dives right in!

I’ve been wanting to work with the Arduino family of microcontrollers, and this seems like an ideal project.  I ordered myself an Arduino Uno, a couple of add-on boards and am ready to rock!

My first goal will be to familiarize myself with the Arduino programming language and generally make myself comfortable with the tools involved.

There is some soldering involved.  You can see my new toys in the image above.  The blue circuit board is the actual Arduino itself.  Below the magnifier is an add-on board that will allow me to control two motors (some assembly required).   I have a 2nd add-on board coming which will add a small LCD display. I’m sure that board will be the one I play with the most (at least at first).

Naturally, I had to try to put together the motor controls, but it seems like fate would have other plans… Look at what happened to my soldering pencil:

Yeah. So… a new one is on the way. I guess soldering will have to wait a few days.

Workbench Concept Art

Over the weekend, I was having serious trouble modeling out what I thought the assembly area on the desk would look like.

Originally I had envisioned an alligator clip contraption, sets of tweezers, and a magnifier lamp.  That all seemed kind of boring though.  I then came up with a stepper motor kind of arm contraption that would be used in place of the alligator clips.   As it turns out, this is a hard thing to model without some kind of visual reference… and since it’s a fictional piece of equipment… well… I had to go back to the drawing board… literally.

The actual size of this armature would be pretty small as what it held would be the primary focal point of the magnifier lamp.  Since it’d be pretty small, it’s impractical to adjust the thing by hand, so I came up with a set of dials which will connect via wires to the robotic armature.

Here you see three knobs, which ostensibly control three axes of adjustment that the arm can move in.  The larger, shorter section of the knob is used for “coarse” movement and the smaller, longer inner knob is for “fine” movement.

My first stab at drawing this thing had all 3 knobs lined up in space, but it occurred to me that if I did that, the gearing of the knobs within the housing would all intersect.  Because of that, they’re all offset from each other slightly in the drawing… and will also be in the model.

Submediant, the voice of robots

This past weekend, I spent some time honing the script from last blog post into a framework that could be used to give voice to my Little Robots.  Okay, so what’s changed since I published that awesome .wav generating perl script? Well, first let’s listen to something rendered by the new script…


You can hear that the sound is far more refined.  I took the time to add some basic attack and decay to the sine waveform.  Even though it’s one of the most audible changes… it’s probably one of the least significant.   One major change from the previous script is the movement of all data to external data files…
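The attack/decay itself is simple enough to sketch.  Here’s a minimal version of the idea in perl: a linear ramp over the first and last stretch of each note (the 10% ramp length here is illustrative, not necessarily what Submediant uses), multiplied against the raw sine samples so notes don’t click on and off.

```perl
use strict;
use warnings;

# A linear attack/decay envelope like the one described above. The 10%
# ramp length is illustrative; it's not necessarily Submediant's value.
sub envelope {
    my ($i, $total) = @_;              # sample index, samples in note
    my $ramp = int($total * 0.1) || 1; # attack/decay length in samples
    return $i / $ramp            if $i < $ramp;            # fade in
    return ($total - $i) / $ramp if $i > $total - $ramp;   # fade out
    return 1;                                              # sustain
}

# Scale each raw sine sample by the envelope so the note boundaries
# don't click.
my $volume     = 8000;
my $samplerate = 44100;
my $freq       = 440;
my $total      = int($samplerate * 0.5);   # a half-second note
my @samples;
for my $i (0 .. $total - 1) {
    my $x = $i * $freq / $samplerate * 3.14159265 * 2;
    push @samples, int($volume * sin($x) * envelope($i, $total));
}
```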

Here is the data file that generated the MP3 that you just listened to:

@SONGTITLE:Minuet in G
@ALBUMNAME:Let Them Eat Test
@ARTIST:Little Robots

D4,1/4 G3,1/8 A3,1/8 B3,1/8 C4,1/8
D4,1/4 G3,1/4 G3,1/4
E4,1/4 C4,1/8 D4,1/8 E4,1/8 F#4,1/8
G4,1/4 G3,1/4 G3,1/4

B2,1/2 A2,1/4

C4,1/4 D4,1/8 C4,1/8 B3,1/8 A3,1/8
B3,1/4 C4,1/8 B3,1/8 A3,1/8 G3,1/8
F#3,1/4 G3,1/8 A3,1/8 B3,1/8 G3,1/8

D3,1/4 B2,1/4 G2,1/4
F#2,1/4 D2,1/4 F#2,1/4

D4,1/4 G3,1/8 A3,1/8 B3,1/8 C4,1/8
D4,1/4 G3,1/4 G3,1/4
E4,1/4 C4,1/8 D4,1/8 E4,1/8 F#4,1/8
G4,1/4 G3,1/4 G3,1/4

B2,1/2 A2,1/4

C4,1/4 D4,1/8 C4,1/8 B3,1/8 A3,1/8
B3,1/4 C4,1/8 B3,1/8 A3,1/8 G3,1/8
A3,1/4 B3,1/8 A3,1/8 G3,1/8 F#3,1/8

C2,1/4 D2,1/4 D2,1/4
G2,1/4 D2,1/4 G0,1/4

You can see that both the notes and the metadata about the song live in one place, completely apart from the code.  At the very top of the file are attributes, which are denoted with an ‘@’.  Notes are comma-separated tuples where the first field is the note itself and the 2nd field is its duration.   Notes are divided into sections attributed to instruments (set off by an ‘&’).  Multiple sections for a single instrument simply append notes.
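A parser for that format fits in a few lines.  Here’s a rough sketch of how I read it; the variable names are mine and not necessarily what the real script uses, and since the ‘&’ instrument lines aren’t shown in the listing above, the sample data here is abbreviated.

```perl
use strict;
use warnings;

# Sketch of a parser for the song format: '@' lines are metadata,
# '&' lines switch instruments, and everything else is NOTE,DURATION
# pairs appended to the current instrument's track. (Names are mine.)
my @file = split /\n/, <<'END';
@SONGTITLE:Minuet in G
&righthand
D4,1/4 G3,1/8
&righthand
A3,1/8
END

my (%meta, %tracks);
my $current = 'default';
for my $line (@file) {
    next unless length $line;
    if ($line =~ /^\@(\w+):(.*)$/) {
        $meta{$1} = $2;                # e.g. SONGTITLE, ARTIST
    } elsif ($line =~ /^&(\w+)/) {
        $current = $1;                 # switch to another instrument
    } else {
        # multiple sections for one instrument simply append notes
        push @{ $tracks{$current} }, split /\s+/, $line;
    }
}
```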

In this particular example, we have two instruments, one called lefthand and the other called righthand.  They simulate the left and right hands when playing the very simplified version of Minuet in G from the Easiest Book of Piano Favorites.

Both the notes and the durations are defined in a file.  The notes get mapped to frequencies and the durations get mapped to fractions of beats.  In my notation, 1/2 means half note, 1/4 means quarter note, etc… and if it ends in a period, that’s a dotted note.
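The duration mapping is small enough to sketch too.  Here’s roughly how a token like 1/4 or 1/2. could be turned into beats; the sub name and the four-beats-per-whole-note assumption are mine, for illustration.

```perl
use strict;
use warnings;

# "1/4" is a quarter note, and a trailing period makes it dotted
# (1.5x as long). The sub name and the beats-per-whole-note value
# here are illustrative assumptions.
sub beats {
    my ($token, $beats_per_whole) = @_;
    my $dotted = ($token =~ s/\.$//);   # strip and remember the dot
    my ($num, $den) = split m{/}, $token;
    my $b = $beats_per_whole * $num / $den;
    $b *= 1.5 if $dotted;
    return $b;
}

print beats('1/4', 4), "\n";    # quarter note -> 1
print beats('1/2.', 4), "\n";   # dotted half  -> 3
```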

Like all great projects, this one needed a name… and so I scoured Wikipedia‘s audio/sound entries for the first cool sounding term that I had no real understanding of… and that became the name of the project… Submediant. Even after trying to read the wiki page… I still have no idea what a Submediant is… but I’m sticking to it.

Exit Structured Audio, Enter Home Grown

In a previous blog post, I had explored using MPEG4 Structured Audio to produce the sounds that my robots will make in future animation.  A few weeks later, I find that the learning curve for the tools is a bit too steep for my liking… so I decided to start from scratch… and write my own.

Yeah, I understand that people of far greater sound knowledge and experience put a LOT of effort into Structured Audio, but that doesn’t help me now.  Maybe this tool that I’m building (which doesn’t quite have a name yet) will iterate toward the stuff I’m giving up… but I’m sure I’ll learn quite a bit about audio programming along the way.

As it turns out, most of the tool chain that I had previously built still works! I’m just replacing sfront with something that I’ve written… and if you’re curious what a program that generates .wav files looks like, here it is!


use strict;

my $filename = shift @ARGV;
die "You must specify an output file." unless $filename;

# Note names mapped to their frequencies in Hz.
my %aliases = (
   C4 => 261.626,
   D4 => 293.665,
   E4 => 329.628,
   F4 => 349.228,
   G4 => 391.995,
   A4 => 440.000,
   B4 => 493.883,
   C5 => 523.251,
);

my @song = qw(B4 A4 G4 A4 B4 B4 B4 A4 A4 A4 B4
              D4 D4 B4 A4 G4 A4 B4 B4 B4 B4 A4 A4 B4 A4 G4);
my $duration = (scalar @song) * 0.5 + 0.5;   # half a second per note
my $volume = 8000;

my @notes;
push @notes, $aliases{$_} for @song;

# WAV header parameters: 16-bit mono PCM at 44.1 kHz.
my $channels = 1;
my $samplerate = 44100;
my $bitspersample = 16;
my $byterate = $channels * $samplerate * $bitspersample / 8;
my $blockalign = $channels * $bitspersample / 8;
my $samples = $samplerate * $duration;
my $dataChunkSize = $samples * $bitspersample / 8;
my $chunkSize = 36 + $dataChunkSize;

# Generate the raw sine-wave samples, two notes per second.
my @values;
for (my $i = 0; $i < $samples; $i++) {
   my $note = int($i / $samplerate * 2);
   my $freq = $notes[$note] || 0;   # silence past the last note
   my $x = $i * $freq / $samplerate * 3.14159 * 2;
   push @values, int($volume * sin($x));
}

# Write the RIFF header, the fmt chunk, then the sample data.
open(FILE, "> $filename") || die "could not open $filename";
binmode(FILE);
print FILE pack('NVN', 0x52494646, $chunkSize, 0x57415645);
print FILE pack('NVvvVVvv', 0x666d7420, 16, 1, $channels, $samplerate,
     $byterate, $blockalign, $bitspersample);
print FILE pack('NV', 0x64617461, $dataChunkSize);
for my $sample (@values) {
    print FILE pack('v', $sample);
}
close(FILE);

Yup. That’s the whole thing.  No libraries. No weird complexity.

I decided to write the thing in perl for the practice.  I use an awful lot of perl at work, so as this project grows, it’ll force me to learn the proper perl patterns.

If you’re curious what that program produces, here’s the converted MP3:


Seña’s Final Eye Rigging

I took some time over the Labor Day weekend to finalize Seña’s eye rigging.  As you can see from the image to the right, the rigging itself is made up of something like 25 bones.  Crazy but awesome.

A particularly pleasing aspect of finishing this rig is that it satisfies all of the requirements that I set forth in my previous post about Eye Detail.  This means that I wasn’t forced to compromise on the features I wanted… I was able to figure it all out.

While rigging the “eye petals” (the four sun shades around the outside of the eye), I ran into another peculiarity of blender‘s armature system.  Let me describe the setup… The eye petals each have a vertex group (upper petal, lower petal, left petal, right petal) and each vertex group has an associated bone in the armature (they’re the largest ones in the image above).  Whenever I’d rotate a petal, though, it’d deform in very peculiar ways.  Below is a simple rig that reproduces the problem…

For the first, say, 20 degrees of rotation everything looks fine.  You’ll notice, though, that as the bone is rotated, the box it rotates starts to get smaller and no longer seems to follow the track of rotation.  Grrr… what’s causing that?!

Before I present the solution, I need to show you how the vertex groups are set up so that I can explain what it is we’re seeing here…

You can see that I have two vertex groups… “Upper Box” represents the box we’re trying to move and “All Boxes” is a group containing both the upper and lower box.  “All Boxes” is associated with the parent bone (behind the blue one) and “Upper Box” is associated with only the box that we’re moving.

Because the upper box is being influenced by two different bones at the same time, their overlapping influence on the vertices of the box causes the strange behavior.  The solution is to remove the upper box from the “All Boxes” vertex group.  Once I removed the overlap in bone influence, everything worked as expected.

Eureka! Strange pose behavior solved!

A while back, I had posted about a strange rigging behavior that I was seeing.  I’m not sure that the description I gave was terribly useful… so I decided to make a couple of animated gifs to help with the visuals…

Here you see what a proper rigging is supposed to look like.  The block rotates with the rotation of the “Bone”. Now, the following image illustrates what I was looking at…

You can see here, the cube is actually moving faster than the rotation of the bone.  The exaggerated motion caused all kinds of stuff to get out of alignment.

The way you associate the Armature or Skeleton to an object is you set the Armature as the “Parent” of the object.  In this case, the object is the cube.  You do this by simply selecting the cube, then the Armature and hitting CTRL-P (You can also parent by going through the menus).

When I created the simplified case of the single bone and a single cube, it worked just fine.  On a hunch, I decided to un-parent then re-parent the object and armature… and voilà! The problem was reproduced.   It turns out that in Blender 2.5, you’re not actually “parenting”; you’re setting a modifier on the object… so clearing the object’s parent does NOT clear its relationship with the armature.  When you then re-parent the two, you create a SECOND modifier between the object and the armature, so any changes to the armature are magnified 2x… which is the behavior you see in the 2nd image.

Okay. Well, this is all well and dandy, but if you can’t un-parent, how do you get rid of the armature-to-object relationship?  It ends up in the object modifiers tab.  Here’s what it looks like if you’ve got an Armature associated multiple times with an object… So… all you really have to do at this point is figure out which of the two modifiers you want and X the other one(s).

In the situation with my aperture, I had created three modifiers… so the pose was affecting the object three times as fast.


Seña Wireframe

She’s still untextured, and not all of the mesh is there… but we’ve got a good, healthy start!

Like most designs, as the pieces come together, I’m sure changes will need to be made.  The overall curvature of the front will be one alteration that I need to make.  This particular change is necessary for the eye to rotate through its frame properly.   Another change that I want to make is to the rear wheel.  I have a double-wheel in this design, but I think that I want to change that to a single, larger wheel.

I suspect that the texturing of the robot will be fun… but to get to that point, I’ve got to do UV Unwrapping.  This has got to be one of THE most arduous tasks ever conceived.  It’s more or less the same problem as representing a round globe on a flat map… except instead of a globe, you have this robot.

Looming on the horizon is the task of sound.  What’s this robot going to sound like?  I don’t just mean the whirring of motors or other familiar sounds… A great deal of effort in the design was to make Seña emotive… and to complete that task I need to come up with a voice.  No, this voice isn’t going to speak English, but in the tradition of R2D2, or the cute little robots from Silent Running, I’ve got to come up with a distinct, synthetic voice.

My current plan is to use a sine wave with a base frequency.  The base frequency will shift a little up or down depending on the emotion being conveyed.  This base tone will have “words” added to it as bursts of other sine waves mixed in. The first few attempts may simply be sped up and perhaps reversed Morse Code.  I realize that it’s unlikely robots would communicate through something as archaic as Morse Code, but screw it, I’m writing fiction here!
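The plan above can be sketched in a few lines of perl.  Everything here (the base pitch, the emotion shifts, the burst lengths, the two-letter Morse table) is an invented placeholder just to show the shape of the idea: pick a pitch from the emotion, then spell a word as a list of tone bursts.

```perl
use strict;
use warnings;

# Toy sketch of the voice plan: a base pitch nudged up or down by the
# emotion, with "words" spelled out as short Morse bursts on top. All
# constants are invented placeholders; the Morse table is deliberately
# tiny.
my %morse         = ( H => '....', I => '..' );
my %emotion_shift = ( calm => 0, happy => 40, sad => -40 );   # Hz

sub voice_plan {
    my ($word, $emotion) = @_;
    my $base = 600 + ($emotion_shift{$emotion} // 0);   # shifted pitch
    my @bursts;
    for my $ch (split //, uc $word) {
        for my $sym (split //, $morse{$ch} // '') {
            # dot = short burst, dash = longer one, at the base pitch
            push @bursts, [ $base, $sym eq '.' ? 0.05 : 0.15 ];
        }
    }
    return @bursts;    # list of [frequency, duration] pairs
}

my @plan = voice_plan('hi', 'happy');
print scalar(@plan), " bursts at $plan[0][0] Hz\n";   # 6 bursts at 640 Hz
```

From there, each [frequency, duration] pair would just become a burst of sine samples in the synthesis loop.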

Rigging, It’s tougher than it looks…

So I spent the last few hours trying to figure out why my aperture lines up correctly in one pose but not in another.  I’m sure that there’s a simple solution to the problem… and looking back I’ll think, “D’oh! yeah, that was a dumb newb mistake.”   In a previous life, I was told that experiences like these are “learning opportunities.” sigh.

For anyone curious what a vertically slatted optical assembly looks like rigged in blender, check it out… It’ll change over the next few days, I’m sure, as I figure out my alignment issue, but to date, this is the most complex rig I’ve ever done. Go me!

So what you’re looking at here are the rig “bones” for the aperture which allow the aperture to open and close.  Behind it, you see a fan-shaped set of bones which control the segmented slats above and below the eye that telescope in or out depending on the inclination of the optical assembly itself.

The specific problem that I’m having at the moment is that if I incline the eye, the aperture drifts slightly to the left.  I’ll experiment on it this weekend and hopefully find a good solution.  Learning Opportunity, right?

In the meanwhile, I actually did a little bit more concept design work around how Seña will express emotions.  Here are four basic emotions that I tried to express using the eye that I designed for her.

Actually drawing out the eye pieces in different emotional states showed me just how important coloration will be.

See if you can figure out what these emotions are…

Robo Optics

I’m finally getting into the gory details of modeling robot optics.  There is something I’ve noticed about sci-fi movie optics that makes no real sense logically, but definitely adds drama and emotiveness… the idea of eye color.

In just about all movies we’ve seen featuring robots, the bad robots have red eyes.   Often, as a robot transitions from good to evil, the lights on the robot itself change from, say, blue to red.  I decided to add this color-change concept to the robot optics I’ll be using for Little Robots.

The way I’ll do this is, behind the aperture, I’ll have a set of three colored lamps.  As I want to express different emotional states, I’ll dim or intensify the corresponding colors.  I’ll probably look up the Mood Ring color scale and just use that as my palette.   The lights themselves will reflect off a white surface at the back of the optical cylinder and should show up through the aperture of the robot.  Nice and tidy.
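A first sketch of that lamp logic might look like this.  The emotion-to-color values are placeholders I made up (not an actual Mood Ring scale), but they show the general shape: each emotion is just a triple of lamp intensities, and a transition between emotions is a linear crossfade.

```perl
use strict;
use warnings;

# Each emotion is a triple of intensities for the red, green, and blue
# lamps behind the aperture. These particular values are placeholders,
# not a real Mood Ring scale.
my %emotions = (
    calm  => [ 0.1, 0.2, 1.0 ],   # mostly blue
    angry => [ 1.0, 0.1, 0.1 ],   # and of course... red
);

# A transition between emotions is a simple linear crossfade.
sub blend {
    my ($from, $to, $t) = @_;     # $t runs from 0 (from) to 1 (to)
    return [ map { $emotions{$from}[$_] * (1 - $t)
                 + $emotions{$to}[$_]   * $t } 0 .. 2 ];
}

my $mid = blend('calm', 'angry', 0.5);
printf "R=%.2f G=%.2f B=%.2f\n", @$mid;   # R=0.55 G=0.15 B=0.55
```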


Speaking of apertures… I discovered that, well… I had no idea how a mechanical aperture worked.  Instead of chickening out and using something lame like a square aperture, I messed around and discovered that really, apertures are just little sheets of rectangular metal (well, more like trapezoids) that rotate about 15 degrees.  Check it out:

Here you see my aperture which is made up of eight rectangles.  On the left, it’s in its closed state.  On the right, I posed it open.  Each aperture blade rotates a small amount in place and whammo… you have an open aperture.