Building infrastructure in ChucK

ChucK is a programming environment specifically geared toward generating sound.  I was first exposed to it through the Coursera Class Introduction to Programming for Musicians and Digital Artists.

ChucK the language is a little weird, but expressive enough. It's really good at making experimental sounds and generating samples, but it doesn't have an extensive class library for playing around with music theory.

I’ve started a github repository to build out this kind of infrastructure: https://github.com/rl337/chuck

The base class in my library that makes noise is called a Playable. Its interface looks like this:

public class Playable {
    function void play(int midi_note, dur duration, float gain);
    function void play(float freq, dur duration, float gain);
}

The play methods are synchronous. You might think this would be a serious limitation, but a major feature of ChucK is its ability to spin off lightweight threads (shreds) with the spork operator… so keeping things synchronous keeps the logic easy to follow. At the moment, I only have very basic waveforms implementing Playable… but later on I can make these things quite complex.
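
Here's a minimal sketch of how blocking play() calls and sporking combine to sound two notes at once. "SinePlayable" is a hypothetical name standing in for one of the basic waveform Playables; the real class names in the repo may differ.

SinePlayable voice;

// play() blocks for its duration, so spork the calls to layer simultaneous notes
spork ~ voice.play(60, 1::second, 0.5);   // middle C
spork ~ voice.play(64, 1::second, 0.5);   // the E above it

// keep the parent shred alive while the sporked shreds finish
1::second => now;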

Adding to the library from the music theory side, we have the Scale object:

public class Scale {
    function void init(int tonic, int intervals[]);
    function int note(int degree);
}

Here you can initialize the scale with a list of intervals. Intervals are the number of half-steps between adjacent scale degrees. A typical Western scale has seven degrees per octave (eight notes if you count the octave), but the way it's coded the array can be any size, which will let it capture scales with other degree counts, like blues scales, later on. The note() method takes an arbitrary scale degree and returns the MIDI note of that degree.
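
For instance, here's a rough sketch (assuming note(0) returns the tonic, as described for the C scale below) of building a C major scale by hand from its whole/half-step pattern:

[2, 2, 1, 2, 2, 2, 1] @=> int steps[];   // W W H W W W H
Scale s;
s.init(60, steps);                       // rooted at middle C (MIDI 60)
<<< s.note(0), s.note(2), s.note(4) >>>; // should print 60 64 67 (C, E, G)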

As a utility for creating lots of useful scales, I’ve created the Scales object:

public class Scales {
    public static Scale Major(int tonic);
    public static Scale Minor(int tonic);
}

The Scales class is essentially a factory for Major and Minor scales… so if you don't want to memorize the intervals and initialize a Scale object every time, you can just say Scales.Major(60) and BAM, you have a C Major scale!

Building on the Scale object, I created the Chord object. This is going to be a work in progress because I'm still learning the theory behind building chords. Right now I only have the functionality around triads coded up… but it looks like this:

public class Chord {
    public void init(int halfsteps[]);
    public int note(int index);
}

The halfsteps here are actual MIDI notes. This might not seem that useful since the structure is super-simple, but you're really not supposed to construct chords on your own… for that, I've built the Chords factory object. This is where most of the music theory around building chords on scales will take place:

public class Chords {
    public static Chord Major(Scale scale, int root);
    public static Chord Augmented(Scale scale, int root);
    public static Chord Minor(Scale scale, int root);
    public static Chord Diminished(Scale scale, int root);
}

Here the root is in zero-based scale degrees… not in MIDI notes… so if you have a C scale, the 0th degree is C.
To get a D Major chord on a C Major scale, you can just do this: Chords.Major(Scales.Major(60), 1)
There will be a lot of convenience methods here… I'm sure 7th chords will be added next.
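
Putting the factories together, here's a small usage sketch (the exact MIDI values returned depend on how the library voices the triad):

Scales.Major(60) @=> Scale cmaj;       // C major scale, tonic at middle C
Chords.Major(cmaj, 1) @=> Chord dmaj;  // major triad rooted on the D degree
for (0 => int i; i < 3; i++) {
    <<< "triad note", i, ":", dmaj.note(i) >>>;
}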

In order to play Chords, I’ve created a class called a Chordable. It takes an array of Playables which it then uses to voice chords.

public class Chordable {
    public void init(Playable voices[]);
    function void inversion(int inversion);
    function void play(Chord chord, dur duration, float gain);
    function void arpeggio(Chord chord, int sequence[], dur duration, float gain);
}

Here, you see that the init() method takes an array of Playable. The play() method plays the chord voiced with whatever inversion is currently set. Really, the inversion should probably live on the Chord, but I found it convenient to put it here. The arpeggio() method takes an array of chord-note indices and divides the duration across them, playing each specified chord note in sequence.
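
Here's a hedged sketch of how the pieces could fit together; "SinePlayable" is again a hypothetical stand-in for one of the repo's basic waveform Playables:

Playable voices[3];
for (0 => int i; i < 3; i++) {
    new SinePlayable @=> voices[i];
}

Chordable player;
player.init(voices);
player.inversion(1);   // first inversion

Chords.Major(Scales.Major(60), 0) @=> Chord cmaj;   // C major triad
player.play(cmaj, 1::second, 0.5);                  // blocks for one second

[0, 1, 2, 1] @=> int seq[];
player.arpeggio(cmaj, seq, 2::second, 0.5);         // C E G E spread over two seconds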

Everything is still pretty basic… and I'm sure it'll evolve over time… but as a software engineer, it makes exploring music theory a little more fun.

Introduction to Music Production, Assignment 4

This is Richard Lee from San Francisco welcoming you to my 4th assignment for Coursera and Berklee College of Music’s class, Introduction to Music Production.

I'll be talking about Dynamic Processors and their parameters, specifically threshold, ratio, attack, and release.

As the name suggests, Dynamic Processors affect the Dynamic Range of the sound passing through them. What does that mean, exactly?

Dynamic Range has two distinct contexts. On one hand, it refers to the contrast between the softest sound a human can perceive and the threshold of pain. This seems like it would be straightforward to measure, but not only does the range vary from person to person, it also varies with environmental factors such as air pressure.

Thankfully, we're mostly concerned with the other notion of Dynamic Range. This Dynamic Range isn't loudness, but the range of signal amplitudes that can be accurately passed from input to output through a piece of audio equipment.

Some of the original uses for Dynamic Processors were simply to clamp signal amplitude beyond a certain threshold. We call this kind of processor a Limiter. It was put in place to protect delicate equipment from sudden jolts or surges caused by connecting or dropping components.

If you generalize a Limiter so that, instead of cutting the signal off at the threshold, it merely restricts growth above the threshold to something less aggressive, you get a Compressor. To say that another way, a Limiter is a Compressor with a very large ratio. In the context of Compressors, the ratio relates input level to output level above the threshold: with a 4:1 ratio, the input must rise 4 dB above the threshold for the output to rise 1 dB.
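
Here's a tiny Python sketch of that static compression curve (my own illustration, not anything from the course material); levels are in dB, and attack/release are ignored for now:

def compress_db(input_db, threshold_db, ratio):
    # above the threshold, the output rises 1 dB for every "ratio" dB of input
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

print(compress_db(-6.0, -18.0, 4.0))     # -15.0 dB: 12 dB over the threshold becomes 3 dB over
print(compress_db(-6.0, -18.0, 1000.0))  # ~-18.0 dB: a huge ratio behaves like a Limiter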

To make a Compressor even more useful, we'll introduce two new parameters. Attack, expressed in time units (typically milliseconds), is the delay between the signal crossing the threshold and the ratio actually being applied to the input. Its opposite is release, which defines the delay between the signal falling back below the threshold and the processor halting its influence.

The main purpose of a compressor in modern music is to reduce the dynamic range of a piece so that it’s possible to increase the gain, which makes the piece sound louder without distortion.

A sort of inverse to a Compressor is a Gate. Where the Compressor acts on a signal whose amplitude is above a certain threshold, a Gate aggressively attenuates any signal BELOW the threshold. This helps remove noise from a piece of music, since the noise falls below the threshold while the instruments or vocals playing over it do not.

I’m cutting this assignment a little short.  I didn’t have time to create images to help visualize what I was trying to get across in my text.  Hopefully I’ll be able to find some time to put some polish on the next assignment!  Thanks for reading!


Introduction to Music Production, Assignment 3

Hi. I'm Richard Lee from San Francisco. This is my 3rd peer review assignment for Berklee College of Music's Introduction to Music Production Coursera class.

For this assignment, I'd like to talk about submixes. I REALLY enjoy the idea of using submixes. It gets deep into the plumbing of busses and audio flow. As it turns out, though, my DAW of choice, GarageBand, does not support submixes, so I'll have to keep the assignment largely conceptual.

So what is a submix? Imagine you're setting up to record an entire orchestra. It'd be very convenient to have an individual mixer board for every set of instruments playing a distinct part. In the example to the right, you see three distinct mixer boards, each one handling an individual type of instrument. Each mixer handling a "part" then feeds into an overall "string instruments" mixer.

This setup allows you to adjust the gains on each individual performer (say, player 2 of the violins consistently plays several decibels below the others). On the "Strings" mixer, you can adjust the level of all violins as a whole. This allows you to adjust the violins in relation to other instruments, say the cellos. Each of the individual instrument mixers could be considered a submix of the Strings mixer.

Let's continue to think about submixes in terms of physical devices. Imagine now that we're setting up the audio to record a modern rock band. We have significantly fewer instruments. Let's assume we have a drum kit, an electric guitar, a bass guitar, and a vocal. The drum kit has 3 inputs: a bass drum mic and two hanging mics. Each of the other instruments/vocals has one input.

As far as submixes go, it would be nice to have all three inputs from the drums individually adjustable within a drum submix. It'd also be nice to adjust each of the guitars individually, but also to change the level of both guitars relative to the vocals or drums, so we should have a submix with both guitars. It might also be useful to have a submix of both the drums and guitars, so that the instrumentals can be adjusted relative to the vocals. Finally, we'd like everything flowing through a master, where we can apply some global effects. If we were to set this up like we did for the orchestra, we'd have to find ourselves four very small mixer boards, each representing a submix. The submixes would cascade until they converge to a single output. This means a LOT of cable, a lot of "self noise", and a lot of opportunity for human error to ruin the recording.

What we'll do in this case is use a single 10-input mixer board (see diagram below). Here, we'll feed the outputs of physical channels into the inputs of other channels, sometimes combining the outputs of multiple channels. Combining multiple channel outputs into a single channel input is called a "bus". To achieve our 4 submixes, we'll use 4 busses.

[Image: rockband (bus routing on a single 10-input mixer board)]

Here you see the details of which channel outputs feed into which inputs. The busses are denoted by color. Our drum submix is made up of the two hanging mics and the bass drum mic. The guitar submix is made up of the bass guitar and electric guitar. The instrumental submix is made up of the drum submix and the guitar submix. Finally, the master output is a mix of the instrumental submix and the vocals. Thankfully, instead of running lots of cables between inputs and outputs on a physical mixer board, we can use a full-featured DAW.

This is a very basic layout with no aux sends or effects. Naturally, that stuff would add more complexity to an already cluttered diagram, so I skipped it.

This week, I would have really liked to try this out in a DAW. Unfortunately, I didn't quite have the time to try out some of the free DAWs that feature busses… GarageBand kind of has a notion of a bus, but it's really just for effects… not for combining inputs or channeling outputs.

I hope that my examples were able to illustrate the need for and usefulness of submixes. Thanks for reviewing my work.


Introduction to Music Production, Assignment 2

My name is Richard Lee from San Francisco.  I’d like to welcome you to my second peer reviewed assignment for Berklee College of Music’s Introduction to Music Production Coursera Class.

My topic for this assignment is to efficiently record audio in my DAW, documenting both the project setup and the creation of the tracks.

First, the Digital Audio Workstation software is GarageBand. Since it came free with my iMac, I decided to give it a shot, as it'd likely have a massive community of users who, like me, are just starting off in making nois*ahem*music. If I ran into a problem, it's likely 1000 other people have run into the same problem and their solution would be posted a mere Google search away.

We start our journey by creating a new project. This is accomplished either by hitting the COMMAND-N key combination or by clicking File | New in the top menu. In this case, I used the menu to create a new project.

You’ll now be presented with a new project dialog (see image below).  Choose where you’d like your project files to be saved.  In my case, I decided to create the project in Coursera \ MusicProduction \ Week 3.
[Image: garageband-new-project (new project dialog)]
Once you've chosen a directory for your project to live in, you'll be greeted by the main GarageBand window. By default the project is populated with a "Grand Piano" track. I have seen a workflow where you're able to choose the first track type up front, but going through the File | New menu seems to skip that initial choice.


If your goal is to record a MIDI grand piano, you can go ahead and use this default track. For this assignment, however, I want to record using an electric guitar, specifically my Epiphone LP-100. That means deleting the default track and adding a new Electric Guitar track. To delete it, select the Grand Piano track by clicking on it, then select Track | Delete Track (or Command-Delete).


To create a new track, click the Track menu and select New Track. This presents you with a choice of "Software Instrument" (a MIDI track), "Real Instrument" (for recording directly from an audio device), or "Electric Guitar" (like a "Real Instrument", but with guitar-specific effects). Here you see I chose Electric Guitar.


My next step was to set up my audio interface. The audio interface I'm using is an M-Audio M-Track Plus. On the right, you can see my interface with nothing plugged into it yet. I'm going to be using input 1 for this assignment, so I turned the gain all the way down prior to plugging anything in.

My monitor is a Sennheiser HD202 headset. I'll plug it into the headphone jack of the interface. The headset natively has a 1/8″ plug, but comes with a 1/8″-to-1/4″ adapter.

I make sure that input 1 has "Guitar" selected instead of "Mic/Line" and plug the TS cable from the guitar into the "Guitar/Line Input" jack. I strum my guitar aggressively while slowly increasing the gain until the signal starts to peak in the yellow.

Once my levels are set on the interface, it's back to GarageBand. Toward the bottom of the window there is a cluster of controls used to start and stop recordings and to switch between Project, Tuner, and Time modes. For this assignment, the most important view is Time. To enable a metronome tick during your recording, be sure the metronome icon is highlighted as it is below.

[Image: garageband-project-controls (transport, mode, and metronome controls)]

Hit the record button to begin your recording.

Just for the heck of it, I exported the strumming.  Not particularly good guitar playing, but it *is* pretty loud!


In reflection, a lot of this assignment seems very simple; intuitive even.  Without the class, however, I wouldn’t have known the utility of the audio interface.  I had tried several times before to plug my guitar amp into my computer via some 1/4″ jack to USB cable, but the recording was always noisy and of low quality.  My original plan was to create a video… but as it turns out, it’s very difficult to film yourself doing tight actions such as adjusting gains or plugging in cables.  Perhaps I’ll be inspired by the creativity of my classmates and give it a shot next week.

Thanks for reviewing me.


Anatomy of a Robot

A while back, I wrote a blog post about spherical robots.   I had taken it upon myself to learn a bit more about robotics with the intention of building a simple autonomous robot.  Well, over two years later, I’m at a point where I can do some actual robotics work.  I look back on what I’ve learned… and the rabbit hole of a trek that led me here.

The Coursera Rabbit Hole


What goes into a robot? Well, naively I thought that you hook up servos and sensors to some kind of microcontroller and away you go. It's the Lego Mindstorms version of robotics. Of course, that *IS* one way to look at robots… but really, that's just the beginning… the "Hello World" program of building robots. I wanted to build something a little more sophisticated than the "recommended age 12-adult" kits.

Well, for the answer we look to the Control of Mobile Robots class offered on Coursera. In this class, Dr. Magnus Egerstedt introduces control systems. The class itself does not get into the hardware of building robots, but digs into the abstraction layers necessary for successfully modeling and controlling things that interact with the real world.

What exactly do I mean when I say "control things"? Well, think about yourself for a moment. If you're standing and someone shoves you, you are able to react in a way that keeps you stable and standing… or, simply put, you're in control of your body. Keeping yourself upright is a complex set of muscle movements that needs to be carried out correctly, but you don't need to think about how to control each muscle… you just do it. The instinctive impulse to lean or step is handled by your innate control system.

It's not enough, however, for a robot to be "controllable". My goal is to build a robot that's autonomous. That requires some form of higher-level artificial intelligence. It turns out that Coursera offers another class geared toward exactly this: Artificial Intelligence Planning! In this class, Dr. Gerhard Wickler and Professor Austin Tate take you through a survey of programmatic problem-solving algorithms. I was amused to learn that, like every other computer science problem, artificial intelligence planning comes down to a search algorithm.

At the end of this course, you'll be able to write a program that, given some set of circumstances and a corresponding set of possible actions, figures out what to do to accomplish its goals, assuming some sequence of actions can actually achieve them.

This led me to the next problem… perception. An autonomous robot has sensors, and it needs to be able to figure out some "state" of the universe in order to use its problem-solving capabilities. How on earth do you map images/sounds/echolocation to logical states of the universe? Through machine learning. As it turns out, Coursera offers a LOT of classes on exactly this. The most notable of these is Coursera co-founder Andrew Ng's class on Machine Learning… Here, you learn all kinds of algorithms for automatically classifying and identifying logical states based on noisy or confusing input.

A more advanced class that I really enjoyed focused on state-of-the-art neural networks. The class is called Neural Networks for Machine Learning and is taught by Dr. Geoffrey Hinton. It goes into great depth on various kinds of neural networks, and it totally blew my mind. I have no doubt that the correct application of neural nets with the right kind of self-motivating planner will lead to formidable AIs.

Putting it all together

First let's talk about hardware. Below is a list of the hardware I'm going to use, and I'll pair each piece with what I feel is its human-anatomy counterpart.

I'm going to use an Arduino Uno as the primary interface with all of my robot's actuators. It represents the spinal cord and instinctive nervous system of the robot. The Arduino is a very simple microcontroller that isn't terribly fast, and it doesn't have much in the way of memory, but it has extremely easy interfaces with motors and sensors. This makes it ideal for running a closed-loop control system (see the Control of Mobile Robots class for details).

Connected to the Arduino will be a Raspberry Pi. The Pi will be the brains of the robot; all higher-order problem solving will occur here. The brain and the spinal cord will talk to each other over SPI, with the Raspberry Pi naturally acting as the master of the SPI bus. As the robot gets more complex, it might be necessary to attach more than one microcontroller (maybe not all Arduinos)… especially if I start working with more complex sensors.
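
To make the SPI idea concrete, here's a rough sketch of the Pi side using the common Python spidev module; the library choice, the command byte, and the motor-speed payload are all assumptions for illustration, not a finished protocol:

import spidev

spi = spidev.SpiDev()
spi.open(0, 0)             # SPI bus 0, chip-select 0
spi.max_speed_hz = 500000

# hypothetical one-byte command: 0x01 = "set drive motors", then left/right speeds
reply = spi.xfer2([0x01, 128, 128])
print(reply)               # whatever the Arduino slave shifted back

spi.close()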

The supports and overall skeleton of my robot will be created with ShapeLock. It's a material that can be melted, shaped, and re-used over and over. It claims to be machinable (with my Dremel) and durable. I imagine that if I need stronger load-bearing parts, I can prototype in ShapeLock and carve the final piece from some other material based on the prototype. Wood is a likely candidate.

Okay, the big pieces are out of the way. Now the fun stuff: what sensors and servos will I use? I have a variety of electric motors, solenoids, and steppers that I picked up from Adafruit. It's likely that my first robot will be a simple differential-drive affair… but eventually I'd like to go back to the ideas in the original blog post and create a spherical robot. In the end, the actual drive system and sensors don't matter that much… they're just the accessories on the Mr. Potato Head. All interchangeable.

Perceptrons as a digit classifier

It's been a while since I posted. I've continued to mess around with machine learning. As a seemingly natural extension of Andrew Ng's class, I took Geoffrey Hinton's Neural Networks for Machine Learning class. I have to say, it totally blew my mind. If we haven't already gotten to a near-sentient AI, I'm convinced that we're very close.

For my own exploration of machine learning, I've decided to work through the family of neural network types in roughly the order presented in Geoffrey Hinton's class. That means I start with the very first type of neural network: the Perceptron.

A Perceptron takes multiple input values and makes a single binary decision about them. I took a collection of MNIST digits to see how well a Perceptron would do at identifying them. You'd imagine this would be a natural fit, as digits are reasonably distinct… but I discovered that there really is not enough information stored within the weights of a Perceptron to make it an effective classifier for MNIST.

Here's what the Perceptron weights looked like for the digit 3. White means positive coefficients, black means negative, and gray means near-zero. The images read from left to right and show the state after each successive batch of 1000 labeled cases.

[Image: perceptron-0.5 (weight visualizations after successive 1000-case batches)]

Here you can see that the more batches you run against the Perceptron, the more complex the weights get. These complexities are probably overfitting. I thought, meh, let me see how well it performed. Now, since 3s only represent about 10% of the label data, always guessing "not a 3" would already be correct about 90% of the time, so I wanted a score higher than 90%. My classifier scored 90.26%, which made me suspect it had simply guessed "not a 3" every time. Looking at the actual guesses, though, I realized that no, the Perceptron was genuinely guessing based on the data, so I needed to dig a bit more into what it was doing. Here's the distribution of its answers:

Correct Negative: 4034
Correct Positive: 479
False Negative: 466
False Positive: 21

So it really was guessing right about 90% of the time. What hurts it is the false negatives.

I decided to try something. Instead of weighting positive and negative cases equally when updating the weights, I added a term called alpha, which weights positive and negative cases differently. Since the positive case only happens about one time in 10, I wanted to weight the negative cases by 1/10th, so I set alpha to 0.1.
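
Here's a rough Python sketch of the update rule as I've described it; the learning rate, epoch count, and pixel scaling are assumptions for illustration, not my actual training code. With alpha = 1.0 it reduces to the ordinary Perceptron update:

import numpy as np

def train_perceptron(images, labels, alpha=0.1, epochs=5):
    # images: (n, 784) float array; labels: (n,) array of 1 ("is a 3") or 0
    w = np.zeros(images.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(images, labels):
            guess = 1 if x @ w + b > 0 else 0
            if guess == y:
                continue                        # only mistakes trigger an update
            step = 1.0 if y == 1 else -alpha    # damp updates from negative cases
            w += step * x
            b += step
    return w, b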

[Image: perceptron-0.1 (weights after the same batches, with negative cases weighted by alpha = 0.1)]

Here is an image of the coefficients weighted by alpha. You can see, especially in the earlier batches, that the shape of the 3 is a lot fuzzier and there is less complexity, even in the later iterations. So how well did THIS set of coefficients work?

Correct Negative: 3962
Correct Positive: 472
False Negative: 538
False Positive: 28

It seems like it did a lot worse. It had a lot more false negatives, the correct positives dropped a little, and the false positives actually rose a little. It seems like the fuzzier coefficients made the Perceptron less certain about any particular trial.


Happy Birthday, Bob Ross!

Today would be Bob Ross‘s 70th birthday. 

For those of you who aren't familiar with Bob Ross, he was an artist best known for his PBS show, The Joy of Painting. I'd say Bob Ross informed a LOT of my creative process growing up. I'd watch the show religiously even though my family would never have been able to afford oil paints or canvas. For a short while, I tried to reproduce his techniques with poster paint and notebook paper, but without much success.

According to the Wikipedia article, Bob spent time in the Air Force before dedicating himself to painting. It's hard to imagine the painter in a military environment… but apparently he drew much of the inspiration for his famous landscapes from his time stationed in Alaska.

Happy Birthday, Bob.  Thank you for being such an inspiration.


SparkFun Color LCD Shield – First Impressions

I picked up a SparkFun Color LCD Shield while I was shopping for some sensors and battery packs.  I thought that maybe I could use it to help visualize sensor data prior to passing it off to my Raspberry Pi.

The shield itself didn't come with headers… which is okay, because I had a few extra from a purchase I made from Adafruit. A short solder job later, it's plugged into my Uno.

I grabbed the Arduino library from GitHub, plugged it in, and tried to run the examples. Initially, the board didn't seem to function, but after reading the documentation a little more, I realized that SparkFun ships one of two different display controllers, and I happened to get the one that doesn't run with the default code. The change to the examples is trivial. You change the following line:

lcd.init(EPSON);

to

lcd.init(PHILLIPS);

One thing I noticed about the shield: updating is very slow, so you have to be extremely mindful of how you change your pixels. Simply clearing the display has a visible redraw. I wrote a short program to play around with the buttons on the shield. Essentially, it just draws or undraws rectangles based on whether or not buttons are pressed. Here's the code itself. If you try it out, you can see what I mean about the redraw rate. Either this is an artifact of the SparkFun library or it's a limitation of the device itself. I guess I can look at the library and see if there's anything obvious.

#include <ColorLCDShield.h>
LCDShield lcd;

byte cont = 40;  // Good center value for contrast
int lastS1 = 0;
int lastS2 = 0;
int lastS3 = 0;

void setup() {
  // the shield's three push buttons sit on pins 3-5; configure them as inputs
  // and enable the internal pull-ups so they read HIGH until pressed
  pinMode(3, INPUT);
  digitalWrite(3, HIGH);
  pinMode(4, INPUT);
  digitalWrite(4, HIGH);
  pinMode(5, INPUT);
  digitalWrite(5, HIGH);

  lcd.init(PHILLIPS);  // use EPSON here if you have the other controller
  lcd.contrast(cont);
}

void loop() {
  // with pull-ups enabled, a button reads 0 (LOW) while pressed and 1 when released
  int s1 = digitalRead(3);
  int s2 = digitalRead(4);
  int s3 = digitalRead(5);

  // skip the slow LCD updates entirely unless a button state actually changed
  if (s1 == lastS1 && s2 == lastS2 && s3 == lastS3) {
      return;
  }

  if (s1 != lastS1) {
    if (s1) {
      lcd.setRect(80, 35, 131, 51, 1, WHITE);   // button released: white rectangle
    } else {
      lcd.setRect(80, 35, 131, 51, 1, CYAN);    // button pressed: cyan rectangle
    }
    lastS1 = s1;
  }

  if (s2 != lastS2) {
    if (s2) {
      lcd.setRect(80, 67, 131, 83, 1, WHITE);
    } else {
      lcd.setRect(80, 67, 131, 83, 1, MAGENTA);
    }
    lastS2 = s2;
  }

  if (s3 != lastS3) {
    if (s3) {
      lcd.setRect(80, 99, 131, 115, 1, WHITE);
    } else {
      lcd.setRect(80, 99, 131, 115, 1, BLUE);
    }
    lastS3 = s3;
  }
}

Perception and PCA

In machine learning, you can use Principal Component Analysis (PCA) as a lossy transform that reduces the dimensionality of your data in order to improve the performance of computationally expensive operations.

So what does PCA actually do? It crunches multiple dimensions down into a single dimension. Consider for a moment the idea of BMI, or Body Mass Index. It takes two dimensions, mass and height, and crunches them into a single number that somewhat describes both. The diagram to the right expresses this idea of shrinking the number of dimensions: Dimension X is a conglomerate of Dimensions A and B, such that position x corresponds to point a on Dimension A and point b on Dimension B.
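
As a toy Python sketch of the same idea (the numbers are made up, and this is just the standard SVD recipe): project a handful of 2-D height/mass points onto their first principal component, so each point collapses to a single coordinate along "Dimension X":

import numpy as np

# made-up (height in meters, mass in kg) points
data = np.array([[1.6, 55.0], [1.7, 68.0], [1.8, 80.0], [1.9, 95.0]])
centered = data - data.mean(axis=0)

# the principal directions are the right singular vectors of the centered data
_, _, vt = np.linalg.svd(centered, full_matrices=False)
first_component = vt[0]

# each 2-D point becomes one number: its position along the first component
x = centered @ first_component
print(x)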

As I've learned more about algorithmic learning, I've found myself believing that we humans learn in similar ways. This led me to the thought that perhaps the four dimensions of space-time may not actually be "real" dimensions at all. Maybe our brains take in the high dimensionality described by string theory and collapse it, through electro-chemical processes similar to PCA, into the four dimensions we interact with.

It could be that the world we live in is far more complex than we perceive. We could be thinking in the blue Dimension X when really we exist in A and B.

Booting my Raspberry Pi

Yeah I spent the $35 to get myself a Raspberry Pi. Here it is!

Admittedly, it's taken me a bit longer than I'd have expected to get it to boot. First, I bought the wrong USB power cable. I thought the connector was a "Mini-USB" but it was really a "Micro-USB". You'd think that with the volume of electronics I own, I'd know the difference between the two… but you'd be wrong.

My next folly was with the operating system. First I tried the Arch Linux build, but for some reason it didn't copy to my SD card properly. My instinct was that the Amazon Basics SD card wasn't compatible for some reason or another, but that was a false trail. I instead copied the recommended Raspbian "Wheezy" image… and voilà! I have it booting!

Here’s a screen shot part way through the boot.  The Composite Video output doesn’t seem to line up very well with my old Dell monitor.

The boot process is pretty swift and the distribution automatically DHCPs a network address and starts sshd, so it’s possible to ssh in as soon as it finishes coming up.

Once inside, I poked around a little bit. Here’s the CPU info

pi@raspberrypi ~ $ cat /proc/cpuinfo
Processor	: ARMv6-compatible processor rev 7 (v6l)
BogoMIPS	: 697.95
Features	: swp half thumb fastmult vfp edsp java tls 
CPU implementer	: 0x41
CPU architecture: 7
CPU variant	: 0x0
CPU part	: 0xb76
CPU revision	: 7

Hardware	: BCM2708
Revision	: 0002
Serial		: 0000000048e7f498

I distinctly remember commenting that a "100mhz computer with 128 megs of ram will always make a good linux box." Here's a Linux box with far more capability than that… for $35. Maybe we've reached the future.