Applying Machine Learning

I’ve successfully completed Andrew Ng’s Machine Learning class. I feel pretty spiffy that I can now say that I’ve implemented a Neural Network… though right now it’s just in Octave.

In his concluding video, Andrew asked that we go forth into the world and do cool things with machine learning… and I plan on doing exactly that. I can think of several interesting things I could build in the context of my work at Topsy Labs… but it's unlikely that any of that work will filter out in a way the public can see directly… so I plan on creating a few personal projects around machine learning.

The first thing that I want to do is go through the course material and actually implement most of it in Java.  For that purpose, I’ve checked the start of said project into GitHub as part of my incubator repo. I’ve affectionately named the sub-project “SkyNET” after the largely misunderstood AI from the Terminator franchise.  The code itself won’t have specific spoilers from the exercises… that’d be bad form… but it will have brute-force implementations of the algorithms taught in the course.

I think that by actually implementing things from scratch, I’ll get a better handle on how and why some of this stuff works.  For this initial version of Skynet, I am not going to allow myself the luxury of third-party libraries.  The point isn’t to create a well-optimized machine learning system… it’s a learning exercise.  Part two of Skynet will be to re-implement the whole thing in C++.  This time I’ll incorporate a bunch of well-optimized libraries like Boost.
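To give a flavor of what a “from scratch, JDK-only” implementation looks like, here’s a minimal sketch of batch gradient descent for univariate linear regression in plain Java. The class name, variable names, and toy data are made up for this post… they aren’t taken from the actual Skynet code.

```java
// Minimal sketch: batch gradient descent for univariate linear regression,
// using nothing outside the JDK. Names and data here are illustrative only.
public class GradientDescentSketch {

    public static void main(String[] args) {
        // Toy training data: y is roughly 2x + 1
        double[] x = {1, 2, 3, 4, 5};
        double[] y = {3.1, 4.9, 7.2, 9.0, 11.1};

        double theta0 = 0.0;   // intercept
        double theta1 = 0.0;   // slope
        double alpha = 0.01;   // learning rate
        int iterations = 5000;
        int m = x.length;

        for (int iter = 0; iter < iterations; iter++) {
            double grad0 = 0.0;
            double grad1 = 0.0;
            // Accumulate the gradient of the squared-error cost over all examples
            for (int i = 0; i < m; i++) {
                double error = (theta0 + theta1 * x[i]) - y[i];
                grad0 += error;
                grad1 += error * x[i];
            }
            // Simultaneous update of both parameters
            theta0 -= alpha * grad0 / m;
            theta1 -= alpha * grad1 / m;
        }

        System.out.printf("theta0 = %.3f, theta1 = %.3f%n", theta0, theta1);
    }
}
```

Nothing clever going on there… which is exactly the point of the first pass. The optimized, library-backed versions can come later in the C++ round.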

For those of you who want to trawl through my Java code… here’s the direct link to Skynet.