So far I’ve only been validating my Linear Regression algorithm using actual linear data.

Something that was pointed out in Andrew Ng's class: though linear regression is inherently… well… linear, the dataset you try to learn from need not be scattered around a straight line. It all depends on what features you use.

I was curious to see what kind of functions I could have skynet approximate… so I chose two: e^x and sin(x). You can see in the picture above, I generated a widely scattered dataset around a small section of e^x. The blue line running through the center is the set of values generated by my hypothesis. The features used in this case were created by raising x to higher powers (in this case, up to x^15)… in other words, my features were: x, x^2, x^3 … x^15.
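The idea above can be sketched in a few lines of numpy. This is a hypothetical stand-in for my skynet code, not the actual implementation: it builds the power features, solves the (regression-in-the-features) least-squares problem in closed form, and evaluates the resulting degree-15 hypothesis. The noise level, sample count, and domain are made-up values for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scattered dataset around a small section of e^x (assumed domain/noise).
x = rng.uniform(-1.0, 1.0, 200)
y = np.exp(x) + rng.normal(0.0, 0.1, 200)

# Feature expansion: columns are 1, x, x^2, ..., x^15.
degree = 15
X = np.vander(x, degree + 1, increasing=True)

# Linear regression is "linear" in theta, even though the features aren't.
theta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Evaluate the hypothesis on a grid -- this is the "blue line".
x_grid = np.linspace(-1.0, 1.0, 50)
h = np.vander(x_grid, degree + 1, increasing=True) @ theta

print(np.max(np.abs(h - np.exp(x_grid))))  # hypothesis tracks e^x closely
```

The key point the sketch makes: the model stays linear in the parameters theta, so the normal-equation/least-squares machinery applies unchanged no matter how nonlinear the features are in x.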

You can see that the hypothesis here is definitely NOT linear. Pretty impressive.

Here’s a much tougher function to approximate… however, the 16 higher-order features came pretty close to hitting sin(x). I bet if I included factorial terms in the feature set, linear regression would have figured out the Taylor series that makes up sin(x).

I played around with the regularization term, lambda, in each of these plots, but surprisingly it didn’t affect the hypothesis that much. It might be because the test data I generate isn’t terribly ambiguous, so even regularized, the cost functions don’t change shape very much… they’re just translated upward slightly.
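One way to see how small the effect is: compare the regularized normal-equation solution at a tiny lambda against a much larger one. This is a hedged sketch using ridge regression's closed form (X^T X + lambda*I)^{-1} X^T y, with the bias term left unpenalized, on the same kind of synthetic e^x data as above; the specific lambda values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, 200)
y = np.exp(x) + rng.normal(0.0, 0.1, 200)

X = np.vander(x, 16, increasing=True)  # features 1, x, ..., x^15

def ridge(X, y, lam):
    """Closed-form regularized normal equation; bias column unpenalized."""
    R = lam * np.eye(X.shape[1])
    R[0, 0] = 0.0
    return np.linalg.solve(X.T @ X + R, X.T @ y)

grid = np.vander(np.linspace(-1.0, 1.0, 50), 16, increasing=True)
h_small = grid @ ridge(X, y, 1e-6)  # barely regularized
h_large = grid @ ridge(X, y, 1.0)   # heavily regularized

# With unambiguous data, the two hypotheses stay close together.
print(np.max(np.abs(h_small - h_large)))
```

Since the data pins down the low-order coefficients well, the penalty mostly shrinks high-order terms the fit barely needs, which is consistent with the cost surface shifting more than it reshapes.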