In machine learning, Principal Component Analysis (PCA) is a lossy transform that reduces the dimensionality of data, which can improve the performance of computationally expensive operations.

So like, what does PCA actually do? It crunches multiple dimensions down into fewer dimensions. Consider for a moment the idea of BMI, or Body Mass Index. It takes 2 dimensions, mass and height, and crunches them into a single number which somewhat describes both. The diagram to the right expresses this idea of shrinking the number of dimensions. Here, Dimension X is a conglomerate of Dimensions A and B such that position x corresponds to point a on Dimension A and point b on Dimension B.
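Here's a minimal sketch of that idea in Python, using only numpy. The height/mass figures and the correlation between them are made up for illustration; the point is just to show two correlated dimensions being collapsed onto a single "Dimension X" (the first principal component):

```python
import numpy as np

# Toy data: two correlated features (think height and mass), 100 samples.
# The specific numbers here are invented purely for demonstration.
rng = np.random.default_rng(0)
height = rng.normal(170, 10, 100)                   # cm
mass = 0.9 * height - 90 + rng.normal(0, 5, 100)    # kg, correlated with height
X = np.column_stack([height, mass])                 # shape (100, 2)

# PCA via SVD: center the data, then find the direction of greatest variance.
X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
pc1 = Vt[0]               # the "Dimension X" direction in (A, B) space
x_1d = X_centered @ pc1   # each 2-D point collapsed to one coordinate

# How much of the original variance survives the collapse?
explained = S[0] ** 2 / (S ** 2).sum()
print(f"Variance retained by one dimension: {explained:.1%}")
```

Because the two features are strongly correlated, a single component keeps most of the variance, which is exactly why the lossy collapse is tolerable.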

As I’ve learned more about algorithmic learning, I’ve found myself believing that we humans really learn in similar ways. This led me to the observation that perhaps the 4 dimensions of space-time may not actually be “real” dimensions at all. Maybe our brains take in the high dimensionality described by String Theory and collapse it, through electro-chemical processes similar to PCA, into the 4 dimensions we interact with.

It could be that the world we live in is far more complex than we perceive. We could be thinking in the blue Dimension X when really we exist in A and B.