In machine learning, you can use Principal Component Analysis (PCA) as a lossy transform that reduces the dimensionality of data, which can improve the performance of computationally expensive operations.

So what does PCA actually do? It crunches multiple dimensions down into fewer (in the simplest case, a single one). Consider for a moment the idea of BMI, or Body Mass Index. It takes two dimensions, mass and height, and crunches them into a single number that somewhat describes both. The diagram to the right expresses this idea of shrinking the number of dimensions: Dimension X is a conglomerate of Dimensions A and B, such that position x corresponds to point a on Dimension A and point b on Dimension B.
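To make that concrete, here's a minimal numpy sketch of the idea, with made-up mass/height values (the data and variable names are mine, purely for illustration): center the points, find the direction of greatest variance, and project each 2-D point onto that one axis, our "Dimension X".

```python
import numpy as np

# Hypothetical data: each row is a person as (mass in kg, height in cm).
data = np.array([
    [60.0, 160.0],
    [70.0, 172.0],
    [80.0, 180.0],
    [90.0, 188.0],
])

# Center the data so the principal axis passes through the mean.
centered = data - data.mean(axis=0)

# Eigen-decompose the covariance matrix; the eigenvector with the
# largest eigenvalue points in the direction of greatest variance.
cov = np.cov(centered, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)
first_pc = eigenvectors[:, np.argmax(eigenvalues)]

# Project each 2-D point onto that single axis: "Dimension X".
dimension_x = centered @ first_pc
print(dimension_x)  # one number per person instead of two
```

Each position along `dimension_x` corresponds to a particular (mass, height) pair, just like the diagram, and the transform is lossy: you can't perfectly recover both original coordinates from the single number.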

As I’ve learned more about algorithmic learning, I’ve found myself believing that we humans really learn in similar ways. This led me to the thought that perhaps the 4 dimensions of space-time may not actually be “real” dimensions at all. Maybe our brains take in the high dimensionality described by String Theory and collapse it, through electro-chemical processes analogous to PCA, into the 4 dimensions we interact with.

It could be that the world we live in is far more complex than we perceive. We could be thinking in the blue Dimension X when really we exist in A and B.

Dr Koh: Undoubtedly. Heck, we really can’t even perceive the 4th dimension in the abstract sense; our brains compensate, using cues from spatial data like movement to form a rationalized “model” of time, as you described. So time as we experience it, I’d say, exists in the “Dimension X” zone. Our own perceptive capacity is limited to essentially the elements we need to survive; sensing “extraneous” data has no biological function. Even the senses we do have can lie to us and create “boogeymen” when our brain believes it’s in our best survival interest.

Richard (post author): Did you listen to the Radiolab episode on color?

It’s kind of relevant here… if you haven’t, check it out: http://www.radiolab.org/2012/may/21/