Shadows and Vectors
Machine learning can be introduced through many different entry points. Some accounts begin with artificial neurons and early attempts to mimic the brain. Others start from linear algebra, constructing the field from vectors, matrices, and optimization. Still others frame intelligence through probability, describing learning as inference under uncertainty.
At first these appear to be different stories about the same field. Different doors into the same building.
Yet after some time a curious realization emerges. The building itself is not as large as it first appears.
No matter where the story begins, whether with neurons, statistics, information theory, or optimization, it eventually circles back to a small set of recurring ideas: representation, projection, and the search for structure in high-dimensional space.
When trying to assemble these fragments into a coherent architecture, one begins to suspect that the story of artificial intelligence might not truly begin with neurons, or even with mathematics.
Perhaps it should begin with Plato.
In Plato's allegory of the cave, prisoners are chained inside a cave and can see only the wall before them. Behind them burns a fire, and between the fire and the prisoners, objects are carried past. The shadows of these objects are projected onto the wall. Having never seen anything else, the prisoners take these shadows to be the real world.
The allegory is usually read as a story about knowledge and illusion. For someone working in machine learning, however, it begins to look strikingly familiar.
In a technical sense, this is precisely what we do.
We call it projection.
The world we attempt to model is impossibly complex, high-dimensional, continuous, and far beyond the capacity of any finite representation. What we actually observe are fragments, signals, and shadows: pixels in an image, tokens in a sentence, measurements in a dataset. From these shadows we attempt to reconstruct structure.
Our models learn functions that map these observations into vectors, points in abstract spaces where similarity, meaning, and relationships can be computed.
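The shadow metaphor can be made literal in a few lines of code. The sketch below is illustrative only, not any particular model: a random linear projection stands in for a learned mapping from a high-dimensional "world" space to a small "shadow" space, where similarity between points becomes computable as a cosine. All dimensions and data here are invented for the example.

```python
# A minimal sketch: points in a high-dimensional "world" space are
# projected, like shadows on a wall, into a low-dimensional
# representation space where similarity can be computed.
import math
import random

random.seed(0)

WORLD_DIM = 1000   # the "world": far more dimensions than we can keep
SHADOW_DIM = 64    # the "wall": a small representation space

# A fixed random linear map stands in for a learned projection.
projection = [[random.gauss(0, 1) for _ in range(WORLD_DIM)]
              for _ in range(SHADOW_DIM)]

def project(x):
    """Map a world-space point to its low-dimensional shadow."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in projection]

def cosine(a, b):
    """Cosine similarity between two shadows."""
    dot = sum(ai * bi for ai, bi in zip(a, b))
    na = math.sqrt(sum(ai * ai for ai in a))
    nb = math.sqrt(sum(bi * bi for bi in b))
    return dot / (na * nb)

# Two nearby world points cast nearby shadows; an unrelated point does not.
x = [random.gauss(0, 1) for _ in range(WORLD_DIM)]
y = [xi + random.gauss(0, 0.1) for xi in x]   # a small perturbation of x
z = [random.gauss(0, 1) for _ in range(WORLD_DIM)]

print(cosine(project(x), project(y)))  # close to 1.0
print(cosine(project(x), project(z)))  # close to 0.0
```

The point of the sketch is only structural: the projection discards most of the world's dimensions, yet enough geometry survives on the wall for similarity to remain meaningful.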
If we are ambitious, we hope that somewhere in that space there exists a vector, or a set of vectors, capable of representing the world as we understand it.
A single representation that captures the structure of everything we can imagine.
Everything we can imagine.
This phrase matters. Imagination marks the boundary of the space we are able to model. The vectors we search for do not represent the world itself, but the world as it appears within the limits of our perception and abstraction.
In this sense machine learning is not merely a technology. It is a continuation of a much older philosophical problem. It concerns how beings with limited senses and finite representations attempt to infer the structure of a reality they never directly observe.
There is undeniable beauty in the projection world. Linear algebra provides a language for manipulating shadows. Optimization offers procedures for refining them. Neural networks construct increasingly complex transformations of these projections.
Yet there is also something quietly unsettling about this framework.
Once we recognize that our models operate entirely within the space of projections, a question inevitably appears.
Are we learning about the world itself, or only about the structure of the shadows we have already seen?
Artists have long understood this ambiguity. A painting is not the world but a projection of it through perception and interpretation. Literature does not reproduce reality but rearranges fragments of experience into patterns that feel meaningful.
Machine learning, in its own way, performs a similar act. It compresses the world into representations, searching for patterns that allow us to predict which shadow may appear next.
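"Predicting which shadow may appear next" can itself be reduced to a toy. The sketch below is a deliberately minimal stand-in for modern language models: a bigram model that counts which token follows which in a tiny invented corpus, then predicts the most frequently observed successor.

```python
# A minimal sketch of "predicting the next shadow": a bigram model
# counts successor tokens in a tiny corpus and predicts the most
# frequent one. The corpus is invented for illustration.
from collections import Counter, defaultdict

corpus = "we watch the wall and the wall shows the shadows".split()

# Count, for each token, how often each other token follows it.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(token):
    """Return the most frequently observed successor of `token`."""
    counts = successors[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "wall" — it follows "the" twice, "shadows" once
```

Scaled up by many orders of magnitude, this counting of shadows is not so far from what our largest models do: they never see the objects, only the statistics of their projections.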
Perhaps Eliot's line, "We had the experience but missed the meaning," captures the tension better than any technical description.
We have the experience: oceans of data, billions of parameters, and models capable of generating images, text, and sound with astonishing fidelity.
And yet the meaning still seems to elude us.
The shadows grow sharper.
But the cave remains.