Humans judge depth using several visual cues. For people with healthy, working vision in both eyes, one of those cues comes courtesy of binocular vision, which makes stereopsis -- the technical term for 3-D depth perception -- possible. Here's how it works.
Assuming you've got healthy vision in both eyes, your visual fields overlap. That means if you were to close one eye, look forward and then switch eyes, much of what you'd see would remain the same. Both eyes collect light and send signals to the brain, which combines this information into a single image.
Our eyes converge on the points we focus on. The closer an object is, the more our optic axes angle inward to intersect. A scientist named Charles Wheatstone discovered that our brains judge depth by comparing the slight differences -- the disparity -- between the two images our eyes receive. The greater the disparity, the closer the object.
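The geometry behind that comparison can be sketched in a few lines of code. This is a simplified pinhole-camera model of stereo triangulation, not a description of the brain's actual machinery; the baseline, focal length and disparity values below are made-up illustrative numbers.

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Triangulate distance from the shift between two viewpoints.

    baseline_m: separation between the two 'eyes' in meters
    focal_px: focal length of the idealized camera, in pixels
    disparity_px: how far the object shifts between left and right views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_m * focal_px / disparity_px

# A big shift between the two views means a close object;
# a small shift means a distant one.
near = depth_from_disparity(0.065, 800, 40)  # 1.3 m away
far = depth_from_disparity(0.065, 800, 4)    # 13.0 m away
print(near, far)
```

The inverse relationship is the key point: disparity shrinks rapidly with distance, which is also why stereopsis helps most for objects within a few meters of us.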
Wheatstone conducted experiments suggesting that our brains fuse the two streams of data from our eyes into a single mental image. One of these tests involved a stereogram -- a pair of images -- of the same object at two different scales. Viewed separately, the two images are clearly different sizes. But when each eye sees only one of the two images, the brain fuses the pair into a single image whose apparent size falls somewhere between the large and small versions.
Wheatstone also discovered that if a person has poor vision in one eye, the brain learns to suppress the information that eye gathers. Conversely, with practice and concentration it's possible to consciously control how your eyes converge and fuse a pair of images. That's how Magic Eye pictures work -- they require effort on the part of the viewer.
There are other cues we rely on to judge depth, including how large an object appears in relation to other objects within our field of view. But binocular vision is what makes 3-D imaging possible: by presenting each eye with its own image, 3-D technicians can simulate what it's like to look at an actual, physical object.
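The trick a stereoscopic display performs can be sketched as a toy projection: render the same 3-D point once for each eye and note the horizontal offset (parallax) between the two on-screen positions. The eye separation and screen distance here are illustrative assumptions, not values from any real device.

```python
EYE_SEPARATION = 0.065  # meters between the two viewpoints (assumed)
SCREEN_DEPTH = 1.0      # distance from the eyes to the screen plane (assumed)

def project_for_eye(point_xyz, eye_x):
    """Perspective-project a 3-D point onto the screen plane for one eye."""
    x, y, z = point_xyz
    scale = SCREEN_DEPTH / z
    return (eye_x + (x - eye_x) * scale, y * scale)

def parallax(point_xyz):
    """Horizontal offset between the left-eye and right-eye projections."""
    left = project_for_eye(point_xyz, -EYE_SEPARATION / 2)
    right = project_for_eye(point_xyz, +EYE_SEPARATION / 2)
    return right[0] - left[0]

# A point behind the screen plane lands at a different spot for each eye,
# so the brain reads it as depth; a point on the screen plane has zero
# parallax and appears flat at screen distance.
print(parallax((0.0, 0.0, 2.0)))  # nonzero: perceived behind the screen
print(parallax((0.0, 0.0, 1.0)))  # zero: perceived at the screen itself
```

Delivering those two slightly offset images to the correct eyes -- with glasses, shutters or a lenticular screen -- is the whole business of 3-D display.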
So how does the 3DS take advantage of this?