Studying Sound Reflections
The auricle of a person's ear has lots of surfaces that can reflect sound waves. Most of these surfaces are curved. Some might direct the sound toward other surfaces in the ear, causing the wave to bounce more than once before reaching the tympanic membrane. Interactions with a person's face, head, hair and torso are complicated as well. Attempting to isolate and measure each of these reflections by hand would be almost impossible. For this reason, scientists have studied head-related transfer functions (HRTFs) using sound sources, lots of microphones and computer programs.
In some cases, researchers have attached tiny microphones to the surfaces of human participants' bodies. In others, they have used lifelike mannequins designed to accurately represent a person's skin, cartilage and body proportions. One such mannequin is the Knowles Electronics Manikin for Acoustic Research, or KEMAR, which has been used in HRTF research in laboratories such as the MIT Media Lab.
These microphones have one job: to capture sound. Computers can then analyze subtle differences in sounds with different points of origin, or in the way a single sound interacts with different parts of the body. Eventually, this information leads to an algorithm or set of algorithms. The algorithm is essentially a group of rules that describes the way the HRTFs and other factors change the shape of the sound wave. Applying the algorithm to another sound wave changes its shape as well, giving it the same properties that the first wave had after it interacted with the person's body.
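In signal-processing terms, "applying the algorithm" to a sound wave usually means convolving it with a measured head-related impulse response (HRIR). Here is a minimal sketch in Python with NumPy; the HRIR coefficients are made up for illustration, since real ones come from the microphone measurements described above.

```python
import numpy as np

# Hypothetical HRIR for the left ear. In practice these coefficients
# would be measured with tiny microphones at the eardrum or ear canal.
hrir_left = np.array([0.0, 0.6, 0.3, 0.1])

# A simple mono sound wave: a single click (impulse) followed by silence.
sound = np.array([1.0, 0.0, 0.0, 0.0, 0.0])

# Convolution reshapes the wave the same way the listener's head and
# ear would, imprinting the HRTF's "rules" onto the new sound.
shaped = np.convolve(sound, hrir_left)
```

Because the input here is a pure impulse, the output simply reproduces the HRIR, which is exactly why impulse-like test sounds are useful when measuring these responses in the first place.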
Algorithms like these are at the heart of virtual surround-sound systems. Here's what happens:
- Researchers use microphones to capture and study the sound from a 5.1-surround speaker setup. Often, the research includes participants with ears and bodies of many different shapes and sizes to help determine how different people perceive the same sound.
- With the help of a computer, researchers develop an algorithm that can re-create this sound.
- Researchers apply this 5.1-channel algorithm to a two-speaker system, re-creating the sound field that a real 5.1-channel speaker setup would produce.
In other words, the process applies aural cues to the sound wave, fooling your brain into interpreting the sound as though it came from five sources instead of two.
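The steps above can be sketched in code. The idea is to filter each of the five (or six) speaker channels with a pair of hypothetical HRIRs, one per ear, for that speaker's position, then sum the results into a left and a right signal. The function name, signal lengths and HRIR values below are all illustrative assumptions, not part of any real surround-sound product.

```python
import numpy as np

def virtualize(channels, hrirs_left, hrirs_right):
    """Fold multichannel audio down to two ear signals.

    channels:     dict mapping a speaker name to its mono signal
    hrirs_left:   dict mapping a speaker name to the left-ear HRIR
    hrirs_right:  dict mapping a speaker name to the right-ear HRIR
    All HRIRs here are hypothetical; real ones come from measurements.
    """
    # Output length: longest signal after convolution with its HRIR.
    n = max(len(sig) + len(hrirs_left[name]) - 1
            for name, sig in channels.items())
    left = np.zeros(n)
    right = np.zeros(n)
    for name, sig in channels.items():
        # Each speaker channel is shaped as if it arrived from that
        # speaker's direction, then mixed into the two-ear output.
        l = np.convolve(sig, hrirs_left[name])
        r = np.convolve(sig, hrirs_right[name])
        left[:len(l)] += l
        right[:len(r)] += r
    return left, right

# Toy example with two virtual speakers and made-up HRIRs.
channels = {
    "front_left": np.array([1.0, 0.0, 0.0]),
    "front_right": np.array([0.0, 1.0, 0.0]),
}
hrirs_l = {"front_left": np.array([0.9, 0.2]),
           "front_right": np.array([0.3, 0.1])}
hrirs_r = {"front_left": np.array([0.3, 0.1]),
           "front_right": np.array([0.9, 0.2])}
left_ear, right_ear = virtualize(channels, hrirs_l, hrirs_r)
```

Note the mirrored HRIRs: the front-left speaker reaches the left ear loudly and the right ear quietly, and vice versa. Those level and timing differences between the two output signals are the aural cues that let two real speakers (or headphones) stand in for five.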