As a field of study, haptics has closely paralleled the rise and evolution of automation. Before the industrial revolution, scientists focused on how living things experienced touch. Biologists learned that even simple organisms, such as jellyfish and worms, possessed sophisticated touch responses. In the early part of the 20th century, psychologists and medical researchers actively studied how humans experience touch. Appropriately enough, this branch of science became known as human haptics, and it revealed that the human hand, the primary structure associated with the sense of touch, was extraordinarily complex.
With 27 bones and 40 muscles, including muscles located in the forearm, the hand offers tremendous dexterity. Scientists quantify this dexterity using a concept known as degrees of freedom. A degree of freedom is an independent direction of movement afforded by a single joint. Because the human hand contains 22 joints, it allows movement with 22 degrees of freedom. The skin covering the hand is also rich with receptors and nerves, components of the nervous system that communicate touch sensations to the brain and spinal cord.
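The degrees-of-freedom tally described above can be sketched as a simple calculation. This is a minimal illustration of the article's one-degree-of-freedom-per-joint simplification; the generic joint names are placeholders, not anatomical labels:

```python
def total_degrees_of_freedom(joint_dofs):
    """Sum the degrees of freedom contributed by each joint in a
    kinematic model of the hand (or any articulated structure)."""
    return sum(joint_dofs.values())

# Under the article's simplified model, the hand has 22 joints,
# each contributing one degree of freedom. Joint names here are
# placeholders for illustration only.
hand_model = {f"joint_{i}": 1 for i in range(22)}

print(total_degrees_of_freedom(hand_model))  # 22
```

A more detailed model would assign some joints (such as the knuckles, which both bend and spread) more than one degree of freedom, but the summation works the same way.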
Then came the development of machines and robots. These mechanical devices also had to touch and feel their environment, so researchers began to study how this sensation could be transferred to machines. The era of machine haptics had begun. The earliest machines that allowed haptic interaction with remote objects were simple lever-and-cable-actuated tongs placed at the end of a pole. By moving, orienting and squeezing a pistol grip, a worker could remotely control tongs, which could be used to grab, move and manipulate an object.
In the 1940s, these relatively crude remote manipulation systems were improved to serve the nuclear and hazardous material industries. Through a machine interface, workers could manipulate toxic and dangerous substances without risking exposure. Eventually, scientists developed designs that replaced mechanical connections with motors and electronic signals. This made it possible to communicate even subtle hand actions to a remote manipulator more efficiently than ever before.
The next big advance arrived in the form of the electronic computer. At first, computers were used to control machines in a real environment (think of the computer that controls a factory robot in an auto assembly plant). But by the 1980s, computers could generate virtual environments -- 3-D worlds into which users could be immersed. In these early virtual environments, users could receive stimuli through sight and sound only. Haptic interaction with simulated objects would remain limited for many years.
Then, in 1993, the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology (MIT) constructed a device that delivered haptic stimulation, finally making it possible to touch and feel a computer-generated object. The scientists working on the project began to describe their area of research as computer haptics to differentiate it from machine and human haptics. Today, computer haptics is defined as the systems required -- both hardware and software -- to render the touch and feel of virtual objects. It is a rapidly growing field that is yielding a number of promising haptic technologies.
Before we look at some of these technologies in greater detail, let's look at the types of touch sensations a haptic system must provide to be successful.