How Virtual Reality Works

A virtual reality CAVE display projecting images onto the floor, walls and ceiling to provide full immersion.
Photo courtesy of Dave Pape

What do you think of when you hear the words virtual reality (VR)? Do you imagine someone wearing a clunky helmet attached to a computer with a thick cable? Do visions of crudely rendered pterodactyls haunt you? Do you think of Neo and Morpheus traipsing about the Matrix? Or do you wince at the term, wishing it would just go away?

If the last applies to you, you're likely a computer scientist or engineer, many of whom now avoid the words virtual reality even while they work on technologies most of us associate with VR. Today, you're more likely to hear someone use the words virtual environment (VE) to refer to what the public knows as virtual reality. We'll use the terms interchangeably in this article.

Naming discrepancies aside, the concept remains the same: using computer technology to create a simulated, three-dimensional world that a user can manipulate and explore while feeling as if he were in that world. Scientists, theorists and engineers have designed dozens of devices and applications to achieve this goal. Opinions differ on what exactly constitutes a true VR experience, but in general it should include:

  • Three-dimensional images that appear to be life-sized from the perspective of the user
  • The ability to track a user's motions, particularly his head and eye movements, and correspondingly adjust the images on the user's display to reflect the change in perspective (a simple rendering loop illustrating this idea is sketched below)
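
Putting those two requirements together, the heart of any VR system is a tight loop: sense where the user is looking, redraw the world from that viewpoint, and display the result before the user notices the delay. The short sketch below is only an illustration; the tracker, renderer and display objects are hypothetical stand-ins, not the interface of any real VR library.

    def run_vr_loop(tracker, renderer, display):
        """Track the head, redraw the world from its viewpoint, repeat."""
        while display.is_open():
            pose = tracker.read_head_pose()   # head position and orientation
            frame = renderer.draw(pose)       # life-sized view from that pose
            display.present(frame)            # show the frame, then loop again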

In this article, we'll look at the defining characteristics of VR, some of the technology used in VR systems, a few of its applications, some concerns about virtual reality and a brief history of the discipline. In the next section, we'll look at how experts define virtual environments, starting with immersion.

Virtual Reality Immersion

A virtual reality unit that allows the user to move freely in any direction
Photo courtesy of VIRTUSPHERE

In a virtual reality environment, a user experiences immersion, or the feeling of being inside and a part of that world. He is also able to interact with his environment in meaningful ways. The combination of a sense of immersion and interactivity is called telepresence. Computer scientist Jonathan Steuer defined it as “the extent to which one feels present in the mediated environment, rather than in the immediate physical environment.” In other words, an effective VR experience causes you to become unaware of your real surroundings and focus on your existence inside the virtual environment.

Jonathan Steuer proposed two main components of immersion: depth of information and breadth of information. Depth of information refers to the amount and quality of data in the signals a user receives when interacting in a virtual environment. For the user, this could refer to a display’s resolution, the complexity of the environment’s graphics, the sophistication of the system’s audio output, et cetera. Steuer defines breadth of information as the “number of sensory dimensions simultaneously presented.” A virtual environment experience has a wide breadth of information if it stimulates all your senses. Most virtual environment experiences prioritize visual and audio components over other sensory-stimulating factors, but a growing number of scientists and engineers are looking into ways to incorporate a user’s sense of touch. Systems that give a user force feedback and touch interaction are called haptic systems.

For immersion to be effective, a user must be able to explore what appears to be a life-sized virtual environment and be able to change perspectives seamlessly. If the virtual environment consists of a single pedestal in the middle of a room, a user should be able to view the pedestal from any angle, and the point of view should shift according to where the user is looking. Dr. Frederick Brooks, a pioneer in VR technology and theory, says that displays must maintain a frame rate of at least 20 to 30 frames per second in order to create a convincing user experience.

The Virtual Reality Environment

Other sensory output from the VE system should adjust in real time as a user explores the environment. If the environment incorporates 3-D sound, the user must be convinced that the sound’s orientation shifts in a natural way as he maneuvers through the environment. Sensory stimulation must be consistent if a user is to feel immersed within a VE. If the VE shows a perfectly still scene, you wouldn’t expect to feel gale-force winds. Likewise, if the VE puts you in the middle of a hurricane, you wouldn’t expect to feel a gentle breeze or detect the scent of roses.

Lag time between when a user acts and when the virtual environment reflects that action is called latency. Latency usually refers to the delay between the time a user turns his head or moves his eyes and the change in the point of view, though the term can also be used for a lag in other sensory outputs. Studies with flight simulators show that humans can detect a latency of more than 50 milliseconds. When a user detects latency, it causes him to become aware of being in an artificial environment and destroys the sense of immersion.
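
To put those figures in context, here is a rough, illustrative sketch (not a real measurement tool) of the arithmetic involved: at 20 to 30 frames per second, each frame must be produced in roughly 33 to 50 milliseconds, and a delay between head motion and the updated image much beyond 50 milliseconds becomes detectable. The threshold constant below comes from the article; everything else is an assumption for illustration.

    LATENCY_THRESHOLD_MS = 50.0   # delay users can begin to detect

    def frame_budget_ms(frames_per_second: float) -> float:
        """Time available to produce one frame at a given frame rate."""
        return 1000.0 / frames_per_second

    def latency_is_noticeable(head_motion_time_s: float, display_time_s: float) -> bool:
        """True if the delay between head motion and the updated image is detectable."""
        latency_ms = (display_time_s - head_motion_time_s) * 1000.0
        return latency_ms > LATENCY_THRESHOLD_MS

    print(frame_budget_ms(20))   # 50.0 ms per frame
    print(frame_budget_ms(30))   # about 33.3 ms per frame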

An immersive experience suffers if a user becomes aware of the real world around him. Truly immersive experiences make the user forget his real surroundings, effectively causing the computer to become a nonentity. In order to reach the goal of true immersion, developers have to come up with input methods that are more natural for users. As long as a user is aware of the interaction device, he is not truly immersed.

In the next section, we’ll look at the other facet of telepresence: interactivity.

Virtual Reality Interactivity

DisneyQuest’s CyberSpace Mountain capsule
Photo courtesy of Sue Holland

Immersion within a virtual environment is one thing, but for a user to feel truly involved there must also be an element of interaction. Early applications using the technology common in VE systems today allowed the user to have a relatively passive experience. Users could watch a pre-recorded film while wearing a head-mounted display (HMD). They would sit in a motion chair and watch the film as the system subjected them to various stimuli, such as blowing air on them to simulate wind. While users felt a sense of immersion, interactivity was limited to shifting their point of view by looking around. Their path was pre-determined and unalterable.

Today, you can find virtual roller coasters that use the same sort of technology. DisneyQuest in Orlando, Florida features CyberSpace Mountain, where patrons can design their own roller coaster, then enter a simulator to ride their virtual creation. The system is very immersive, but apart from the initial design phase there isn't any interaction, so it's not an example of a true virtual environment.

Interactivity depends on many factors. Steuer suggests that three of these factors are speed, range and mapping. Steuer defines speed as the rate at which a user's actions are incorporated into the computer model and reflected in a way the user can perceive. Range refers to how many possible outcomes could result from any particular user action. Mapping is the system's ability to produce natural results in response to a user's actions.

Navigation within a virtual environment is one kind of interactivity. If a user can direct his own movement within the environment, it can be called an interactive experience. Most virtual environments include other forms of interaction, since users can easily become bored after just a few minutes of exploration. Computer scientist Mary Whitton points out that poorly designed interaction can drastically reduce the sense of immersion, while finding ways to engage users can increase it. When a virtual environment is interesting and engaging, users are more willing to suspend disbelief and become immersed.

True interactivity also includes being able to modify the environment. A good virtual environment will respond to the user's actions in a way that makes sense, even if it only makes sense within the realm of the virtual environment. If a virtual environment changes in outlandish and unpredictable ways, it risks disrupting the user's sense of telepresence.

In the next section, we'll look at some of the hardware used in VE systems.

The Virtual Reality Headset

The Nintendo Power Glove used in virtual reality gaming

Today, most VE systems are powered by normal personal computers. PCs are sophisticated enough to develop and run the software necessary to create virtual environments. Graphics are usually handled by powerful graphics cards originally designed with the video gaming community in mind. The same video card that lets you play World of Warcraft is probably powering the graphics for an advanced virtual environment.

VE systems need a way to display images to a user. Many systems use HMDs, which are headsets that contain two monitors, one for each eye. The images create a stereoscopic effect, giving the illusion of depth. Early HMDs used cathode ray tube (CRT) monitors, which were bulky but provided good resolution and quality, or liquid crystal display (LCD) monitors, which were much cheaper but were unable to compete with the quality of CRT displays. Today, LCD displays are much more advanced, with improved resolution and color saturation, and have become more common than CRT monitors.
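
As a rough illustration of how the two-monitor arrangement produces depth, the sketch below renders the same scene from two virtual cameras separated by the distance between the eyes. The 64 mm interpupillary distance and the function names are assumptions made for illustration, not any headset's actual interface.

    import numpy as np

    IPD_METERS = 0.064  # typical adult interpupillary distance (assumed value)

    def eye_view_matrices(head_view: np.ndarray):
        """Derive left- and right-eye view matrices from the head's view matrix."""
        def shift(offset_x: float) -> np.ndarray:
            m = np.eye(4)
            m[0, 3] = offset_x          # translate along the eye-to-eye axis
            return m

        left_eye = shift(+IPD_METERS / 2) @ head_view
        right_eye = shift(-IPD_METERS / 2) @ head_view
        return left_eye, right_eye

    # Rendering the scene once with each matrix yields the stereoscopic pair
    # that the HMD shows to the left and right eyes.
    left, right = eye_view_matrices(np.eye(4))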

A data suit to provide user input
Photo courtesy of Dave Pape

Other VE systems project images on the walls, floor and ceiling of a room and are called Cave Automatic Virtual Environments (CAVE). The University of Illinois at Chicago designed the first CAVE display, using a rear-projection technique to display images on the walls, floor and ceiling of a small room. Users can move around in a CAVE display, wearing special glasses to complete the illusion of moving through a virtual environment. CAVE displays give users a much wider field of view, which helps in immersion. They also allow a group of people to share the experience at the same time (though the display would track only one user’s point of view, meaning others in the room would be passive observers). CAVE displays are very expensive and require more space than other systems.

Closely related to display technology are tracking systems. Tracking systems analyze the orientation of a user’s point of view so that the computer system sends the right images to the visual display. Most systems require a user to be tethered with cables to a processing unit, limiting the range of motions available to him. Tracker technology developments tend to lag behind other VR technologies because the market for such technology is mainly VR-focused. Without the demands of other disciplines or applications, there isn’t as much interest in developing new ways to track user movements and point of view.
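
The sketch below shows, in simplified form, what a tracking system supplies to the rest of the VR pipeline: the head's yaw and pitch converted into the direction the virtual camera should face. The conventions and the function name are illustrative and not tied to any particular tracking product.

    import math

    def view_direction(yaw_deg: float, pitch_deg: float):
        """Turn head yaw and pitch (in degrees) into a unit forward vector (x, y, z)."""
        yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
        x = math.cos(pitch) * math.sin(yaw)
        y = math.sin(pitch)
        z = -math.cos(pitch) * math.cos(yaw)   # -Z is "forward" in many graphics conventions
        return (x, y, z)

    # Head turned 90 degrees to the right and tilted 10 degrees upward:
    print(view_direction(90.0, 10.0))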

Input devices are also important in VR systems. Currently, input devices range from controllers with two or three buttons to electronic gloves and voice recognition software. There is no standard control system across the discipline. VR scientists and engineers are continuously exploring ways to make user input as natural as possible to increase the sense of telepresence. Some of the more common forms of input devices are listed here, followed after the list by a brief sketch of how such input can be mapped to movement:

  • Joysticks
  • Force balls/tracking balls
  • Controller wands
  • Datagloves
  • Voice recognition
  • Motion trackers/bodysuits
  • Treadmills
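
Whatever the device, its raw signals have to be mapped onto actions in the virtual world. The sketch below shows one simple, assumed mapping from a joystick or wand's two analog axes to movement through the environment; the dead zone, speed and axis names are illustrative choices rather than any standard.

    DEAD_ZONE = 0.1            # ignore tiny stick deflections near center
    WALK_SPEED_M_PER_S = 1.5   # assumed walking speed in the virtual world

    def stick_to_step(axis_x: float, axis_y: float, dt: float):
        """Map normalized stick axes (-1..1) to this frame's displacement in meters."""
        if abs(axis_x) < DEAD_ZONE:
            axis_x = 0.0
        if abs(axis_y) < DEAD_ZONE:
            axis_y = 0.0
        return (axis_x * WALK_SPEED_M_PER_S * dt, axis_y * WALK_SPEED_M_PER_S * dt)

    # Pushing the stick halfway forward for one thirtieth of a second:
    print(stick_to_step(0.0, 0.5, 1.0 / 30.0))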

Virtual Reality Games

The Wii controller in action

Scientists are also exploring the possibility of developing biosensors for VR use. A biosensor can detect and interpret nerve and muscle activity. With a properly calibrated biosensor, a computer can interpret how a user is moving in physical space and translate that into the corresponding motions in virtual space. Biosensors may be attached directly to the skin of a user, or may be incorporated into gloves or bodysuits. One limitation of biosensor suits is that they must be custom made for each user or the sensors will not line up properly on the user’s body.
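
In highly simplified terms, interpreting such a sensor amounts to smoothing a noisy activity signal and mapping sustained activity to an action in the virtual world. The sketch below, with made-up numbers and a made-up gesture mapping, is meant only to illustrate that idea.

    def muscle_activation(samples, window: int = 8) -> float:
        """Average the absolute value of the most recent sensor samples."""
        recent = samples[-window:]
        return sum(abs(s) for s in recent) / len(recent)

    def interpret(samples, threshold: float = 0.6) -> str:
        """Map smoothed muscle activity to a virtual-hand action."""
        return "close_hand" if muscle_activation(samples) > threshold else "open_hand"

    print(interpret([0.1, 0.2, 0.7, 0.9, 0.8, 0.85, 0.9, 0.7]))  # -> close_hand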

Mary Whitton, of UNC-Chapel Hill, believes that the entertainment industry will drive the development of most VR technology going forward. The video game industry in particular has contributed advancements in graphics and sound capabilities that engineers can incorporate into virtual reality systems’ designs. One advance that Whitton finds particularly interesting is the Nintendo Wii’s wand controller. The controller is not only a commercially available device with some tracking capabilities; it’s also affordable and appeals to people who don’t normally play video games. Since tracking and input devices are two areas that traditionally have fallen behind other VR technologies, this controller could be the first of a new wave of technological advances useful to VR systems.

Some programmers envision the Internet developing into a three-dimensional virtual space, where you navigate through virtual landscapes to access information and entertainment. Web sites could take the form of three-dimensional locations, allowing users to explore in a much more literal way than before. Programmers have developed several different computer languages and Web browsers to achieve this vision. Some of these include:

  • Virtual Reality Modeling Language (VRML) - the earliest three-dimensional modeling language for the Web.
  • 3DML - a three-dimensional modeling language where a user can visit a spot (or Web site) through most Internet browsers after installing a plug-in.
  • X3D - the language that replaced VRML as the standard for creating virtual environments on the Internet.
  • Collaborative Design Activity (COLLADA) - a format used to allow file interchanges within three-dimensional programs.

Of course, many VE experts would argue that without an HMD, Internet-based systems are not true virtual environments. They lack critical elements of immersion, particularly tracking and displaying images as life-sized.

Virtual Reality Applications

Using virtual therapy to treat a patient’s fear of flying.
Photo courtesy of Virtually Better, Inc.

In the early 1990s, the public's exposure to virtual reality rarely went beyond a relatively primitive demonstration of a few blocky figures being chased around a chessboard by a crude pterodactyl. While the entertainment industry is still interested in virtual reality applications in games and theatre experiences, the really interesting uses for VR systems are in other fields.

Some architects create virtual models of their building plans so that people can walk through the structure before the foundation is even laid. Clients can move around exteriors and interiors and ask questions, or even suggest alterations to the design. Virtual models can give you a much more accurate idea of how moving through a building will feel than a miniature model.

Car companies have used VR technology to build virtual prototypes of new vehicles, testing them thoroughly before producing a single physical part. Designers can make alterations without having to scrap the entire model, as they often would with physical ones. The development process becomes more efficient and less expensive as a result.

Virtual environments are used in training programs for the military, the space program and even medical students. The military has long been a supporter of VR technology and development. Training programs can include everything from vehicle simulations to squad combat. On the whole, VR systems are much safer and, in the long run, less expensive than alternative training methods. Soldiers who have gone through extensive VR training have proven to be as effective as those who trained under traditional conditions.

In medicine, staff can use virtual environments to train in everything from surgical procedures to diagnosing a patient. Surgeons have used virtual reality technology to not only train and educate, but also to perform surgery remotely by using robotic devices. The first robotic surgery was performed in 1998 at a hospital in Paris. The biggest challenge in using VR technology to perform robotic surgery is latency, since any delay in such a delicate procedure can feel unnatural to the surgeon. Such systems also need to provide finely-tuned sensory feedback to the surgeon.

Another medical use of VR technology is psychological therapy. Dr. Barbara Rothbaum of Emory University and Dr. Larry Hodges of the Georgia Institute of Technology pioneered the use of virtual environments in treating people with phobias and other psychological conditions. They use virtual environments as a form of exposure therapy, where a patient is exposed -- under controlled conditions -- to stimuli that cause him distress. The application has two big advantages over real exposure therapy: it is much more convenient and patients are more willing to try the therapy because they know it isn't the real world. Their research led to the founding of the company Virtually Better, which sells VR therapy systems to doctors in 14 countries.

In the next section, we'll look at some concerns and challenges with virtual reality technology.

Virtual Reality Challenges and Concerns

Image courtesy NASA

The big challenges in the field of virtual reality are developing better tracking systems, finding more natural ways to allow users to interact within a virtual environment and decreasing the time it takes to build virtual spaces. While there are a few tracking system companies that have been around since the earliest days of virtual reality, most companies are small and don’t last very long. Likewise, there aren’t many companies that are working on input devices specifically for VR applications. Most VR developers have to rely on and adapt technology originally meant for another discipline, and they have to hope that the company producing the technology stays in business. As for creating virtual worlds, it can take a long time to create a convincing virtual environment - the more realistic the environment, the longer it takes to make it. It could take a team of programmers more than a year to duplicate a real room accurately in virtual space.

Another challenge for VE system developers is creating a system that avoids bad ergonomics. Many systems rely on hardware that encumbers a user or limits his options through physical tethers. Without well-designed hardware, a user could have trouble with his sense of balance or inertia, which decreases the sense of telepresence, or he could experience cybersickness, with symptoms that can include disorientation and nausea. Not all users seem to be at risk for cybersickness -- some people can explore a virtual environment for hours with no ill effects, while others may feel queasy after just a few minutes.

Some psychologists are concerned that immersion in virtual environments could psychologically affect a user. They suggest that VE systems that place a user in violent situations, particularly as the perpetrator of violence, could result in the user becoming desensitized. In effect, there’s a fear that VE entertainment systems could breed a generation of sociopaths. Others aren’t as worried about desensitization, but do warn that convincing VE experiences could lead to a kind of cyber addiction. There have been several news stories of gamers neglecting their real lives for their online, in-game presence. Engaging virtual environments could potentially be even more addictive.

Another emerging concern involves criminal acts. In the virtual world, defining acts such as murder or sex crimes has been problematic. At what point can authorities charge a person with a real crime for actions within a virtual environment? Studies indicate that people can have real physical and emotional reactions to stimuli within a virtual environment, and so it’s quite possible that a victim of a virtual attack could feel real emotional trauma. Can the attacker be punished for causing real-life distress? We don’t yet have answers to these questions.

In the next section, we’ll look at the history of virtual reality.

Virtual Reality History

Photo courtesy of Atticus Graybill of Virtually Better, Inc.

The concept of virtual reality has been around for decades, even though the public really only became aware of it in the early 1990s. In the mid-1950s, a cinematographer named Morton Heilig envisioned a theatre experience that would stimulate all of his audience’s senses, drawing them into the stories more effectively. In 1960 he built a single-user console called the Sensorama that included a stereoscopic display, fans, odor emitters, stereo speakers and a moving chair. He also invented a head-mounted television display designed to let a user watch television in 3-D. Users were passive audiences for the films, but many of Heilig’s concepts would find their way into the VR field.

Philco Corporation engineers developed the first HMD in 1961, called the Headsight. The helmet included a video screen and tracking system, which the engineers linked to a closed circuit camera system. They intended the HMD for use in dangerous situations -- a user could observe a real environment remotely, adjusting the camera angle by turning his head. Bell Laboratories used a similar HMD for helicopter pilots. They linked HMDs to infrared cameras attached to the bottom of helicopters, which allowed pilots to have a clear field of view while flying in the dark.

In 1965, a computer scientist named Ivan Sutherland envisioned what he called the “Ultimate Display.” Using this display, a person could look into a virtual world that would appear as real as the physical world the user lived in. This vision guided almost all the developments within the field of virtual reality. Sutherland’s concept included:

  • A virtual world that appears real to any observer, seen through an HMD and augmented through three-dimensional sound and tactile stimuli
  • A computer that maintains the world model in real time
  • The ability for users to manipulate virtual objects in a realistic, intuitive way

In 1966, Sutherland built an HMD that was tethered to a computer system. The computer provided all the graphics for the display (up to this point, HMDs had only been linked to cameras). He used a suspension system to hold the HMD, as it was far too heavy for a user to support comfortably. The HMD could display images in stereo, giving the illusion of depth, and it could also track the user’s head movements so that the field of view would change appropriately as the user looked around.

Virtual Reality Development

A BOOM Display used by NASA to emulate space
Photo courtesy of NASA

NASA, the Department of Defense and the National Science Foundation funded much of the research and development for virtual reality projects. The CIA contributed $80,000 in research money to Sutherland. Early applications mainly fell into the vehicle simulator category and were used in training exercises. Because the flight experiences in simulators were similar but not identical to real flights, the military, NASA, and airlines instituted policies requiring a significant lag time (at least one day) between a pilot's simulated flight and a real flight, in case the differences hurt the pilot's real-world performance.

For years, VR technology remained out of the public eye. Almost all development focused on vehicle simulations until the 1980s. Then, in 1984, a computer scientist named Michael McGreevy began to experiment with VR technology as a way to advance human-computer interface (HCI) designs. HCI still plays a big role in VR research, and it also led the media to pick up on the idea of VR a few years later.

Jaron Lanier coined the term virtual reality in 1987. In the 1990s, the media latched on to the concept of virtual reality and ran with it. The resulting hype gave many people unrealistic expectations of what virtual reality technologies could do. As the public realized that virtual reality was not yet as sophisticated as they had been led to believe, interest waned. The term virtual reality began to fade away along with the public’s expectations. Today, VE developers try not to exaggerate the capabilities or applications of VE systems, and they also tend to avoid the term virtual reality.

For more information on virtual reality and virtual environments, check out the links on the next page.

Frequently Answered Questions

What are the 3 types of virtual reality?
The three types of virtual reality are non-immersive, semi-immersive, and fully immersive. Non-immersive VR does not block out the real world and only provides basic interaction. Semi-immersive VR partially blocks out the real world and provides more advanced interaction. Fully immersive VR completely blocks out the real world and provides the most advanced interaction.

Lots More Information

Sources

  • Beier, K. “Virtual Reality: A Short Introduction.” University of Michigan Virtual Reality Laboratory at the College of Engineering.
  • Briggs, John C. “The Promise of Virtual Reality.” The Futurist. 1996.
  • Brooks, Frederick. “Is There Any Real Virtue in Virtual Reality?” Presentation. 1998.
  • Brooks, Frederick. “What’s Real About Virtual Reality?” IEEE Computer Graphics and Applications. 1999.
  • Carlson, Wayne. “A Critical History of Computer Graphics and Animation.” The Ohio State University. 2003. http://accad.osu.edu/~waynec/history/lesson17.html
  • ScienceDaily
  • Sipress, Alan. “Does Virtual Reality Need a Sheriff?” Washington Post. 2 June, 2007.
  • Steuer, Jonathan. “Defining Virtual Reality: Dimensions Determining Telepresence.” Journal of Communication. Vol. 42, No. 2. 1992.
  • The Encyclopedia of Virtual Environments. http://www.hitl.washington.edu/scivw/EVE/index.html
  • Whitton, Mary C. “Making Virtual Environments Compelling.” Communications of the ACM. Vol. 46, No. 7. 2003.
  • Whitton, Mary C. Personal interview conducted June 19, 2007.
