Distributed Real-time Interactive Virtual Environment (UvA-DRIVE)
University of Amsterdam
by Robert Belleman and Roman Shulakov.
If you want quick information on using the
UvA-DRIVE at the Section Computational Science,
read this HOWTO (UvA access only).
For more details on the design of this system, read the paper by R.G.
Belleman, B. Stolk and R. de Vries, ``Immersive
Virtual Reality on Commodity Hardware''.
Virtual reality (VR) systems give the user the illusion of being in a virtual
space by creating an immersive, interactive environment in which one can see
objects in stereoscopic 3D and interact with them. Such systems are normally
associated with expensive high-performance hardware that small research
groups often cannot afford.
Thanks to progress in different technological and scientific areas,
it is now possible to build relatively cheap VR systems with performance
and features comparable to those of high-end systems. Our UvA-DRIVE system
uses Immersive Projection Technology with one large screen, active stereo
with shutter glasses, electromagnetic tracking and multi-modal interaction.

Projection
Projection-based video systems are used as a relatively cheap solution to produce
images with a large viewable area that can be seen by a number of users
at the same time. To eliminate shadows on the screen, back projection is
used by projecting the image on the back surface of a semitransparent screen.
This technique requires high-brightness colour projectors. Our system uses
one projector, providing full screen, anti-aliased, full-colour images
of 1024x768 resolution at 120 frames per second (60 frames per second per
eye in stereo mode).

Host computer
The host computer used in VR systems should be powerful enough to perform
the calculations to visualize scientific datasets and to render them on
screen. To do both, high performance CPUs and graphical systems are required.
Our system is a Symmetric Multi Processing (SMP) architecture powered by
two Intel Pentium-III processors (both running at 1 GHz) with 1 GB of shared
memory. The graphics system is based on nVidia's GeForce2 DDR chipset on
an AGP 4x interface. The host machine runs the Linux operating system (version
2.4) and readily available libraries for VR applications, including
OpenGL, Performer, CAVE library, VR Juggler and VTK. This software allows
VR applications to be run on UvA-DRIVE and in a CAVE with almost no changes.

Stereo
VR systems draw stereoscopic images to create depth awareness by providing
separate pictures for the right and left eye. There are several ways to
do this, each with its own advantages and disadvantages. UvA-DRIVE uses
an active stereo method with shutter glasses. The glasses have a shutter
on each eye made from liquid crystal material that can be made transparent
or opaque very quickly. The shutter glasses are synchronized with the images
on the screen through an infrared connection. The projection system switches
from the left-eye to the right-eye image, opening one shutter on the glasses
and closing the other, at a frequency too high to be perceived by
the human eye.
The graphics system is responsible for generating separate pictures
for the left and right eye. UvA-DRIVE does this by doubling the screen's
update frequency: the pictures for the left and right eye are drawn on the
top and bottom halves of the frame buffer and displayed alternately at twice
the normal refresh rate, each stretched vertically so that it occupies the
whole screen area.
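As a rough illustration of this above/below layout (a minimal sketch; the function name and the way the halves are expressed as viewports are assumptions, not part of the UvA-DRIVE software), the two half-height regions of the frame buffer can be computed as:

```python
def stereo_viewports(width, height):
    """Return (x, y, w, h) viewports for the left and right eye when
    both pictures share one frame buffer, stacked vertically.  The
    projector later scans each half out at double the refresh rate,
    so each half-height image fills the whole screen."""
    half = height // 2
    left_eye = (0, half, width, half)   # top half of the frame buffer
    right_eye = (0, 0, width, half)     # bottom half
    return left_eye, right_eye

# For the 1024x768 frame buffer described above:
left, right = stereo_viewports(1024, 768)
print(left)   # (0, 384, 1024, 384)
print(right)  # (0, 0, 1024, 384)
```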

Tracking
VR systems use special hardware called trackers to determine a user's position
and orientation in six degrees-of-freedom. What usually gets tracked are
the user's head and hands. A sensor mounted on the head allows the VR system
to create user-centred projections based on the user's movements while
hand sensors are used to interact with virtual objects.
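The user-centred projection mentioned above can be sketched with the standard off-axis frustum computation (assumptions: the projection screen lies in the z = 0 plane of the tracker's coordinate system and the head sensor reports the eye position in the same units; the function name is invented and none of this is taken from the UvA-DRIVE software):

```python
def off_axis_frustum(eye, screen_min, screen_max, near):
    """Compute glFrustum-style (left, right, bottom, top) values at the
    near plane for a head-tracked, off-axis projection.  The screen is
    the rectangle [screen_min, screen_max] in the z = 0 plane and the
    eye sits eye[2] units in front of it."""
    ex, ey, ez = eye
    scale = near / ez  # similar triangles: map screen edges onto the near plane
    left = (screen_min[0] - ex) * scale
    right = (screen_max[0] - ex) * scale
    bottom = (screen_min[1] - ey) * scale
    top = (screen_max[1] - ey) * scale
    return left, right, bottom, top

# A user standing 2 m in front of the centre of a 2 m x 1.5 m screen
# gets a symmetric frustum; stepping to the side skews it, which is
# exactly what keeps the virtual objects fixed in space as the head moves.
print(off_axis_frustum((0.0, 0.0, 2.0), (-1.0, -0.75), (1.0, 0.75), 0.1))
```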
Tracking can be done with several techniques, such as magnetic fields,
acoustics, inertia and optics. The
UvA-DRIVE uses a Polhemus magnetic tracker that can serve up to four sensors.
This device reports position and orientation of each sensor to the host
machine via a serial interface.
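Reading such a device amounts to decoding fixed-size records from the serial line into position and orientation values. The sketch below shows the idea with a purely illustrative record layout (a station byte followed by six little-endian 32-bit floats); it is not the actual Polhemus wire protocol:

```python
import struct

def parse_record(data):
    """Decode one hypothetical 6-DOF tracker record: a station number
    followed by x, y, z position and azimuth, elevation, roll angles.
    (Illustrative only; the real Polhemus format differs.)"""
    station, x, y, z, az, el, roll = struct.unpack("<B6f", data)
    return {"station": station,
            "position": (x, y, z),
            "orientation": (az, el, roll)}

# Pack and decode a sample record for sensor 1:
raw = struct.pack("<B6f", 1, 0.3, 1.2, -0.5, 90.0, 0.0, 45.0)
rec = parse_record(raw)
print(rec["station"], rec["position"], rec["orientation"])
```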

Input devices
Other hardware used within UvA-DRIVE includes input devices, or controllers.
Controllers
allow a user to interact with the virtual world. Controllers usually have
a number of buttons, joysticks or trackballs to enable interaction. The
hand-driven controllers are combined with tracking sensors to allow the
VR system to receive user commands and to track the position of the hand
with respect to virtual objects.
UvA-DRIVE is equipped with a ``wand'' that has three buttons, one joystick
with two degrees of freedom and a tracking sensor, all mounted in a single
device. A special feature of UvA-DRIVE is a speech recognition system that
can be used to interact with the virtual world using spoken commands. The
system uses a wireless microphone and special software to convert speech
into commands. In many applications, speech makes it easier and faster
to interact with the virtual world.
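A spoken-command interface like this boils down to a mapping from recognized phrases to actions. The sketch below illustrates that dispatch step; the phrases and handlers are invented for illustration and are not the actual UvA-DRIVE vocabulary:

```python
def make_dispatcher(commands):
    """Return a function that maps a recognized phrase to its action.
    Unknown phrases are ignored, which is the safe default when the
    recognizer mishears the user."""
    def dispatch(phrase):
        action = commands.get(phrase.strip().lower())
        return action() if action else None
    return dispatch

# Hypothetical vocabulary; each handler would drive the VR application.
dispatch = make_dispatcher({
    "show grid": lambda: "grid on",
    "hide grid": lambda: "grid off",
    "reset view": lambda: "view reset",
})
print(dispatch("Show Grid"))   # -> grid on
print(dispatch("fly home"))    # unknown phrase -> None
```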
Interaction with distributed simulations
UvA-DRIVE is connected through a Giganet-switch to the supercomputer facilities
available in the Netherlands. UvA-DRIVE is also part of the Virtual Laboratory
and of the worldwide GRID infrastructure based on Globus technology. In this
way, large-scale simulations running on distributed high-performance platforms
can be interactively examined with the help of the UvA-DRIVE system.