Cave Automatic Virtual Environment

A Cave Automatic Virtual Environment (better known by the recursive acronym CAVE) is an immersive virtual reality environment in which projectors are directed at three to six of the walls of a room-sized cube. The name is also a reference to the allegory of the cave in Plato's Republic, in which a philosopher contemplates perception, reality and illusion.


General characteristics of the CAVE

The CAVE is a large theatre sited within a larger room. The walls of the CAVE are made up of rear-projection screens, and the floor is a down-projection screen. High-resolution projectors display images on the screens via mirrors. Inside the CAVE, the user wears special glasses to see the 3D graphics the system generates. People using the CAVE can see objects apparently floating in the air and can walk around them, getting a proper view of what they would look like in reality. This is made possible by electromagnetic sensors: the user's movements are tracked by the sensors and the video adjusts accordingly. The frame of the CAVE is made of non-magnetic stainless steel to interfere as little as possible with the sensors. Computers control both this aspect of the CAVE and its audio. Multiple speakers placed at multiple angles in the CAVE provide 3D sound to complement the 3D video. [1]


The first CAVE

The first CAVE was developed in the Electronic Visualization Laboratory at the University of Illinois at Chicago and was announced and demonstrated at the 1992 SIGGRAPH conference. The CAVE was developed in response to a challenge from the SIGGRAPH 92 Showcase effort (and its chair, James E. George) for scientists to create and display a one-to-many visualization tool that utilized large projection screens. The CAVE answered that challenge and became the third major physical form of immersive VR (after goggles 'n' gloves and vehicle simulators). Carolina Cruz-Neira, Thomas A. DeFanti and Daniel J. Sandin are credited with its invention. It has been used and developed in cooperation with the NCSA to conduct research in various virtual reality and scientific visualization fields. CAVE is a registered trademark of the University of Illinois Board of Regents. The name was first licensed to Pyramid Systems and is currently licensed to Mechdyne Corporation, the parent company of Fakespace Systems (which acquired Pyramid Systems in 1999). Commercial systems based on the CAVE concept are available from a handful of manufacturers.


A lifelike visual display is created by projectors positioned outside the CAVE and driven by the physical movements of a user inside it. A motion capture system records the user's position in real time. Stereoscopic LCD shutter glasses convey a 3D image: the computers rapidly generate a pair of images, one for each of the user's eyes, based on the motion capture data, and the glasses are synchronized with the projectors so that each eye sees only the correct image. Since the projectors are positioned outside the cube, mirrors are often used to reduce the distance required between the projectors and the screens. One or more computers drive the projectors. Clusters of desktop PCs are popular for running CAVEs because they cost less and run faster than the dedicated graphics workstations traditionally used.
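The per-eye image generation described above is a viewer-centered, off-axis projection: because the screens are fixed but the eyes move, each eye needs an asymmetric viewing frustum recomputed every frame from the tracked head position. The following is a minimal generic sketch of that calculation for a single front wall; it is not the code of any particular CAVE library, and the screen dimensions, interpupillary distance, and the fixed eye-offset direction are illustrative assumptions.

```python
# Sketch of viewer-centered stereo projection for one CAVE wall.
# Geometry is illustrative: the front wall is a 3 m x 3 m screen in
# the plane z = 0, and tracker coordinates are in metres.

WALL = dict(x_min=-1.5, x_max=1.5, y_min=0.0, y_max=3.0, z=0.0)
NEAR, IPD = 0.1, 0.065   # near-plane distance (m), interpupillary distance (m)

def eye_positions(head, right_dir=(1.0, 0.0, 0.0)):
    """Left/right eye positions from the tracked head position.

    `right_dir` should come from the head tracker's orientation; it is
    fixed here for simplicity (user facing the front wall)."""
    hx, hy, hz = head
    rx, ry, rz = right_dir
    off = IPD / 2.0
    left  = (hx - rx * off, hy - ry * off, hz - rz * off)
    right = (hx + rx * off, hy + ry * off, hz + rz * off)
    return left, right

def off_axis_frustum(eye):
    """Asymmetric frustum (as for OpenGL's glFrustum) so the image on
    the fixed screen looks geometrically correct from this eye."""
    ex, ey, ez = eye
    dist = ez - WALL["z"]          # perpendicular eye-to-screen distance
    scale = NEAR / dist            # project screen edges onto the near plane
    return dict(
        left=(WALL["x_min"] - ex) * scale,
        right=(WALL["x_max"] - ex) * scale,
        bottom=(WALL["y_min"] - ey) * scale,
        top=(WALL["y_max"] - ey) * scale,
        near=NEAR,
    )

# Each frame: read the tracker, then render the scene twice, once per eye.
head = (0.2, 1.7, 2.0)             # sample tracker reading
for eye in eye_positions(head):
    frustum = off_axis_frustum(eye)   # parameters for this eye's projection
```

Note how the frustum becomes asymmetric as soon as the eye is off the screen's centre line, which is exactly why a fixed symmetric projection would look wrong as the user walks around.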


Software and libraries designed specifically for CAVE applications are available. There are several techniques for rendering the scene: rendering directly with OpenGL suits simpler simulations, while scene graphs scale better to large scenes. Three scene graphs are in popular use today: OpenSG, OpenSceneGraph, and OpenGL Performer. OpenSG and OpenSceneGraph are open source, while OpenGL Performer is a commercial product from SGI.
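The reason scene graphs scale better is that they organize the world hierarchically, so transforms propagate from parent to child and whole subtrees can be managed (or culled) at once. The toy sketch below illustrates only that core idea; it is not tied to the OpenSG, OpenSceneGraph, or Performer APIs, which use full 4 × 4 matrices and add state sorting and view-frustum culling.

```python
# Minimal scene-graph sketch: transforms propagate from parent to
# child during traversal. Transforms are simplified to 3-D
# translations; real scene graphs use full transformation matrices.

class Node:
    def __init__(self, name, translation=(0.0, 0.0, 0.0)):
        self.name = name
        self.translation = translation
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def traverse(self, parent_pos=(0.0, 0.0, 0.0)):
        """Yield (name, world_position) for every node in the subtree."""
        pos = tuple(p + t for p, t in zip(parent_pos, self.translation))
        yield self.name, pos
        for child in self.children:
            yield from child.traverse(pos)

# A toy factory layout: moving the cell moves the robot arm with it,
# with no per-object bookkeeping in application code.
root = Node("factory")
cell = root.add(Node("robot-cell", translation=(5.0, 0.0, 0.0)))
cell.add(Node("robot-arm", translation=(0.0, 1.2, 0.0)))

positions = dict(root.traverse())
# positions["robot-arm"] == (5.0, 1.2, 0.0)
```

With raw OpenGL, the application would have to recompute and reissue every object's transform itself; the graph structure is what makes large, deeply nested scenes manageable.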

CAVELib is the original Application Programmer's Interface (API) developed for the CAVE™ system created at the Electronic Visualization Laboratory at the University of Illinois at Chicago. The software was commercialized in 1996 and further enhanced by VRCO Inc. CAVELib is a low-level VR software package: it abstracts away, for the developer, window and viewport creation, viewer-centered perspective calculations, display to multiple graphics channels, multi-processing and multi-threading, cluster synchronization and data sharing, and stereoscopic viewing. Developers create all of the graphics for their environment, and CAVELib makes them display properly. The CAVELib API is platform-independent, enabling developers to create high-end virtual reality applications on Windows and Linux operating systems (IRIX, Solaris, and HP-UX are no longer supported). CAVELib-based applications are externally configurable at run time, making an application executable independent of the display system.

VR Juggler is a suite of APIs designed to simplify the VR application development process. It allows the programmer to write an application that will work with any VR display device and any VR input devices, without changing any code or recompiling the application. Juggler is used in over 100 CAVEs worldwide.

CoVE is a suite of APIs designed to enable the creation of reusable VR applications. CoVE provides programmers with an API to develop multi-user, multi-tasking, collaborative, cluster-ready applications with rich 2D interfaces using an immersive window manager and windowing API to provide windows, menus, buttons, and other common widgets within the VR system. CoVE also supports running X11 applications within the VR environment.

Equalizer is an open source rendering framework and resource management system for multipipe applications, ranging from single pipe workstations to VR installations. Equalizer provides an API to write parallel, scalable visualization applications which are configured at run-time by a resource server.

Syzygy is a freely distributed grid operating system for PC-cluster virtual reality, tele-collaboration, and multimedia supercomputing, developed by the Integrated Systems Laboratory at the Beckman Institute of the University of Illinois at Urbana-Champaign. This middleware runs on Mac OS, Linux, Windows, and IRIX. C++, OpenGL, and Python applications (as well as other regular computer applications) can run on it and be distributed for VR.

Avango is a framework for building distributed virtual reality applications. It provides a field/fieldcontainer-based application layer similar to VRML. Within this layer, a scene graph (based on OpenGL Performer), input sensors, and output actuators are implemented as runtime-loadable modules (plugins). A network layer provides automatic replication/distribution of the application graph using a reliable multicast system. Applications in Avango are written in Scheme and run in the scripting layer, which provides complete access to fieldcontainers and their fields; in this way, distributed collaborative scenarios as well as render-distributed applications (or even both at the same time) are supported. Avango was originally developed by the VR group at GMD, now the Virtual Environments Group at Fraunhofer IAIS, and was open-sourced in 2004.

CaveUT is an open source mutator for Unreal Tournament 2004. Developed by PublicVR, CaveUT leverages existing gaming technologies to create a CAVE environment. By using Unreal Tournament's spectator function CaveUT can position virtual viewpoints around the player's "head". Each viewpoint is a separate client that, when projected on a wall, gives the illusion of a 3D environment.

Quest3D is a real-time 3D engine and development platform suitable for CAVE implementations.

Vrui, 3DVisualizer, LidarViewer and several other software packages were developed for the CAVE at the Keck Center for Active Visualization in the Earth Sciences and have been publicly released, with development continuing. Vrui (Virtual Reality User Interface) is a development toolkit that handles real-time rendering, head tracking, and so on, whereas 3DVisualizer, LidarViewer, and the others are applications that provide visualization tools for specific data types.

inVRs is a framework that provides a clearly structured approach for the design of highly interactive and responsive virtual environments (VEs) and networked virtual environments (NVEs). It is developed following open-source principles (LGPL) and is easy to use with CAVEs and a variety of input devices.

VR4MAX is a package for real-time 3D rendering and development of interactive 3D models and simulators based on Autodesk 3ds Max content. VR4MAX Extreme supports multi-projection for CAVE implementations and provides extensive tracking support.

Cave5D is an adaptation of Vis5D to the CAVE. It enables users to interactively explore animated 3D output from weather models and similar data sets.

EON Icube is a hardware and software package developed by Eon Reality that uses PC-based technology to create a multi-sided immersive environment in which participants may be completely surrounded by virtual imagery and 3D sound. The Icube software supports edge blending and can create full quad-buffer stereo images in 3D.

libGlass is a general-purpose distributed computing library, but it has been used extensively in distributed computer graphics applications. Many applications run on a five-sided CAVE with it, for example an astronomy application, an arcade-like flight simulator, and OpenGL demos.

TechViz XL is a commercial software package that makes any existing 3D OpenGL application (such as CATIA, Pro/E, or Unigraphics) work directly in a CAVE without any source-code modification. Working like an OpenGL driver, it takes the commands of the existing application, streams them to a PC cluster, and changes the camera so that the viewpoint follows the tracking system.

VirtualSight is a visualization software developed by Lumiscaphe which can be configured into a CAVE.

Developments in CAVE research

The biggest issues researchers face with the CAVE are size and cost. In response, researchers developed a derivative of the CAVE system called the ImmersaDesk. With the ImmersaDesk, the user looks at one projection screen instead of being completely enclosed, as in the original CAVE. The screen is placed at a 45-degree angle so that the user can look both forward and downward. At 4 × 5 feet, it is wide enough to fill the user's view and provide a proper 3D experience. The 3D images are produced using the same glasses as in the CAVE. The system uses sonic hand and head tracking, so it still relies on a computer to process the user's movements.

This system is much more affordable and practical than the original CAVE for some obvious reasons. First, one does not need to create a “room inside a room”: there is no need to place the ImmersaDesk inside a pitch-black room large enough to accommodate it. One projector and one projection screen are needed instead of four, and the computer does not need to be as expensive or as capable as the one required for the original CAVE. The ImmersaDesk is also attractive because, being derived from the original CAVE, it is compatible with all of the CAVE's software packages, libraries and interfaces. [2]


To create an image that is neither distorted nor out of place, the CAVE must be calibrated before images are projected. The calibration process depends on the motion capture technology. Optical or inertial-acoustic systems only require configuring the origin and axes used by the tracking system. Calibrating electromagnetic sensors (like the ones used in the first CAVE) is more complex. In this case, a person puts on the special glasses needed to see the images in 3D. The projectors then fill the CAVE with many one-inch boxes set one foot apart. The person then takes an instrument called an “ultrasonic measurement device”, which has a cursor in the middle of it, and positions the device so that the cursor is visually in line with each projected box. This process can continue until almost 400 different blocks have been measured. Each time the cursor is placed inside a block, a computer program records the location of that block and sends it to another computer. If the points are calibrated accurately, there should be no distortion in the projected images, and the CAVE can correctly identify where the user is located and track their movements precisely, allowing the projectors to display images based on the person's position inside the CAVE. [3]
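The grid measurement above amounts to collecting pairs of known positions and tracker readings, then fitting a correction that maps raw readings back to true positions. The sketch below shows the idea for a single axis with a simple least-squares linear fit (gain and offset); real electromagnetic-tracker calibration fits a richer distortion model over the full ~400-point 3D grid, and all numbers here are made up for illustration.

```python
# Sketch of the grid-calibration idea: pair the tracker's readings at
# known grid positions with the true positions, then fit a per-axis
# linear correction by least squares.

def fit_axis(measured, true):
    """Least-squares fit of true = gain * measured + offset."""
    n = len(measured)
    sm = sum(measured)
    st = sum(true)
    smm = sum(m * m for m in measured)
    smt = sum(m * t for m, t in zip(measured, true))
    gain = (n * smt - sm * st) / (n * smm - sm * sm)
    offset = (st - gain * sm) / n
    return gain, offset

# Known one-foot grid positions along one axis, and the (distorted)
# readings an electromagnetic tracker might report at each of them.
true_pos = [0.0, 1.0, 2.0, 3.0, 4.0]
readings = [0.10, 1.15, 2.20, 3.25, 4.30]

gain, offset = fit_axis(readings, true_pos)

def correct(m):
    """Apply the fitted correction to a live tracker reading."""
    return gain * m + offset
```

Once the correction is fitted, it is applied to every subsequent tracker reading, which is what lets the projectors place imagery according to where the user actually is rather than where the distorted field says they are.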


The concept of the original CAVE has been reapplied and is currently being used in a variety of fields. Many universities own CAVE systems.

CAVEs have many uses. Many engineering companies use them to enhance product development: prototypes of parts can be created and tested, interfaces can be developed, and factory layouts can be simulated, all before any money is spent on physical parts. This gives engineers a better idea of how a part will behave within the complete product.


Wikimedia Foundation. 2010.
