Technologies

ReaCTor: A room in which the user is presented with high-resolution stereo-pair images projected in real time on 3 walls and the floor. The floor space is 3m x 3m and each of the 3 walls is 3m x 2.2m. The floor is painted wood and is projected from above, while the 3 walls are acrylic and back-projected. The 4 projectors are Barco 808s, modified by SEOS to allow accurate geometrical alignment of the edges, colour convergence of the 3 beams, and colour and contrast blending. Each projector outputs a 1024x768 image at 90 frames/sec; alternate left-eye and right-eye views are sent, so the rate is 45Hz for each eye. The projections are reflected off Mylar mirrors ("folding" the projection to save space). CrystalEyes emitters are mounted behind each screen to provide the IR sync signal for the shutter glasses.
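
The alternating left-eye/right-eye fields are the standard "active" (quad-buffered) stereo scheme. The following is a minimal sketch of how such a display is typically driven in OpenGL, assuming a GLUT window with a stereo visual; the function draw_scene_from_eye and the eye-separation constant are illustrative assumptions, not the ReaCTor's actual rendering code.

/* Minimal sketch of quad-buffered (active) stereo rendering with OpenGL/GLUT.
 * Illustration only: eye separation and scene drawing are assumed values. */
#include <GL/glut.h>

#define EYE_SEPARATION 0.065f   /* assumed interocular distance in metres */

static void draw_scene_from_eye(float eye_offset)
{
    /* Placeholder: offset the view by the eye position and draw the scene. */
    glLoadIdentity();
    glTranslatef(-eye_offset, 0.0f, 0.0f);
    glutSolidTeapot(0.5);       /* stand-in for the real scene */
}

static void display(void)
{
    /* Left-eye field */
    glDrawBuffer(GL_BACK_LEFT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_scene_from_eye(-EYE_SEPARATION / 2.0f);

    /* Right-eye field */
    glDrawBuffer(GL_BACK_RIGHT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_scene_from_eye(+EYE_SEPARATION / 2.0f);

    /* The display alternates the two fields at 90 Hz (45 Hz per eye),
     * in sync with the IR-driven shutter glasses. */
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO);
    glutCreateWindow("active stereo sketch");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
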
The tracking system is an InterSense IS-900 with 2 tracking stations (a head tracker plus a hand-held 'wand' tracker with a joystick and 4 buttons). The tracking is based on 2 technologies. Each tracking station contains an inertial device that measures linear acceleration and angular velocity; this provides very low-latency updates, but with incremental (drift) errors. Each station also contains spatially separated compound microphones that detect ultrasonic signals from 18 emitters spread across the ceiling of the ReaCTor. From time-of-flight measurements between emitters and microphone-receivers, the position and orientation of the tracking stations are computed very accurately; the latter system thus "corrects" the incremental errors of the faster inertial system.
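
The essential idea of this hybrid scheme is dead reckoning from the inertial sensors, periodically pulled back toward an absolute acoustic fix (range = speed of sound x time of flight). A minimal 1-D sketch of that idea follows; the state variables, blend factor, and sample rates are assumptions for illustration, and the real IS-900 fusion filter is proprietary and far more sophisticated.

/* Sketch: fast but drifting inertial dead reckoning, corrected by an
 * occasional absolute ultrasonic time-of-flight fix. Illustrative only. */
#include <stdio.h>

#define SPEED_OF_SOUND  343.0   /* m/s at room temperature (approx.) */
#define CORRECTION_GAIN 0.2     /* how strongly an acoustic fix pulls the estimate */

typedef struct { double x, v; } State;   /* 1-D position and velocity for brevity */

/* High-rate inertial update: integrate measured acceleration (drifts over time). */
static void inertial_update(State *s, double accel, double dt)
{
    s->v += accel * dt;
    s->x += s->v * dt;
}

/* Low-rate acoustic update: time of flight from a ceiling emitter at a known
 * position gives an absolute range to the microphone. */
static void acoustic_update(State *s, double emitter_x, double time_of_flight)
{
    double range = SPEED_OF_SOUND * time_of_flight;
    double absolute_x = emitter_x - range;           /* 1-D geometry only */
    s->x += CORRECTION_GAIN * (absolute_x - s->x);   /* pull estimate toward the fix */
}

int main(void)
{
    State head = {0.0, 0.0};
    const double true_x = 0.0;                        /* head actually stationary */
    for (int i = 0; i < 180; i++) {
        inertial_update(&head, 0.01, 1.0 / 180.0);    /* biased accelerometer: drift */
        if (i % 30 == 0)                              /* occasional absolute fix */
            acoustic_update(&head, 3.0, (3.0 - true_x) / SPEED_OF_SOUND);
    }
    printf("drift-corrected estimate: %.3f m (true 0.000 m)\n", head.x);
    return 0;
}
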

The image generator is an SGI Onyx2 InfiniteReality2 with 4 graphics pipes (one feeding each screen). There are 8 processors, so typically each of the 4 rendering processes is "locked" to its own processor and the tracking process is locked to another; this leaves 3 processors for the application.
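
"Locking" a process to a processor means setting its CPU affinity so the scheduler never migrates it and rendering latency stays predictable. On IRIX this was done with the sysmp() interface; the sketch below shows the same idea with the Linux sched_setaffinity() call, purely as an analogue. The particular CPU numbering is an assumption.

/* Sketch of pinning a process to one processor (Linux analogue of the
 * affinity locking described above). Illustration only. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static int lock_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    /* Pin the calling process so the scheduler never moves it off this CPU. */
    return sched_setaffinity(0, sizeof(set), &set);
}

int main(void)
{
    /* e.g. the rendering process for one graphics pipe claims its own CPU,
     * leaving the remaining processors free for the application. */
    if (lock_to_cpu(0) != 0)
        perror("sched_setaffinity");
    else
        printf("rendering process locked to CPU 0\n");
    return 0;
}
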


Pictures of the UCL ReaCTor (left), and after the addition of a traversable screen for an early City Workshop (right)


More Information

DIVE: UCL has worked with the Distributed Interactive Virtual Environment (DIVE) system from SICS for many years. We have ported it to the ReaCTor and to a variety of other immersive and semi-immersive systems. Most of our scalability and humanoid-animation demonstrations are built in DIVE.

More Information

Crowd Rendering: We are further developing the crowd rendering system for use within the Digital Care, City and CityWide projects.

Virtual City Models: We are developing modelling and rendering processes for very large urban models. Refer to the Virtual London page for more details.

Animated Avatar Models: We are developing and modelling virtual humans and appropriate virtual behaviours for our virtual friends. Refer to the brochure for more information [PDF] or the general website [HTML].