Phase 0 - Develop Initial CVE Infrastructure
Contents
Overview
DIVE
VRJuggler/Interact Platform
The Interaction System
Overview

The first phase of React/Interact was the development of an appropriate software platform. In the proposal phase it was intended that this platform would be DIVE. However, forces outside the project had led us to develop IPT support for DIVE before the project started, and DIVE did not completely satisfy our needs as they developed within the experimental phases. Thus we had two technical objectives:
- Development of DIVE to support easily configurable tracking input and vehicle support.
- Development of a light-weight platform on top of VRJuggler.
DIVE

The obvious choice of CVE toolkit for our lab was the DIVE software [4], because we had over seven years of experience with it. In order to support ongoing project work, an initial port of DIVE to IPTs had been achieved in 2001 [19]. This infrastructure (DIVE 3.3x3) was released to collaborators around the start of the React/Interact project in Spring 2002, but it did not fully meet the requirements of React/Interact. Key features we required that were not available in DIVE were guaranteed repeatability, down to the precise timing of events on specific frames, and total recording of all user activity. Based on project experiences using a variety of input devices [9,10,11], the React/Interact project drove a major revision (DIVE 3.3x5) that is the current public version.

DIVE version 3.3x5 fully met the requirements of objective 1, and this software supported the third set of experiments in this project. DIVE is a peer-to-peer system with broad support for many display configurations. However, it was not the ideal platform for running very focussed experiments where precise timing of events is critical. In addition, although publicly available as binaries, and as source for academic institutes, DIVE is not Open Source software, and we wanted to make sure our experiments would be reproducible and extendable by others by making the complete platform code available. Thus the interaction module that had been designed for DIVE was re-engineered as a lightweight platform built on top of the VRJuggler software [1]. This platform was used in the first two experiments described in the next section.
VRJuggler/Interact Platform
Because we wanted to concentrate on building an interaction toolkit and not a general VR toolkit, our first step was to choose some pre-existing software as the basis of our system:
- First, we had to choose a particular Virtual Environment (VE) toolkit to set up and run the virtual environment (providing management of hardware). After some discussion we decided to use VRJuggler as our VE toolkit. This is a very versatile system, although still under much development, which aims to offer reliable cross-platform support (i.e. both Windows and UNIX based machines) and can easily be configured to use many different VE peripherals (trackers, screens etc.). This was desirable, as we wanted to run the same application code on many different VR technologies and systems, particularly when running the distributed user experiments in Phase 3 of the project.
- Second, we had to choose a suitable scene graph to manage and store the scene, which could then be manipulated by the interaction system via the various interaction techniques. After many frustrating trials with some of the newer open source scene graphs such as OpenSceneGraph and OpenSG, we found that none were mature enough to be used for our work. We needed a scene graph that was stable and worked immediately, and so we opted for the popular SGI OpenGL Performer.
The Interaction System
Once we had chosen our scene graph and VE toolkit, we had to build and integrate these existing software packages with the new interaction system. We wanted to develop an interaction toolkit that could be used with other CVEs and scene graphs in the future, and so we tried to contain all the interaction functionality in a single interaction manager class (see the simple system schematic below). This interaction manager class would expect a certain set of services from the CVE (such as tracker input), and a particular interface to the scene graph, which would allow manipulation of the scene according to the interactions of the user. The interface would take the form of a set of Interaction Events (IEs), derived from taxonomies of interaction in VR. For example, a single IE might be PICKUP_OBJECT.
The use of IEs as an interface to the scene graph has several advantages. The first is portability: any scene graph may be used in the future, provided it can implement the set of IEs, so we are not tied to a particular scene graph. Note that the same is also true of our choice of CVE; we may use any CVE in the future as long as it can provide appropriate user input to the Interaction Manager.
A second advantage of using IEs as our interface to the scene graph is that we can record the entirety of user interactions in the VE simply by recording all the IEs. That is to say, the record of all IEs in a session represents all the information needed to recreate that session. So by saving all IEs for a user, we can replay and watch the user's movements in the VE at our leisure. This functionality could be very useful to the researcher in the evaluation of experimental results, for two reasons. Firstly, we have a complete record of all button presses, user movements, objects picked up, framerate etc., and so have a rich data set which can be analysed post-experiment in ways that may not have been thought of when the experiment was designed. Secondly, we can better handle any outliers or confusing data, as we can recreate the session and see if anything unusual happened, such as a technology failure like loss of tracking.