The primary research interest of my group is the development of algorithms and tools for fast and efficient generation as well as manipulation of high-quality, realistic imagery. In particular, we have been researching a number of challenging problems in the following sub-areas:
Rendering of realistic objects under general lighting requires the simulation of complex light
transport phenomena like shadows, inter-reflections, caustics and sub-surface scattering.
We proposed one of the first methods that could deal with all these phenomena using a technique
called Precomputed
Radiance Transfer. Roughly speaking, light transport is precomputed and projected into
basis functions, which enables real-time evaluation of lighting under arbitrary illumination.
Since this technique is rather efficient, it has been incorporated into DirectX and is used by a number
of games, such as Halo 2. Precomputed radiance transfer techniques are inherently
limited to static scenes, falling short of the ultimate goal of realistic, dynamic imagery.
Recently, we have started to work on real-time global illumination techniques that enable
the use of fully dynamic geometry, lights, and materials. We are investigating the use of
approximate visibility, which has already led to very promising results.
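The precompute-then-project structure behind such relighting methods can be sketched in a few lines. This toy example uses a generic orthonormal basis and random transport data in place of true spherical harmonics and measured visibility; all names and sizes are illustrative, not the actual published system:

```python
import numpy as np

rng = np.random.default_rng(0)
n_dirs, n_basis, n_verts = 64, 4, 3

# A small orthonormal basis over n_dirs sampled directions (stand-in for
# spherical harmonics): columns of Q from a QR factorization.
B, _ = np.linalg.qr(rng.normal(size=(n_dirs, n_basis)))

# Per-vertex "transport" (cosine term times binary visibility), sampled
# over the same directions -- the expensive offline part.
transport = rng.random((n_verts, n_dirs)) * (rng.random((n_verts, n_dirs)) > 0.3)

# Precompute: project each vertex's transport into the basis.
T = transport @ B                  # (n_verts, n_basis) transfer vectors

# Runtime: project the dynamic environment light into the same basis;
# shading then reduces to one small dot product per vertex.
light = rng.random(n_dirs)
l_coeffs = B.T @ light             # (n_basis,) lighting coefficients
shade_fast = T @ l_coeffs          # real-time evaluation

# Reference: direct integration against the basis-approximated light.
shade_ref = transport @ (B @ l_coeffs)
assert np.allclose(shade_fast, shade_ref)
```

The point of the precomputation is that the per-vertex work at runtime is independent of the number of light directions, which is what makes evaluation under arbitrary illumination real-time.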
Further, we have been investigating another important aspect of real-time rendering:
shadows. Shadowing techniques are commonly limited to the generation of
hard shadows. Even then, aliasing is a common artifact. We have developed a new
mathematical formulation of shadowing, Convolution Shadow Maps, which enables us
to efficiently render aliasing-free shadows as well as soft shadows.
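The core idea can be illustrated numerically: expanding the binary shadow test in a truncated Fourier series makes it separable in receiver depth and shadow-map depth, so the depth-map factors can be prefiltered independently of the receiver. The following is a minimal sketch with illustrative constants, not the exact expansion of the published method:

```python
import numpy as np

M = 16                            # number of Fourier terms
k = np.arange(1, M + 1)
c = np.pi * (2 * k - 1)           # odd harmonics; depths assumed in [0, 1]

def shadow_test(d, z):
    """Truncated Fourier approximation of the binary test step(z - d)."""
    return 0.5 + np.sum((2.0 / c) * np.sin(c * (z - d)))

# Prefiltering: average the z-dependent basis terms over shadow-map
# samples once, independently of any receiver depth d.
z_samples = np.array([0.30, 0.32, 0.35, 0.70])
sin_z = np.sin(np.outer(z_samples, c)).mean(axis=0)
cos_z = np.cos(np.outer(z_samples, c)).mean(axis=0)

def filtered_shadow(d):
    # sin(c(z-d)) = sin(cz)cos(cd) - cos(cz)sin(cd): separable in d and z,
    # so filtering over z commutes with evaluating the shadow test.
    return 0.5 + np.sum((2.0 / c) * (sin_z * np.cos(c * d) - cos_z * np.sin(c * d)))

d = 0.5
direct = np.mean([shadow_test(d, z) for z in z_samples])
assert abs(filtered_shadow(d) - direct) < 1e-9
```

Because the filtered basis terms can be stored and mipmapped like ordinary textures, the same machinery yields both anti-aliased hard shadows and, with larger filter kernels, soft shadows.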
Realistic renderings require not only physically-based simulation of light
transport but also realistic materials. To this end, we have worked on
the acquisition and representation of realistic material properties,
such as reflectance acquisition and texture transfer. Editing of complex
material properties is another area we have recently addressed; there we have
shown that common image-editing operations carry over to
the domain of material editing. We have also worked on real-time, realistic
rendering of objects with complex material properties, such as subsurface
scattering, glossy materials under environmental illumination, or hair.
More recently, we have started working in the area of computational photography,
and more specifically in the area of high-dynamic-range imaging (HDRI). HDRI is
a set of techniques that allows one to increase the dynamic range of traditional
photographs, thus avoiding under- or overexposed areas. To this end, a bracketed
sequence of images is merged into a single HDR image. In order to display an
HDR image, one then needs to compress its dynamic range to that of the
display, a process commonly called tone mapping. We have proposed
a technique called Exposure Fusion, which directly creates a tone-mapped image
without going through an HDR image. This technique was quickly adopted by a number
of tools, e.g., PanoTools, due to its
robustness (no parameter tweaking required). We have further worked on radiometrically
calibrated HDR imaging (i.e., both in terms of color and luminance), which enables the
use of a normal digital SLR camera instead of expensive measurement devices.
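A heavily simplified, single-scale sketch of the fusion idea follows, using only a "well-exposedness" weight; the published technique also uses contrast and saturation measures and blends with a multiresolution pyramid to avoid seams. The sigma value and toy images below are illustrative:

```python
import numpy as np

def exposure_fusion(images, sigma=0.2):
    """Fuse a bracketed sequence (values in [0, 1]) into one LDR image."""
    images = np.stack(images).astype(float)          # (N, H, W)
    # Well-exposedness: weight pixels by closeness to mid-gray.
    weights = np.exp(-0.5 * ((images - 0.5) / sigma) ** 2)
    weights /= weights.sum(axis=0, keepdims=True)    # normalize per pixel
    return (weights * images).sum(axis=0)            # weighted average

# Toy bracketed pair: an underexposed and an overexposed 2x2 image.
dark = np.array([[0.05, 0.10], [0.45, 0.50]])
bright = np.array([[0.55, 0.60], [0.95, 0.99]])
fused = exposure_fusion([dark, bright])
```

Each output pixel is dominated by whichever exposure renders it closest to mid-gray, which is why no HDR assembly, camera response recovery, or parameter tweaking is needed before display.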