Abstract: Reconstruction and modeling of 3-D geometry are among the core problems in computer vision. While geometry composed of radiometrically simple materials is currently relatively easy to reconstruct and model, geometry with complex reflectance properties presents a substantial challenge. The research presented here investigates this problem. Two methods are proposed that are specifically designed to handle geometry with complex photometric properties.
The first proposed method is designed to reconstruct large-scale geometry with arbitrary and possibly anisotropic BRDFs. Existing reconstruction techniques typically make explicit or implicit assumptions about the reflectance properties of a surface. The proposed method uses the idea of photometric ranging, where no such assumptions are necessary. In a photometric stereo-like setup of multiple images obtained from a single viewpoint under controlled illumination, photometric ranging recovers the depth directly for each camera pixel, rather than through surface normal field integration. It exploits the basic concept of radiant energy density falloff with distance from a point light source. Double-covering the incident light field makes it possible to find sets of coincidental pairs of light directions, for which this falloff can be used to align the reflected light fields and reconstruct the depth of a scene directly. Unlike photometric stereo, photometric ranging requires no assumptions about surface smoothness, the presence or absence of shadowing, or the nature of the BRDF, which may vary over the surface. Once the depth of a scene is known, the reflected light field can also be resampled to relight the scene, that is, to render the same scene from the camera view but under novel lighting, including nearby and distant sources.
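The core falloff idea can be illustrated with a deliberately simplified toy sketch (not the thesis algorithm): when two point sources happen to be collinear with a scene point, they illuminate it from the same incident direction, so the unknown BRDF value cancels in the intensity ratio and the inverse-square falloff alone determines depth. All names and the one-dimensional geometry below are hypothetical.

```python
import math

def simulate_intensity(depth, light_z, brdf_value, radiance=1.0):
    """Observed pixel intensity for a scene point at z = depth on the
    camera ray (the z-axis), lit by a point source at z = light_z:
    BRDF value times radiance attenuated by inverse-square falloff."""
    r = depth - light_z  # distance from source to scene point
    return brdf_value * radiance / r**2

def recover_depth(i1, i2, z1=1.0, z2=2.0):
    """Invert the ratio i1/i2 = ((depth - z2) / (depth - z1))**2
    for the unknown depth; the BRDF value has cancelled out."""
    s = math.sqrt(i1 / i2)  # = (depth - z2) / (depth - z1)
    return (z2 - s * z1) / (1.0 - s)

true_depth = 5.0
brdf = 0.37  # arbitrary, unknown reflectance: it cancels in the ratio
i1 = simulate_intensity(true_depth, 1.0, brdf)
i2 = simulate_intensity(true_depth, 2.0, brdf)
print(recover_depth(i1, i2))  # recovers 5.0, independent of the BRDF
```

In the actual method the double-covered incident light field supplies such coincidental direction pairs densely over the hemisphere, so depth can be estimated per pixel without this contrived collinear arrangement.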
The second proposed method aims to model small-scale geometry of volumetric surface materials with complex reflectance. Instead of recovering the intricate geometry itself, it uses an appearance-based approach of volumetric texturing, a popular technique for rendering rich 3-D detail when a polygonal surface representation would be ineffective. Although efficient algorithms for rendering volumetric textures have been known for years, capturing the richness of real volumetric materials remains a challenging problem. The proposed technique generates a volumetric representation of a complex 3-D texture with unknown reflectance and structure. From the acquired reflectance data in the form of a 6-D Bidirectional Texture Function (BTF), the proposed method creates an efficient volumetric representation in the form of a stack of semi-transparent layers, each representing a slice through the texture's volume. In addition to negligible storage requirements, this representation is ideally suited for hardware-accelerated real-time rendering.
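The rendering side of such a layered representation reduces to standard alpha compositing of the slices, which is exactly what graphics hardware accelerates. A minimal sketch, with assumed names and scalar colors for brevity, of back-to-front compositing with the "over" operator:

```python
def composite(layers):
    """Composite a stack of semi-transparent slices.

    layers: list of (color, alpha) pairs ordered back to front,
    each slice modeled here as a single scalar color for simplicity.
    Returns the final color using the standard 'over' operator:
    result = alpha * slice_color + (1 - alpha) * background.
    """
    color = 0.0
    for layer_color, alpha in layers:
        color = alpha * layer_color + (1.0 - alpha) * color
    return color

# Three slices through a texture volume, farthest from the eye first:
slices = [(0.9, 0.5), (0.4, 0.25), (0.2, 0.75)]
print(composite(slices))  # -> 0.259375
```

In hardware this loop corresponds to drawing the slice quads in depth order with alpha blending enabled, which is why the layered stack renders in real time.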