Abstract
The appearance of surface texture as it varies with viewing and illumination angles is an increasingly important research topic. The bidirectional texture function (BTF) is used in surface modeling because it describes observed image texture as a function of the imaging parameters. The BTF contains no geometric information; it is based solely on observed texture appearance. Computational tasks such as recognition and rendering typically require projecting a sampled BTF onto a lower-dimensional subspace or clustering it to extract representative textons. However, this approach has a serious drawback: cast shadows and occlusions are not fully captured. When the full BTF is recovered from a sampled BTF by interpolation, two characteristics are difficult or impossible to reproduce: (1) the position and contrast of a shadow border, and (2) the movement of a shadow border as the imaging parameters change continuously. For a textured surface, the nonlinear effects of cast shadows and occlusions are not negligible; rather, these effects occur throughout the surface and are important perceptual cues for inferring surface type. In this paper we present a texture representation that integrates appearance-based information from the sampled BTF with concise geometric information inferred from it. The model is a hybrid of geometric and image-based models and has key advantages in a wide range of tasks, including texture prediction, recognition, and synthesis.
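The interpolation failure described above can be illustrated with a minimal numerical sketch. The toy `texture` function below is a hypothetical stand-in for one slice of a sampled BTF (it is not the paper's model): a 1D image whose shadow border moves with an illumination parameter. Linearly interpolating two samples produces a washed-out plateau rather than a sharp, shifted border — the effect the abstract attributes to cast shadows.

```python
import numpy as np

def texture(theta, n=16):
    """Toy 1D 'texture' whose shadow border position depends on the
    illumination parameter theta in [0, 1]. Hypothetical, for illustration."""
    border = int(n * theta)      # shadow border moves with theta
    img = np.ones(n)
    img[:border] = 0.2           # shadowed region is darker
    return img

a = texture(0.25)                # border at pixel 4
b = texture(0.75)                # border at pixel 12

# Linear interpolation of the two samples at theta = 0.5 ...
interp = 0.5 * (a + b)
# ... versus the true appearance at theta = 0.5.
true = texture(0.5)              # sharp border at pixel 8

# interp has a blurred plateau of value 0.6 over pixels 4..11,
# instead of the sharp 0.2 -> 1.0 step at pixel 8 in `true`:
# neither the border's position nor its contrast is reproduced.
print(interp)
print(true)
```

This is the sense in which interpolating appearance samples cannot reproduce characteristics (1) and (2): the shadow border's sharp contrast is averaged away, and its position does not move continuously with the imaging parameters.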