International Workshop on Digital and Computational Video

Abstract

Estimation of the spatial coordinates of object points in a 3D environment, determined from stereo images of that environment, is well documented. However, current approaches to 3D reconstruction of a scene from stereo images require either the intrinsic camera parameters, the extrinsic parameters, or the spatial coordinates of at least five world points of that scene; in practice, many more such points are usually required. This paper presents a method for 3D reconstruction when only two camera views, and no other camera information, are available. It is assumed that the optical axes of the right and left cameras intersect and that their Y-axes are parallel. A derivation of the equations needed to solve for the unknown parameters, within the framework of the assumed stereo system, is presented. Next, a one-sixteenth-pixel interpolation over the camera pixel arrays is performed in order to improve the point correspondences, and thereby the accuracy of the parameter estimates. The technique described in this paper provides a 3D reconstruction up to a scale factor. This scaled reconstruction, along with knowledge of the coordinates of one world point or the dimensions of a familiar object, yields the true-to-scale point coordinates for that scene. Experiments on both synthetic and real stereo images yield very satisfactory results, as demonstrated in the paper.
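The final step described in the abstract can be illustrated with a minimal sketch: once a reconstruction is known up to a global scale factor, the true coordinates of a single world point suffice to fix that factor. The function name and data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rescale_reconstruction(points, known_index, known_true_point):
    """Recover true-to-scale coordinates from a reconstruction known
    only up to a global scale factor, given the true coordinates of
    one world point (a hypothetical helper, not the paper's code)."""
    points = np.asarray(points, dtype=float)
    known_true_point = np.asarray(known_true_point, dtype=float)
    # Under a single global scale ambiguity, the factor is the ratio
    # of the true distance from the origin to the reconstructed one
    # for the known point.
    s = np.linalg.norm(known_true_point) / np.linalg.norm(points[known_index])
    return s * points

# Example: a reconstruction off by an unknown factor of 0.25
true_points = np.array([[1.0, 2.0, 8.0], [4.0, 0.0, 6.0]])
reconstructed = 0.25 * true_points
restored = rescale_reconstruction(reconstructed, 0, true_points[0])
# restored now matches true_points
```

Knowing the dimensions of a familiar object works the same way: the scale factor is the ratio of a known true length to the corresponding reconstructed length.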