International Conference on Pattern Recognition

Abstract

For 3D-model-based video conferencing, estimation of both the rigid motions of the human head and the non-rigid motions of the face is usually required. In this paper, we focus on the image motions in the video frames that correspond to the rigid motions of the head. We found that the image motions in image i can be computed by estimating a non-rigid transformation matrix between pose 1 and pose i. A novel two-stage algorithm for estimating this transform is proposed. It assumes that at least four 3D control points are estimated by stereovision at the first pose and that the images of these four points are detected in image i. The transform is then estimated by minimizing the backprojection errors of the four image points, which is more robust and accurate than methods based on directly estimating the rigid motions. To verify our method, a 3D wireframe model of a human head was used to generate the simulated video image sequence.
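The abstract does not specify how the transform is parameterized or which optimizer is used, so the sketch below only illustrates the backprojection-error objective it describes: given 3D control points and their detected 2D locations in image i, refine a generic 3x4 transform by nonlinear least squares on the reprojection residuals. The six synthetic points (rather than the minimum four), the function names, and the choice of scipy's least_squares are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch (not the paper's algorithm): refine a 3x4 transform T
# by minimizing the backprojection error of 3D control points against their
# detected 2D image locations.
import numpy as np
from scipy.optimize import least_squares


def project(T, X):
    """Apply a 3x4 transform to Nx3 points and return Nx2 image coordinates."""
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])  # homogeneous 3D points
    x = (T @ Xh.T).T                               # Nx3 homogeneous image points
    return x[:, :2] / x[:, 2:3]


def backprojection_residuals(params, X, x_obs):
    """Stacked 2D residuals between projected control points and detections."""
    T = params.reshape(3, 4)
    return (project(T, X) - x_obs).ravel()


def estimate_transform(X, x_obs, T0):
    """Refine an initial transform by nonlinear least squares on the residuals."""
    res = least_squares(backprojection_residuals, T0.ravel(), args=(X, x_obs))
    return res.x.reshape(3, 4)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 3D control points (in the paper these would come from
    # stereovision at the first pose) and a ground-truth transform.
    X = rng.uniform(-1.0, 1.0, size=(6, 3)) + np.array([0.0, 0.0, 5.0])
    T_true = np.hstack([np.eye(3), np.array([[0.1], [-0.05], [0.2]])])
    x_obs = project(T_true, X) + rng.normal(0.0, 1e-3, size=(6, 2))  # noisy detections

    T0 = np.hstack([np.eye(3), np.zeros((3, 1))])  # rough initial guess
    T_est = estimate_transform(X, x_obs, T0)
    print("max backprojection error:", np.abs(project(T_est, X) - x_obs).max())
```

Note that a 3x4 projective transform is only defined up to scale, so the refined matrix is one representative of an equivalence class; what matters for the illustration is that the backprojection residuals are driven toward the detection noise level.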