Abstract
In this paper, we propose an efficient method that estimates the motion parameters of a human head from a video sequence using a three-layer linear iterative process. In the innermost layer, we estimate the motion of each input face image in the video sequence based on a generic face model and a small set of feature points, recovering the motion parameters with a fast iterative least-squares method. In the middle layer, we iteratively estimate three model scaling factors from multiple frames using the recovered poses. In the outermost layer, we update the 3D coordinates of the feature points on the generic face model. Because every iterative step reduces to a linear solve, the computational cost is low. We evaluate the method on synthetic data under noisy conditions and on two real video sequences; the experimental results show that the proposed method is robust and performs well.
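For concreteness, the following is a minimal sketch of the kind of linear least-squares pose step referred to in the innermost layer; the weak-perspective projection model and all symbols ($\mathbf{p}_i$, $\mathbf{q}_i$, $\mathbf{r}_1$, $\mathbf{r}_2$, $\mathbf{t}$) are illustrative assumptions, not the paper's exact formulation.

Given the 3D feature points $\mathbf{p}_i \in \mathbb{R}^3$ of the generic face model and their detected 2D image locations $\mathbf{q}_i \in \mathbb{R}^2$, a weak-perspective camera gives one pair of linear constraints per point,
\[
  \mathbf{q}_i \approx
  \begin{bmatrix} \mathbf{r}_1^{\top} \\ \mathbf{r}_2^{\top} \end{bmatrix}
  \mathbf{p}_i + \mathbf{t},
\]
where $\mathbf{r}_1, \mathbf{r}_2$ are the scaled first two rows of the head rotation and $\mathbf{t}$ is a 2D translation. Stacking all points into a linear system $A\mathbf{x} = \mathbf{b}$ with unknowns $\mathbf{x} = (\mathbf{r}_1, \mathbf{r}_2, \mathbf{t})$, the normal equations
\[
  \mathbf{x} = (A^{\top}A)^{-1} A^{\top} \mathbf{b}
\]
yield a closed-form estimate; re-orthonormalizing $\mathbf{r}_1, \mathbf{r}_2$ and solving again gives an iterative refinement in which each step is linear, which is the property that keeps the computational cost low.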