Abstract
Merging virtual objects into human video sequences is an important technique for many applications, such as special effects in movies and augmented reality. In traditional methods, an operator manually fits a 3D body model onto the human video sequence and generates virtual objects at the resulting 3D body pose. However, manual fitting is time consuming, so automatic registration is required. In this paper, we propose a new method for merging virtual objects into human video sequences. First, we track the current 3D pose of the human figure using spatio-temporal analysis and structural knowledge of the human body. Then we generate CG objects and merge them with the human figure in the video. We demonstrate examples of merging virtual clothing with captured video images.
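The abstract outlines a track-then-composite pipeline but gives no implementation details. As a rough illustration only, the sketch below shows the overall shape of such a pipeline in pure Python: temporal smoothing of noisy 2D joint observations stands in for the paper's spatio-temporal pose analysis, and a square overlay stands in for the rendered CG object. All function names, the smoothing scheme, and the data are assumptions, not the paper's actual method.

```python
# Hypothetical sketch of a track-then-composite pipeline (not the
# paper's actual algorithm): smooth noisy pose observations over
# time, then overlay a "virtual object" at the tracked position.

def track_pose(observations, alpha=0.5):
    """Exponentially smooth noisy 2D joint observations
    (a crude stand-in for spatio-temporal pose analysis)."""
    smoothed = [observations[0]]
    for obs in observations[1:]:
        prev = smoothed[-1]
        smoothed.append(tuple(alpha * o + (1 - alpha) * p
                              for o, p in zip(obs, prev)))
    return smoothed

def composite(frame, pose, size=1):
    """Overlay a square 'virtual object' centered at the tracked pose
    onto a character-grid frame (a stand-in for CG rendering)."""
    h, w = len(frame), len(frame[0])
    cx, cy = int(round(pose[0])), int(round(pose[1]))
    out = [row[:] for row in frame]
    for y in range(max(0, cy - size), min(h, cy + size + 1)):
        for x in range(max(0, cx - size), min(w, cx + size + 1)):
            out[y][x] = '#'
    return out

# Noisy joint positions observed over three frames.
obs = [(4.0, 2.0), (5.2, 2.1), (5.9, 1.8)]
poses = track_pose(obs)

# Composite the virtual object onto the last frame at the tracked pose.
frame = [['.'] * 10 for _ in range(5)]
result = composite(frame, poses[-1])
```

A real system would replace the smoothing step with model-based 3D pose estimation and the overlay with depth-aware rendering of a deformable cloth model, but the data flow is the same.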