Surround Vision

Enhancing Silhouette-based Human Motion Capture with 3D Motion Fields


Investigators: Christian Theobalt, Joel Carranza
Supervisors: Prof. Dr. Hans-Peter Seidel, Dr. Marcus Magnor


Overview

High-quality non-intrusive human motion capture is a prerequisite for the acquisition of model-based free-viewpoint video of human actors. Silhouette-based approaches have demonstrated that they can accurately recover a large range of human motion from multi-view video. However, they do not exploit all of the available image information, in particular texture. In this project we develop an algorithm that uses 3D motion fields reconstructed from optical flow in multi-view video sequences. These motion fields augment the silhouette-based method by incorporating texture information into the tracking process. The algorithm is a key component of a larger system for free-viewpoint video of human actors. Our results demonstrate that the method accurately estimates pose parameters and allows for realistic texture generation in 3D video sequences.
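To make the idea concrete, the sketch below illustrates one standard way of lifting 2D optical flow measured in several calibrated camera views to a 3D motion vector for a surface point of the body model: each camera projection is linearized around the point, and the stacked flow constraints are solved in a least-squares sense. This is a generic reconstruction scheme given as an assumption about how such a motion field can be built, not the project's actual implementation; the function names and the NumPy formulation are purely illustrative.

import numpy as np

def projection_jacobian(P, X):
    """2x3 Jacobian of the perspective projection of 3D point X
    under the 3x4 camera matrix P, evaluated at X."""
    u, v, w = P @ np.append(X, 1.0)
    J = np.empty((2, 3))
    J[0] = (P[0, :3] * w - P[2, :3] * u) / w**2
    J[1] = (P[1, :3] * w - P[2, :3] * v) / w**2
    return J

def lift_flow_to_3d(X, cameras, flows):
    """Estimate the 3D motion vector of surface point X from the 2D optical
    flow vectors observed at its projection in each camera.

    cameras -- list of 3x4 projection matrices in which X is visible
    flows   -- list of corresponding 2D optical flow vectors
    """
    A = np.vstack([projection_jacobian(P, X) for P in cameras])       # (2N, 3)
    b = np.concatenate([np.asarray(f, dtype=float) for f in flows])   # (2N,)
    dX, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dX  # 3D motion vector of X, per frame

At least two cameras in which the point is visible are required for the stacked system to determine all three components of the motion vector.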



Figure 1: Close-up shots of the arm and the torso of the body model before and after the pose correction computed from the 3D motion field. The motion field was exaggerated to give a better visual impression.


Figure 2: Corrective 3D motion field rendered as green arrows on the textured human body model.
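Figures 1 and 2 show the corrective motion field and its effect on the model pose. As a hedged sketch of how such a corrective field could be turned into a differential pose parameter update, the snippet below solves a damped least-squares problem for the parameter increment that best reproduces the 3D motion vectors at the model surface points. The helper pose_jacobian, which would return the derivative of a surface point with respect to the pose parameters, is hypothetical, and the damped normal-equations solver is an assumption rather than the project's actual optimization.

import numpy as np

def differential_pose_update(points, motion_field, pose_jacobian, damping=1e-3):
    """Least-squares pose parameter increment from a corrective 3D motion field.

    points        -- (K, 3) array of body model surface points
    motion_field  -- (K, 3) array of corrective 3D motion vectors at those points
    pose_jacobian -- callable returning the (3, P) Jacobian dX/dtheta at a point
                     (hypothetical helper supplied by the kinematic body model)
    damping       -- Tikhonov term that keeps the update well-conditioned
    """
    J = np.vstack([pose_jacobian(X) for X in points])        # (3K, P)
    b = np.asarray(motion_field, dtype=float).reshape(-1)    # (3K,)
    num_params = J.shape[1]
    # Damped normal equations: (J^T J + damping * I) dtheta = J^T b
    dtheta = np.linalg.solve(J.T @ J + damping * np.eye(num_params), J.T @ b)
    return dtheta

The damping term keeps the update small and numerically stable when parts of the body are only weakly constrained by the motion field.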


Results

Figure 3: Each pair of images shows free-viewpoint video results without (left) and with (right) the differential pose parameter update.

The multi-view video sequences used in our experiments were chosen to contain a significant amount of head motion. The videos therefore show an "acting scene" in which the person pretends to be engaged in a dialogue involving extensive head motion and arm gestures.