Interactive Model-based Reconstruction of the Human Head
using an RGB-D Sensor

CASA 2014

M. Zollhöfer¹, J. Thies¹, M. Colaianni¹, M. Stamminger¹, G. Greiner¹
¹University of Erlangen-Nuremberg


Abstract

We present a novel method for the interactive, markerless reconstruction of human heads using a single commodity RGB-D sensor. Our entire reconstruction pipeline is implemented on the GPU and yields high-quality reconstructions of the human head through an interactive and intuitive reconstruction paradigm. The core of our method is a fast GPU-based non-linear Quasi-Newton solver that allows us to leverage all information in the RGB-D stream and to fit a statistical head model to the observations at interactive frame rates. By jointly solving for shape, albedo and illumination parameters, we are able to reconstruct high-quality models, including illumination-corrected textures. All obtained reconstructions share a common topology and can be directly used as assets for games, films and various virtual reality applications. We show motion retargeting, retexturing and relighting examples. The accuracy of the presented algorithm is evaluated by a comparison against ground truth data.
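To make the model-fitting idea more concrete, below is a minimal CPU-only sketch of one ingredient: fitting the linear shape coefficients of a statistical head model to pre-associated 3D target points with a Gauss-Newton-style solve and a statistical prior. It is not the paper's GPU implementation; the names (fit_shape_coefficients, mean_shape, basis, sigma) are illustrative assumptions, and the full method additionally estimates albedo and illumination jointly with shape using a quasi-Newton solver on the GPU.

    import numpy as np

    def fit_shape_coefficients(mean_shape, basis, sigma, observations,
                               num_iters=10, reg_weight=1e-3):
        """Fit PCA coefficients alpha so that mean_shape + basis @ alpha
        approximates the observed points, with a statistical prior on alpha.

        mean_shape:   (3N,)   stacked mean vertex positions of the head model
        basis:        (3N, K) PCA basis of the statistical head model
        sigma:        (K,)    standard deviations of the PCA modes
        observations: (3N,)   stacked target positions (e.g. from the depth map,
                              assumed here to be pre-associated with model vertices)
        """
        K = basis.shape[1]
        alpha = np.zeros(K)
        for _ in range(num_iters):
            # Residual of the data term: model(alpha) - observations
            residual = mean_shape + basis @ alpha - observations
            # Gauss-Newton normal equations plus a Tikhonov-style statistical prior
            JtJ = basis.T @ basis + reg_weight * np.diag(1.0 / sigma**2)
            Jtr = basis.T @ residual + reg_weight * (alpha / sigma**2)
            alpha -= np.linalg.solve(JtJ, Jtr)
        return alpha

    if __name__ == "__main__":
        # Purely synthetic data to illustrate usage of the sketch above.
        rng = np.random.default_rng(0)
        N, K = 500, 20
        mean_shape = rng.normal(size=3 * N)
        basis = rng.normal(size=(3 * N, K))
        sigma = np.linspace(1.0, 0.1, K)
        true_alpha = rng.normal(size=K) * sigma
        observations = mean_shape + basis @ true_alpha + 0.01 * rng.normal(size=3 * N)
        alpha = fit_shape_coefficients(mean_shape, basis, sigma, observations)
        print("recovered coefficients close to ground truth:",
              np.allclose(alpha, true_alpha, atol=0.1))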


Bibtex

@article{zollhofer2014interactive,
  title     = {Interactive model-based reconstruction of the human head using an RGB-D sensor},
  author    = {Zollh{\"o}fer, Michael and Thies, Justus and Colaianni, Matteo and Stamminger, Marc and Greiner, G{\"u}nther},
  journal   = {Computer Animation and Virtual Worlds},
  volume    = {25},
  number    = {3-4},
  pages     = {213--222},
  year      = {2014},
  publisher = {Wiley Online Library}
}