InverseFaceNet:
Deep Single-Shot Inverse Face Rendering From A Single Image

arXiv (Submitted on 31 Mar 2017)

H. Kim 1   M. Zollhöfer 1   A. Tewari 1   J. Thies 2   C. Richardt 3   C. Theobalt 1
1 MPI Informatics   2 University of Erlangen-Nuremberg   3 University of Bath


Abstract

We introduce InverseFaceNet, a deep convolutional inverse rendering framework for faces that jointly estimates facial pose, shape, expression, reflectance and illumination from a single input image in a single shot. Estimating all of these parameters from just one image makes advanced editing of a single face photograph, such as appearance editing and relighting, feasible. Previous learning-based face reconstruction approaches do not jointly recover all of these dimensions, or are severely limited in terms of visual quality. In contrast, we recover high-quality facial pose, shape, expression, reflectance and illumination using a deep neural network trained on a large, synthetically created dataset. Our approach builds on a novel loss function that measures model-space similarity directly in parameter space and significantly improves reconstruction accuracy. In addition, we propose an analysis-by-synthesis breeding approach that iteratively updates the synthetic training corpus based on the distribution of real-world images, and we demonstrate that this strategy outperforms networks trained purely on synthetic data. Finally, we show high-quality reconstructions and compare our approach to several state-of-the-art methods.
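
The exact model-space loss is defined in the paper; purely as an illustration, the Python/NumPy sketch below (with hypothetical parameter names, weights and dimensions that are not taken from the paper) compares predicted and ground-truth parameters after mapping shape, expression and reflectance coefficients through their linear model bases, so that coefficient differences are weighted by their effect on the reconstructed face rather than measured as a raw parameter distance.

import numpy as np

def model_space_loss(pred, gt, shape_basis, expr_basis, refl_basis,
                     w_geom=1.0, w_refl=1.0, w_pose=1.0, w_illum=1.0):
    # pred / gt are dicts with keys 'shape', 'expression', 'reflectance',
    # 'pose', 'illumination' (hypothetical names, not from the paper).
    # Geometry: map shape and expression coefficients into vertex space.
    geom_pred = shape_basis @ pred['shape'] + expr_basis @ pred['expression']
    geom_gt = shape_basis @ gt['shape'] + expr_basis @ gt['expression']
    # Reflectance: map reflectance coefficients into per-vertex albedo space.
    refl_pred = refl_basis @ pred['reflectance']
    refl_gt = refl_basis @ gt['reflectance']
    # Weighted sum of mean squared differences per parameter group.
    return (w_geom * np.mean((geom_pred - geom_gt) ** 2)
            + w_refl * np.mean((refl_pred - refl_gt) ** 2)
            + w_pose * np.mean((pred['pose'] - gt['pose']) ** 2)
            + w_illum * np.mean((pred['illumination'] - gt['illumination']) ** 2))

# Tiny usage example with random data; all dimensions are arbitrary placeholders.
rng = np.random.default_rng(0)
shape_basis = rng.standard_normal((3 * 1000, 80))   # 1000 vertices, 80 shape coefficients
expr_basis = rng.standard_normal((3 * 1000, 64))
refl_basis = rng.standard_normal((3 * 1000, 80))
sample = lambda: {'shape': rng.standard_normal(80),
                  'expression': rng.standard_normal(64),
                  'reflectance': rng.standard_normal(80),
                  'pose': rng.standard_normal(6),
                  'illumination': rng.standard_normal(27)}
print(model_space_loss(sample(), sample(), shape_basis, expr_basis, refl_basis))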


Paper


Bibtex

 
@article{kim17InverseFaceNet,
title = {{InverseFaceNet: Deep Single-Shot Inverse Face Rendering From A Single Image}},
author = {Kim, Hyeongwoo and Zollh{\"o}fer, Michael and Tewari, Ayush and Thies, Justus and Richardt, Christian and Theobalt, Christian},
journal = {arXiv preprint arXiv:1703.10956},
year = {2017}
}