Illumination-invariant Robust Multiview 3D Human Motion Capture

Winter Conference on Applications of Computer Vision, WACV 2018


Abstract

In this work we address the problem of capturing human body motion under changing lighting conditions in a multiview setup. In order to account for changing lighting conditions we propose to use an intermediate image representation that is invariant to the scene lighting. In our approach this is achieved by solving time-varying segmentation problems that use frame- and view-dependent appearance costs that are able to adjust to the present conditions. Moreover, we use an adaptive combination of our lighting-invariant segmentation with CNN-based joint detectors in order to increase the robustness to segmentation errors. In our experimental validation we demonstrate that our method is able to handle difficult conditions better than existing works.
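To illustrate the idea of a lighting-invariant intermediate representation, below is a minimal sketch (not the paper's actual intrinsic decomposition) using per-pixel chromaticity: dividing each RGB channel by the channel sum cancels any common illumination scale factor, so the output is unchanged when the scene brightness changes. The function name and the offset constant are illustrative assumptions.

```python
import numpy as np

def chromaticity(image, eps=1e-6):
    """Map an RGB image to per-pixel chromaticity, a simple
    lighting-robust representation: scaling all channels by a
    common illumination factor leaves the output (nearly) unchanged.
    `eps` avoids division by zero in dark pixels (illustrative choice)."""
    image = image.astype(np.float64)
    total = image.sum(axis=-1, keepdims=True) + eps
    return image / total

# Doubling the brightness barely changes the chromaticity image.
rng = np.random.default_rng(0)
img = rng.random((4, 4, 3)) + 0.1
assert np.allclose(chromaticity(img), chromaticity(2.0 * img), atol=1e-4)
```

In the paper itself, the invariant representation is obtained by solving time-varying segmentation problems with frame- and view-dependent appearance costs, which adapt to the current lighting rather than relying on a fixed per-pixel transform like the one above.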

Downloads


Terms of use

*The data we provide is intended for research purposes only; any use of it for non-scientific purposes is not allowed. This includes publishing any scientific results obtained with our data in non-scientific literature, such as the tabloid press. We ask researchers to respect our actors and not to use the data for any distasteful manipulations. If you use our data, you are required to cite the origin:
Nadia Robertini, Florian Bernard, Weipeng Xu and Christian Theobalt. Illumination-invariant Robust Multiview 3D Human Motion Capture. Winter Conference on Applications of Computer Vision, WACV 2018

Bibtex

    @inproceedings{Robertini:2018,
      author = {Robertini, Nadia and Bernard, Florian and Xu, Weipeng and Theobalt, Christian},
      title = {Illumination-invariant Robust Multiview 3D Human Motion Capture},
      booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision, WACV 2018},
      year = {2018},
      url = {http://gvv.mpi-inf.mpg.de/projects/IntrinsicMoCap/}
    }
  

Acknowledgments

We thank all reviewers for their valuable feedback. We thank Helge Rhodin for his comparative results, Alkhazur Manakov and Hyeongwoo Kim for helping with the recordings. This work was funded by the ERC Starting Grant CapReal (335545).
