Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision

International Conference on 3D Vision (3DV 2017), Qingdao, China

Update (02/03/2018): There has been a minor correction to the annotations for TS3 and TS4 in the test set. The evaluation results will consequently differ slightly from those reported in the literature thus far.

Download Video: HD (MP4, 1080p, 152 MB)


We propose a CNN-based approach for 3D human body pose estimation from single RGB images that addresses the limited generalizability of models trained solely on the starkly limited publicly available 3D pose data. Using only the existing 3D pose data and 2D pose data, we achieve state-of-the-art performance on established benchmarks through transfer of learned features, while also generalizing to in-the-wild scenes. We further introduce a new training set for human body pose estimation from monocular images of real humans, with ground truth captured by a multi-camera marker-less motion capture system. It complements existing corpora with greater diversity in pose, human appearance, clothing, occlusion, and viewpoints, and enables an increased scope of augmentation. We also contribute a new benchmark that covers outdoor and indoor scenes, and demonstrate that our 3D pose dataset yields better in-the-wild performance than existing annotated data, which is further improved in conjunction with transfer learning from 2D pose data. All in all, we argue that the use of transfer learning of representations, in tandem with algorithmic and data contributions, is crucial for general 3D body pose estimation.



BibTeX

@inproceedings{mehta2017monocular,
 author = {Mehta, Dushyant and Rhodin, Helge and Casas, Dan and Fua, Pascal and Sotnychenko, Oleksandr and Xu, Weipeng and Theobalt, Christian},
 title = {Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision},
 booktitle = {3D Vision (3DV), 2017 Fifth International Conference on},
 url = {},
 year = {2017}
}

This work was funded by the ERC Starting Grant project CapReal (335545). Dan Casas was supported by a Marie Curie Individual Fellowship, grant agreement 707326, and Helge Rhodin by the Microsoft Research Swiss JRC. We also thank Foundry for license support.


For questions and clarifications please get in touch with:
Dushyant Mehta
