HandVoxNet: Deep Voxel-Based Network for 3D Hand
Shape and Pose Estimation from a Single Depth Map

Jameel Malik     Ibrahim Abdelaziz     Ahmed Elhayek     Soshi Shimada
Sk Aziz Ali     Vladislav Golyanik     Christian Theobalt     Didier Stricker

Abstract

3D hand shape and pose estimation from a single depth map is a new and challenging computer vision problem with many applications. State-of-the-art methods directly regress 3D hand meshes from 2D depth images via 2D convolutional neural networks, which leads to artefacts in the estimations due to perspective distortions in the images.
In contrast, we propose a novel architecture with 3D convolutions trained in a weakly-supervised manner. The input to our method is a 3D voxelized depth map, and we rely on two hand shape representations. The first one is the 3D voxelized grid of the shape, which is accurate but does not preserve the mesh topology or the number of mesh vertices. The second representation is the 3D hand surface, which is less accurate but does not suffer from these limitations. We combine the advantages of the two representations by registering the hand surface to the voxelized hand shape. In extensive experiments, the proposed approach improves over the state of the art by 47.8% on the SynHand5M dataset. Moreover, our augmentation policy for voxelized depth maps further enhances the accuracy of 3D hand pose estimation on real data. Compared to existing approaches, our method produces visually more plausible and realistic hand shapes on the NYU and BigHand2.2M datasets.
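To make the input representation concrete, below is a minimal sketch (not the authors' code) of turning a depth map into a voxelized occupancy grid, plus a simple rotation augmentation applied in voxel space. The pinhole intrinsics (fx, fy, cx, cy), the 88^3 grid resolution, the 300 mm cube size, and the helper names depth_to_voxels and augment_voxels are all illustrative assumptions.

import numpy as np
from scipy.ndimage import rotate

def depth_to_voxels(depth, fx, fy, cx, cy, res=88, cube_mm=300.0):
    """Back-project a depth map (in mm) to 3D points and bin them into a
    res^3 occupancy grid centered on the hand point cloud (illustrative)."""
    v, u = np.nonzero(depth)                     # pixels with valid depth
    z = depth[v, u].astype(np.float32)
    x = (u - cx) * z / fx                        # pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=1)
    center = points.mean(axis=0)                 # crude hand center
    # Map points into [0, res) inside a cube of side cube_mm around the center.
    idx = (((points - center) / cube_mm) + 0.5) * res
    idx = idx.astype(int)
    keep = np.all((idx >= 0) & (idx < res), axis=1)
    grid = np.zeros((res, res, res), dtype=np.float32)
    grid[tuple(idx[keep].T)] = 1.0               # mark occupied voxels
    return grid

def augment_voxels(grid, max_deg=40.0, rng=np.random):
    """Rotate the grid about the depth axis by a random angle; a simple
    stand-in for a voxel-space augmentation policy."""
    angle = rng.uniform(-max_deg, max_deg)
    rotated = rotate(grid, angle, axes=(0, 1), reshape=False, order=0)
    return (rotated > 0.5).astype(np.float32)    # keep the grid binary

Augmenting directly in voxel space, rather than in the 2D image, lets rotations act on the true 3D geometry; this mirrors the motivation for operating on voxelized depth maps with 3D convolutions instead of regressing meshes from perspective-distorted 2D images.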

Citation


@inproceedings{HandVoxNet2020,
  author    = {Jameel Malik and Ibrahim Abdelaziz and Ahmed Elhayek and Soshi Shimada and Sk Aziz Ali and Vladislav Golyanik and Christian Theobalt and Didier Stricker},
  title     = {HandVoxNet: Deep Voxel-Based Network for 3D Hand Shape and Pose Estimation from a Single Depth Map},
  booktitle = {Computer Vision and Pattern Recognition (CVPR)},
  year      = {2020}
}

Acknowledgments

This work was supported by the project VIDETE (01IW18002) of the German Federal Ministry of Education and Research (BMBF) and the ERC Consolidator Grant 4DReply (770784).

Contact

For questions or clarifications, please get in touch with:
Soshi Shimada sshimada@mpi-inf.mpg.de
Vladislav Golyanik golyanik@mpi-inf.mpg.de
