Human Pose Estimation Using Per-Point Body Region Assignment
Keywords: Machine vision, deep learning, neural networks, pose estimation, point clouds
In recent years, the task of human pose estimation has become increasingly important, driven by its wide range of applications, from VR to higher-level tasks such as human behavior understanding. In this paper, we introduce a novel two-stage deep learning approach named Segmentation-Guided Pose Estimation (SGPE). The pipeline consists of two neural networks applied sequentially, both of which effectively process unorganized point clouds as input. First, the segmentation network performs a pointwise classification of the cloud into the corresponding body regions. Next, the point cloud, with the per-point region assignment forming a fourth input channel, is passed to the regression network. In this way, both local and global features of the point cloud are preserved, helping the model fully capture the structure of the body pose. Our strategy achieves competitive results on all of the examined benchmark datasets and outperforms state-of-the-art methods.
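The two-stage data flow described above can be sketched in a few lines. This is a minimal illustration only, assuming toy stand-in functions: `segment` replaces the segmentation network with a simple rule, and `regress` replaces the regression network with per-region centroids; neither reflects the actual SGPE architecture.

```python
# Sketch of the SGPE data flow: per-point region labels become a fourth
# input channel for the second stage. segment/regress are hypothetical
# stand-ins for the paper's neural networks.

def segment(points):
    """Stage 1 stand-in: assign each 3-D point a body-region label.
    A toy threshold on the z-coordinate replaces the segmentation net."""
    return [0 if z < 1.0 else 1 for (_, _, z) in points]

def attach_regions(points, labels):
    """Form the 4-channel input: (x, y, z, region) per point."""
    return [(x, y, z, r) for (x, y, z), r in zip(points, labels)]

def regress(points4):
    """Stage 2 stand-in: predict one joint per region as the centroid
    of that region's points (the paper uses a regression network)."""
    by_region = {}
    for x, y, z, r in points4:
        by_region.setdefault(r, []).append((x, y, z))
    return {r: tuple(sum(c) / len(pts) for c in zip(*pts))
            for r, pts in by_region.items()}

# Unorganized input cloud: four points, two implied body regions.
cloud = [(0.0, 0.0, 0.5), (0.2, 0.1, 0.7), (0.1, 0.0, 1.5), (0.3, 0.2, 1.7)]
labels = segment(cloud)
joints = regress(attach_regions(cloud, labels))
```

Carrying the region label alongside the raw coordinates is what lets the second stage combine local geometry with the global body-part context.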