Lite Pose: Efficient Architecture Design for 2D Human Pose Estimation

Yihan Wang¹, Muyang Li², Han Cai³, Wei-Ming Chen³, Song Han³
¹Tsinghua University, ²Carnegie Mellon University, ³Massachusetts Institute of Technology


Pose estimation plays a critical role in human-centered vision applications. However, it is difficult to deploy state-of-the-art HRNet-based pose estimation models on resource-constrained edge devices due to their high computational cost (more than 150 GMACs per frame). In this paper, we study efficient architecture design for real-time multi-person pose estimation on edge devices. Through gradual shrinking experiments, we reveal that HRNet's high-resolution branches are redundant for models in the low-computation region: removing them improves both efficiency and performance. Inspired by this finding, we design LitePose, an efficient single-branch architecture for pose estimation, and introduce two simple techniques to enhance its capacity: the Fusion Deconv Head and Large Kernel Convs. The Fusion Deconv Head removes the redundancy in high-resolution branches, enabling scale-aware feature fusion with low overhead. Large Kernel Convs significantly improve the model's capacity and receptive field at low computational cost: with only a 25% increase in computation, 7x7 kernels yield a +14.0 mAP improvement over 3x3 kernels on the CrowdPose dataset. On mobile platforms, LitePose reduces latency by up to 5.0x without sacrificing performance compared with prior state-of-the-art efficient pose estimation models, pushing the frontier of real-time multi-person pose estimation on edge devices.
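The "25% computation increment for 7x7 kernels" can be sanity-checked with a back-of-the-envelope MAC count for a MobileNetV2-style inverted-residual block (1x1 expand, kxk depthwise, 1x1 project), the kind of building block used in efficient single-branch backbones. The tensor sizes and expand ratio below are illustrative assumptions, not the paper's exact configuration:

```python
# Back-of-the-envelope MACs for an inverted-residual block:
# 1x1 expand -> k x k depthwise -> 1x1 project.
# All sizes (H, W, C, expand ratio) are illustrative assumptions.

def block_macs(h, w, c, expand=6, k=3):
    hidden = c * expand
    expand_macs = h * w * c * hidden   # 1x1 pointwise expansion
    dw_macs = h * w * hidden * k * k   # k x k depthwise conv
    project_macs = h * w * hidden * c  # 1x1 pointwise projection
    return expand_macs + dw_macs + project_macs

m3 = block_macs(32, 32, 64, k=3)
m7 = block_macs(32, 32, 64, k=7)
print(f"3x3: {m3 / 1e6:.1f} MMACs, 7x7: {m7 / 1e6:.1f} MMACs")
print(f"overhead: {m7 / m3 - 1:.0%}")  # roughly +29% for these sizes
```

The key point: the kernel size only affects the depthwise term, which is a small fraction of the block's total MACs, so enlarging the kernel from 3x3 to 7x7 (a 49/9 ≈ 5.4x jump in that term alone) costs only ~25-30% overall, consistent with the figure quoted above.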

Real-Time Multi-Person Pose Estimation on Edge

Overview of LitePose

Redundancy in High-Resolution Branches

Large Kernel is Efficient

Light-weight Fusion Deconv Head

2.8-5.0x Measured Speedup over SOTA on the CrowdPose Dataset


@inproceedings{wang2022litepose,
    title={Lite Pose: Efficient Architecture Design for {2D} Human Pose Estimation},
    author={Wang, Yihan and Li, Muyang and Cai, Han and Chen, Wei-Ming and Han, Song},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2022}
}

Acknowledgments: We thank the National Science Foundation, MIT-IBM Watson AI Lab, Ford, Hyundai, and Intel for supporting this research. We thank Ji Lin, Yaoyao Ding, and Lianmin Zheng for their helpful comments on the project. We also thank Shengyu Wang and Ruihan Gao for their valuable feedback on the manuscript.