OCHMR: Occluded Human Mesh Recovery

Rawal Khirodkar¹, Shashank Tripathi², Kris Kitani¹
¹Carnegie Mellon University, ²Max Planck Institute
[Teaser figure: Input Image · Bottom-up Mesh Recovery · OCHMR]

Abstract

Top-down methods for monocular human mesh recovery have two stages: (1) detect human bounding boxes; (2) treat each bounding box as an independent single-human mesh recovery task. Unfortunately, the single-human assumption does not hold in images with multi-human occlusion and crowding, so top-down methods have difficulty recovering accurate 3D human meshes under severe person-person occlusion. To address this, we present Occluded Human Mesh Recovery (OCHMR), a novel top-down mesh recovery approach that incorporates image spatial context to overcome the limitations of the single-human assumption. The approach is conceptually simple and can be applied to any existing top-down architecture. Along with the input image, we condition the top-down model on spatial context from the image in the form of body-center heatmaps. To reason about the predicted body centermaps, we introduce Contextual Normalization (CoNorm) blocks that adaptively modulate intermediate features of the top-down model. This contextual conditioning helps our model disambiguate between two severely overlapping human bounding boxes, making it robust to multi-person occlusion.
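The paper defines CoNorm precisely; as a rough, hypothetical sketch of the idea, the PyTorch-style block below normalizes an intermediate feature map and then re-modulates it with per-pixel scale and shift maps predicted from the body-center heatmaps (a SPADE-like conditioning). All names (CoNormBlock, n_centermaps, hidden) and layer sizes are illustrative placeholders under that assumption, not the official implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CoNormBlock(nn.Module):
    # Hypothetical sketch of a Contextual Normalization (CoNorm) block:
    # normalize backbone features, then scale and shift them per pixel
    # using modulation maps predicted from body-center heatmaps.
    def __init__(self, feat_channels, n_centermaps=2, hidden=64):
        super().__init__()
        self.norm = nn.InstanceNorm2d(feat_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(n_centermaps, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.to_gamma = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)

    def forward(self, feats, centermaps):
        # Resize the centermaps to the spatial resolution of the features.
        ctx = F.interpolate(centermaps, size=feats.shape[-2:],
                            mode="bilinear", align_corners=False)
        h = self.shared(ctx)
        gamma, beta = self.to_gamma(h), self.to_beta(h)
        # Adaptive modulation of the normalized features.
        return self.norm(feats) * (1.0 + gamma) + beta

# Toy usage: features from a ResNet-50 stage, two centermap channels
# (e.g. one for the target person, one for all people in the crop).
feats = torch.randn(1, 256, 56, 56)
centermaps = torch.rand(1, 2, 224, 224)
out = CoNormBlock(feat_channels=256)(feats, centermaps)
print(out.shape)  # torch.Size([1, 256, 56, 56])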

Compared with state-of-the-art methods, OCHMR achieves superior performance on challenging multi-person benchmarks like 3DPW, CrowdPose, and OCHuman. Specifically, our contextual reasoning architecture applied to the SPIN model with a ResNet-50 backbone yields 75.2 mm PMPJPE on 3DPW-PC, 23.6 AP on CrowdPose, and 37.7 AP on OCHuman, significant improvements of 6.9 mm, 6.4 AP, and 20.8 AP, respectively, over the baseline.

Human Mesh Recovery under Crowding

[Two qualitative examples under crowding; panels per example: Input · Body-Centermap · Bottom-up (ROMP) · OCHMR (Ours)]

BibTeX


      @InProceedings{Khirodkar_2022_CVPR,
          author    = {Khirodkar, Rawal and Tripathi, Shashank and Kitani, Kris},
          title     = {Occluded Human Mesh Recovery},
          booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
          month     = {June},
          year      = {2022},
          pages     = {1715-1725}
      }