ABSTRACT

Head-mounted displays (HMDs) are essential display devices for observing virtual reality (VR) environments. However, HMDs prevent external capture methods from recording the user’s upper face. This severely impacts social VR applications, such as teleconferencing, which commonly rely on external RGB-D sensors to capture a volumetric representation of the user. In this paper, we introduce an HMD removal framework based on generative adversarial networks (GANs) that jointly fills in missing color and depth data in RGB-D face images. Our framework includes an RGB-based identity loss function for identity preservation and several components aimed at surface reproduction. Our results demonstrate that the framework removes HMDs from synthetic RGB-D face images while preserving the subject’s identity.
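
One common way to realize an RGB-based identity loss is to compare face-recognition embeddings of the completed image and the ground truth. The sketch below illustrates that idea in PyTorch; the `embedder` network is an assumed placeholder for a pretrained, frozen face-recognition model, and this is not necessarily the exact formulation used in the paper.

import torch
import torch.nn.functional as F

def identity_loss(embedder: torch.nn.Module,
                  rgb_completed: torch.Tensor,
                  rgb_ground_truth: torch.Tensor) -> torch.Tensor:
    """Penalize identity drift between completed and ground-truth RGB images."""
    with torch.no_grad():
        target = embedder(rgb_ground_truth)   # embedding of the unoccluded face
    predicted = embedder(rgb_completed)       # embedding of the completed face
    # Cosine distance: 0 when the two embeddings point in the same direction.
    return 1.0 - F.cosine_similarity(predicted, target, dim=1).mean()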

Full-text PDF ↗︎ Code ↗︎ Publisher page ↗︎


Test setup of the TogetherVR platform, in which two HMD-wearing subjects were captured with RGB-D sensors (left) and represented in a shared virtual environment (right). For testing purposes, the two subjects were located in the same physical space. This work focused on resolving the occlusion in the RGB-D image caused by the HMD.

Qualitative results summary. Shown for color (RGB), depth (D), and estimated surface normals (SN). For visualization, D is normalized to [0, 1] and displayed with the inferno colormap from the matplotlib package. The normal vectors (x, y, z) for each pixel in SN are estimated based on D and are visualized with RGB values.
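
As a rough illustration of the visualization described in the caption above, the sketch below normalizes a depth map, applies the inferno colormap, and estimates per-pixel normals from depth gradients. It assumes a single-channel NumPy depth map and a simple finite-difference normal estimate; the exact procedure used to produce the figures may differ.

import numpy as np
import matplotlib.pyplot as plt

def depth_to_inferno(depth: np.ndarray) -> np.ndarray:
    """Normalize depth to [0, 1] and map it to RGB using the inferno colormap."""
    d = depth.astype(np.float64)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)
    return plt.get_cmap("inferno")(d)[..., :3]  # drop the alpha channel

def depth_to_normals(depth: np.ndarray) -> np.ndarray:
    """Estimate per-pixel surface normals from depth gradients, mapped to RGB."""
    d = depth.astype(np.float64)
    dz_dy, dz_dx = np.gradient(d)                      # finite differences
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(d)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return (normals + 1.0) / 2.0                       # map [-1, 1] to [0, 1] RGB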

CITING

@inproceedings{numanGenerativeRGBDFace2021,
  title={Generative {{RGB-D Face Completion}} for {{Head-Mounted Display Removal}}},
  author={Numan, Nels and ter Haar, Frank and Cesar, Pablo},
  booktitle={2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
  pages={109--116},
  year={2021},
  organization={IEEE},
  doi={10.1109/VRW52623.2021.00028}
}