Qi Feng, Hubert P. H. Shum, Shigeo Morishima

Resolving Hand-Object Occlusion for Mixed Reality with Joint Deep Learning and Model Optimization

The 33rd International Conference on Computer Animation and Social Agents (CASA 2020)

DOI: 10.1002/cav.1956

By overlaying virtual imagery onto the real world, mixed reality facilitates diverse applications and has drawn increasing attention. Enhancing physical in-hand objects with a virtual appearance is a key component for many applications that require users to interact with tools, such as surgery simulations. However, due to complex hand articulations and severe hand-object occlusions, resolving occlusions in hand-object interactions is a challenging topic. Traditional tracking-based approaches are limited by strong ambiguities from occlusions and changing shapes, while reconstruction-based methods show a poor capability of handling dynamic scenes. In this paper, we propose a novel real-time optimization system that resolves hand-object occlusions by spatially reconstructing the scene with estimated hand joints and masks. To acquire accurate results, we propose a joint learning process that shares information between two models and jointly estimates hand poses and semantic segmentation. To facilitate the joint learning system and improve its accuracy under occlusions, we propose an occlusion-aware RGB-D hand dataset that mitigates ambiguity through precise annotations and photorealistic appearance. Evaluations show more consistent overlays compared with existing methods, and a user study verifies a more realistic experience.
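The abstract's joint learning of hand poses and semantic segmentation can be pictured as two task heads sharing a common encoder, so that gradients from both tasks shape the shared features. The sketch below is only an illustrative assumption, not the paper's actual architecture: the class name JointHandNet, the layer sizes, and the 4-channel RGB-D input layout are all hypothetical choices made for the example.

```python
import torch
import torch.nn as nn

class JointHandNet(nn.Module):
    """Hypothetical shared-encoder network: one head regresses hand joints,
    the other predicts a per-pixel hand mask (semantic segmentation)."""
    def __init__(self, num_joints=21):
        super().__init__()
        # Shared convolutional encoder over a 4-channel RGB-D input.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Pose head: pool shared features and regress 3D joint coordinates.
        self.pose_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, num_joints * 3),
        )
        # Segmentation head: predict mask logits and upsample to input size.
        self.seg_head = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        feats = self.encoder(x)          # features shared by both tasks
        joints = self.pose_head(feats)   # (B, num_joints * 3)
        mask = self.seg_head(feats)      # (B, 1, H, W) mask logits
        return joints, mask

# Joint training step: both losses backpropagate through the shared encoder,
# which is the sense in which the two tasks "share information".
model = JointHandNet()
rgbd = torch.randn(2, 4, 128, 128)                       # dummy RGB-D batch
gt_joints = torch.randn(2, 21 * 3)                        # dummy joint labels
gt_mask = torch.randint(0, 2, (2, 1, 128, 128)).float()   # dummy hand masks
pred_joints, pred_mask = model(rgbd)
loss = nn.functional.mse_loss(pred_joints, gt_joints) \
     + nn.functional.binary_cross_entropy_with_logits(pred_mask, gt_mask)
loss.backward()
```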