Qi Feng, Hubert P. H. Shum, Ryo Shimamura, Shigeo Morishima

Foreground-aware Dense Depth Estimation for 360 Images

Journal of WSCG

Vol. 28, No. 1-2, pp. 79-88

DOI: 10.24132/JWSCG.2020.28.10

With 360 imaging devices becoming widely accessible, omnidirectional content has gained popularity in multiple fields. The ability to estimate depth from a single omnidirectional image can benefit applications such as robotics navigation and virtual reality. However, existing depth estimation approaches produce sub-optimal results on real-world omnidirectional images with dynamic foreground objects. On the one hand, capture-based methods cannot obtain the foreground due to the limitations of their scanning and stitching schemes. On the other hand, it is challenging for synthesis-based methods to generate highly realistic virtual foreground objects comparable to real-world ones. In this paper, we propose to augment datasets with realistic foreground objects using an image-based approach, which produces a foreground-aware photorealistic dataset for machine learning algorithms. By exploiting a novel scale-invariant RGB-D correspondence in the spherical domain, we repurpose abundant non-omnidirectional datasets to include realistic foreground objects with correct distortions. We further propose a novel auxiliary deep neural network that estimates both the depth of the omnidirectional image and the mask of the foreground objects, where the two tasks facilitate each other. A new local depth loss considers small regions of interest and ensures that their depth estimates are not smoothed out during optimization of the global gradient. We demonstrate the system using humans as the foreground objects, owing to their complexity and contextual importance, although the framework generalizes to other foreground objects. Experimental results demonstrate more consistent global estimations and more accurate local estimations compared with the state of the art.
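The spherical RGB-D correspondence mentioned above can be illustrated with a minimal sketch: back-project a perspective RGB-D pixel into 3D and re-project it onto the equirectangular (360) image plane, where its angular position is independent of its radial depth. The pinhole camera model, axis conventions, and function name here are our own assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def perspective_to_equirect(u, v, depth, f, cx, cy, W, H):
    """Map one perspective RGB-D pixel to equirectangular coordinates.

    (u, v): pixel position; depth: perspective depth along the optical
    axis; f, (cx, cy): assumed pinhole intrinsics; (W, H): size of the
    target equirectangular image. Returns (col, row, radial_depth).
    """
    # Back-project the pixel to a 3D point in camera coordinates.
    X = (u - cx) / f * depth
    Y = (v - cy) / f * depth
    Z = depth
    r = np.sqrt(X * X + Y * Y + Z * Z)   # radial (spherical) depth
    lon = np.arctan2(X, Z)               # longitude in [-pi, pi]
    lat = np.arcsin(Y / r)               # latitude in [-pi/2, pi/2]
    # Equirectangular pixel coordinates (lat=0, lon=0 maps to center).
    col = (lon / (2 * np.pi) + 0.5) * W
    row = (lat / np.pi + 0.5) * H
    return col, row, r
```

Note that scaling the depth scales the radial distance r but leaves (col, row) unchanged, which is one way the angular footprint of a composited foreground object can remain consistent across scales.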
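A local depth loss of the kind described above could be sketched as a global term over the whole image plus a term restricted to the foreground mask, so that small regions of interest contribute a gradient of their own rather than being averaged away. The L1 form and the simple weighted sum here are our own assumptions; the abstract does not specify the exact formulation.

```python
import numpy as np

def foreground_aware_depth_loss(pred, gt, fg_mask, lam=0.5):
    """Illustrative loss: global L1 over all pixels plus a local L1
    over foreground pixels only. `lam` (assumed) weights the local term.

    pred, gt: depth maps of equal shape; fg_mask: nonzero where the
    foreground object (e.g. a person) is present.
    """
    global_term = np.abs(pred - gt).mean()
    fg = fg_mask.astype(bool)
    # Without the local term, errors on a small foreground region are
    # diluted by the many background pixels in the global mean.
    local_term = np.abs(pred[fg] - gt[fg]).mean() if fg.any() else 0.0
    return global_term + lam * local_term
```

For example, with a 2x2 map where only one masked pixel is off by 1, the global mean error is 0.25 while the local term is 1.0, so the foreground error dominates the combined loss instead of vanishing into the average.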