Implicit Neural Representations: From Objects to 3D Scenes

Andreas Geiger

Keynote presented on June 19, 2020 at CVPR in the 2nd ScanNet Indoor Scene Understanding Challenge.

Slides: http://www.cvlibs.net/talks/talk_cvpr...
Papers:
https://arxiv.org/abs/2003.04618
https://arxiv.org/abs/2003.12406
http://www.cvlibs.net/publications/Sc...

Abstract: Implicit neural representations have gained popularity for learning-based 3D reconstruction. While demonstrating promising results, most implicit approaches are limited to the comparatively simple geometry of single objects. The key limiting factor of implicit methods is their simple fully-connected network architecture, which does not allow for integrating local information from the observations or incorporating inductive biases such as translational equivariance. In this talk, I will propose a hybrid model that uses both a neural implicit shape representation and 2D/3D convolutions for detailed reconstruction of objects and large-scale 3D scenes. I will further discuss a neural representation that captures the visual appearance of an object in terms of its surface light field, which allows for manipulating the light source and relighting the scene using environment maps. Finally, I will show some of our recent efforts towards collecting material information of real-world objects, which is required for training such models. I will also briefly present the KITTI-360 dataset, a new outdoor dataset with 360-degree sensor information and semantic annotations in 3D and 2D, which will be released this summer.
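The hybrid idea sketched in the abstract (and in the linked papers) is to condition the implicit decoder on local convolutional features rather than a single global code: a query point retrieves a feature from a 3D feature grid via trilinear interpolation, and a small MLP maps the point coordinates plus that local feature to an occupancy probability. A minimal NumPy sketch of this query-and-decode step, with random weights standing in for a trained network (this is an illustrative assumption, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def trilinear_sample(grid, pts):
    """Query per-point local features from a 3D feature grid.
    grid: (C, D, H, W) convolutional features; pts: (N, 3) in [0, 1]^3."""
    C, D, H, W = grid.shape
    xyz = pts * (np.array([D, H, W]) - 1)               # voxel coordinates
    i0 = np.floor(xyz).astype(int)                      # lower corner index
    i1 = np.minimum(i0 + 1, np.array([D, H, W]) - 1)    # upper corner (clamped)
    t = xyz - i0                                        # fractional offsets (N, 3)
    out = np.zeros((pts.shape[0], C))
    for dx in (0, 1):                                   # blend the 8 cell corners
        for dy in (0, 1):
            for dz in (0, 1):
                ix = np.where(dx, i1[:, 0], i0[:, 0])
                iy = np.where(dy, i1[:, 1], i0[:, 1])
                iz = np.where(dz, i1[:, 2], i0[:, 2])
                w = (np.where(dx, t[:, 0], 1 - t[:, 0])
                     * np.where(dy, t[:, 1], 1 - t[:, 1])
                     * np.where(dz, t[:, 2], 1 - t[:, 2]))
                out += w[:, None] * grid[:, ix, iy, iz].T
    return out                                          # (N, C)

def occupancy(pts, grid, W1, b1, W2, b2):
    """Tiny MLP decoder conditioned on point coordinates + local features."""
    h = np.concatenate([pts, trilinear_sample(grid, pts)], axis=1)
    h = np.maximum(h @ W1 + b1, 0)                      # ReLU hidden layer
    return 1 / (1 + np.exp(-(h @ W2 + b2)))             # occupancy in (0, 1)

# Random weights stand in for a trained encoder/decoder.
C, hidden = 8, 16
grid = rng.normal(size=(C, 4, 4, 4))
W1 = rng.normal(size=(3 + C, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, 1));     b2 = np.zeros(1)

pts = rng.uniform(size=(5, 3))
occ = occupancy(pts, grid, W1, b1, W2, b2)
print(occ.shape)  # (5, 1)
```

Because the feature grid comes from a (2D or 3D) convolutional encoder, the representation inherits translational equivariance and can integrate local observation detail, which is what lets this family of models scale from single objects to full indoor scenes.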
Published on June 15, 2020 (1399/03/26); 14,362 views.