Deep disentangled representations for volumetric reconstruction
In: Hua, G.; Jégou, H. (eds.), Computer Vision - ECCV 2016 Workshops, Proceedings, pp. 266-279
Computer Vision - ECCV 2016 Workshops, Amsterdam, The Netherlands, October 8-10 and 15-16, 2016
Article in monograph or in proceedings
Subject: Cognitive artificial intelligence; DI-BCB_DCC_Theme 4: Brain Networks and Neuronal Communication
We introduce a convolutional neural network for inferring a compact disentangled graphical description of objects from 2D images that can be used for volumetric reconstruction. The network comprises an encoder and a twin-tailed decoder. The encoder generates a disentangled graphics code. The first decoder generates a volume, and the second decoder reconstructs the input image using a novel training regime that allows the graphics code to learn a separate representation of the 3D object and a description of its lighting and pose conditions. We demonstrate this method by generating volumes and disentangled graphical descriptions from images and videos of faces and chairs.
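The architecture described above can be sketched in code. The following is a minimal, hypothetical PyTorch illustration of an encoder feeding a twin-tailed decoder, where part of the latent graphics code is treated as the object (shape) representation and the remainder as pose/lighting; all layer sizes, the code split, and the fully connected decoders are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DisentangledReconstructor(nn.Module):
    """Hypothetical sketch of an encoder with a twin-tailed decoder.

    The encoder maps a 2D image to a graphics code; the first tail
    decodes the shape part of the code into an occupancy volume, the
    second tail reconstructs the input image from the full code
    (shape plus pose/lighting). Sizes are illustrative assumptions.
    """

    def __init__(self, code_dim=200, shape_dim=160):
        super().__init__()
        self.shape_dim = shape_dim  # remainder of the code: pose/lighting
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, code_dim),  # 64x64 input -> 16x16 maps
        )
        # Tail 1: shape code -> 32^3 occupancy volume
        self.volume_decoder = nn.Sequential(
            nn.Linear(shape_dim, 32 ** 3), nn.Sigmoid(),
        )
        # Tail 2: full code -> reconstruction of the 64x64 input image
        self.image_decoder = nn.Sequential(
            nn.Linear(code_dim, 64 * 64), nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)                  # disentangled graphics code
        shape_code = code[:, : self.shape_dim]  # object-identity portion
        volume = self.volume_decoder(shape_code).view(-1, 32, 32, 32)
        image = self.image_decoder(code).view(-1, 1, 64, 64)
        return volume, image

model = DisentangledReconstructor()
vol, img = model(torch.randn(2, 1, 64, 64))
print(vol.shape, img.shape)
```

In training, the two tails would be supervised jointly (volume loss plus image reconstruction loss), which is what encourages the code to separate object shape from viewing conditions.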