Deep disentangled representations for volumetric reconstruction
In
Hua, G.; Jégou, H. (ed.), Computer Vision - ECCV 2016 Workshops, Proceedings, pp. 266-279
Annotation
Computer Vision - ECCV 2016 Workshops, Amsterdam, The Netherlands, October 8-10 and 15-16, 2016
Publication type
Article in monograph or in proceedings
Editor(s)
Hua, G.
Jégou, H.
Organization
SW OZ DCC AI
Languages used
English (eng)
Book title
Hua, G.; Jégou, H. (ed.), Computer Vision - ECCV 2016 Workshops, Proceedings
Page start
p. 266
Page end
p. 279
Subject
Cognitive artificial intelligence; DI-BCB_DCC_Theme 4: Brain Networks and Neuronal Communication
Abstract
We introduce a convolutional neural network for inferring a compact, disentangled graphical description of objects from 2D images that can be used for volumetric reconstruction. The network comprises an encoder and a twin-tailed decoder. The encoder produces a disentangled graphics code; the first decoder tail generates a volume, while the second reconstructs the input image. A novel training regime allows the graphics code to learn a separate representation of the 3D object alongside a description of its lighting and pose conditions. We demonstrate this method by generating volumes and disentangled graphical descriptions from images and videos of faces and chairs.
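The architecture described above can be sketched in a few lines. The sketch below is a minimal illustration, not the paper's implementation: the dimensions, the linear "layers" standing in for convolutional networks, and the exact split of the code into an object-identity part and a lighting/pose part are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not taken from the paper).
IMG_DIM = 64 * 64                      # flattened 2D input image
CODE_DIM = 128                         # full disentangled graphics code
SHAPE_DIM = 96                         # portion describing the 3D object
EXTRINSIC_DIM = CODE_DIM - SHAPE_DIM   # lighting/pose portion
VOL_DIM = 16 ** 3                      # flattened output volume

# Placeholder linear maps; the paper uses convolutional layers.
W_enc = rng.standard_normal((CODE_DIM, IMG_DIM)) * 0.01
W_vol = rng.standard_normal((VOL_DIM, SHAPE_DIM)) * 0.01
W_img = rng.standard_normal((IMG_DIM, CODE_DIM)) * 0.01

def encode(image):
    """Encoder: map a flattened image to the graphics code."""
    return W_enc @ image

def split_code(code):
    """Split the code into object-identity and lighting/pose parts."""
    return code[:SHAPE_DIM], code[SHAPE_DIM:]

def decode_volume(shape_code):
    """First decoder tail: object-identity part -> volumetric occupancy."""
    return W_vol @ shape_code

def decode_image(code):
    """Second decoder tail: full code -> reconstruction of the input image."""
    return W_img @ code

image = rng.standard_normal(IMG_DIM)
code = encode(image)
shape_code, extrinsic_code = split_code(code)
volume = decode_volume(shape_code)    # depends only on object identity
reconstruction = decode_image(code)   # depends on identity + lighting/pose
```

The key structural point the sketch captures is that the volume decoder sees only the object-identity slice of the code, so lighting and pose information is pushed into the remaining slice, which only the image-reconstruction tail consumes.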