Weakly-supervised disentangling with recurrent transformations for 3D view synthesis

Jimei Yang, Scott Reed, Ming-Hsuan Yang, Honglak Lee

Research output: Contribution to journal › Conference article › peer-review

200 Citations (Scopus)


An important problem for both graphics and vision is to synthesize novel views of a 3D object from a single image. This is particularly challenging due to the partial observability inherent in projecting a 3D object onto the image space, and the ill-posedness of inferring object shape and pose. However, we can train a neural network to address the problem if we restrict our attention to specific object categories (in our case faces and chairs) for which we can gather ample training data. In this paper, we propose a novel recurrent convolutional encoder-decoder network that is trained end-to-end on the task of rendering rotated objects starting from a single image. The recurrent structure allows our model to capture long-term dependencies along a sequence of transformations. We demonstrate the quality of its predictions for human faces on the Multi-PIE dataset and for a dataset of 3D chair models, and also show its ability to disentangle latent factors of variation (e.g., identity and pose) without using full supervision.
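The core idea described above — holding identity units fixed while recurrently transforming pose units in latent space — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the dimensions, the random stand-in weights, and the single linear pose update are all hypothetical, and the real model uses a trained convolutional encoder-decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper).
D_ID, D_POSE, D_IMG = 512, 128, 64 * 64

# Random stand-in weights for a "trained" encoder, decoder, and the
# transformation applied to the pose units at each rotation step.
W_enc = rng.standard_normal((D_ID + D_POSE, D_IMG)) * 0.01
W_dec = rng.standard_normal((D_IMG, D_ID + D_POSE)) * 0.01
W_rot = rng.standard_normal((D_POSE, D_POSE)) * 0.1

def encode(image):
    z = np.tanh(W_enc @ image)
    return z[:D_ID], z[D_ID:]          # identity units, pose units

def decode(z_id, z_pose):
    return np.tanh(W_dec @ np.concatenate([z_id, z_pose]))

def rotate_sequence(image, n_steps):
    """Render n_steps rotated views from a single input image.

    The identity units stay fixed across the whole sequence; only the
    pose units are updated recurrently, mirroring the idea of applying
    the same rotation transformation repeatedly in latent space.
    """
    z_id, z_pose = encode(image)
    frames = []
    for _ in range(n_steps):
        z_pose = np.tanh(W_rot @ z_pose)   # recurrent pose update
        frames.append(decode(z_id, z_pose))
    return np.stack(frames)

views = rotate_sequence(rng.standard_normal(D_IMG), n_steps=4)
print(views.shape)  # (4, 4096): one flattened 64x64 frame per step
```

Because the recurrence reuses the same pose transformation at every step, the model can be unrolled to longer rotation sequences than it was trained on, which is what allows it to capture long-term dependencies along the transformation sequence.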

Original language: English
Pages (from-to): 1099-1107
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
Publication status: Published - 2015
Event: 29th Annual Conference on Neural Information Processing Systems, NIPS 2015 - Montreal, Canada
Duration: 2015 Dec 7 – 2015 Dec 12

Bibliographical note

Funding Information:
This work was supported in part by ONR N00014-13-1-0762, NSF CAREER IIS-1453651, and NSF CMMI-1266184. We thank NVIDIA for donating a Tesla K40 GPU.

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

