Semi-supervised learning for optical flow with generative adversarial networks

Wei-Sheng Lai, Jia-Bin Huang, Ming-Hsuan Yang

Research output: Contribution to journal › Conference article › peer-review

76 Citations (Scopus)


Convolutional neural networks (CNNs) have recently been applied to the optical flow estimation problem. As training the CNNs requires sufficiently large amounts of labeled data, existing approaches resort to synthetic, unrealistic datasets. On the other hand, unsupervised methods are capable of leveraging real-world videos for training where the ground truth flow fields are not available. These methods, however, rely on the fundamental assumptions of brightness constancy and spatial smoothness priors that do not hold near motion boundaries. In this paper, we propose to exploit unlabeled videos for semi-supervised learning of optical flow with a Generative Adversarial Network. Our key insight is that the adversarial loss can capture the structural patterns of flow warp errors without making explicit assumptions. Extensive experiments on benchmark datasets demonstrate that the proposed semi-supervised algorithm performs favorably against purely supervised and baseline semi-supervised learning schemes.
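The abstract's key quantity is the flow warp error: the second frame is warped back toward the first using the estimated flow, and the residual brightness difference is the signal whose structural patterns the discriminator learns to judge. A minimal NumPy sketch of that quantity, assuming grayscale frames and a per-pixel (dx, dy) flow field (the function names `warp` and `warp_error` are illustrative, not the authors' code):

```python
import numpy as np

def warp(img, flow):
    """Backward-warp img by flow with bilinear sampling.
    img: (H, W) grayscale frame; flow: (H, W, 2) of (dx, dy) per pixel."""
    H, W = img.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Source coordinates displaced by the flow, clipped to the image.
    x = np.clip(xs + flow[..., 0], 0, W - 1)
    y = np.clip(ys + flow[..., 1], 0, H - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    # Bilinear blend of the four neighboring pixels.
    return ((1 - wx) * (1 - wy) * img[y0, x0]
            + wx * (1 - wy) * img[y0, x1]
            + (1 - wx) * wy * img[y1, x0]
            + wx * wy * img[y1, x1])

def warp_error(img1, img2, flow):
    """Flow warp error: brightness residual after warping frame 2 to frame 1.
    With accurate flow this is near zero except at occlusions and motion
    boundaries, which is why its structure is informative to a discriminator."""
    return img1 - warp(img2, flow)
```

In the semi-supervised scheme, the discriminator is trained to tell warp-error maps produced by ground-truth flow from those produced by estimated flow on unlabeled video, replacing hand-crafted brightness-constancy and smoothness penalties.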

Original language: English
Pages (from-to): 354-364
Number of pages: 11
Journal: Advances in Neural Information Processing Systems
Publication status: Published - 2017
Event: 31st Annual Conference on Neural Information Processing Systems, NIPS 2017 - Long Beach, United States
Duration: 2017 Dec 4 - 2017 Dec 9

Bibliographical note

Funding Information:
This work is supported in part by the NSF CAREER Grant #1149783, gifts from Adobe and NVIDIA.

Publisher Copyright:
© 2017 Neural information processing systems foundation. All rights reserved.

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing


