Applying image processing algorithms independently to each frame of a video often leads to temporally inconsistent results. Developing a temporally consistent video-based extension, however, requires domain knowledge for each individual task, and such extensions do not generalize to other applications. In this paper, we present an efficient approach based on a deep recurrent network for enforcing temporal consistency in a video. Our method takes the original and the per-frame processed videos as inputs and produces a temporally consistent output video. Consequently, our approach is agnostic to the specific image processing algorithm applied to the original video. We train the proposed network by minimizing short-term and long-term temporal losses as well as a perceptual loss to strike a balance between temporal coherence and perceptual similarity to the processed frames. At test time, our model does not require computing optical flow and thus achieves real-time speed even for high-resolution videos. We show that a single model can handle multiple, even unseen, tasks, including but not limited to artistic style transfer, enhancement, colorization, image-to-image translation, and intrinsic image decomposition. Extensive objective evaluations and a subjective study demonstrate that the proposed approach performs favorably against state-of-the-art methods on various types of videos.
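To make the training objective described above concrete, the sketch below shows one way to combine a per-frame data term with short-term and long-term temporal terms on flow-warped outputs. It is a minimal illustration only: the helper names, the placeholder weights (w_p, w_st, w_lt), and the plain-L1 stand-in for the perceptual term are assumptions, not the authors' implementation; optical flow and visibility masks are assumed precomputed for training, and (as the abstract notes) none of this is needed at test time.

# Illustrative sketch (not the authors' code) of a combined objective:
# a per-frame data term plus short- and long-term temporal terms.
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Backward-warp `frame` (N,C,H,W) by dense optical `flow` (N,2,H,W)."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys)).float().to(frame)   # (2,H,W), x then y
    coords = base.unsqueeze(0) + flow                # absolute sample positions
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0          # normalize to [-1, 1]
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)             # (N,H,W,2) for grid_sample
    return F.grid_sample(frame, grid, align_corners=True)

def temporal_term(o_t, o_s, flow_ts, mask_ts):
    """Masked L1 between output at time t and output at time s warped to t."""
    return (mask_ts * (o_t - warp(o_s, flow_ts)).abs()).mean()

def total_loss(outputs, processed, flows, masks, w_p=1.0, w_st=10.0, w_lt=10.0):
    """outputs[t]: model output O_t; processed[t]: per-frame processed P_t.
    flows[(t, s)], masks[(t, s)]: flow and visibility mask from frame t to s."""
    loss = 0.0
    for t in range(len(outputs)):
        # Data term: stay close to the per-frame processed result. The paper
        # uses a perceptual (feature-space) loss; plain L1 stands in here.
        loss = loss + w_p * F.l1_loss(outputs[t], processed[t])
        if t >= 1:
            # Short-term consistency with the immediately preceding frame.
            loss = loss + w_st * temporal_term(
                outputs[t], outputs[t - 1], flows[(t, t - 1)], masks[(t, t - 1)])
            # Long-term consistency anchored to the first frame of the clip.
            loss = loss + w_lt * temporal_term(
                outputs[t], outputs[0], flows[(t, 0)], masks[(t, 0)])
    return loss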
Title of host publication: Computer Vision – ECCV 2018, 15th European Conference, 2018, Proceedings
Editors: Yair Weiss, Vittorio Ferrari, Cristian Sminchisescu, Martial Hebert
Number of pages: 17
Publication status: Published - 2018
Event: 15th European Conference on Computer Vision, ECCV 2018, Munich, Germany, 8–14 September 2018
Series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Bibliographical note
Funding Information: This work is supported in part by NSF CAREER Grant #1149783, NSF Grant #1755785, and gifts from Adobe and Nvidia.
© Springer Nature Switzerland AG 2018.
All Science Journal Classification (ASJC) codes
- Theoretical Computer Science
- Computer Science (all)