Abstract
Recent progress in deep discriminative and generative modeling has shown promising results for texture synthesis. However, existing feed-forward methods trade generality for efficiency and suffer from several issues: lack of generality (one network must be built per texture), lack of diversity (a network always produces visually identical outputs), and suboptimal visual quality. In this work, we focus on solving these issues for improved texture synthesis. We propose a deep generative feed-forward network that enables efficient synthesis of multiple textures within a single network, as well as meaningful interpolation between them. We also introduce a suite of techniques to achieve better convergence and diversity. Through extensive experiments, we demonstrate the effectiveness of the proposed model and techniques for synthesizing a large number of textures, and we show its application to stylization.
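The abstract does not specify the architecture, so the following is only a minimal sketch of the core idea it describes: a single feed-forward generator conditioned on a per-texture code, where a convex combination of two codes yields an interpolated texture. Everything here (the `MultiTextureGenerator` class, layer widths, `embed_dim`, `noise_channels`) is an illustrative assumption, not the authors' published design, and the training loss (typically a Gram-matrix style loss per texture) is omitted.

```python
# Hypothetical sketch of multi-texture synthesis in one network; not the
# paper's actual architecture. Each texture owns a learned embedding, and
# blending embeddings blends textures at inference time.
import torch
import torch.nn as nn


class MultiTextureGenerator(nn.Module):
    def __init__(self, num_textures: int, embed_dim: int = 32, noise_channels: int = 8):
        super().__init__()
        # One learned code per texture; convex combinations of codes
        # give interpolated styles.
        self.embeddings = nn.Embedding(num_textures, embed_dim)
        self.net = nn.Sequential(
            nn.Conv2d(noise_channels + embed_dim, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # RGB output in [-1, 1]
        )

    def forward(self, noise: torch.Tensor, style_code: torch.Tensor) -> torch.Tensor:
        # noise: (B, noise_channels, H, W); style_code: (B, embed_dim).
        b, _, h, w = noise.shape
        # Tile the style code across all spatial positions and concatenate
        # it with the noise input, so one network serves every texture.
        code_map = style_code[:, :, None, None].expand(b, style_code.shape[1], h, w)
        return self.net(torch.cat([noise, code_map], dim=1))


gen = MultiTextureGenerator(num_textures=10)
noise = torch.randn(1, 8, 128, 128)
# A single texture: use its embedding directly.
tex0 = gen(noise, gen.embeddings.weight[0].unsqueeze(0))
# An interpolated texture: blend embeddings 0 and 3 halfway.
mixed_code = 0.5 * gen.embeddings.weight[0] + 0.5 * gen.embeddings.weight[3]
tex_mix = gen(noise, mixed_code.unsqueeze(0))  # shape (1, 3, 128, 128)
```

Because the noise input is fully convolutional, this kind of generator can synthesize textures at arbitrary spatial resolution, and varying the noise yields diverse outputs for the same texture code.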
Original language | English |
---|---|
Title of host publication | Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 266-274 |
Number of pages | 9 |
ISBN (Electronic) | 9781538604571 |
DOIs | |
Publication status | Published - 2017 Nov 6 |
Event | 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 - Honolulu, United States; Duration: 2017 Jul 21 → 2017 Jul 26 |
Publication series
Name | Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 |
---|---|
Volume | 2017-January |
Other
Other | 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 |
---|---|
Country/Territory | United States |
City | Honolulu |
Period | 17/7/21 → 17/7/26 |
Bibliographical note
Publisher Copyright: © 2017 IEEE.
All Science Journal Classification (ASJC) codes
- Software
- Computer Vision and Pattern Recognition