Abstract
Recently, the vulnerability of deep image classification models to adversarial attacks has been investigated. However, this issue has not been thoroughly studied for image-to-image tasks that take an input image and generate an output image (e.g., colorization, denoising, and deblurring). This paper presents comprehensive investigations into the vulnerability of deep image-to-image models to adversarial attacks. For five popular image-to-image tasks, 16 deep models are analyzed from various standpoints, such as output quality degradation due to attacks, transferability of adversarial examples across different tasks, and characteristics of perturbations. We show that, unlike image classification tasks, the performance degradation on image-to-image tasks varies largely depending on factors such as the attack method and the task objective. In addition, we analyze the effectiveness of conventional defense methods used for classification models in improving the robustness of image-to-image models.
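The core attack setting described above can be illustrated with a minimal sketch. This is not the paper's code: it assumes a toy linear restoration model `y = W x` standing in for a deep image-to-image network, and uses a one-step FGSM-style attack that perturbs the input within an L∞ budget to maximize the output error against the ground-truth image.

```python
import numpy as np

rng = np.random.default_rng(0)

def fgsm_attack(W, x, y_gt, epsilon):
    """One-step FGSM on a linear image-to-image model.

    Ascends the loss ||W x - y_gt||^2 by stepping in the sign of its
    gradient, keeping the perturbation within an L_inf ball of epsilon.
    """
    grad = 2.0 * W.T @ (W @ x - y_gt)    # analytic gradient of the loss
    x_adv = x + epsilon * np.sign(grad)  # one signed-gradient ascent step
    return np.clip(x_adv, 0.0, 1.0)      # keep pixels in a valid range

# Toy setup (hypothetical, for illustration): a 16-pixel "image",
# a degraded input, and a near-identity linear "restoration" model.
y_gt = rng.uniform(0.0, 1.0, size=16)                      # ground truth
x = np.clip(y_gt + 0.05 * rng.standard_normal(16), 0, 1)   # degraded input
W = np.eye(16) + 0.1 * rng.standard_normal((16, 16))       # toy restorer

x_adv = fgsm_attack(W, x, y_gt, epsilon=0.03)
err_clean = np.linalg.norm(W @ x - y_gt)   # output error on clean input
err_adv = np.linalg.norm(W @ x_adv - y_gt) # output error under attack
```

For a quadratic loss, a small signed-gradient step strictly increases the output error, so `err_adv` exceeds `err_clean` even though the input perturbation is bounded by `epsilon` per pixel; this mirrors the output-quality degradation the paper measures on real deep models.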
Original language | English |
---|---|
Title of host publication | 2022 26th International Conference on Pattern Recognition, ICPR 2022 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 1287-1293 |
Number of pages | 7 |
ISBN (Electronic) | 9781665490627 |
DOIs | |
Publication status | Published - 2022 |
Event | 26th International Conference on Pattern Recognition, ICPR 2022 - Montreal, Canada (Duration: 2022 Aug 21 → 2022 Aug 25) |
Publication series
Name | Proceedings - International Conference on Pattern Recognition |
---|---|
Volume | 2022-August |
ISSN (Print) | 1051-4651 |
Conference
Conference | 26th International Conference on Pattern Recognition, ICPR 2022 |
---|---|
Country/Territory | Canada |
City | Montreal |
Period | 2022 Aug 21 → 2022 Aug 25 |
Bibliographical note
Publisher Copyright: © 2022 IEEE.
All Science Journal Classification (ASJC) codes
- Computer Vision and Pattern Recognition