Abstract
Since human observers are the ultimate receivers of digital images, image quality metrics should be designed from a human-oriented perspective. Conventionally, many full-reference image quality assessment (FR-IQA) methods adopted computational models of the human visual system (HVS) from psychological vision science research. In this paper, we propose a novel convolutional neural network (CNN)-based FR-IQA model, named Deep Image Quality Assessment (DeepQA), in which the behavior of the HVS is learned from the underlying data distribution of IQA databases. Unlike previous studies, our model seeks the optimal visual weights from the database itself, without any prior knowledge of the HVS. Experiments show that the predicted visual sensitivity maps agree with human subjective opinions, and that DeepQA achieves state-of-the-art prediction accuracy among FR-IQA models.
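The core idea sketched in the abstract is that image quality is not the raw pixel error but the error weighted by a visual sensitivity map, which DeepQA's CNN learns from IQA data. The following is a minimal illustrative sketch of that pooling step only; the `objective_error_map` normalization and the constant sensitivity map here are hypothetical stand-ins, not the paper's actual formulation (in DeepQA the sensitivity map is the CNN's output).

```python
import numpy as np

def objective_error_map(ref, dist, eps=1e-2):
    # A simple normalized squared-error map between reference and
    # distorted images (illustrative choice, not the paper's exact form).
    return (ref - dist) ** 2 / (ref ** 2 + eps)

def pooled_score(error_map, sensitivity_map):
    # Weight each pixel's error by its predicted visual sensitivity,
    # then average: regions the HVS is insensitive to contribute less.
    return (error_map * sensitivity_map).mean()

# Toy data standing in for a reference/distorted image pair.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
dist = ref + 0.1 * rng.standard_normal((32, 32))

err = objective_error_map(ref, dist)
# Uniform sensitivity map used purely as a placeholder here.
sens = np.ones_like(err)
score = pooled_score(err, sens)
print(float(score))
```

Lower pooled scores correspond to less visible distortion; a learned sensitivity map would down-weight errors in regions (e.g. heavy texture) where distortions are masked.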
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 1969-1977 |
| Number of pages | 9 |
| ISBN (Electronic) | 9781538604571 |
| DOIs | |
| Publication status | Published - 2017 Nov 6 |
| Event | 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 - Honolulu, United States. Duration: 2017 Jul 21 → 2017 Jul 26 |
Publication series

| Name | Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 |
| --- | --- |
| Volume | 2017-January |
Other

| Other | 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 |
| --- | --- |
| Country/Territory | United States |
| City | Honolulu |
| Period | 17/7/21 → 17/7/26 |
Bibliographical note
Publisher Copyright: © 2017 IEEE.
All Science Journal Classification (ASJC) codes
- Software
- Computer Vision and Pattern Recognition