A high-performance video quality assessment (VQA) algorithm is essential for delivering high-quality video to viewers. However, because the nonlinear perceptual mapping between a video's distortion level and its subjective quality score is not precisely defined, accurately predicting video quality remains difficult. In this paper, we propose a deep learning scheme, Deep Blind Video Quality Assessment (DeepBVQA), that achieves a more accurate and reliable video quality predictor by considering spatial and temporal cues that have not been considered before. We use a CNN to extract the spatial cues of each video and propose new hand-crafted features for the temporal cues. Experiments show that DeepBVQA outperforms other state-of-the-art no-reference (NR) VQA models and that the introduced hand-crafted temporal features are highly effective in VQA.
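The abstract does not specify the temporal features themselves; as a hypothetical illustration only (not the authors' actual features), hand-crafted temporal cues are often built from frame-difference statistics, which summarize how strongly the video changes from frame to frame. A minimal sketch, assuming grayscale frames stacked into a NumPy array:

```python
import numpy as np

def temporal_features(frames):
    """Simple hand-crafted temporal cues from a video clip.

    frames: array of shape (T, H, W), grayscale frames in [0, 1].
    Returns a small feature vector summarizing frame-to-frame change.
    (Illustrative only; not the DeepBVQA feature set.)
    """
    # Absolute difference between consecutive frames: (T-1, H, W)
    diffs = np.abs(np.diff(frames, axis=0))
    # Mean absolute change per transition: (T-1,)
    per_step = diffs.mean(axis=(1, 2))
    # Summarize the change signal with a few statistics
    return np.array([per_step.mean(), per_step.std(), per_step.max()])

# Toy example: 8 frames of 16x16 random content
rng = np.random.default_rng(0)
video = rng.random((8, 16, 16))
feats = temporal_features(video)
print(feats.shape)  # (3,)
```

Such statistics could then be concatenated with CNN-derived spatial features before regression onto a quality score; the paper's own pipeline may differ.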
Title of host publication: 2018 IEEE International Conference on Image Processing, ICIP 2018 - Proceedings
Publisher: IEEE Computer Society
Published: 2018 Aug 29
Event: 25th IEEE International Conference on Image Processing, ICIP 2018 - Athens, Greece (2018 Oct 7 → 2018 Oct 10)
Publication series: Proceedings - International Conference on Image Processing, ICIP
Bibliographical note: Publisher Copyright:
© 2018 IEEE.
All Science Journal Classification (ASJC) codes
- Computer Vision and Pattern Recognition
- Signal Processing