This paper proposes an efficient video coding method using audio-visual focus of attention, which is based on the observation that sound-emitting regions in an audio-visual sequence draw viewers' attention. First, an audio-visual source localization algorithm is presented, where the sound source is identified by using the correlation between the sound signal and the visual motion information. The localization result is then used to encode different regions in the scene with different quality in such a way that regions close to the source are encoded with higher quality than those far from the source. This is implemented in the framework of H.264/AVC by assigning different quantization parameters for different regions. Through experiments with both standard and high definition sequences, it is demonstrated that the proposed method can yield considerable coding gains over the constant quantization mode of H.264/AVC without noticeable degradation of perceived quality.
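The two-stage pipeline in the abstract (audio-visual localization, then distance-dependent quantization) can be illustrated with a minimal Python sketch. This is an assumption-laden illustration, not the paper's exact method: it assumes Pearson correlation between the audio energy envelope and per-region motion magnitude for localization, and a QP offset that grows linearly with macroblock distance from the source, clipped to H.264/AVC's valid range of [0, 51]. The function names, region parameterization, and offset range are all illustrative choices.

```python
import numpy as np

def localize_source(audio_energy, motion_mag):
    """Pick the region whose motion-magnitude time series best
    correlates (Pearson) with the audio energy envelope.
    audio_energy: (T,) per-frame audio energy
    motion_mag:   (T, R) per-frame motion magnitude for R regions
    Returns the index of the best-correlated region.
    (Illustrative stand-in for the paper's localization step.)"""
    a = audio_energy - audio_energy.mean()
    m = motion_mag - motion_mag.mean(axis=0)
    denom = np.linalg.norm(a) * np.linalg.norm(m, axis=0)
    corr = (m.T @ a) / np.where(denom == 0, 1.0, denom)
    return int(np.argmax(corr))

def qp_map(frame_h, frame_w, src_yx, base_qp=26, max_offset=8, mb=16):
    """Assign a QP to each 16x16 macroblock: blocks near the estimated
    sound source keep base_qp; QP grows with normalized distance,
    saturating at base_qp + max_offset, clipped to H.264's [0, 51].
    (base_qp and max_offset are assumed values, not from the paper.)"""
    rows, cols = frame_h // mb, frame_w // mb
    cy, cx = np.mgrid[0:rows, 0:cols]
    # macroblock centers in pixel coordinates
    centers_y = cy * mb + mb / 2.0
    centers_x = cx * mb + mb / 2.0
    d = np.hypot(centers_y - src_yx[0], centers_x - src_yx[1])
    d_norm = d / d.max()  # 0 near the source, 1 at the farthest block
    qp = base_qp + np.round(d_norm * max_offset)
    return np.clip(qp, 0, 51).astype(int)
```

In this sketch a real encoder would receive the per-macroblock QP map through its rate-control interface (e.g. macroblock-level QP deltas in H.264/AVC), which is how the abstract describes the integration.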
Journal of Visual Communication and Image Representation
Published - 2011 Nov
Bibliographical note
Funding Information:
The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007–2011) under Grant Agreement No. 21644 (PetaMedia) and the Swiss National Foundation for Scientific Research in the framework of the NCCR Interactive Multimodal Information Management (IM2).
All Science Journal Classification (ASJC) codes
- Signal Processing
- Media Technology
- Computer Vision and Pattern Recognition
- Electrical and Electronic Engineering