Depth adjustment for stereoscopic image using visual fatigue prediction and depth-based view synthesis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

24 Citations (Scopus)

Abstract

A well-known problem with stereoscopic images is visual fatigue. To reduce it, we propose a depth adjustment method that controls the amount of parallax in stereoscopic images using visual fatigue prediction and depth-based view synthesis. We predict the visual fatigue level by examining the horizontal and vertical disparity characteristics of 3D images, and if the content is judged likely to cause visual fatigue, depth adjustment is applied through depth-based view synthesis. We present a method for extracting disparity characteristics from sparse corresponding features, and we propose a depth-based view synthesis algorithm that handles the hole regions arising in the rendering process. We measured the correlations between the visual fatigue prediction metrics and the subjective results, obtaining values in the range of 79% to 85%. We then applied depth-based view synthesis to content predicted to cause visual fatigue. A subjective evaluation showed that the proposed depth adjustment method generates comfortable stereoscopic images.
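The core rendering step the abstract describes — shifting pixels horizontally in proportion to depth to control parallax, then filling the disocclusion holes that this creates — can be sketched as follows. This is a minimal, hypothetical illustration of depth-image-based rendering, not the paper's actual algorithm: the `gain` parameter, the last-writer-wins occlusion handling, and the fill-from-the-left hole strategy are all simplifying assumptions of this sketch.

```python
import numpy as np

def synthesize_view(image, depth, gain):
    """Render a new view by shifting each pixel horizontally by
    gain * depth (the parallax the paper's depth adjustment would
    tune).  Disocclusion holes are filled by propagating the nearest
    valid pixel from the left -- a common baseline, not the paper's
    own hole-handling method.  Note: a proper renderer would resolve
    pixel collisions in depth order; this sketch simply lets later
    pixels overwrite earlier ones.
    """
    h, w = image.shape
    out = np.full((h, w), -1, dtype=int)  # -1 marks hole pixels
    for y in range(h):
        for x in range(w):
            nx = x + int(round(gain * depth[y, x]))
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
        # simple hole filling: copy the nearest valid pixel to the left
        for x in range(w):
            if out[y, x] == -1:
                out[y, x] = out[y, x - 1] if x > 0 else image[y, 0]
    return out
```

Scaling `gain` down reduces the parallax of the synthesized pair, which is the mechanism by which a predicted-fatiguing image could be re-rendered into a more comfortable one.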

Original language: English
Title of host publication: 2010 IEEE International Conference on Multimedia and Expo, ICME 2010
Pages: 956-961
Number of pages: 6
DOIs
Publication status: Published - 2010
Event: 2010 IEEE International Conference on Multimedia and Expo, ICME 2010 - Singapore, Singapore
Duration: 2010 Jul 19 - 2010 Jul 23

Publication series

Name: 2010 IEEE International Conference on Multimedia and Expo, ICME 2010


All Science Journal Classification (ASJC) codes

  • Human-Computer Interaction
  • Software

