Visual Fatigue Relaxation for Stereoscopic Video via Nonlinear Disparity Remapping

Changjae Oh, Bumsub Ham, Sunghwan Choi, Kwanghoon Sohn

Research output: Contribution to journal › Article › peer-review

16 Citations (Scopus)


A nonlinear disparity remapping scheme is presented to enhance the visual comfort of stereoscopic videos. The stereoscopic video is analyzed to predict a degree of fatigue from the viewpoint of three factors: 1) spatial frequency; 2) disparity magnitude; and 3) disparity motion. The degree of fatigue is then estimated in a local manner. It can be visualized as an index map, a so-called "visual fatigue map," and an overall fatigue score is obtained by pooling the visual fatigue map. Based on this information, a nonlinear remapping operator is generated in two phases: 1) disparity range adaptation and 2) operator nonlinearization. First, the disparity range is automatically adjusted according to the determined overall fatigue score. Second, rather than linearly adjusting the disparity range of an original video to the determined disparity range, a nonlinear remapping operator is constructed such that the disparity range of potentially problematic regions is compressed, while that of comfortable regions is stretched. The proposed scheme is verified via subjective evaluations in which visual fatigue and depth sensation are compared among original videos, linearly remapped videos, and nonlinearly remapped videos. Experimental results show that the nonlinearly remapped videos provide more comfort than the linearly remapped videos without losing depth sensation.
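The two-phase idea described in the abstract can be sketched in code. The snippet below is a minimal illustration, not the paper's actual operator: it assumes a fatigue score normalized to [0, 1), shrinks the target disparity range as fatigue grows (phase 1), and then uses a hypothetical power-law curve with exponent `gamma` to compress large disparities while stretching small ones relative to a linear remap (phase 2). All function and parameter names here are invented for illustration.

```python
import numpy as np

def nonlinear_remap(disparity, fatigue_score, d_max=40.0, gamma=0.6):
    """Illustrative nonlinear disparity remapping (not the paper's exact operator).

    Phase 1: the target range shrinks as the overall fatigue score grows.
    Phase 2: a power-law curve (gamma < 1) compresses large (problematic)
    disparity magnitudes while stretching small (comfortable) ones.
    """
    # Phase 1: disparity range adaptation - higher fatigue -> narrower range.
    target_max = d_max * (1.0 - fatigue_score)  # fatigue_score assumed in [0, 1)
    # Phase 2: operator nonlinearization - power law on normalized magnitude.
    sign = np.sign(disparity)
    mag = np.abs(disparity) / d_max             # normalize magnitude to [0, 1]
    return sign * target_max * mag ** gamma

# Example: with fatigue_score=0.5 the range halves; a mid-range disparity of 10
# maps above the linear value 5, while the extreme 40 is clamped down to 20.
d = np.array([-40.0, -10.0, 0.0, 10.0, 40.0])
print(nonlinear_remap(d, fatigue_score=0.5))
```

With `gamma < 1` the curve is steep near zero and flat near the extremes, which is one simple way to realize the "compress problematic, stretch comfortable" behavior the abstract describes; the paper constructs its operator from the visual fatigue map rather than a fixed exponent.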

Original language: English
Article number: 7052309
Pages (from-to): 142-153
Number of pages: 12
Journal: IEEE Transactions on Broadcasting
Issue number: 2
Publication status: Published - 2015 Jun 1

Bibliographical note

Publisher Copyright:
© 1963-2012 IEEE.

All Science Journal Classification (ASJC) codes

  • Media Technology
  • Electrical and Electronic Engineering


