Sparsity based depth estimation and hole-filling algorithm for 2D to 3D video conversion

Jangwon Choi, Yoonsik Choe, Yong Goo Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Citations (Scopus)

Abstract

Depth image-based rendering (DIBR) requires holes in the rendered image to be filled efficiently, and the quality of this hole filling has a great impact on perceived 3D depth. In this paper, we propose a hole-filling method that combines depth estimation and image inpainting for high-quality 3D video. The algorithm first generates the rendered image and its associated depth map, and then estimates the depth values inside the holes of the depth map. Using these estimates, it fills the holes with depth-aided image inpainting based on the sparsity of the hole, which allows the inpainting to refer to similar background texture. The proposed algorithm also fills hole patches in an edge-preserving priority order, so that background edges in the image texture are reconstructed faithfully. Experimental results show that the proposed algorithm provides better objective and subjective quality than previous works.
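The core idea of depth-aided hole filling (filling disocclusion holes from the background side rather than the foreground) can be illustrated with a minimal sketch. This is not the paper's exact method; it is a simplified greedy fill, assuming NumPy, in which each hole pixel copies its known neighbour with the largest depth value, i.e. the background side of the disocclusion:

```python
import numpy as np

def fill_holes(image, depth, hole_mask):
    """Greedy depth-aided hole filling (illustrative sketch only).

    Holes are filled from the boundary inward; each hole pixel copies
    the value of the known 4-neighbour with the largest depth, so the
    fill prefers background texture, as depth-aided inpainting does.
    """
    img = image.astype(float).copy()
    dep = depth.astype(float).copy()
    mask = hole_mask.astype(bool).copy()
    h, w = mask.shape
    while mask.any():
        progress = False
        for y in range(h):
            for x in range(w):
                if not mask[y, x]:
                    continue
                # Gather known 4-neighbours of this hole pixel.
                cand = []
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                        cand.append((dep[ny, nx], img[ny, nx]))
                if cand:
                    # Prefer the deepest (background) neighbour.
                    d, v = max(cand, key=lambda c: c[0])
                    img[y, x], dep[y, x] = v, d
                    mask[y, x] = False
                    progress = True
        if not progress:
            break  # isolated holes with no known neighbours
    return img, dep
```

The real algorithm additionally uses patch-based inpainting with an edge-preserving priority so that background edges, not just flat background colour, propagate into the hole; this sketch only captures the background-preferring direction of the fill.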

Original language: English
Title of host publication: 2012 International Conference on Signals and Electronic Systems, ICSES 2012 - The Conference Proceedings
DOIs
Publication status: Published - 2012
Event: 2012 International Conference on Signals and Electronic Systems, ICSES 2012 - Wroclaw, Poland
Duration: 2012 Sept 18 - 2012 Sept 21

Publication series

Name: 2012 International Conference on Signals and Electronic Systems, ICSES 2012 - The Conference Proceedings

Other

Other: 2012 International Conference on Signals and Electronic Systems, ICSES 2012
Country/Territory: Poland
City: Wroclaw
Period: 12/9/18 - 12/9/21

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Electrical and Electronic Engineering
