Dynamic scene deblurring by depth guided model

Lerenhan Li, Jinshan Pan, Wei-Sheng Lai, Changxin Gao, Nong Sang, Ming-Hsuan Yang

Research output: Contribution to journal › Article › peer-review

29 Citations (Scopus)


Dynamic scene blur is usually caused by object motion, depth variation, and camera shake. Most existing methods address this problem with image segmentation or fully end-to-end trainable deep convolutional neural networks that account for different object motions or camera shakes. However, these algorithms are less effective when depth variations are present. In this work, we propose a deep convolutional neural network that exploits the depth map for dynamic scene deblurring. Given a blurred image, we first extract the depth map and adopt a depth refinement network to restore the edges and structure in the depth map. To exploit the depth map effectively, we adopt a spatial feature transform layer to extract depth features and fuse them with the image features through scaling and shifting. Our image deblurring network thus learns to restore a clear image under the guidance of the depth map. Through extensive experiments and analysis, we show that depth information is crucial to the performance of the proposed model. Finally, extensive quantitative and qualitative evaluations demonstrate that the proposed model performs favorably against state-of-the-art dynamic scene deblurring approaches as well as conventional depth-based deblurring algorithms.
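The fusion step described above follows the spatial feature transform idea: features from the depth branch predict a per-pixel scale and shift that modulate the image features. The sketch below illustrates that scaling-and-shifting operation in NumPy; the linear projections `w_gamma` and `w_beta` are hypothetical stand-ins for the small convolutional condition branches a real network would use, and all names and shapes are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def spatial_feature_transform(image_feats, depth_feats, w_gamma, w_beta):
    """Modulate image features with a depth-derived scale and shift.

    image_feats: (H, W, C) feature map from the deblurring branch.
    depth_feats: (H, W, D) features extracted from the refined depth map.
    w_gamma, w_beta: (D, C) projection weights; hypothetical linear
    stand-ins for the convolutional condition layers in the paper.
    """
    gamma = depth_feats @ w_gamma  # per-pixel, per-channel scale
    beta = depth_feats @ w_beta    # per-pixel, per-channel shift
    # Fuse by scaling and shifting the image features.
    return gamma * image_feats + beta

# Toy example with random features.
rng = np.random.default_rng(0)
H, W, C, D = 4, 4, 8, 3
feats = rng.standard_normal((H, W, C))
depth = rng.standard_normal((H, W, D))
out = spatial_feature_transform(
    feats, depth,
    rng.standard_normal((D, C)), rng.standard_normal((D, C)))
assert out.shape == (H, W, C)
```

Because the modulation is spatially varying, regions at different depths can receive different feature transformations, which is what lets depth guide the deblurring.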

Original language: English
Article number: 9043904
Pages (from-to): 5273-5288
Number of pages: 16
Journal: IEEE Transactions on Image Processing
Publication status: Published - 2020

Bibliographical note

Publisher Copyright:
© 2020 IEEE.

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Graphics and Computer-Aided Design

