Dense and Sparse Reconstruction Error Based Saliency Descriptor

Huchuan Lu, Xiaohui Li, Lihe Zhang, Xiang Ruan, Ming-Hsuan Yang

Research output: Contribution to journal › Article › peer-review

92 Citations (Scopus)

Abstract

In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction error. Image boundary regions are first extracted via superpixels as likely cues for background templates, from which dense and sparse appearance models are constructed. First, we compute dense and sparse reconstruction errors on the background templates for each image region. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, the pixel-level reconstruction error is computed by integrating multi-scale reconstruction errors. Both the pixel-level dense and sparse reconstruction errors are then weighted by image compactness, which yields more accurate saliency detection. In addition, we introduce a novel Bayesian integration method to combine saliency maps, which is applied to integrate the two saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against 24 state-of-the-art methods in terms of precision, recall, and F-measure on three public standard salient object detection databases.
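To make the two appearance models concrete, the sketch below computes dense (PCA-based) and sparse (L1-regularized) reconstruction errors for superpixel features against boundary background templates. It is a minimal illustration in Python with NumPy and scikit-learn; the feature choice, number of principal components, and regularization weight are assumptions for illustration, not the authors' exact settings.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

def reconstruction_errors(features, bg_idx, n_components=8, sparse_lambda=0.01):
    # features: (n_superpixels, d) appearance features per superpixel
    #           (e.g. mean Lab color plus normalized position) -- illustrative choice.
    # bg_idx:   indices of boundary superpixels used as background templates.
    bg = features[bg_idx]

    # Dense appearance model: PCA fitted on the background templates;
    # the dense error is the squared residual of each superpixel after
    # projection onto (and back from) the background subspace.
    k = min(n_components, bg.shape[0], features.shape[1])
    pca = PCA(n_components=k)
    pca.fit(bg)
    recon_dense = pca.inverse_transform(pca.transform(features))
    err_dense = np.linalg.norm(features - recon_dense, axis=1) ** 2

    # Sparse appearance model: each superpixel is coded over the background
    # templates with an L1 penalty; the sparse error is the coding residual.
    dictionary = bg.T  # columns are background templates
    err_sparse = np.empty(features.shape[0])
    for i, x in enumerate(features):
        coder = Lasso(alpha=sparse_lambda, fit_intercept=False, max_iter=2000)
        coder.fit(dictionary, x)
        err_sparse[i] = np.linalg.norm(x - dictionary @ coder.coef_) ** 2

    return err_dense, err_sparse

In the full algorithm these per-superpixel errors would then be propagated with K-means cluster contexts, fused across scales, weighted by compactness, and combined via the Bayesian integration described above; none of those later stages is shown here.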

Original language: English
Article number: 7396959
Pages (from-to): 1592-1603
Number of pages: 12
Journal: IEEE Transactions on Image Processing
Volume: 25
Issue number: 4
DOIs
Publication status: Published - 2016 Apr

Bibliographical note

Publisher Copyright:
© 1992-2012 IEEE.

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Graphics and Computer-Aided Design
