Ranking Saliency

Lihe Zhang, Chuan Yang, Huchuan Lu, Xiang Ruan, Ming-Hsuan Yang

Research output: Contribution to journal › Article › peer-review

100 Citations (Scopus)


Most existing bottom-up algorithms measure the foreground saliency of a pixel or region based on its contrast within a local context or the entire image, whereas a few methods focus on segmenting out background regions and thereby identifying salient objects. Instead of considering only the contrast between salient objects and their surrounding regions, we exploit both foreground and background cues in this work. We rank the similarity of image elements to foreground or background cues via graph-based manifold ranking. The saliency of image elements is defined based on their relevance to the given seeds or queries. We represent an image as a multi-scale graph with fine superpixels and coarse regions as nodes. These nodes are ranked based on their similarity to background and foreground queries using affinity matrices. Saliency detection is carried out in a cascade scheme to extract background regions and foreground salient objects efficiently. Experimental results demonstrate that the proposed method performs well against state-of-the-art methods in terms of accuracy and speed. We also propose a new benchmark dataset containing 5,168 images for large-scale performance evaluation of saliency detection methods.
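The graph-based manifold ranking the abstract relies on has a simple closed form: given a symmetrically normalized affinity matrix S and a query indicator vector y, the ranking scores are f* = (I − αS)⁻¹ y. The sketch below illustrates this on a toy graph; the node features, Gaussian affinity, and parameter values are illustrative assumptions, not the paper's actual superpixel graph construction.

```python
import numpy as np

def manifold_rank(features, query, sigma=0.5, alpha=0.99):
    """Rank all graph nodes by relevance to the query indicator vector.

    A minimal sketch of manifold ranking; the Gaussian affinity and the
    sigma/alpha values are illustrative choices, not the paper's settings.
    """
    # Pairwise Gaussian affinity W (self-loops removed).
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetrically normalized affinity S = D^{-1/2} W D^{-1/2}.
    deg = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    # Closed-form ranking f* = (I - alpha * S)^{-1} y.
    n = len(features)
    return np.linalg.solve(np.eye(n) - alpha * S, query)

# Toy example: five 1-D "nodes"; node 0 is the query seed.
feats = np.array([[0.0], [0.1], [0.2], [2.0], [2.1]])
y = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
scores = manifold_rank(feats, y)
# Nodes near the seed (1, 2) outrank the distant cluster (3, 4),
# since relevance propagates along strong affinity edges.
```

In the paper's cascade scheme, such queries are first taken from image-boundary (background) nodes and then from the resulting foreground estimate; here a single seed stands in for a query set.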

Original language: English
Article number: 7567535
Pages (from-to): 1892-1904
Number of pages: 13
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Issue number: 9
Publication status: Published - 2017 Sep 1

Bibliographical note

Publisher Copyright:
© 2017 IEEE.

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics
  • Artificial Intelligence
  • Applied Mathematics

