Top-down visual saliency via joint CRF and dictionary learning

Jimei Yang, Ming-Hsuan Yang

Research output: Contribution to journal › Article › peer-review

113 Citations (Scopus)

Abstract

Top-down visual saliency is an important module of visual attention. In this work, we propose a novel top-down saliency model that jointly learns a Conditional Random Field (CRF) and a visual dictionary. The proposed model incorporates a layered structure from top to bottom: CRF, sparse coding, and image patches. With sparse coding as an intermediate layer, the CRF is learned in a feature-adaptive manner; meanwhile, with the CRF as the output layer, the dictionary is learned under structured supervision. For efficient and effective joint learning, we develop a max-margin approach via a stochastic gradient descent algorithm. Experimental results on the Graz-02 and PASCAL VOC datasets show that our model performs favorably against state-of-the-art top-down saliency methods for target object localization. In addition, the dictionary update significantly improves the performance of our model. We demonstrate the merits of the proposed top-down saliency model by applying it to prioritizing object proposals for detection and predicting human fixations.
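The abstract's three-layer pipeline (image patches → sparse codes → CRF) and its max-margin stochastic gradient descent learning can be illustrated with a minimal sketch. The code below is not the paper's algorithm: it reduces the CRF to its unary term, trains the unary weights with a hinge-loss SGD step, and updates the dictionary with a plain reconstruction gradient as a stand-in for the structured-supervision gradient the abstract describes; all data, sizes, and hyperparameters are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem sizes and hyperparameters (arbitrary for this sketch).
n_patches, patch_dim, n_atoms = 200, 64, 32
lam, lr, n_epochs = 0.1, 0.01, 5

# Synthetic image patches and binary saliency labels in {-1, +1}.
X = rng.standard_normal((n_patches, patch_dim))
y = rng.choice([-1.0, 1.0], size=n_patches)

# Dictionary D (columns are atoms) and CRF unary weights w.
D = rng.standard_normal((patch_dim, n_atoms))
D /= np.linalg.norm(D, axis=0)
w = np.zeros(n_atoms)

def sparse_code(x, D, lam, n_iter=30):
    """Encode patch x with a few ISTA iterations (L1-regularized)."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ s - x)                                 # gradient step
        s = s - g / L
        s = np.sign(s) * np.maximum(np.abs(s) - lam / L, 0.0)  # soft threshold
    return s

for epoch in range(n_epochs):
    for i in rng.permutation(n_patches):
        s = sparse_code(X[i], D, lam)
        # Max-margin (hinge) SGD update on the unary weights; the CRF
        # output layer is reduced to independent unaries for brevity.
        if y[i] * (w @ s) < 1.0:
            w += lr * y[i] * s
        # Dictionary step: plain reconstruction gradient, a simplification
        # of the structured-supervision update in the paper.
        r = D @ s - X[i]
        D -= lr * np.outer(r, s)
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-8)  # renormalize atoms
```

In the full model, the CRF also couples neighboring patches through pairwise potentials, and the dictionary is learned under the structured max-margin objective itself rather than the reconstruction error used above.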

Original language: English
Article number: 7442536
Pages (from-to): 576-588
Number of pages: 13
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 39
Issue number: 3
DOIs
Publication status: Published - Mar 2017

Bibliographical note

Publisher Copyright:
© 1979-2012 IEEE.

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics
  • Artificial Intelligence
  • Applied Mathematics
