Multi-objective convolutional learning for face labeling

Sifei Liu, Jimei Yang, Chang Huang, Ming-Hsuan Yang

Research output: Chapter in Book/Report/Conference proceeding – Conference contribution

98 Citations (Scopus)

Abstract

This paper formulates face labeling as a conditional random field with unary and pairwise classifiers. We develop a novel multi-objective learning method that optimizes a single unified deep convolutional network with two distinct non-structured loss functions: one encoding the unary label likelihoods and the other encoding the pairwise label dependencies. Moreover, we regularize the network by using a nonparametric prior as new input channels in addition to the RGB image, and show that significant performance improvements can be achieved with a much smaller network size. Experiments on both the LFW and Helen datasets demonstrate that the proposed algorithm achieves state-of-the-art results and produces accurate labelings on challenging images, making it suitable for real-world applications.
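As a rough illustration of the multi-objective setup described in the abstract, the PyTorch sketch below shows a single convolutional trunk with two heads, one trained with a unary per-pixel label loss and one with a pairwise label-dependency loss, and a nonparametric prior concatenated to the RGB input as extra channels. The layer sizes, the 11-class label count, and the boundary-style pairwise target are illustrative assumptions, not the authors' exact architecture or losses.

```python
# Minimal sketch of a multi-objective convolutional labeler (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 11              # assumed number of face labels
PRIOR_CHANNELS = NUM_CLASSES  # one nonparametric prior map per label (assumption)

class MultiObjectiveFaceLabeler(nn.Module):
    def __init__(self):
        super().__init__()
        in_ch = 3 + PRIOR_CHANNELS            # RGB image + prior channels
        self.trunk = nn.Sequential(           # shared convolutional trunk
            nn.Conv2d(in_ch, 32, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        # unary head: per-pixel label likelihoods
        self.unary_head = nn.Conv2d(64, NUM_CLASSES, 1)
        # pairwise head: per-pixel label-dependency score (boundary-style map here)
        self.pairwise_head = nn.Conv2d(64, 1, 1)

    def forward(self, rgb, prior):
        x = torch.cat([rgb, prior], dim=1)    # prior acts as extra input channels
        feat = self.trunk(x)
        return self.unary_head(feat), self.pairwise_head(feat)

def multi_objective_loss(unary_logits, pairwise_logits, labels, edge_targets):
    # two distinct non-structured losses applied to one unified network
    unary = F.cross_entropy(unary_logits, labels)
    pairwise = F.binary_cross_entropy_with_logits(pairwise_logits, edge_targets)
    return unary + pairwise

# toy usage with random tensors
model = MultiObjectiveFaceLabeler()
rgb = torch.randn(2, 3, 64, 64)
prior = torch.rand(2, PRIOR_CHANNELS, 64, 64)
labels = torch.randint(0, NUM_CLASSES, (2, 64, 64))
edges = torch.rand(2, 1, 64, 64)
u, p = model(rgb, prior)
loss = multi_objective_loss(u, p, labels, edges)
loss.backward()
```

At test time, the two network outputs would serve as the unary and pairwise potentials of the conditional random field over pixel labels; the sketch only covers the shared-network training objective.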

Original language: English
Title of host publication: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015
Publisher: IEEE Computer Society
Pages: 451-459
Number of pages: 9
ISBN (Electronic): 9781467369640
DOIs
Publication status: Published - 2015 Oct 14
Event: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015 - Boston, United States
Duration: 2015 Jun 7 - 2015 Jun 12

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Volume: 07-12-June-2015
ISSN (Print): 1063-6919

Other

Other: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015
Country/Territory: United States
City: Boston
Period: 15/6/7 - 15/6/12

Bibliographical note

Publisher Copyright:
© 2015 IEEE.

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition
