Abstract
We present a unified framework for classifying image sets taken under varying modality conditions. Our method is motivated by the key observation that an image's feature distribution is influenced simultaneously by its semantic class and its modality, which limits the performance of conventional methods on this task. With this insight, we introduce modality uniqueness as a discriminative weight that separates each modality cluster from all other clusters. Leveraging modality uniqueness, our framework is formulated as unsupervised modality clustering and classifier learning based on a modality-invariant similarity kernel. Specifically, in the assignment step, each training image is assigned to the most similar cluster according to its modality. In the update step, the modality uniqueness and the sparse dictionary are updated based on the current cluster hypothesis. These two steps are alternated iteratively. From the final clusters, a modality-invariant marginalized kernel is then computed, in which the similarities between the reconstructed features of each modality are aggregated across all clusters. Our framework enables reliable inference of an image's semantic class even under large photometric variations. Experimental results show that our method outperforms conventional methods on various benchmarks, such as landmark identification under severely varying weather conditions, domain-adaptive image classification, and RGB and near-infrared image classification.
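For intuition, the alternating assignment/update scheme and the marginalized kernel described above can be sketched roughly as follows. This is a minimal stand-in, not the paper's method: the helper names (`fit_modality_clusters`, `marginalized_kernel`), the simple separation-based uniqueness weight, and the center-based "reconstruction" are illustrative assumptions that replace the sparse dictionary learning and learned uniqueness used in the actual framework.

```python
import numpy as np

def fit_modality_clusters(X, n_clusters=3, n_iters=10, seed=0):
    """Alternate assignment and update steps over a feature matrix X (n x d).

    Hypothetical simplification: clusters are plain centroids, and the
    'uniqueness' of a cluster is how far its center sits from the others.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=n_clusters, replace=False)].copy()
    uniqueness = np.ones(n_clusters)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iters):
        # Assignment step: each image goes to its most similar cluster,
        # with distances modulated by the current uniqueness weights.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists / uniqueness[None, :], axis=1)
        # Update step: recompute cluster centers from the current assignment.
        for k in range(n_clusters):
            members = X[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
        # Stand-in uniqueness: normalized separation of each center from the rest.
        sep = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
        uniqueness = n_clusters * sep.sum(axis=1) / max(sep.sum(), 1e-12)
    return centers, labels, uniqueness

def marginalized_kernel(x, y, centers, gamma=1.0):
    """Aggregate similarities of cluster-'reconstructed' features over all clusters."""
    sims = []
    for c in centers:
        # Crude reconstruction: pull each feature toward the cluster center
        # (the paper instead reconstructs via a sparse dictionary per cluster).
        xr, yr = (x + c) / 2.0, (y + c) / 2.0
        sims.append(np.exp(-gamma * np.linalg.norm(xr - yr) ** 2))
    return float(np.mean(sims))

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(60, 8))
    centers, labels, uniq = fit_modality_clusters(X)
    print("uniqueness weights:", np.round(uniq, 3))
    print("kernel(x0, x1):", round(marginalized_kernel(X[0], X[1], centers), 4))
```

The resulting kernel values could then feed any standard kernel classifier; the point of the sketch is only the structure of the two alternating steps followed by cluster-wise similarity aggregation.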
Original language | English |
---|---|
Article number | 7765067 |
Pages (from-to) | 884-899 |
Number of pages | 16 |
Journal | IEEE Transactions on Image Processing |
Volume | 26 |
Issue number | 2 |
DOIs | |
Publication status | Published - 2017 Feb |
Bibliographical note
Publisher Copyright: © 1992-2012 IEEE.
All Science Journal Classification (ASJC) codes
- Software
- Computer Graphics and Computer-Aided Design