Shaping Deep Feature Space Towards Gaussian Mixture for Visual Classification

Weitao Wan, Cheng Yu, Jiansheng Chen, Tong Wu, Yuanyi Zhong, Ming-Hsuan Yang

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)


The softmax cross-entropy loss function has been widely used to train deep models for various tasks. In this work, we propose a Gaussian mixture (GM) loss function for deep neural networks for visual classification. Unlike the softmax cross-entropy loss, our method explicitly shapes the deep feature space towards a Gaussian mixture distribution. With a classification margin and a likelihood regularization, the GM loss facilitates both high classification performance and accurate modeling of the feature distribution. The GM loss can be readily used to distinguish adversarial examples based on the discrepancy between the feature distributions of clean and adversarial examples. Furthermore, theoretical analysis shows that a symmetric feature space can be achieved by using the GM loss, which enables the models to perform robustly against adversarial attacks. The proposed model can be implemented easily and efficiently without introducing additional trainable parameters. Extensive evaluations demonstrate that the method with the GM loss performs favorably on image classification, face recognition, and the detection as well as recognition of adversarial examples generated by various attacks.
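The idea described in the abstract can be sketched in a few lines: assign each class a Gaussian mean in feature space, form classification posteriors from distances to those means (with the true-class distance enlarged by a margin), and add a likelihood term that pulls features toward their class mean. The sketch below assumes identity covariances, uniform class priors, and hypothetical names (`gm_loss`, `alpha`, `lambda_lkd`); it is a simplified illustration, not the paper's full formulation.

```python
import numpy as np

def gm_loss(features, labels, means, alpha=0.3, lambda_lkd=0.1):
    """Simplified Gaussian-mixture (GM) loss sketch.

    features: (N, D) deep features
    labels:   (N,) integer class labels
    means:    (K, D) per-class Gaussian means
    alpha:    classification margin on the true-class distance
    lambda_lkd: weight of the likelihood regularization term
    """
    n = np.arange(len(labels))
    # Half squared Euclidean distance of each feature to each class mean: (N, K)
    d = ((features[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1) / 2.0
    # Margin: enlarge the true-class distance so the decision is made harder
    d_margin = d.copy()
    d_margin[n, labels] *= (1.0 + alpha)
    # Classification term: cross-entropy on posteriors p(k|x) = softmax(-d)
    logits = -d_margin
    z = logits - logits.max(axis=1, keepdims=True)          # stable log-softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    cls_loss = -logp[n, labels].mean()
    # Likelihood regularization: negative log-likelihood under the true-class
    # Gaussian (up to a constant), pulling features toward their class mean
    lkd_loss = d[n, labels].mean()
    return cls_loss + lambda_lkd * lkd_loss
```

Because the loss is built from distances to explicit class means, the same means can later serve as the fitted feature distribution, e.g. for flagging adversarial inputs whose features are unlikely under every class Gaussian.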

Original language: English
Pages (from-to): 2430-2444
Number of pages: 15
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Issue number: 2
Publication status: Published - 2023 Feb 1

Bibliographical note

Publisher Copyright:
© 1979-2012 IEEE.

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics
  • Artificial Intelligence
  • Applied Mathematics

