TY - JOUR
T1 - Teacher–Explorer–Student Learning
T2 - A Novel Learning Method for Open Set Recognition
AU - Jang, Jaeyeon
AU - Kim, Chang Ouk
N1 - Publisher Copyright: IEEE
PY - 2023
Y1 - 2023
N2 - When an unknown example, one that was not seen during training, appears, most recognition systems produce overgeneralized results and determine that the example belongs to one of the known classes. To address this problem, this study proposes teacher–explorer–student (T/E/S) learning, which adopts the concept of open set recognition (OSR) to reject unknown samples while minimizing the loss of classification performance on known samples. In this learning method, the overgeneralization of deep-learning classifiers is significantly reduced by exploring various possibilities for unknowns. The teacher network extracts hints about unknowns by distilling the pretrained knowledge about knowns and delivers this distilled knowledge to the student network. After learning the distilled knowledge, the student network shares its learned information with the explorer network. The explorer network then shares its exploration results by generating unknown-like samples and feeding them to the student network. As this alternating learning process is repeated, the student network experiences a variety of synthetic unknowns, which reduces overgeneralization. Extensive experiments show that each component proposed in this article significantly contributes to improving OSR performance and that the proposed T/E/S learning method outperforms current state-of-the-art methods.
AB - When an unknown example, one that was not seen during training, appears, most recognition systems produce overgeneralized results and determine that the example belongs to one of the known classes. To address this problem, this study proposes teacher–explorer–student (T/E/S) learning, which adopts the concept of open set recognition (OSR) to reject unknown samples while minimizing the loss of classification performance on known samples. In this learning method, the overgeneralization of deep-learning classifiers is significantly reduced by exploring various possibilities for unknowns. The teacher network extracts hints about unknowns by distilling the pretrained knowledge about knowns and delivers this distilled knowledge to the student network. After learning the distilled knowledge, the student network shares its learned information with the explorer network. The explorer network then shares its exploration results by generating unknown-like samples and feeding them to the student network. As this alternating learning process is repeated, the student network experiences a variety of synthetic unknowns, which reduces overgeneralization. Extensive experiments show that each component proposed in this article significantly contributes to improving OSR performance and that the proposed T/E/S learning method outperforms current state-of-the-art methods.
KW - Computational modeling
KW - Exploration
KW - Generative adversarial networks
KW - Knowledge engineering
KW - Learning systems
KW - Prototypes
KW - Support vector machines
KW - Training
KW - generative adversarial learning
KW - knowledge distillation
KW - open set recognition (OSR)
KW - overgeneralization
UR - http://www.scopus.com/inward/record.url?scp=85179832474&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85179832474&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2023.3336799
DO - 10.1109/TNNLS.2023.3336799
M3 - Article
AN - SCOPUS:85179832474
SN - 2162-237X
SP - 1
EP - 14
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
ER -