EmbraceNet: A robust deep learning architecture for multimodal classification

Jun Ho Choi, Jong Seok Lee

Research output: Contribution to journal › Article › peer-review

76 Citations (Scopus)

Abstract

Classification using multimodal data arises in many machine learning applications. It is crucial not only to model cross-modal relationships effectively but also to ensure robustness against the loss of part of the data or of entire modalities. In this paper, we propose a novel deep learning-based multimodal fusion architecture for classification tasks, which is compatible with any kind of learning model, handles cross-modal information carefully, and prevents performance degradation due to the partial absence of data. We employ two datasets for multimodal classification tasks, build models based on our architecture and other state-of-the-art models, and analyze their performance in various situations. The results show that our architecture outperforms the other multimodal fusion architectures when some parts of the data are not available.
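The abstract does not spell out the fusion mechanism, but its robustness claim can be illustrated with a minimal sketch: project each modality to a common feature size and, for every feature dimension, stochastically select which available modality supplies the value, so absent modalities are simply excluded from the draw. The class name RobustFusion, the layer sizes, and the exact sampling scheme below are illustrative assumptions, not the paper's reference implementation.

```python
# Illustrative sketch (assumed, not the paper's code) of a fusion layer that
# tolerates missing modalities by never sampling features from them.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RobustFusion(nn.Module):
    """Fuses a variable set of modality features into one fixed-size vector."""

    def __init__(self, input_sizes, embed_size=256):
        super().__init__()
        # One "docking" projection per modality so all modalities share a
        # common feature size and become interchangeable at fusion time.
        self.dockings = nn.ModuleList(
            [nn.Linear(s, embed_size) for s in input_sizes]
        )
        self.embed_size = embed_size

    def forward(self, inputs, availability):
        # inputs: list of tensors, one per modality, each (batch, input_size_i)
        # availability: float tensor (batch, num_modalities); 1 = modality present
        docked = torch.stack(
            [torch.relu(dock(x)) for dock, x in zip(self.dockings, inputs)],
            dim=1,
        )  # (batch, num_modalities, embed_size)

        # Selection probabilities: uniform over the modalities that are
        # actually available for each example, zero for absent ones.
        # clamp only guards against division by zero when nothing is present.
        probs = availability / availability.sum(dim=1, keepdim=True).clamp(min=1.0)

        # For every embedding dimension, pick one modality to supply its value;
        # absent modalities have zero probability and never contribute.
        choice = torch.multinomial(probs, self.embed_size, replacement=True)
        mask = F.one_hot(choice, num_classes=len(self.dockings))  # (batch, embed, mods)
        mask = mask.permute(0, 2, 1).float()  # (batch, mods, embed)

        return (docked * mask).sum(dim=1)  # (batch, embed_size)
```

In this kind of scheme, randomly dropping modalities during training acts like dropout across modalities, which is one plausible reason such a fusion layer degrades gracefully when part of the data is missing at test time.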

Original language: English
Pages (from-to): 259-270
Number of pages: 12
Journal: Information Fusion
Volume: 51
DOIs
Publication status: Published - 2019 Nov

Bibliographical note

Funding Information:
This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the “ICT Consilience Creative Program” (IITP-2018-2017-0-01015) supervised by the IITP (Institute for Information & communications Technology Promotion). In addition, this work was also supported by the IITP grant funded by the Korea government (MSIT) (R7124-16-0004, Development of Intelligent Interaction Technology Based on Context Awareness and Human Intention Understanding).

Publisher Copyright:
© 2019 Elsevier B.V.

All Science Journal Classification (ASJC) codes

  • Software
  • Signal Processing
  • Information Systems
  • Hardware and Architecture
