Abstract
Human activity recognition using multimodal sensors has been widely studied in recent years. In this paper, we propose an end-to-end deep learning model for activity recognition that fuses the features of multiple modalities according to automatically determined confidence scores. The confidence scores effectively regulate how much each sensor contributes to the final prediction. We conduct experiments on the latest activity recognition dataset, and the results confirm that our model outperforms existing methods. We submitted the proposed model to the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge [23] under the team name “Yonsei-MCML.”
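The abstract describes weighting per-modality features by learned confidence scores before classification. As a rough illustration only, the sketch below shows one common way such confidence-based fusion can be realized in PyTorch; the module name, dimensions, and the softmax-normalized scalar confidence heads are assumptions for this sketch, not the authors' actual implementation.

```python
# Minimal sketch of confidence-weighted multimodal fusion (hypothetical
# design, not the paper's exact architecture). Each modality encoder is
# assumed to produce a fixed-size feature vector per sample.
import torch
import torch.nn as nn


class ConfidenceFusion(nn.Module):
    """Fuses per-modality features weighted by learned confidence scores."""

    def __init__(self, num_modalities: int, feature_dim: int, num_classes: int):
        super().__init__()
        # One scalar confidence head per modality, trained end-to-end.
        self.confidence_heads = nn.ModuleList(
            [nn.Linear(feature_dim, 1) for _ in range(num_modalities)]
        )
        self.classifier = nn.Linear(feature_dim, num_classes)

    def forward(self, features: list) -> torch.Tensor:
        # features: list of (batch, feature_dim) tensors, one per modality.
        scores = torch.cat(
            [head(f) for head, f in zip(self.confidence_heads, features)], dim=1
        )  # (batch, num_modalities)
        weights = torch.softmax(scores, dim=1)  # confidences sum to 1
        stacked = torch.stack(features, dim=1)  # (batch, M, feature_dim)
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=1)  # weighted sum
        return self.classifier(fused)


# Usage with three hypothetical modalities (e.g., accelerometer, gyroscope,
# magnetometer features), batch of 4, 128-d features, 8 activity classes:
model = ConfidenceFusion(num_modalities=3, feature_dim=128, num_classes=8)
feats = [torch.randn(4, 128) for _ in range(3)]
logits = model(feats)  # shape: (4, 8)
```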
| Field | Value |
| --- | --- |
| Original language | English |
| Title of host publication | UbiComp/ISWC 2018 - Adjunct Proceedings of the 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2018 ACM International Symposium on Wearable Computers |
| Publisher | Association for Computing Machinery, Inc |
| Pages | 1548-1556 |
| Number of pages | 9 |
| ISBN (Electronic) | 9781450359665 |
| DOIs | |
| Publication status | Published - 2018 Oct 8 |
| Event | 2018 Joint ACM International Conference on Pervasive and Ubiquitous Computing, UbiComp 2018 and 2018 ACM International Symposium on Wearable Computers, ISWC 2018 - Singapore, Singapore. Duration: 2018 Oct 8 → 2018 Oct 12 |
Publication series

| Field | Value |
| --- | --- |
| Name | UbiComp/ISWC 2018 - Adjunct Proceedings of the 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2018 ACM International Symposium on Wearable Computers |
Other

| Field | Value |
| --- | --- |
| Other | 2018 Joint ACM International Conference on Pervasive and Ubiquitous Computing, UbiComp 2018 and 2018 ACM International Symposium on Wearable Computers, ISWC 2018 |
| Country/Territory | Singapore |
| City | Singapore |
| Period | 2018 Oct 8 → 2018 Oct 12 |
Bibliographical note

Publisher Copyright: © 2018 Association for Computing Machinery.
All Science Journal Classification (ASJC) codes
- Software
- Human-Computer Interaction
- Information Systems