Action recognition using close-up of maximum activation and ETRI-Activity3D-LivingLab dataset

Doyoung Kim, Inwoong Lee, Dohyung Kim, Sanghoon Lee

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

Action recognition models have shown great performance on various video datasets. Nevertheless, because existing datasets lack rich data on target actions, they are insufficient for the action recognition applications required by industry. To meet this requirement, datasets composed of highly available target actions have been created, but because their video data are generated in a specific environment, they fail to capture the varied characteristics of actual environments. In this paper, we introduce the new ETRI-Activity3D-LivingLab dataset, which provides action sequences recorded in actual environments and helps address the network generalization issue caused by dataset shift. When an action recognition model is trained on the ETRI-Activity3D and KIST SynADL datasets and evaluated on the ETRI-Activity3D-LivingLab dataset, performance can degrade severely because the datasets were captured in different environments. To reduce this dataset shift between training and testing datasets, we propose a close-up of maximum activation, which magnifies the most activated part of a video input in detail. In addition, we present experimental results and analysis that illustrate the dataset shift and demonstrate the effectiveness of the proposed method.
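The abstract describes magnifying the most activated region of a video input. A minimal sketch of this idea, assuming a per-frame activation map (e.g. from a CNN feature layer) is already available: locate the peak activation, crop a window around it, and rescale the crop back to the input resolution. The function name, the crop size, and the nearest-neighbor resize are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def close_up_of_maximum_activation(frame, activation_map, crop_size=112):
    """Illustrative sketch (not the paper's code): crop the frame region
    around the peak of an activation map and upscale it back to frame size."""
    H, W = frame.shape[:2]
    h, w = activation_map.shape
    # Locate the most activated cell and map its center to frame coordinates.
    iy, ix = np.unravel_index(np.argmax(activation_map), activation_map.shape)
    cy = int((iy + 0.5) * H / h)
    cx = int((ix + 0.5) * W / w)
    # Clamp the crop window so it stays inside the frame.
    half = crop_size // 2
    top = min(max(cy - half, 0), max(H - crop_size, 0))
    left = min(max(cx - half, 0), max(W - crop_size, 0))
    crop = frame[top:top + crop_size, left:left + crop_size]
    # Nearest-neighbor upscale of the crop back to the original resolution.
    rows = np.arange(H) * crop.shape[0] // H
    cols = np.arange(W) * crop.shape[1] // W
    return crop[np.ix_(rows, cols)]
```

Applied per frame, this yields a zoomed-in input stream centered on the most discriminative region, which the paper argues helps bridge the gap between training and testing environments.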

Original language: English
Article number: 6774
Journal: Sensors
Volume: 21
Issue number: 20
DOIs
Publication status: Published - 2021 Oct 1

Bibliographical note

Publisher Copyright:
© 2021 by the authors. Licensee MDPI, Basel, Switzerland.

All Science Journal Classification (ASJC) codes

  • Analytical Chemistry
  • Information Systems
  • Biochemistry
  • Atomic and Molecular Physics, and Optics
  • Instrumentation
  • Electrical and Electronic Engineering

