Abstract
The emergence of two-stream convolutional networks has boosted action recognition performance by extracting appearance and motion features from videos in parallel. However, most existing approaches combine the two streams simply by averaging their prediction scores, overlooking the fact that some classes benefit more from appearance cues than from motion cues. We propose a fusion method for two-stream convolutional networks that introduces objective functions over the fusion weights under two assumptions: (1) the scores from the two streams should not be weighted equally, and (2) the weights vary across classes. We evaluate our method with extensive action-recognition experiments on the UCF101, HMDB51, and Hollywood2 datasets. The results show that the proposed approach outperforms the standard two-stream convolutional networks by large margins of 5.7%, 4.8%, and 3.6% on UCF101, HMDB51, and Hollywood2, respectively.
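As a rough illustration of the class-wise fusion idea summarized above, the sketch below combines the softmax scores of the two streams with one weight per class, choosing each weight by a simple grid search on a held-out validation split. The function names (`fit_class_weights`, `fuse`), the weight grid, and the validation heuristic are assumptions introduced for illustration; they do not reproduce the paper's actual objective functions.

```python
# Minimal sketch of class-wise weighted fusion of two-stream scores.
# Assumes softmax score matrices of shape [num_samples, num_classes]
# from the spatial (appearance) and temporal (motion) streams, plus a
# held-out validation set. The per-class grid search is an illustrative
# heuristic, not the objective functions proposed in the paper.
import numpy as np

def fit_class_weights(spatial_val, temporal_val, labels_val,
                      grid=np.linspace(0.0, 1.0, 21)):
    """Pick one fusion weight per class by validation accuracy on that class."""
    num_classes = spatial_val.shape[1]
    weights = np.full(num_classes, 0.5)
    for c in range(num_classes):
        mask = labels_val == c
        if not np.any(mask):
            continue  # no validation samples for this class; keep 0.5
        best_acc, best_w = -1.0, 0.5
        for w in grid:
            fused = w * spatial_val + (1.0 - w) * temporal_val
            acc = np.mean(np.argmax(fused[mask], axis=1) == c)
            if acc > best_acc:
                best_acc, best_w = acc, w
        weights[c] = best_w
    return weights

def fuse(spatial_scores, temporal_scores, weights):
    """Fuse test-time scores with the learned per-class weights (broadcast over columns)."""
    return weights * spatial_scores + (1.0 - weights) * temporal_scores

# Example usage with random stand-in scores.
rng = np.random.default_rng(0)
spatial_val, temporal_val = rng.random((50, 10)), rng.random((50, 10))
labels_val = rng.integers(0, 10, size=50)
w = fit_class_weights(spatial_val, temporal_val, labels_val)
fused_test = fuse(rng.random((5, 10)), rng.random((5, 10)), w)
print(np.argmax(fused_test, axis=1))
```

The key design point this sketch captures is that the spatial-vs-temporal trade-off is resolved per class rather than globally, so appearance-dominated classes can lean on the spatial stream while motion-dominated classes lean on the temporal stream.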
Original language | English |
---|---|
Article number | 053108 |
Journal | Optical Engineering |
Volume | 55 |
Issue number | 5 |
DOIs | |
Publication status | Published - May 1, 2016 |
Bibliographical note
Publisher Copyright: © 2016 Society of Photo-Optical Instrumentation Engineers (SPIE).
All Science Journal Classification (ASJC) codes
- Atomic and Molecular Physics, and Optics
- Engineering (all)