Human interaction recognition in YouTube videos

Sunyoung Cho, Seongho Lim, Hyeran Byun, Haejin Park, Sooyeong Kwak

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Citations (Scopus)

Abstract

This paper introduces the use of annotation tags for human activity recognition in video. Recent methods for human activity recognition use more complex and realistic datasets obtained from TV shows or movies, which makes it difficult to achieve high recognition accuracy. We improve recognition accuracy by using the videos' annotation tags. Tags tend to be related to video content, and human activity videos frequently contain tags relevant to their activities. We first collect a human activity dataset containing tags from YouTube. Using this dataset, we automatically discover relevant tags and their correlation with human activities. We finally develop a framework that uses both visual content and tags for activity recognition. We show that our approach improves recognition accuracy compared with approaches that use visual content only.
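The abstract does not specify how tag information and visual content are fused, so the following is only a minimal, hypothetical sketch of one way such a framework could combine the two cues: an early fusion of a TF-IDF tag representation with a precomputed visual descriptor, classified by a linear SVM. The feature dimensions, tag strings, and class labels below are illustrative placeholders, not the authors' dataset or method.

    # Hypothetical sketch (not the authors' implementation): early fusion of
    # visual descriptors and TF-IDF tag features for interaction recognition.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    # Placeholder data: each video has a precomputed visual descriptor
    # (e.g., a bag-of-visual-words histogram) and a free-text tag string.
    rng = np.random.default_rng(0)
    visual_features = rng.random((100, 500))              # 100 videos, 500-dim visual descriptor
    tag_strings = ["handshake greeting people"] * 50 + ["hug friends reunion"] * 50
    labels = np.array([0] * 50 + [1] * 50)                # two interaction classes

    # Represent tags as TF-IDF vectors so frequent but uninformative tags are down-weighted.
    tag_features = TfidfVectorizer().fit_transform(tag_strings).toarray()

    # Early fusion: concatenate visual and tag features, then train a linear classifier.
    fused = np.hstack([visual_features, tag_features])
    clf = LinearSVC().fit(fused, labels)
    print("training accuracy:", clf.score(fused, labels))

A late-fusion variant (separate classifiers on tag and visual features, with combined scores) would be an equally plausible reading of the abstract.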

Original language: English
Title of host publication: ICICS 2011 - 8th International Conference on Information, Communications and Signal Processing
DOIs
Publication status: Published - 2011
Event: 8th International Conference on Information, Communications and Signal Processing, ICICS 2011 - Singapore, Singapore
Duration: 2011 Dec 13 - 2011 Dec 16

Publication series

Name: ICICS 2011 - 8th International Conference on Information, Communications and Signal Processing

Other

Other: 8th International Conference on Information, Communications and Signal Processing, ICICS 2011
Country/Territory: Singapore
City: Singapore
Period: 11/12/13 - 11/12/16

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
