A Bayesian network framework for vision based semantic scene understanding

Seung Bin Im, Keum Sung Hwang, Sung Bae Cho

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

For a robot to understand a scene, it must infer and extract meaningful information from vision sensor data. Since scene understanding consists of recognizing several visual contexts, these contextual cues must be extracted and their relationships understood. However, context extraction from visual information is difficult due to the uncertainty of variable environments, the imperfect nature of feature extraction methods, and the high computational complexity of reasoning over complex relationships. In order to manage these uncertainties effectively, this paper adopts a Bayesian probabilistic approach and proposes a Bayesian network framework that synthesizes low-level features with high-level semantic cues, including how to develop and utilize an integrated Bayesian network model. Experimental results on two applications demonstrate the efficacy of the proposed framework.
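
As a rough illustration of the kind of inference such a framework performs, the sketch below builds a two-layer Bayesian network in plain Python: a hidden Scene node whose children are binary low-level visual cues, with the posterior over scene labels computed by enumeration. The node names, conditional probabilities, and evidence are hypothetical placeholders, not taken from the paper, and the paper's integrated model is richer than this naive-Bayes-style structure.

# Minimal sketch of a two-layer Bayesian network for scene inference.
# A hidden Scene node has binary low-level visual cues as children that
# are conditionally independent given the scene (naive-Bayes structure).
# All names and probability values below are illustrative assumptions.

PRIOR = {"office": 0.5, "corridor": 0.5}          # P(Scene)

# P(cue is present | Scene) for each low-level visual cue
LIKELIHOOD = {
    "desk_detected":   {"office": 0.8, "corridor": 0.10},
    "long_floor_edge": {"office": 0.2, "corridor": 0.90},
    "monitor_blob":    {"office": 0.7, "corridor": 0.05},
}

def posterior(evidence):
    """Return P(Scene | evidence) by enumerating the two scene labels.

    evidence maps cue name -> True/False (cue observed present/absent).
    Cues that are not observed are omitted and marginalize out.
    """
    unnormalized = {}
    for scene, prior in PRIOR.items():
        p = prior
        for cue, present in evidence.items():
            p_present = LIKELIHOOD[cue][scene]
            p *= p_present if present else (1.0 - p_present)
        unnormalized[scene] = p
    z = sum(unnormalized.values())
    return {scene: p / z for scene, p in unnormalized.items()}

if __name__ == "__main__":
    # Low-level detectors report a desk and a monitor but no long floor edge.
    print(posterior({"desk_detected": True,
                     "monitor_blob": True,
                     "long_floor_edge": False}))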

Original language: English
Title of host publication: 16th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN
Pages: 839-844
Number of pages: 6
DOIs
Publication status: Published - 2007
Event: 16th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN - Jeju, Korea, Republic of
Duration: 2007 Aug 26 - 2007 Aug 29

Publication series

Name: Proceedings - IEEE International Workshop on Robot and Human Interactive Communication

Other

Other: 16th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN
Country/Territory: Korea, Republic of
City: Jeju
Period: 07/8/26 - 07/8/29

All Science Journal Classification (ASJC) codes

  • Engineering (all)
