New compact 3-dimensional shape descriptor for a depth camera in indoor environments

Hyukdoo Choi, Euntai Kim

Research output: Contribution to journal › Article › peer-review

Abstract

This study questions why existing local shape descriptors have high dimensionalities (up to hundreds) despite the simplicity of local shapes. We derived an answer from the historical context and provided an alternative solution by proposing a new compact descriptor. Although existing descriptors can express complicated shapes and depth sensors have improved, complex shapes are rarely observed in ordinary environments, and a depth sensor captures only a single, noisy side of a surface. Therefore, we designed a new descriptor based on principal curvatures, which is compact but practically useful. For verification, the CoRBS, RGB-D Scenes, and RGB-D Object datasets were used to compare the proposed descriptor with existing descriptors in terms of shape, instance, and category recognition rates. The proposed descriptor showed performance comparable to existing descriptors despite its low dimensionality of 4.
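
As a rough illustration of the quantity the descriptor is built on, the sketch below estimates the two principal curvatures of a local surface patch by fitting a quadric in a PCA tangent frame. This is a generic curvature-estimation recipe, not the authors' implementation: the function name, the quadric-fitting choice, and the toy sphere test are assumptions made for illustration, and the paper's actual 4-dimensional descriptor construction is not reproduced here.

import numpy as np

def estimate_principal_curvatures(point, neighbors):
    # Fit a quadric z = a*x^2 + b*x*y + c*y^2 + d*x + e*y in a local
    # tangent frame and return (k1, k2), the principal curvatures.
    # Local frame from PCA: the eigenvector with the smallest eigenvalue
    # approximates the surface normal (its sign is arbitrary).
    centered = neighbors - neighbors.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(centered.T @ centered)
    normal, u, v = eigvecs[:, 0], eigvecs[:, 2], eigvecs[:, 1]

    # Express the neighbors in the tangent frame centered at the query point.
    rel = neighbors - point
    x, y, z = rel @ u, rel @ v, rel @ normal

    # Least-squares quadric fit.
    A = np.column_stack([x**2, x * y, y**2, x, y])
    (a, b, c, d, e), *_ = np.linalg.lstsq(A, z, rcond=None)

    # Shape operator (Weingarten map) at the query point; its eigenvalues
    # are the principal curvatures of the fitted patch.
    E, F, G = 1 + d**2, d * e, 1 + e**2            # first fundamental form
    denom = np.sqrt(1 + d**2 + e**2)
    L, M, N = 2*a / denom, b / denom, 2*c / denom  # second fundamental form
    W = np.linalg.inv(np.array([[E, F], [F, G]])) @ np.array([[L, M], [M, N]])
    k1, k2 = sorted(np.linalg.eigvals(W).real, reverse=True)
    return k1, k2

# Toy check on a unit-sphere patch: both curvature magnitudes should be
# close to 1 (the sign depends on the arbitrary PCA normal orientation).
g = np.linspace(-0.2, 0.2, 15)
xx, yy = np.meshgrid(g, g)
pts = np.column_stack([xx.ravel(), yy.ravel(),
                       np.sqrt(1.0 - xx.ravel()**2 - yy.ravel()**2)])
print(estimate_principal_curvatures(np.array([0.0, 0.0, 1.0]), pts))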

Original language: English
Article number: 876
Journal: Sensors (Switzerland)
Volume: 17
Issue number: 4
DOIs
Publication status: Published - 2017 Apr 16

Bibliographical note

Funding Information:
This research was supported by the Industrial Strategic Technology Development program, 10047635, Development of Hydraulic Robot Control Technology based on Accurate and Fast Force Control for Complex Tasks, funded by the Ministry of Trade, Industry & Energy (MI, Korea).

Publisher Copyright:
© 2017 by the authors. Licensee MDPI, Basel, Switzerland.

All Science Journal Classification (ASJC) codes

  • Analytical Chemistry
  • Information Systems
  • Atomic and Molecular Physics, and Optics
  • Biochemistry
  • Instrumentation
  • Electrical and Electronic Engineering
