With the increasing popularity of egocentric vision applications, studies have focused on hand identification, or ownership decision: the process of disambiguating the camera wearer's hands from other individuals' hands. However, hand ownership decisions for egocentric applications such as manipulating a human–computer interface in the first-person view or operating a machine on the manufacturing floor may require highly personalized training data. Therefore, this paper proposes a hand ownership decision method that uses depth images filtered with the wearer's kinematic constraints to separate the wearer's hands from others' hands in the egocentric view. In this approach, the wearer dons an egocentric camera system on their chest and, given the wearer's kinematic information, hands are detected and arm pose is estimated using a human arm kinematic model. The estimated arm pose is then evaluated against kinematic constraints that describe the range of motion of the wearer's arm. To enhance performance across consecutive frames, a smoothing filter assigns a reward or penalty to each hand in each frame. The pose estimation method is verified with a laser tracker system. The experimental results show that the proposed method can distinguish the wearer's hands from others' hands in real time. Furthermore, a machine-operation application is described to demonstrate the effectiveness of the proposed method.
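The constraint check and reward/penalty smoothing described above can be sketched as follows. This is a minimal illustration only: the joint-angle limits, reward, penalty, and threshold values are assumptions for the example, not the paper's actual parameters or implementation.

```python
# Hypothetical sketch of a range-of-motion check plus per-frame
# reward/penalty smoothing for hand ownership decision.
# All numeric limits and weights below are illustrative assumptions.

ELBOW_LIMITS = (0.0, 150.0)       # assumed elbow flexion range (degrees)
SHOULDER_LIMITS = (-60.0, 180.0)  # assumed shoulder flexion/extension range

def within_constraints(shoulder_deg: float, elbow_deg: float) -> bool:
    """True if the estimated arm pose lies inside the wearer's
    assumed range of motion."""
    lo_s, hi_s = SHOULDER_LIMITS
    lo_e, hi_e = ELBOW_LIMITS
    return lo_s <= shoulder_deg <= hi_s and lo_e <= elbow_deg <= hi_e

class OwnershipFilter:
    """Per-hand score smoothed across consecutive frames: the score is
    rewarded when the estimated pose satisfies the kinematic constraints
    and penalized otherwise; the hand is labeled as the wearer's once
    the score crosses a threshold."""

    def __init__(self, reward=1.0, penalty=2.0, threshold=3.0, cap=10.0):
        self.score = 0.0
        self.reward, self.penalty = reward, penalty
        self.threshold, self.cap = threshold, cap

    def update(self, shoulder_deg: float, elbow_deg: float) -> bool:
        """Process one frame; returns True if the hand is currently
        judged to belong to the wearer."""
        if within_constraints(shoulder_deg, elbow_deg):
            self.score = min(self.score + self.reward, self.cap)
        else:
            self.score = max(self.score - self.penalty, 0.0)
        return self.score >= self.threshold
```

The smoothing keeps a single noisy frame (e.g. a momentary pose-estimation error) from flipping the ownership label, since several consistent frames are needed before the score crosses the threshold.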
Journal: Journal of Ambient Intelligence and Humanized Computing
Published: March 2023
Bibliographical note
Publisher Copyright: © 2023, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
All Science Journal Classification (ASJC) codes
- Computer Science (all)