Who takes what: Using RGB-D camera and inertial sensor for unmanned monitor

Hsin Wei Kao, Ting Yuan Ke, Ching-Ju Lin, Yu-Chee Tseng

Research output: Conference contribution (Chapter in Book/Report/Conference proceeding)

Abstract

Advanced Internet of Things (IoT) techniques have made human-environment interaction much easier. Existing solutions usually enable such interactions without knowing the identities of the action performers. However, identifying the users who interact with an environment is key to enabling personalized services. To provide such an add-on service, we propose WTW (Who Takes What), a system that identifies which user takes which object. Unlike traditional vision-based approaches, which are typically vulnerable to occlusion, WTW combines features from three types of data, i.e., images, skeletons, and IMU readings, to enable reliable user-object matching and identification. By correlating the moving trajectory of a user, monitored by inertial sensors, with the movement of an object recorded in the video, WTW reliably identifies a user and matches him/her with the object being taken. Our prototype evaluation shows that WTW achieves a recognition rate of over 90% even in a crowd. The system remains reliable even when users stand close together and take objects at roughly the same time.
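The core matching idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual pipeline; the function names and the use of simple Pearson correlation over motion signals are assumptions for illustration. The idea: each user wears an inertial sensor producing a motion signal, the camera produces a motion signal per tracked object, and each object is assigned to the user whose signal correlates most strongly with it.

```python
import numpy as np

def normalized_correlation(a, b):
    """Pearson correlation between two equal-length motion signals."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def match_users_to_objects(user_signals, object_signals):
    """Assign each object to the user whose inertial motion signal
    best correlates with the object's visually tracked motion signal.

    user_signals:   {user_id: 1-D numpy array of IMU motion magnitude}
    object_signals: {object_id: 1-D numpy array of object motion from video}
    Returns {object_id: user_id}.
    """
    matches = {}
    for obj_id, obj_sig in object_signals.items():
        best_user = max(
            user_signals,
            key=lambda u: normalized_correlation(user_signals[u], obj_sig),
        )
        matches[obj_id] = best_user
    return matches
```

In practice the paper's system would also need time synchronization between camera and IMU streams and more robust features (e.g., skeleton joints), but the correlation-based assignment above captures the matching principle.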

Original language: English
Title of host publication: 2019 International Conference on Robotics and Automation, ICRA 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 8063-8069
Number of pages: 7
ISBN (Electronic): 9781538660263
DOI: 10.1109/ICRA.2019.8793858
State: Published - 1 May 2019
Event: 2019 International Conference on Robotics and Automation, ICRA 2019 - Montreal, Canada
Duration: 20 May 2019 - 24 May 2019

Publication series

Name: Proceedings - IEEE International Conference on Robotics and Automation
Volume: 2019-May
ISSN (Print): 1050-4729

Conference

Conference: 2019 International Conference on Robotics and Automation, ICRA 2019
Country: Canada
City: Montreal
Period: 20/05/19 - 24/05/19


Cite this

Kao, H. W., Ke, T. Y., Lin, C.-J., & Tseng, Y.-C. (2019). Who takes what: Using RGB-D camera and inertial sensor for unmanned monitor. In 2019 International Conference on Robotics and Automation, ICRA 2019 (pp. 8063-8069). [8793858] (Proceedings - IEEE International Conference on Robotics and Automation; Vol. 2019-May). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICRA.2019.8793858