Who takes what: Using RGB-D camera and inertial sensor for unmanned monitor

Hsin Wei Kao, Ting Yuan Ke, Ching-Ju Lin, Yu-Chee Tseng

Research output: Conference contribution › Peer-reviewed

1 Citation (Scopus)

Abstract

Advanced Internet of Things (IoT) techniques have made human-environment interaction much easier. Existing solutions usually enable such interactions without knowing the identities of the action performers. However, identifying the users who interact with an environment is key to enabling personalized services. To provide such an add-on service, we propose WTW (who takes what), a system that identifies which user takes which object. Unlike traditional vision-based approaches, which are typically vulnerable to occlusion, WTW combines the features of three types of data, i.e., images, skeletons, and IMU data, to enable reliable user-object matching and identification. By correlating a user's moving trajectory, monitored by inertial sensors, with the movement of an object recorded in the video, WTW reliably identifies the user and matches him/her with the object in action. Our prototype evaluation shows that WTW achieves a recognition rate of over 90% even in a crowd. The system remains reliable even when users stand close to one another and take objects at roughly the same time.
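The full paper details WTW's actual matching algorithm; as a rough illustrative sketch of the trajectory-correlation idea described in the abstract (not the authors' implementation), one could match each video-tracked object to the user whose IMU-derived motion signal correlates best with the object's motion. The function names and the simplification to 1-D motion signals are assumptions made for illustration.

```python
# Hypothetical sketch of user-object matching by trajectory correlation.
# Each trajectory is simplified to a 1-D motion signal (e.g., speed over
# time); WTW itself fuses richer image, skeleton, and IMU features.
import numpy as np

def pearson(a, b):
    """Pearson correlation between two equal-length 1-D signals."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def match_object_to_user(object_traj, user_trajs):
    """Return the ID of the user whose inertial motion signal best
    correlates with the object's video-tracked motion signal."""
    scores = {uid: pearson(object_traj, traj)
              for uid, traj in user_trajs.items()}
    return max(scores, key=scores.get)
```

For example, an object whose motion mirrors one user's IMU signal and opposes another's would be assigned to the first user.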

Original language: English
Title of host publication: 2019 International Conference on Robotics and Automation, ICRA 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 8063-8069
Number of pages: 7
ISBN (electronic): 9781538660263
DOIs
Publication status: Published - 20 May 2019
Event: 2019 International Conference on Robotics and Automation, ICRA 2019 - Montreal, Canada
Duration: 20 May 2019 → 24 May 2019

Publication series

Name: Proceedings - IEEE International Conference on Robotics and Automation
Volume: 2019-May
ISSN (print): 1050-4729

Conference

Conference: 2019 International Conference on Robotics and Automation, ICRA 2019
Country: Canada
City: Montreal
Period: 20/05/19 → 24/05/19

