Vision-based adaptive grasping of a humanoid robot arm

Kai-Tai Song*, Shih Cheng Tsai

*Corresponding author for this work

Research output: Conference contribution › Peer-reviewed

12 citations (Scopus)

Abstract

This paper presents a motion-planning and control design for a humanoid robot arm that performs vision-based grasping in an obstructed environment. A Kinect depth camera is used to recognize and locate the target object in the environment so that it can be grasped in real time. First, the gradient direction of the depth image is used to segment the environment into several planes. Then, speeded-up robust features (SURF) are used to match features in the segmented planes and locate the target object. This approach speeds up the matching operation by reducing the image area that must be searched. Moreover, this study proposes a design for safe operation of the robot arm in an unknown environment. Two safety indices are designed to improve the robustness of grasping in an obstructed environment. One index quantifies the degree of influence of obstacles on the manipulator. The other classifies the workspace into three regions: safe, uncertain, and dangerous. Using these indices, the robot moves toward safe regions via potential-field motion planning. Practical experiments show that the six-degree-of-freedom robot arm can effectively avoid obstacles and complete the grasping task.
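
The abstract does not include code, but the plane-restricted matching step can be illustrated with a short sketch. The following is a minimal example, not the authors' implementation: the function name match_in_plane, the Hessian threshold, and the Lowe ratio value are assumptions, and SURF is only available in OpenCV contrib builds compiled with the non-free modules enabled.

```python
# Minimal sketch (not the authors' code) of matching SURF features inside one
# segmented plane instead of the whole image; requires an OpenCV contrib build
# with the non-free xfeatures2d module enabled.
import cv2

def match_in_plane(scene_gray, plane_mask, template_gray, ratio=0.75):
    """Match the target template against a single segmented plane of the scene.

    scene_gray    -- grayscale scene image aligned with the depth image
    plane_mask    -- uint8 mask, 255 inside the segmented plane, 0 elsewhere
    template_gray -- grayscale image of the target object
    """
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

    # Restricting detection to the plane mask is what shrinks the search area.
    kp_scene, des_scene = surf.detectAndCompute(scene_gray, plane_mask)
    _, des_tmpl = surf.detectAndCompute(template_gray, None)
    if des_scene is None or des_tmpl is None:
        return [], []

    # Brute-force L2 matching with Lowe's ratio test to keep reliable matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_tmpl, des_scene, k=2)
    good = []
    for pair in pairs:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return good, kp_scene
```

Running this over each segmented plane and keeping the plane with the most good matches would give the target's image location (via the matched keypoints in kp_scene), which the depth image could then convert to a 3-D grasp point.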

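The safety-region idea can likewise be sketched with a classic attractive/repulsive potential field. The abstract does not give the exact form of the two indices, so the thresholds D_DANGER and D_SAFE, the gains, and the FIRAS-style repulsive term below are illustrative assumptions, not the paper's formulas.

```python
# Minimal sketch (illustrative assumptions, not the paper's formulation) of how
# the safe / uncertainty / danger regions could gate a potential-field planner
# for the end-effector position.
import numpy as np

D_DANGER, D_SAFE = 0.10, 0.30   # region boundaries in metres (assumed values)
K_ATT, K_REP = 1.0, 0.05        # attractive / repulsive gains (assumed values)

def classify(dist):
    """Label the region from the distance to the nearest point of an obstacle."""
    if dist < D_DANGER:
        return "danger"
    if dist < D_SAFE:
        return "uncertainty"
    return "safe"

def potential_field_step(p, goal, obstacles, step=0.01):
    """Move the end-effector position p one gradient-descent step toward goal."""
    p = np.asarray(p, dtype=float)
    force = -K_ATT * (p - np.asarray(goal, dtype=float))   # attraction to goal

    for obs in obstacles:
        diff = p - np.asarray(obs, dtype=float)
        d = float(np.linalg.norm(diff))
        region = classify(d)
        if region == "safe":
            continue                 # obstacle too far away to influence the arm
        # FIRAS-style repulsion: grows sharply as the distance approaches zero.
        gain = K_REP * (1.0 / d - 1.0 / D_SAFE) / d**3
        if region == "danger":
            gain *= 10.0             # push much harder inside the danger region
        force += gain * diff

    return p + step * force
```

In practice, each Cartesian waypoint produced this way would still have to be mapped to joint angles of the six-DOF arm by an inverse-kinematics solver.
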
Original language: English
Title of host publication: 2012 IEEE International Conference on Automation and Logistics, ICAL 2012
Pages: 155-160
Number of pages: 6
DOIs
Publication status: Published - 1 November 2012
Event: 2012 IEEE International Conference on Automation and Logistics, ICAL 2012 - Zhengzhou, China
Duration: 15 August 2012 – 17 August 2012

Publication series

Name: IEEE International Conference on Automation and Logistics, ICAL
ISSN (Print): 2161-8151

Conference

Conference: 2012 IEEE International Conference on Automation and Logistics, ICAL 2012
Country: China
City: Zhengzhou
Period: 15/08/12 – 17/08/12
