Retrieving 3D objects with articulated limbs by depth image input

Jun Yang Lin, May Fang She, Ming Han Tsai, I-Chen Lin, Yo Chung Lau, Hsu Hang Liu

Research output: Conference contribution › peer-review

3 Citations (Scopus)

Abstract

Existing 3D model retrieval approaches usually implicitly assume that the target models are rigid bodies. When they are applied to retrieving articulated models, the retrieved results are substantially influenced by the model postures. This paper presents a novel approach to retrieve 3D models from a database based on one or a few input depth images. Whereas related methods compare the inputs with the whole shapes of 3D model projections at certain viewpoints, the proposed method extracts the limb and torso regions from the projections and analyzes the features of these local regions. The use of both global and local features alleviates the disturbance of model postures in model retrieval. Therefore, the system can retrieve models of an identical category but in different postures. Our experiments demonstrate that this approach can efficiently retrieve relevant models within a second, and that it provides higher retrieval accuracy than the compared methods for both rigid 3D models and models with articulated limbs.
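As a rough illustration only (not taken from the paper), the sketch below shows one way global projection descriptors and local limb/torso region descriptors could be combined into a single distance for ranking database models. The descriptor shapes, the weighting scheme, and the database layout are all assumptions made for this example.

```python
import numpy as np

# Hypothetical sketch: rank database models by a weighted combination of a
# global projection descriptor distance and local (limb/torso) region
# descriptor distances. All names, weights, and descriptor sizes are
# illustrative assumptions, not the paper's actual implementation.

def combined_distance(query, entry, w_local=0.5):
    """Weighted sum of the global distance and the averaged local distances."""
    d_global = np.linalg.norm(query["global"] - entry["global"])
    # Match each query region to its closest region in the database entry,
    # so that a change of posture (region order/placement) matters less.
    d_locals = [
        min(np.linalg.norm(q - e) for e in entry["locals"])
        for q in query["locals"]
    ]
    d_local = float(np.mean(d_locals)) if d_locals else 0.0
    return (1.0 - w_local) * d_global + w_local * d_local

def retrieve(query, database, top_k=5):
    """Return the top_k database models closest to the query descriptors."""
    ranked = sorted(database, key=lambda entry: combined_distance(query, entry))
    return ranked[:top_k]

# Toy usage with random vectors standing in for real depth-image features.
rng = np.random.default_rng(0)
make = lambda: {"global": rng.random(64), "locals": [rng.random(32) for _ in range(5)]}
db = [dict(make(), name=f"model_{i}") for i in range(100)]
print([m["name"] for m in retrieve(make(), db, top_k=3)])
```

The best-match pairing of local regions is one simple way to reduce sensitivity to posture; the actual region extraction and feature analysis in the paper are not reproduced here.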

Original language: English
Title of host publication: GRAPP
Editors: Ana Paula Claudio, Dominique Bechmann, Jose Braz
Publisher: SciTePress
Pages: 101-111
Number of pages: 11
ISBN (Electronic): 9789897582875
DOIs
Publication status: Published - 1 January 2018
Event: 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2018 - Funchal, Madeira, Portugal
Duration: 27 January 2018 - 29 January 2018

Publication series

Name: VISIGRAPP 2018 - Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Volume: 1

Conference

Conference: 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2018
Country: Portugal
City: Funchal, Madeira
Period: 27/01/18 - 29/01/18

