Alignment of deep features in 3D models for camera pose estimation

Jui Yuan Su*, Shyi Chyi Cheng, Chin Chun Chang, Jun-Wei Hsieh

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Peer-reviewed


Using a set of semantically annotated RGB-D images with known camera poses, many existing 3D reconstruction algorithms can integrate these images into a single 3D model of the scene. The semantically annotated scene model facilitates the construction of a video surveillance system using a moving camera, provided we can efficiently compute the depth maps of the captured images and estimate the poses of the camera. The proposed model-based video surveillance system consists of two phases, i.e., the modeling phase and the inspection phase. In the modeling phase, we carefully calibrate the parameters of the camera that captures the multi-view video for modeling the target 3D scene. In the inspection phase, however, the camera pose parameters and the depth maps of the captured RGB images are often unknown or noisy when we use a moving camera to inspect the completeness of the object. In this paper, the 3D model is first transformed into a colored point cloud, which is then indexed by clustering—with each cluster representing a surface fragment of the scene. The clustering results are then used to train a model-specific convolutional neural network (CNN) that annotates each pixel of an input RGB image with the correct fragment class. The prestored camera parameters and depth information of the fragment classes are then fused together to estimate the depth map and the camera pose of the current input RGB image. The experimental results show that the proposed approach outperforms the compared methods in terms of the accuracy of camera pose estimation.
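The two geometric building blocks the abstract relies on—clustering a colored point cloud into surface fragments, and recovering a rigid camera pose from fragment correspondences—can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the fragment indexing here is plain k-means over 6-D (XYZ + RGB) points, and the pose step is the generic Kabsch algorithm on matched 3D centroids, standing in for the paper's fusion of prestored depth with CNN fragment labels.

```python
import numpy as np

def kmeans_fragments(points, k, iters=20, seed=0):
    """Cluster a colored point cloud (N x 6: XYZ + RGB) into k clusters,
    each a candidate surface fragment. Plain k-means over the raw 6-D
    vectors; a real system would weight position vs. color differently."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest fragment center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers; keep the old center for empty clusters.
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = points[mask].mean(axis=0)
    return labels, centers

def kabsch_pose(model_pts, observed_pts):
    """Least-squares rigid transform (R, t) mapping model fragment
    centroids onto their observed 3D positions (Kabsch algorithm)."""
    mu_m = model_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_o - R @ mu_m
    return R, t
```

In the paper's setting, the observed 3D positions would come from back-projecting CNN-labeled pixels with the prestored fragment depth, rather than being given directly as they are here.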

Original language: English
Title of host publication: MultiMedia Modeling - 25th International Conference, MMM 2019, Proceedings
Editors: Benoit Huet, Ioannis Kompatsiaris, Stefanos Vrochidis, Vasileios Mezaris, Wen-Huang Cheng, Cathal Gurrin
Publisher: Springer Verlag
Number of pages: 13
ISBN (Print): 9783030057152
State: Published - 1 Jan 2019
Event: 25th International Conference on MultiMedia Modeling, MMM 2019 - Thessaloniki, Greece
Duration: 8 Jan 2019 – 11 Jan 2019

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11296 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 25th International Conference on MultiMedia Modeling, MMM 2019


Keywords

  • 3D model
  • 3D point cloud clustering
  • Camera pose estimation
  • Deep learning
  • Unsupervised fragment classification

