Practical error analysis of cross-ratio-based planar localization

Jen-Hui Chuang*, Jau Hong Kao, Horag Horng Lin, Yu Ting Chiu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review



Error analysis has attracted increasing attention in computer vision as a means of meeting the accuracy requirements of different applications. As a geometric invariant under projective transformations, the cross-ratio is the basis of many recognition and reconstruction algorithms built on projective geometry. We propose an efficient way of analyzing localization error for computer vision systems that use cross-ratios in planar localization. By studying the inaccuracy associated with cross-ratio-based computations, we examine the possibility of using a linear transformation to approximate the localization error caused by 2-D noise in the image extraction of reference points. Based on this computationally efficient analysis, we develop a practical way of choosing point features in an image so as to establish the probabilistically most accurate cross-ratio-based planar localization system.
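The abstract rests on two facts: the cross-ratio of four collinear points is unchanged by any projective transformation, and small 2-D extraction noise propagates into the cross-ratio approximately linearly (first-order, via the gradient). A minimal sketch of both ideas — the point configuration, noise level, and comparison against a Monte Carlo estimate are illustrative assumptions, not the paper's actual experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_ratio(t):
    """Cross-ratio of four collinear points given by line parameters t = (t1..t4)."""
    t1, t2, t3, t4 = t
    return ((t3 - t1) * (t4 - t2)) / ((t3 - t2) * (t4 - t1))

# Nominal positions of four collinear reference points (illustrative values).
t0 = np.array([0.0, 1.0, 2.0, 4.0])

# Projective invariance: a projective map of the line acts on the parameter
# as t -> (a*t + b) / (c*t + d); the cross-ratio is unchanged.
a, b, c, d = 1.2, 0.3, 0.05, 1.0
t_mapped = (a * t0 + b) / (c * t0 + d)
assert np.isclose(cross_ratio(t0), cross_ratio(t_mapped))

# Linearized error model: var(CR) ~= g^T Sigma g with g the gradient of the
# cross-ratio at t0 and Sigma = sigma^2 * I the extraction-noise covariance.
sigma = 0.01  # assumed std. dev. of extraction noise per point
eps = 1e-6
g = np.array([(cross_ratio(t0 + eps * e) - cross_ratio(t0 - eps * e)) / (2 * eps)
              for e in np.eye(4)])  # central-difference gradient
var_linear = sigma**2 * np.dot(g, g)

# Monte Carlo check: perturb the points and measure the empirical variance.
noisy = t0[:, None] + sigma * rng.standard_normal((4, 20000))
var_mc = cross_ratio(noisy).var()
print(var_linear, var_mc)  # the two variance estimates agree closely
```

For small noise the linear estimate tracks the sampled variance well, which is what makes a computationally cheap, transformation-based error analysis (and the resulting error ellipses) viable for choosing reference points.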

Original language: English
Title of host publication: Advances in Image and Video Technology - Second Pacific Rim Symposium, PSIVT 2007, Proceedings
Number of pages: 10
State: Published - 1 Dec 2007
Event: 2nd IEEE Pacific Rim Symposium on Video and Image Technology, PSIVT 2007 - Santiago, Chile
Duration: 17 Dec 2007 - 19 Dec 2007

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 4872 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 2nd IEEE Pacific Rim Symposium on Video and Image Technology, PSIVT 2007


Keywords:
  • Cross-ratio
  • Error analysis
  • Error ellipse
  • Robot localization

