Unsupervised auxiliary visual words discovery for large-scale image object retrieval

Yin-Hsi Kuo*, Hsuan-Tien Lin, Wen-Huang Cheng, Yi-Hsuan Yang, Winston H. Hsu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

14 Scopus citations

Abstract

Image object retrieval, i.e., locating occurrences of specific objects in large-scale image collections, is essential for managing the sheer volume of photos. Current solutions, mostly based on the bag-of-words model, suffer from low recall and are not robust to noise caused by changes in lighting, viewpoint, and even occlusion. We propose to augment each image with auxiliary visual words (AVWs) that are semantically relevant to the search targets. The AVWs are automatically discovered by feature propagation and selection over textual and visual image graphs in an unsupervised manner. We investigate various optimization methods for effectiveness and scalability in large-scale image collections. Experimenting on large-scale consumer photos, we found that the proposed method significantly improves on the traditional bag-of-words model (by 111% relatively). Meanwhile, the selection process also notably reduces the number of features (to 1.4%) and can further facilitate indexing in large-scale image object retrieval.
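The propagation-and-selection idea from the abstract can be illustrated with a minimal sketch. This is not the authors' algorithm: the function name `propagate_avw`, the single combined adjacency matrix, and the top-k selection rule are simplifying assumptions made here for illustration. Each image's bag-of-words histogram receives a weighted mix of its graph neighbors' histograms, and only the strongest propagated words absent from the original histogram are kept as auxiliary visual words.

```python
import numpy as np

def propagate_avw(bow, adjacency, alpha=0.5, top_k=3):
    """Toy sketch of auxiliary-visual-word discovery (hypothetical, simplified).

    bow:       (n_images, n_words) bag-of-words matrix.
    adjacency: (n_images, n_images) image-graph weights, e.g. a combination
               of textual and visual similarity (assumed precomputed).
    alpha:     propagation weight for neighbor features (assumed parameter).
    top_k:     number of auxiliary words kept per image (the selection step).
    """
    bow = np.asarray(bow, dtype=float)
    adjacency = np.asarray(adjacency, dtype=float)

    # Row-normalize the graph so each image averages over its neighbors.
    row_sums = adjacency.sum(axis=1, keepdims=True)
    w = np.divide(adjacency, row_sums,
                  out=np.zeros_like(adjacency), where=row_sums > 0)

    # Propagation: each image receives a weighted mix of neighbor histograms.
    propagated = alpha * (w @ bow)

    # Selection: keep only the top-k propagated words the image lacks.
    augmented = bow.copy()
    candidates = np.where(bow > 0, -np.inf, propagated)
    for i in range(bow.shape[0]):
        for j in np.argsort(candidates[i])[::-1][:top_k]:
            if candidates[i, j] > 0:
                augmented[i, j] = propagated[i, j]
    return augmented
```

In this toy form, the original visual words are preserved and the augmented histogram stays sparse, which is what allows the selection step to keep the inverted index small.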

Original language: English
Title of host publication: 2011 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2011
Publisher: IEEE Computer Society
Pages: 905-912
Number of pages: 8
ISBN (Print): 9781457703942
DOIs
State: Published - 1 Jan 2011

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN (Print): 1063-6919

