Knowledge leverage from contours to bounding boxes: A concise approach to annotation

Jie Zhi Cheng*, Feng Ju Chang, Kuang Jui Hsu, Yen-Yu Lin

*Corresponding author for this work

Research output: Conference contribution (chapter in book/conference proceedings, peer-reviewed)



In class-based image segmentation, a major concern is providing the large amount of training data needed to learn complex graphical models. To alleviate the labeling effort, we introduce a concise annotation approach that works on bounding boxes. The main idea is to leverage the knowledge learned from a few object contours to infer the unknown contours inside bounding boxes. To this end, we incorporate the bounding-box prior into the framework of multiple image segmentations to generate a set of distinctive tight segments, under the condition that at least one tight segment closely approximates the true object contour. A good tight segment is then selected via semi-supervised regression, which carries the augmented knowledge transferred from object contours to bounding boxes. Experimental results on the challenging Pascal VOC dataset corroborate that our annotation method can potentially replace manual annotation.
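The abstract's pipeline can be illustrated with a minimal sketch: generate candidate segments inside a bounding box, keep only the "tight" ones (segments whose foreground touches all four box sides, as the box prior demands), and select the best candidate with a scoring function. This is an assumption-laden simplification: the `prob_map` input, the threshold-based candidate generator, and the `score_fn` stand-in for the paper's semi-supervised regressor are all hypothetical, not the authors' actual method.

```python
import numpy as np

def tight_candidates(prob_map, box, thresholds=(0.3, 0.5, 0.7)):
    """Generate candidate segments inside a bounding box and keep the
    tight ones, i.e. masks whose foreground touches all four box sides.

    prob_map: HxW array of foreground probabilities (hypothetical input).
    box: (top, left, bottom, right), bottom/right exclusive.
    """
    t, l, b, r = box
    roi = prob_map[t:b, l:r]
    candidates = []
    for thr in thresholds:
        mask = roi >= thr
        if not mask.any():
            continue
        # Tightness check: if the segment misses any side, the given box
        # would not be the object's tight bounding box.
        if mask[0].any() and mask[-1].any() and mask[:, 0].any() and mask[:, -1].any():
            candidates.append((thr, mask))
    return candidates

def select_segment(candidates, score_fn):
    """Pick the highest-scoring candidate. In the paper the score comes
    from a semi-supervised regressor trained on a few annotated contours;
    here score_fn is a placeholder."""
    return max(candidates, key=lambda c: score_fn(c[1]))
```

For example, a probability map whose foreground fills the box passes the tightness check at low thresholds, while a high threshold that leaves only an interior blob is rejected, so only plausible contour hypotheses reach the selection stage.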

Original language: English
Title of host publication: Computer Vision, ACCV 2012 - 11th Asian Conference on Computer Vision, Revised Selected Papers
Number of pages: 15
Edition: PART 1
State: Published - 11 Apr 2013
Event: 11th Asian Conference on Computer Vision, ACCV 2012 - Daejeon, Korea, Republic of
Duration: 5 Nov 2012 - 9 Nov 2012

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 1
Volume: 7724 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 11th Asian Conference on Computer Vision, ACCV 2012
Country: Korea, Republic of
