Segmentation guided local proposal fusion for co-saliency detection

Chung-Chi Tsai, Xiaoning Qian, Yen-Yu Lin

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

8 Scopus citations

Abstract

We address two issues hindering existing image co-saliency detection methods. First, although object boundaries have been shown to help improve saliency detection, segmentation may suffer from significant intra-object variations. Second, aggregating the strengths of different saliency proposals via fusion helps saliency detection cover entire object areas; however, the optimal proposal fusion often varies from region to region, and the fusion process may lead to blurred results. Object segmentation and region-wise proposal fusion are complementary, and a unified approach can address both issues. To the best of our knowledge, our proposed segmentation-guided locally adaptive proposal fusion is the first such effort for image co-saliency detection. Specifically, it leverages both object-aware segmentation evidence and region-wise consensus among saliency proposals by solving a joint co-saliency and co-segmentation energy optimization problem over a graph. Our approach is evaluated on a benchmark dataset and compared to state-of-the-art methods. Promising results demonstrate its effectiveness and superiority.
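The joint optimization described in the abstract alternates between updating region-wise fusion weights and segmentation labels over a graph. The paper's exact energy terms are not given here, so the following is only a minimal illustrative sketch under assumed definitions: `proposals` holds per-region saliency scores from multiple proposals, `adjacency` is a region adjacency graph, and the three update steps (fusion, graph-smoothed segmentation, re-weighting by agreement with segmentation evidence) are hypothetical stand-ins for the paper's alternating optimization.

```python
import numpy as np

def alternating_cosaliency(proposals, adjacency, n_iters=10, lam=0.5):
    """Illustrative sketch (not the paper's formulation) of alternating
    optimization between region-wise proposal fusion and segmentation.

    proposals: (n_regions, n_proposals) saliency scores in [0, 1].
    adjacency: symmetric (n_regions, n_regions) 0/1 region graph.
    """
    n_regions, n_props = proposals.shape
    # Start with uniform per-region fusion weights and a thresholded init.
    weights = np.full((n_regions, n_props), 1.0 / n_props)
    labels = (proposals.mean(axis=1) > 0.5).astype(float)

    for _ in range(n_iters):
        # Step 1: fused co-saliency map given current region-wise weights.
        saliency = (weights * proposals).sum(axis=1)

        # Step 2: update segmentation labels from graph-smoothed saliency,
        # mimicking a pairwise smoothness term over the region graph.
        deg = adjacency.sum(axis=1).clip(min=1)
        smoothed = (adjacency @ saliency) / deg
        labels = ((1 - lam) * saliency + lam * smoothed > 0.5).astype(float)

        # Step 3: re-weight each proposal by its agreement with the
        # current segmentation evidence (locally adaptive fusion).
        agreement = 1.0 - np.abs(proposals - labels[:, None])
        weights = agreement / agreement.sum(axis=1, keepdims=True)

    return (weights * proposals).sum(axis=1), labels

# Toy usage: three regions, two saliency proposals, a chain-graph adjacency.
proposals = np.array([[0.9, 0.8], [0.85, 0.7], [0.1, 0.2]])
adjacency = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
saliency, labels = alternating_cosaliency(proposals, adjacency)
```

The key design point the sketch mirrors is that fusion weights are per region rather than global, so regions where one proposal agrees better with the segmentation evidence rely more on that proposal.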
Original language: American English
Title of host publication: 2017 IEEE International Conference on Multimedia and Expo (ICME)
Publisher: IEEE Computer Society
Pages: 523-528
Number of pages: 6
ISBN (Print): 9781509060672
DOIs
State: Published - 28 Aug 2017

Publication series

Name: Proceedings - IEEE International Conference on Multimedia and Expo

Keywords

  • Adaptive fusion
  • Alternating optimization
  • Co-saliency
  • Co-segmentation
  • Energy minimization

