Deep Co-Saliency Detection via Stacked Autoencoder-Enabled Fusion and Self-Trained CNNs

Chung-Chi Tsai, Kuang-Jui Hsu, Yen-Yu Lin*, Xiaoning Qian, Yung-Yu Chuang

*Corresponding author for this work

Research output: Contribution to journal › Article

Abstract

Image co-saliency detection via fusion-based or learning-based methods faces cross-cutting issues. Fusion-based methods often combine saliency proposals using a majority voting rule, so their performance depends heavily on the quality and coherence of the individual proposals. Learning-based methods typically require ground-truth annotations for training, which are not available for co-saliency detection. In this work, we present a two-stage approach that addresses these issues jointly. In the first stage, an unsupervised deep learning model with a stacked autoencoder (SAE) is proposed to evaluate the quality of saliency proposals. It employs latent representations for image foregrounds, and auto-encodes foreground consistency and foreground-background distinctiveness in a discriminative way. The resultant model, SAE-enabled fusion (SAEF), can combine multiple saliency proposals to yield a more reliable saliency map. In the second stage, motivated by the fact that fusion often leads to over-smoothed saliency maps, we develop self-trained convolutional neural networks (STCNN) to alleviate this negative effect. STCNN takes the saliency maps produced by SAEF as inputs and propagates information from regions of high confidence to those of low confidence. During propagation, feature representations are distilled, resulting in sharper and better co-saliency maps. Our approach is comprehensively evaluated on three benchmarks, MSRC, iCoseg, and Cosal2015, and performs favorably against state-of-the-art methods. In addition, we demonstrate that our method can be applied to object co-segmentation and object co-localization, achieving state-of-the-art performance in both applications.
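To make the two-stage idea in the abstract concrete, the sketch below illustrates (a) scoring saliency proposals by the reconstruction residual of an autoencoder over foreground features and fusing them with residual-derived weights, and (b) one self-training step in which high- and low-confidence pixels of the fused map serve as pseudo-labels for a CNN. This is a minimal illustration, not the authors' code: PyTorch is assumed, and the feature dimension, network sizes, confidence thresholds, and helper names are hypothetical choices.

```python
# Hedged sketch of SAE-based proposal fusion and CNN self-training.
# All architecture sizes and thresholds are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StackedAE(nn.Module):
    """Small stacked autoencoder over per-region foreground feature vectors.
    In practice it would first be trained unsupervisedly on foreground
    features pooled from all proposals of the image group."""
    def __init__(self, dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 16))
        self.dec = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, dim))

    def forward(self, x):
        return self.dec(self.enc(x))


def proposal_weights(sae, fg_feats_per_proposal):
    """Score each proposal by its foreground reconstruction residual:
    a lower residual suggests a more coherent foreground, hence a larger
    fusion weight."""
    residuals = []
    for feats in fg_feats_per_proposal:          # feats: (num_regions, dim)
        rec = sae(feats)
        residuals.append(F.mse_loss(rec, feats))
    residuals = torch.stack(residuals)
    return F.softmax(-residuals, dim=0)          # weights sum to 1


def fuse(proposal_maps, weights):
    """Weighted fusion of saliency proposals (each map: H x W in [0, 1])."""
    stacked = torch.stack(proposal_maps)         # (num_proposals, H, W)
    return (weights.view(-1, 1, 1) * stacked).sum(dim=0)


def self_train_step(cnn, image, fused_map, optimizer, hi=0.8, lo=0.2):
    """One self-training update: pixels the fused map is confident about
    become pseudo-labels; uncertain pixels are ignored."""
    pseudo = torch.full_like(fused_map, -1.0)    # -1 marks "ignore"
    pseudo[fused_map >= hi] = 1.0                # confident foreground
    pseudo[fused_map <= lo] = 0.0                # confident background
    pred = torch.sigmoid(cnn(image.unsqueeze(0))).squeeze()
    mask = pseudo >= 0
    loss = F.binary_cross_entropy(pred[mask], pseudo[mask])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, the fusion weights replace a majority vote with a data-driven quality score, and the ignore-region in the pseudo-labels is what lets the CNN propagate information from confident to uncertain pixels, yielding sharper maps than the fused input.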

Original language: English
Article number: 8809285
Pages (from-to): 1016-1031
Number of pages: 16
Journal: IEEE Transactions on Multimedia
Volume: 22
Issue number: 4
DOIs
State: Published - Apr 2020

Keywords

  • adaptive fusion
  • CNNs
  • Co-saliency detection
  • optimization
  • reconstruction residual
  • self-paced learning
  • stacked autoencoder
