We present a novel computational model for simultaneous image co-saliency detection and co-segmentation that concurrently explores the concepts of saliency and objectness in multiple images. It has been shown that co-saliency detection via aggregating multiple saliency proposals driven by diverse visual cues can better highlight salient objects; however, the optimal proposals are typically region dependent, and the fusion process often leads to blurred results. Co-segmentation can help preserve object boundaries, but it may struggle with complex scenes. To address these issues, we develop a unified method that solves co-saliency detection and co-segmentation jointly via energy minimization over a graph. Our method iteratively carries out region-wise adaptive saliency map fusion and object segmentation to transfer useful information between the two complementary tasks. Through the optimization iterations, sharp saliency maps are gradually obtained that recover entire salient objects by referring to the object segmentations, while these segmentations are progressively improved owing to the better saliency priors. We evaluate our method on four public benchmark datasets and compare it with state-of-the-art methods. Extensive experiments demonstrate that our method consistently provides higher-quality results on both co-saliency detection and co-segmentation.
- Co-saliency detection
- Energy minimization
- Joint optimization
- Locally adaptive proposal fusion
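The alternation described in the abstract, between region-wise adaptive proposal fusion and segmentation, can be sketched in miniature. The snippet below is a toy illustration under assumed simplifications, not the paper's actual formulation: proposal weights are set by agreement with the current mask (standing in for region-wise adaptive fusion), and the segmentation update is a plain threshold (standing in for graph-based energy minimization). The function name and all parameters are hypothetical.

```python
import numpy as np

def fuse_and_segment(proposals, n_iters=5, thresh=0.5):
    """Toy alternation between saliency fusion and segmentation.

    proposals: list of 2-D saliency maps in [0, 1] for one image.
    NOTE: a crude stand-in for the paper's joint optimization; the
    real method minimizes an energy defined over a graph.
    """
    stack = np.stack(proposals)          # (K, H, W) proposal stack
    fused = stack.mean(axis=0)           # start from uniform fusion
    mask = fused > thresh
    for _ in range(n_iters):
        # Weight each proposal by how well it separates the current
        # foreground from the background (agreement with the mask).
        if mask.any() and (~mask).any():
            agree = np.array([p[mask].mean() - p[~mask].mean()
                              for p in stack])
        else:
            agree = np.zeros(len(stack))
        w = np.clip(agree, 1e-6, None)
        w = w / w.sum()
        fused = np.tensordot(w, stack, axes=1)  # weighted fusion
        mask = fused > thresh                   # segmentation update
    return fused, mask
```

In this sketch, proposals that agree with the evolving segmentation gain weight, so the fused map sharpens around the segmented object, mirroring (very loosely) how the two tasks exchange information across iterations in the paper.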