Co-segmentation for space-time co-located collections

Hadar Averbuch-Elor*, Johannes Kopf, Tamir Hazan, Daniel Cohen-Or

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

We present a co-segmentation technique for space-time co-located image collections. These prevalent collections capture various dynamic events, usually by multiple photographers, and may contain multiple co-occurring objects which are not necessarily part of the intended foreground object, resulting in ambiguities for traditional co-segmentation techniques. Thus, to disambiguate what the common foreground object is, we introduce a weakly supervised technique, where we assume only a small seed, given in the form of a single segmented image. We take a distributed approach, where local belief models are propagated and reinforced with similar images. Our technique progressively expands the foreground and background belief models across the entire collection. The technique exploits the power of the entire set of images without building a global model, and thus successfully overcomes large variability in the appearance of the common foreground object. We demonstrate that our method outperforms previous co-segmentation techniques on challenging space-time co-located collections, including dense benchmark datasets which were adapted for our novel problem setting.
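
The abstract describes the propagation scheme only at a high level. Below is a minimal, self-contained sketch of the general idea, not the paper's actual algorithm: it assumes images are given as H×W×3 uint8 arrays and substitutes simple quantized color histograms for the paper's local belief models. All function names (color_histogram, co_segment, etc.) are illustrative and hypothetical, not taken from the published implementation.

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Quantize RGB pixels into a normalized joint color histogram.
    Stand-in for the paper's richer appearance/belief models."""
    if len(pixels) == 0:
        return np.full(bins ** 3, 1.0 / bins ** 3)  # uniform fallback
    idx = (pixels // (256 // bins)).astype(int)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.bincount(flat, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def belief_model(image, mask, bins=8):
    """Fit foreground/background color models from a segmented image."""
    fg = color_histogram(image[mask].reshape(-1, 3), bins)
    bg = color_histogram(image[~mask].reshape(-1, 3), bins)
    return fg, bg

def image_similarity(img_a, img_b, bins=8):
    """Histogram intersection as a crude global similarity between images."""
    ha = color_histogram(img_a.reshape(-1, 3), bins)
    hb = color_histogram(img_b.reshape(-1, 3), bins)
    return np.minimum(ha, hb).sum()

def segment_with_model(image, fg_hist, bg_hist, bins=8):
    """Label a pixel as foreground if its color is more likely under the
    foreground model than under the background model."""
    idx = (image // (256 // bins)).astype(int)
    flat = idx[..., 0] * bins * bins + idx[..., 1] * bins + idx[..., 2]
    return fg_hist[flat] > bg_hist[flat]

def co_segment(images, seed_index, seed_mask, bins=8):
    """Progressively expand foreground/background models from a single
    seeded image to the rest of the collection, most-similar first."""
    masks = {seed_index: seed_mask}
    models = {seed_index: belief_model(images[seed_index], seed_mask, bins)}
    remaining = set(range(len(images))) - {seed_index}
    while remaining:
        # Pick the unsegmented image most similar to an already-segmented one.
        i, j, _ = max(
            ((i, j, image_similarity(images[i], images[j], bins))
             for i in remaining for j in masks),
            key=lambda t: t[2],
        )
        fg, bg = models[j]
        masks[i] = segment_with_model(images[i], fg, bg, bins)
        # Reinforce: refit the local model on the newly segmented image.
        models[i] = belief_model(images[i], masks[i], bins)
        remaining.remove(i)
    return masks
```

The most-similar-first expansion order is meant to echo the paper's idea of propagating and reinforcing local models across similar images rather than fitting one global model; the per-pixel color test is a deliberately crude placeholder for the actual belief propagation.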

Original language: English
Pages (from-to): 1761-1772
Number of pages: 12
Journal: Visual Computer
Volume: 34
Issue number: 12
DOIs
State: Published - 1 Dec 2018

Keywords

  • Belief propagation
  • Foreground extraction
  • Image co-segmentation
  • Non-rigid and deformable motion analysis
