Multiple hypothesis video segmentation from superpixel flows

Amelio Vazquez-Reina*, Shai Avidan, Hanspeter Pfister, Eric Miller

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Multiple Hypothesis Video Segmentation (MHVS) is a method for the unsupervised photometric segmentation of video sequences. MHVS segments arbitrarily long video streams by considering only a few frames at a time, and handles the automatic creation, continuation and termination of labels with no user initialization or supervision. The process begins by generating several pre-segmentations per frame and enumerating multiple possible trajectories of pixel regions within a short time window. After assigning each trajectory a score, we let the trajectories compete with each other to segment the sequence. We determine the solution of this segmentation problem as the MAP labeling of a higher-order random field. This framework allows MHVS to achieve spatial and temporal long-range label consistency while operating in an on-line manner. We test MHVS on several videos of natural scenes with arbitrary camera and object motion.
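The pipeline described above (per-frame pre-segmentations, enumeration of region trajectories over a short window, trajectory scoring, and a competition resolved as a labeling problem) can be sketched in miniature. This is an illustrative toy, not the paper's method: the region names, the pairwise affinity scoring, and the greedy selection (standing in for MAP inference over the higher-order random field) are all assumptions for demonstration.

```python
from itertools import product

# Toy stand-in for per-frame pre-segmentations: each frame contributes a
# small set of candidate superpixel regions, identified here by strings.
# The affinity function and greedy selection below are illustrative
# assumptions, not the authors' actual random-field formulation.

def enumerate_trajectories(presegs):
    """Enumerate every hypothesis: one candidate region per frame."""
    return list(product(*presegs))

def score_trajectory(traj, affinity):
    """Score a trajectory by summing affinities of consecutive regions."""
    return sum(affinity(a, b) for a, b in zip(traj, traj[1:]))

def segment_window(presegs, affinity):
    """Greedy stand-in for MAP labeling: let trajectories compete and
    keep the highest-scoring ones whose regions do not conflict."""
    trajs = enumerate_trajectories(presegs)
    trajs.sort(key=lambda t: score_trajectory(t, affinity), reverse=True)
    used, chosen = set(), []
    for t in trajs:
        if not used.intersection(t):  # no region claimed twice
            chosen.append(t)
            used.update(t)
    return chosen

if __name__ == "__main__":
    # Three frames, two candidate regions each; affinity rewards regions
    # that share the same index suffix (a crude persistence cue).
    presegs = [["a1", "a2"], ["b1", "b2"], ["c1", "c2"]]
    affinity = lambda a, b: 1.0 if a[1:] == b[1:] else 0.0
    print(segment_window(presegs, affinity))
```

In the real method the window then slides forward, so labels are created, continued, or terminated as trajectories win or lose the competition in each new window; the sketch only covers a single window.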

Original language: English
Title of host publication: Computer Vision, ECCV 2010 - 11th European Conference on Computer Vision, Proceedings
Publisher: Springer Verlag
Number of pages: 14
Edition: PART 5
ISBN (Print): 3642155545, 9783642155543
State: Published - 2010
Externally published: Yes
Event: 11th European Conference on Computer Vision, ECCV 2010 - Heraklion, Crete, Greece
Duration: 10 Sep 2010 - 11 Sep 2010

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 5
Volume: 6315 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 11th European Conference on Computer Vision, ECCV 2010
City: Heraklion, Crete

