Probabilistic Space-Time Video Modeling via Piecewise GMM

Hayit Greenspan*, Jacob Goldberger, Arnaldo Mayer

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

127 Scopus citations

Abstract

In this paper, we describe a statistical video representation and modeling scheme. Video representation schemes are needed to segment a video stream into meaningful video-objects, useful for later indexing and retrieval applications. In the proposed methodology, unsupervised clustering via Gaussian mixture modeling extracts coherent space-time regions in feature space, and corresponding coherent segments (video-regions) in the video content. A key feature of the system is the analysis of the video input as a single entity, as opposed to a sequence of separate frames; space and time are treated uniformly. The probabilistic space-time video representation scheme is extended to a piecewise GMM framework in which a succession of GMMs is extracted for the video sequence, instead of a single global model for the entire sequence. The piecewise GMM framework allows for the analysis of extended video sequences and the description of nonlinear, nonconvex motion patterns. The extracted space-time regions allow for the detection and recognition of video events. Results of segmenting video content into static versus dynamic video regions, and of video content editing, are presented.
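The core idea of the abstract — treating each pixel as a point in a joint space-time feature space and clustering with a Gaussian mixture — can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it uses a simple diagonal-covariance EM fit on synthetic (x, y, t, intensity) features, where a bright blob moves across a static dark background; all data, feature choices, and parameter values here are invented for the example.

```python
# Minimal sketch (not the paper's implementation): clustering space-time
# pixel features (x, y, t, intensity) with a diagonal-covariance GMM via EM,
# treating space and time uniformly as a single feature space.
import numpy as np

rng = np.random.default_rng(0)

def fit_gmm(X, k, n_iter=50):
    """EM for a k-component diagonal-covariance Gaussian mixture."""
    n, d = X.shape
    # Deterministic init: per-dimension quantiles spread the initial means.
    mu = np.quantile(X, np.linspace(0.1, 0.9, k), axis=0)
    var = np.tile(X.var(axis=0) + 1e-6, (k, 1))
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = p(component j | x_i)
        log_p = -0.5 * (((X[:, None, :] - mu) ** 2 / var).sum(-1)
                        + np.log(2 * np.pi * var).sum(-1)) + np.log(pi + 1e-12)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means, and per-dimension variances
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ X ** 2) / nk[:, None] - mu ** 2 + 1e-6
    return mu, var, pi, r.argmax(axis=1)

# Synthetic "video": a static dark region plus a bright blob moving in x.
T, H, W = 8, 16, 16
feats, truth = [], []
for t in range(T):
    for y in range(H):
        for x in range(W):
            moving = abs(x - 2 * t) <= 2           # blob track: x ~ 2t
            inten = 200 if moving else 30
            feats.append([x, y, t, inten + rng.normal(0, 3)])
            truth.append(int(moving))
X = np.asarray(feats, dtype=float)

mu, var, pi, labels = fit_gmm(X, k=2)
```

Each mixture component here is a space-time "blob": its mean locates a region in space, time, and intensity, and its time-dependent spatial extent is what the piecewise extension segments into successive models for long sequences.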

Original language: English
Pages (from-to): 384-396
Number of pages: 13
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 26
Issue number: 3
State: Published - Mar 2004

Funding

Funder: Israeli Ministry of Science
Funder number: 05530462

Keywords

• Detection of events in video
• Gaussian mixture model
• Video representation
• Video segmentation
