Multi-view diffusion maps

Ofir Lindenbaum*, Arie Yeredor, Moshe Salhov, Amir Averbuch

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review



In this paper, we address the challenging task of multi-view dimensionality reduction. The goal is to effectively use the availability of multiple views for extracting a coherent low-dimensional representation of the data. The proposed method exploits the intrinsic relations within each view, as well as the mutual relations between views. The multi-view dimensionality reduction is achieved by defining a cross-view model in which an implied random walk process is constrained to hop between objects in the different views. The method is robust to scaling and insensitive to small structural changes in the data. We define new diffusion distances and analyze the spectrum of the proposed kernel. We show that the proposed framework is useful for various machine learning applications such as clustering, classification, and manifold learning. Finally, by fusing multi-sensor seismic data, we present a method for the automatic identification of seismic events.
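The cross-view construction described above can be illustrated with a minimal NumPy sketch. This is not the paper's exact kernel: the block structure below (zero diagonal blocks, products of per-view Gaussian kernels off the diagonal) is an assumption chosen so that the implied random walk must hop between views at every step, as the abstract describes; the bandwidth `eps` and the function names are hypothetical.

```python
import numpy as np

def gaussian_kernel(X, Y, eps):
    """Gaussian affinities between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / eps)

def multiview_diffusion_embedding(X1, X2, eps=1.0, n_components=2):
    """Illustrative two-view diffusion embedding (a sketch, not the
    authors' exact construction).

    X1, X2: two views of the same n objects (n x d1 and n x d2).
    Builds a block kernel whose implied random walk alternates between
    the views, row-normalizes it into a transition matrix, and embeds
    the points with the leading non-trivial eigenvectors.
    """
    n = X1.shape[0]
    K1 = gaussian_kernel(X1, X1, eps)        # within-view-1 affinities
    K2 = gaussian_kernel(X2, X2, eps)        # within-view-2 affinities
    # Zero diagonal blocks force every step of the walk to cross views;
    # the off-diagonal products route each hop through both views.
    K = np.block([[np.zeros((n, n)), K1 @ K2],
                  [K2 @ K1, np.zeros((n, n))]])
    P = K / K.sum(axis=1, keepdims=True)     # row-stochastic transitions
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-np.abs(vals))
    # Skip the trivial eigenvector (eigenvalue 1, constant).
    idx = order[1:n_components + 1]
    return np.real(vecs[:, idx] * vals[idx])
```

The embedding has 2n rows (each object appears once per view); Euclidean distances between rows approximate a diffusion distance on the cross-view walk.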

Original language: English
Pages (from-to): 127-149
Number of pages: 23
Journal: Information Fusion
State: Published - Mar 2020


Keywords:

  • Diffusion maps
  • Dimensionality reduction
  • Manifold learning
  • Multi-view


