Sparse similarity-preserving hashing

Jonathan Masci, Alex M. Bronstein, Michael M. Bronstein, Pablo Sprechmann, Guillermo Sapiro

Research output: Contribution to conference › Paper › peer-review

8 Scopus citations

Abstract

In recent years, a lot of attention has been devoted to efficient nearest-neighbor search by means of similarity-preserving hashing. One of the drawbacks of existing hashing techniques is the intrinsic trade-off between performance and computational complexity: while longer hash codes allow for lower false-positive rates, it is very difficult to increase the embedding dimensionality without incurring very high false-negative rates or prohibitive computational costs. In this paper, we propose a way to overcome this limitation by enforcing the hash codes to be sparse. Sparse high-dimensional codes enjoy the low false-positive rates typical of long hashes while keeping false-negative rates similar to those of a shorter, dense hashing scheme with an equal number of degrees of freedom. We use a tailored feed-forward neural network as the hashing function. Extensive experimental evaluation involving visual and multimodal data shows the benefits of the proposed method.
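To make the abstract's core idea concrete, here is a minimal illustrative sketch of sparse similarity-preserving hashing. It is an assumption for illustration only, not the authors' tailored network or training procedure: a single random linear projection followed by top-k sparsification stands in for the learned feed-forward hashing function, and all names and parameters (sparse_hash, W, b, k, the dimensions) are hypothetical.

```python
import numpy as np

def sparse_hash(x, W, b, k):
    """Map an input vector x to a sparse binary code.

    Illustrative sketch only: a linear projection plus top-k
    sparsification stands in for the paper's tailored feed-forward
    network. W, b, and k are hypothetical parameters.
    """
    activations = W @ x + b              # project to m dimensions
    code = np.zeros_like(activations)
    top = np.argsort(activations)[-k:]   # indices of the k largest responses
    code[top] = 1.0                      # only k active bits -> sparse code
    return code

# Toy usage: long (m = 256) codes with only k = 8 active bits, so the
# code is high-dimensional yet has few effective degrees of freedom.
rng = np.random.default_rng(0)
d, m, k = 64, 256, 8
W = rng.standard_normal((m, d))
b = np.zeros(m)
x = rng.standard_normal(d)
y = x + 0.05 * rng.standard_normal(d)    # a slightly perturbed neighbor of x

cx = sparse_hash(x, W, b, k)
cy = sparse_hash(y, W, b, k)
hamming = int(np.sum(cx != cy))
print(f"Hamming distance between neighbor codes: {hamming}")
```

In this sketch, similar inputs tend to select overlapping top-k index sets, so their sparse codes stay close in Hamming distance, while dissimilar inputs rarely share active bits in a 256-dimensional code; this mimics the combination of low false-positive and low false-negative rates the abstract attributes to sparse high-dimensional codes.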

Original language: English
State: Published - 2014
Event: 2nd International Conference on Learning Representations, ICLR 2014 - Banff, Canada
Duration: 14 Apr 2014 – 16 Apr 2014

Conference

Conference: 2nd International Conference on Learning Representations, ICLR 2014
Country/Territory: Canada
City: Banff
Period: 14/04/14 – 16/04/14

Funding

Funders | Funder number
ERC Starting |
NSSEFF |
National Science Foundation |
Office of Naval Research |
Army Research Office |
National Geospatial-Intelligence Agency |
European Research Council | 335491, 307047
