Sparse sign-consistent Johnson-Lindenstrauss matrices: Compression with neuroscience-based constraints

Zeyuan Allen-Zhu, Rati Gelashvili, Silvio Micali*, Nir Shavit

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Johnson-Lindenstrauss (JL) matrices implemented by sparse random synaptic connections are thought to be a prime candidate for how convergent pathways in the brain compress information. However, to date, there is no complete mathematical support for such implementations given the constraints of real neural tissue. The fact that neurons are either excitatory or inhibitory implies that every JL matrix implemented this way must be sign consistent (i.e., all entries in a single column must be either all nonnegative or all nonpositive), and the fact that any given neuron connects to a relatively small subset of other neurons implies that the JL matrix should be sparse. We construct sparse JL matrices that are sign consistent and prove that our construction is essentially optimal. Our work answers a mathematical question, triggered by earlier work, that is necessary to justify the existence of JL compression in the brain, and it emphasizes that inhibition is crucial if neurons are to perform efficient, correlation-preserving compression.
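The two constraints in the abstract can be illustrated with a small sketch. The code below is *not* the paper's construction; it is a hypothetical minimal example of a random matrix satisfying both constraints at once: each column has only `s` nonzero entries (sparsity), and all nonzero entries within a column share one sign (sign consistency, one sign per presynaptic neuron). It then checks empirically that the squared norm of a random vector is roughly preserved in expectation; the paper's contribution is proving how small the target dimension `m` can be while guaranteeing such preservation.

```python
import numpy as np

def sparse_sign_consistent_matrix(m, n, s, rng):
    """Illustrative sketch (not the paper's exact construction):
    an m x n matrix where each column has exactly s nonzero entries,
    all sharing a single random sign, scaled by 1/sqrt(s) so that
    E[||Ax||^2] = ||x||^2 for any fixed x."""
    A = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)  # sparse support
        sign = rng.choice([-1.0, 1.0])               # one sign per column
        A[rows, j] = sign / np.sqrt(s)
    return A

# Hypothetical parameters for demonstration only.
n, m, s = 1000, 300, 20
rng = np.random.default_rng(0)
A = sparse_sign_consistent_matrix(m, n, s, rng)

x = rng.standard_normal(n)
ratio = np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2
# ratio should be close to 1 on average; its concentration (and hence
# the required m) is exactly what the paper analyzes.
```

Because the column signs are independent and zero-mean, the cross terms in `E[||Ax||^2]` vanish and the expectation equals `||x||^2`; the subtle part, addressed in the paper, is that sign consistency worsens concentration relative to unconstrained JL matrices, forcing a larger target dimension.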

Original language: English
Pages (from-to): 16872-16876
Number of pages: 5
Journal: Proceedings of the National Academy of Sciences of the United States of America
Issue number: 47
State: Published - 25 Nov 2014
Externally published: Yes


Keywords
  • Johnson-Lindenstrauss compression
  • Sign-consistent matrices
  • Synaptic-connectivity matrices


