On identity testing of tensors, low-rank recovery and compressed sensing

Michael A. Forbes, Amir Shpilka

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


We study the problem of obtaining efficient, deterministic, black-box polynomial identity testing algorithms for depth-3 set-multilinear circuits (over arbitrary fields). This class of circuits has an efficient, deterministic, white-box polynomial identity testing algorithm (due to Raz and Shpilka [36]), but no such black-box algorithm is known. We recast this problem as a question of finding a low-dimensional subspace H, spanned by rank-1 tensors, such that any non-zero tensor in the dual space ker(H) has high rank. We obtain explicit constructions of essentially optimal-size hitting sets for tensors of degree 2 (matrices), and obtain the first quasi-polynomial-size hitting sets for arbitrary tensors. We also show connections to the task of performing low-rank recovery of matrices, which is studied in the field of compressed sensing. Low-rank recovery asks (say, over ℝ) to recover a matrix M from few measurements, under the promise that M has rank ≤ r. In this work, we restrict our attention to recovering matrices that are exactly of rank ≤ r, using deterministic, non-adaptive, linear measurements that are free from noise. Over ℝ, we provide a set (of size 4nr) of such measurements, from which M can be recovered in O(rn² + r³n) field operations, and the number of measurements is essentially optimal. Further, the measurements can be taken to be all rank-1 matrices, or all sparse matrices. To the best of our knowledge, no explicit constructions with these properties were known prior to this work. We also give a more formal connection between low-rank recovery and the task of sparse (vector) recovery: any sparse-recovery algorithm that exactly recovers vectors of length n and sparsity 2r, using m non-adaptive measurements, yields a low-rank recovery scheme for exactly recovering n × n matrices of rank ≤ r, making 2nm non-adaptive measurements. Furthermore, if the sparse-recovery algorithm runs in time τ, then the low-rank recovery algorithm runs in time O(rn² + nτ).
We obtain this reduction using linear-algebraic techniques, rather than the convex optimization more commonly seen in compressed sensing algorithms. Finally, we also make a connection to rank-metric codes, as studied in coding theory. These are codes whose codewords are matrices (or tensors), where the distance between matrices M and N is rank(M−N), as opposed to the usual Hamming metric. We obtain essentially optimal-rate codes over matrices, and provide an efficient decoding algorithm. We obtain codes over tensors as well, with poorer rate, but still with efficient decoding.
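As a concrete illustration of the setting (not the paper's deterministic construction), the sketch below shows in NumPy that an n × n matrix of rank ≤ r can be recovered exactly from 4nr non-adaptive, noise-free linear measurements, each of which is a rank-1 measurement uᵀMv. It uses a standard randomized two-sided sketch, M = Y(WᵀY)⁺Z with Y = MV and Z = WᵀM, which succeeds with probability 1 over Gaussian sketching matrices; the paper's contribution is an explicit, deterministic measurement set of essentially the same size.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 3

# A matrix of rank <= r, written as M = A B^T.
A = rng.standard_normal((n, r))
B = rng.standard_normal((n, r))
M = A @ B.T

# Non-adaptive linear measurements Y = M V and Z = W^T M.
# Entrywise, Y[i, j] = e_i^T M v_j and Z[i, j] = w_i^T M e_j,
# so each of the 2 * n * 2r = 4nr scalars is a rank-1 measurement of M.
V = rng.standard_normal((n, 2 * r))
W = rng.standard_normal((n, 2 * r))
Y = M @ V      # shape (n, 2r)
Z = W.T @ M    # shape (2r, n)

# Exact recovery: since rank(M) <= r, generically M = Y (W^T Y)^+ Z.
M_hat = Y @ np.linalg.pinv(W.T @ Y) @ Z

print(np.allclose(M_hat, M))  # True
```

The identity M = Y(WᵀY)⁺Z holds whenever BᵀV has full row rank and WᵀA has full column rank, which occurs with probability 1 for Gaussian V and W; replacing these random sketches with explicit matrices, while keeping the measurement count near 4nr, is exactly the derandomization question the abstract addresses.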

Original language: English
Title of host publication: STOC '12 - Proceedings of the 2012 ACM Symposium on Theory of Computing
Number of pages: 9
State: Published - 2012
Externally published: Yes
Event: 44th Annual ACM Symposium on Theory of Computing, STOC '12 - New York, NY, United States
Duration: 19 May 2012 – 22 May 2012

Publication series

Name: Proceedings of the Annual ACM Symposium on Theory of Computing
ISSN (Print): 0737-8017


Conference: 44th Annual ACM Symposium on Theory of Computing, STOC '12
Country/Territory: United States
City: New York, NY


Keywords

  • derandomization
  • low-rank recovery
  • polynomial identity testing
  • sparse recovery
  • tensor rank


