TY - JOUR
T1 - On the entropy loss and gap of condensers
AU - Aviv, Nir
AU - Ta-Shma, Amnon
N1 - Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019
Y1 - 2019
N2 - Many algorithms are proven to work under the assumption that they have access to a source of random, uniformly distributed bits. However, in practice, sources of randomness are often imperfect, giving n random bits that have only k < n min-entropy. The value n − k is called the entropy gap of the source. Randomness condensers are hash functions that hash any such source to a shorter source with reduced entropy gap g. The goal is to lose as little entropy as possible in this process. Condensers also have an error parameter ε and use a small seed of uniformly distributed bits, whose length we wish to minimize as well. In this work, we study the exact dependencies between the different parameters of seeded randomness condensers. We obtain a non-explicit upper bound, showing the existence of condensers with entropy loss log(1 + log(1/ε)/g) + O(1) and seed length log((n − k)/(εg)) + O(1). In particular, this implies the existence of condensers with O(log(1/ε)) entropy gap and constant entropy loss. This extends (with slightly improved parameters) the non-explicit upper bound for condensers presented in the work of Dodis et al. (2014), which gives condensers with entropy loss at least log log(1/ε). We also give a non-explicit upper bound for lossless condensers, which have entropy gap g ≥ log(1/ε)/ε + O(1) and seed length log((n − k)/(ε²g)) + O(1). Furthermore, we address an open question raised by Dodis et al. (2014), who showed an explicit construction of condensers with constant gap and O(log log(1/ε)) loss, using seed length O(n log(1/ε)). In the same article they improve the seed length to O(k log k) and ask whether it can be further improved. In this work, we reduce the seed length of their construction to O(log(n/ε) · log(k/ε)) by a simple concatenation. In the analysis, we use and prove a tight equivalence between condensers and extractors with multiplicative error. We note that a similar, but non-tight, equivalence was already proven by Dodis et al. (2014) using a weaker variant of extractors called unpredictability extractors. We also remark that this equivalence underlies the work of Ben-Aroya et al. (2016) and later work on explicit two-source extractors, and we believe it is interesting in its own right.
KW - Entropy gap
KW - Entropy loss
KW - Key derivation
KW - Randomness condensers
KW - Randomness extractors
KW - Unpredictability extractors
UR - http://www.scopus.com/inward/record.url?scp=85065832331&partnerID=8YFLogxK
DO - 10.1145/3317691
M3 - Article
AN - SCOPUS:85065832331
SN - 1942-3454
VL - 11
JO - ACM Transactions on Computation Theory
JF - ACM Transactions on Computation Theory
IS - 3
M1 - 15
ER -