Randomness vs. fault-tolerance

Ran Canetti*, Eyal Kushilevitz, Rafail Ostrovsky, Adi Rosen

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review


Abstract

We investigate the relation between the fault tolerance (or resilience) and the randomness requirements of multiparty protocols. Fault tolerance is measured in terms of the maximum number of colluding faulty players, t, that a protocol can withstand and still maintain the privacy of the inputs and the correctness of the outputs (of the honest players). Randomness is measured in terms of the total number of random bits needed by the players in order to execute the protocol. Previously, the upper bound on the amount of randomness needed for securely computing any non-trivial function f was polynomial both in n, the total number of parties, and the circuit-size C(f). This was the state of knowledge even for the special case t = 1 (i.e., when there is at most one malicious player). In this paper, we show that for any linear-size circuit, and for any value t < n/2, O(poly(t) · log n) randomness is sufficient. More generally, we show that for any function f with circuit-size C(f), we need only O(poly(t) · log n + poly(t) · C(f)/n) randomness in order to withstand any coalition of size at most t. Moreover, in our protocol only t + 1 players flip coins and the rest of the players are deterministic. Our results generalize to the case of adaptive adversaries as well.
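
As a quick instantiation of these bounds (a sketch only: the notation R(n, t, f) for the total number of random bits is introduced here for illustration, and the exact polynomials in t are left unspecified, as in the abstract), the general result reads

R(n, t, f) = O( poly(t) · log n + poly(t) · C(f)/n ).

For the special case t = 1 (at most one malicious player) and a linear-size circuit, C(f) = O(n), this gives

R(n, 1, f) = O( log n + O(n)/n ) = O( log n ),

i.e., a logarithmic total number of random bits, whereas the previously known upper bound was polynomial in both n and C(f).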

Original language: English
Pages: 35-44
Number of pages: 10
State: Published - 1997
Event: Proceedings of the 1997 16th Annual ACM Symposium on Principles of Distributed Computing - Santa Barbara, CA, USA
Duration: 21 Aug 1997 - 24 Aug 1997

Conference

Conference: Proceedings of the 1997 16th Annual ACM Symposium on Principles of Distributed Computing
City: Santa Barbara, CA, USA
Period: 21/08/97 - 24/08/97
