Approximating the minimum vertex cover in sublinear time and a connection to distributed algorithms

Michal Parnas, Dana Ron

Research output: Contribution to journal › Article › peer-review


For a given graph G over n vertices, let OPT_G denote the size of an optimal solution in G of a particular minimization problem (e.g., the size of a minimum vertex cover). A randomized algorithm is called an α-approximation algorithm with an additive error for this minimization problem if, for any given additive error parameter ε > 0, it computes a value ÕPT such that, with probability at least 2/3, it holds that OPT_G ≤ ÕPT ≤ α · OPT_G + εn. Assume that the maximum degree or average degree of G is bounded. In this case, we show a reduction from local distributed approximation algorithms for the vertex cover problem to sublinear approximation algorithms for this problem. This reduction can easily be modified and applied to other optimization problems that have local distributed approximation algorithms, such as the dominating set problem. We also show that for the minimum vertex cover problem, the query complexity of such approximation algorithms must grow at least linearly with the average degree d̄ of the graph. This lower bound holds for every multiplicative factor α and every small constant ε as long as d̄ = O(n/α). In particular, this means that for dense graphs it is not possible to design an algorithm whose complexity is o(n).
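To make the flavor of such a sublinear estimator concrete, here is a minimal Python sketch (not the paper's exact construction): a local oracle decides whether a queried vertex is an endpoint of a greedy maximal matching determined by random edge ranks, and sampling a few vertices then rescaling by n yields an estimate satisfying a 2-approximation with additive error. The graph representation, function names, and the rank-based rule are illustrative assumptions.

```python
import random

def make_cover_oracle(adj, seed=0):
    """Local oracle: is vertex v an endpoint of the greedy maximal matching
    induced by random edge ranks? The matched endpoints always form a
    vertex cover of size at most 2 * OPT."""
    rng = random.Random(seed)
    rank = {}   # lazily assigned, but fixed once drawn, so queries are consistent
    memo = {}

    def edge_rank(u, v):
        e = (min(u, v), max(u, v))
        if e not in rank:
            rank[e] = rng.random()
        return rank[e]

    def in_matching(u, v):
        e = (min(u, v), max(u, v))
        if e in memo:
            return memo[e]
        r = edge_rank(u, v)
        # e is matched iff no adjacent edge of strictly lower rank is matched;
        # the recursion terminates because ranks strictly decrease.
        result = True
        for x in (u, v):
            other = v if x == u else u
            for w in adj[x]:
                if w != other and edge_rank(x, w) < r and in_matching(x, w):
                    result = False
                    break
            if not result:
                break
        memo[e] = result
        return result

    def in_cover(v):
        return any(in_matching(v, w) for w in adj[v])

    return in_cover

def estimate_vc(adj, samples, seed=0):
    """Sample vertices, query the local oracle, rescale by n.
    Only the explored neighborhoods are touched: sublinear in |V| + |E|
    when degrees are bounded and samples is a constant."""
    n = len(adj)
    rng = random.Random(seed + 1)
    oracle = make_cover_oracle(adj, seed)
    hits = sum(oracle(rng.randrange(n)) for _ in range(samples))
    return n * hits / samples
```

On a path 0-1-2-3 (minimum vertex cover 2), the oracle's cover has 2 or 4 vertices depending on the random ranks, matching the α = 2 multiplicative guarantee; the additive εn term comes from the sampling error of the Bernoulli estimate.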

Original language: English
Pages (from-to): 183-196
Number of pages: 14
Journal: Theoretical Computer Science
Issue number: 1-3
State: Published - 22 Aug 2007


  • Distributed algorithms
  • Minimum vertex cover
  • Sublinear approximation algorithms


