TY - GEN
T1 - Brief announcement: Are lock-free concurrent algorithms practically wait-free?
T2 - 2014 ACM Symposium on Principles of Distributed Computing, PODC 2014
AU - Alistarh, Dan
AU - Censor-Hillel, Keren
AU - Shavit, Nir
PY - 2014
Y1 - 2014
AB - Lock-free concurrent algorithms guarantee that some concurrent operation will always make progress in a finite number of steps. Yet programmers would prefer to treat concurrent code as if it were wait-free, guaranteeing that all operations always make progress. Unfortunately, designing wait-free algorithms is in general a complex undertaking, and the resulting algorithms are not always efficient, so most non-blocking commercial code is only lock-free, and the design of efficient wait-free algorithms has been left to the academic community. In [2], we suggest a solution to this problem. We show that, for a large class of lock-free algorithms, under scheduling conditions which approximate those found in commercial hardware architectures, lock-free algorithms behave as if they are wait-free. In other words, programmers can keep on designing simple lock-free algorithms instead of complex wait-free ones, and in practice, they will get wait-free progress. Our main contribution is a new way of analyzing a general class of lock-free algorithms under a stochastic scheduler. Our analysis relates the individual performance of processes with the global performance of the system using Markov chain lifting between a complex per-process chain and a simpler system progress chain. We show that lock-free algorithms are not only wait-free with probability 1, but that in fact a broad subset of lock-free algorithms can be closely bounded in terms of the average number of steps required until an operation completes.
KW - Distributed computing
KW - Lock-free algorithms
KW - Progress properties
KW - Schedulers
KW - Shared memory
KW - Wait-free algorithms
UR - http://www.scopus.com/inward/record.url?scp=84905455022&partnerID=8YFLogxK
U2 - 10.1145/2611462.2611502
DO - 10.1145/2611462.2611502
M3 - Conference contribution
AN - SCOPUS:84905455022
SN - 9781450329446
T3 - Proceedings of the Annual ACM Symposium on Principles of Distributed Computing
SP - 50
EP - 52
BT - PODC 2014 - Proceedings of the 2014 ACM Symposium on Principles of Distributed Computing
PB - Association for Computing Machinery
Y2 - 15 July 2014 through 18 July 2014
ER -