TY - JOUR
T1 - Fast concurrent queues for x86 processors
AU - Morrison, Adam
AU - Afek, Yehuda
PY - 2013/8
Y1 - 2013/8
N2 - Conventional wisdom in designing concurrent data structures is to use the most powerful synchronization primitive, namely compare-and-swap (CAS), and to avoid contended hot spots. In building concurrent FIFO queues, this reasoning has led researchers to propose combining-based concurrent queues. This paper takes a different approach, showing how to rely on fetch-and-add (F&A), a less powerful primitive that is available on x86 processors, to construct a nonblocking (lock-free) linearizable concurrent FIFO queue which, despite the F&A being a contended hot spot, outperforms combining-based implementations by 1.5× to 2.5× in all concurrency levels on an x86 server with four multicore processors, in both single-processor and multi-processor executions.
KW - Concurrent queue
KW - Fetch-and-add
KW - Nonblocking algorithm
UR - http://www.scopus.com/inward/record.url?scp=84885203434&partnerID=8YFLogxK
U2 - 10.1145/2517327.2442527
DO - 10.1145/2517327.2442527
M3 - Article
AN - SCOPUS:84885203434
SN - 1523-2867
VL - 48
SP - 103
EP - 112
JO - ACM SIGPLAN Notices
JF - ACM SIGPLAN Notices
IS - 8
ER -