Fast Inference from Transformers via Speculative Decoding

Yaniv Leviathan*, Matan Kalman, Yossi Matias

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

156 Scopus citations

Abstract

Inference from large autoregressive models like Transformers is slow - decoding K tokens takes K serial runs of the model. In this work we introduce speculative decoding - an algorithm to sample from autoregressive models faster without any changes to the outputs, by computing several tokens in parallel. At the heart of our approach lie the observations that (1) hard language-modeling tasks often include easier subtasks that can be approximated well by more efficient models, and (2) using speculative execution and a novel sampling method, we can make exact decoding from the large models faster, by running them in parallel on the outputs of the approximation models, potentially generating several tokens concurrently, and without changing the distribution. Our method can accelerate existing off-the-shelf models without retraining or architecture changes. We demonstrate it on T5-XXL and show a 2X-3X acceleration compared to the standard T5X implementation, with identical outputs.
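To make the idea concrete, below is a minimal Python sketch of a single speculative decoding step as described in the paper: the small approximation model drafts several candidate tokens, the large model scores all of them in one parallel pass, and each candidate is accepted with probability min(1, p(x)/q(x)), with rejected positions resampled from the residual distribution so the output matches the large model exactly. The callables `draft_probs` and `target_probs`, and the parameter names, are illustrative assumptions, not the authors' T5X implementation.

```python
import numpy as np

def speculative_step(prefix, draft_probs, target_probs, gamma=4, rng=None):
    """One speculative decoding step.

    draft_probs(prefix)  -> next-token distribution from the small model (assumed helper)
    target_probs(prefix) -> next-token distribution from the large model (assumed helper)
    Returns the list of tokens produced in this step; the resulting samples
    follow the large model's distribution exactly.
    """
    rng = rng or np.random.default_rng()

    # 1) Autoregressively draft gamma candidate tokens with the small model.
    drafted, q_dists, ctx = [], [], list(prefix)
    for _ in range(gamma):
        q = draft_probs(ctx)
        x = rng.choice(len(q), p=q)
        drafted.append(x)
        q_dists.append(q)
        ctx.append(x)

    # 2) Score all gamma+1 prefixes with the large model; in practice this is
    #    a single batched/parallel forward pass.
    p_dists = [target_probs(list(prefix) + drafted[:i]) for i in range(gamma + 1)]

    # 3) Accept each drafted token x with probability min(1, p(x) / q(x)).
    out = []
    for i, x in enumerate(drafted):
        p, q = p_dists[i], q_dists[i]
        if rng.random() < min(1.0, p[x] / q[x]):
            out.append(x)
        else:
            # On rejection, resample from norm(max(0, p - q)); this correction
            # keeps the overall output distribution identical to the large model.
            residual = np.maximum(p - q, 0.0)
            residual /= residual.sum()
            out.append(rng.choice(len(residual), p=residual))
            return out

    # 4) If every draft was accepted, sample one extra token from the large
    #    model at the final position, so up to gamma+1 tokens emerge per step.
    out.append(rng.choice(len(p_dists[gamma]), p=p_dists[gamma]))
    return out
```

The speedup comes from step 2: whenever the small model's guesses are accepted, several tokens are produced for the cost of one serial run of the large model, and the acceptance/correction rule guarantees the outputs are unchanged.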

Original language: English
Pages (from-to): 19274-19286
Number of pages: 13
Journal: Proceedings of Machine Learning Research
Volume: 202
State: Published - 2023
Externally published: Yes
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: 23 Jul 2023 - 29 Jul 2023

Funding

Funders: LaMDA and Theta Labs teams at Google
