
Speculative decoding with a draft model

You can enable speculative decoding by pairing the target model with a pre-trained draft model. The small, fast draft model proposes several tokens ahead, and the larger target model verifies those proposals in a single parallel forward pass. When the proposals match what the target model would have generated, multiple tokens are accepted per target forward pass, increasing throughput.
This feature is currently limited to a curated list of target models.
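To make the propose-then-verify loop concrete, here is a minimal sketch in plain Python. The toy "models" are just functions (hypothetical, not this service's API), acceptance is greedy token matching, and verification is written as a loop for clarity even though a real implementation checks all drafted positions in one batched forward pass.

```python
def speculative_step(target, draft, tokens, k=4):
    """Draft k tokens cheaply, then keep the prefix the target agrees with."""
    proposal = list(tokens)
    for _ in range(k):                       # k fast draft-model steps
        proposal.append(draft(proposal))
    drafted = proposal[len(tokens):]

    context = list(tokens)
    for tok in drafted:                      # verify each drafted position
        expected = target(context)           # batched in a real system
        if tok == expected:
            context.append(tok)              # accept the drafted token
        else:
            context.append(expected)         # replace with target's token, stop
            break
    else:
        context.append(target(context))      # all accepted: one bonus token
    return context

# Toy "models": next token is (last token + 1) mod 10.
target_model = lambda seq: (seq[-1] + 1) % 10
draft_model = target_model                   # a perfect draft: everything accepted

print(speculative_step(target_model, draft_model, [0], k=4))
# → [0, 1, 2, 3, 4, 5]  (4 drafted tokens accepted, plus 1 bonus token)
```

With a perfect draft, one verification pass yields five new tokens; with a draft that diverges immediately, the step still makes progress, because the target's own token is emitted at the first mismatch.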

N-gram speculative decoding

You can toggle the switch to enable N-gram speculative decoding. When enabled, the decoder reuses patterns from previously seen tokens (the prompt and earlier output) to propose future tokens, with no separate draft model required. For predictable, repetitive tasks this can deliver substantial performance gains. You can also set the Maximum N-gram Size, which defines how many tokens are predicted in advance. We recommend keeping the default value of 3.
Higher values can reduce latency further when the predictions succeed. However, predicting too many tokens at once may lower the proportion of predictions that are accepted and, in extreme cases, can even increase latency.
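The lookup behind this mode can be sketched in a few lines of plain Python. This is a hypothetical helper, not the service's implementation, and for illustration it separates two knobs: the length of the n-gram used for matching (`max_ngram`) and the number of draft tokens copied from the match (`num_draft`).

```python
def propose_ngram(tokens, max_ngram=3, num_draft=4):
    """Propose draft tokens by finding an earlier occurrence of the
    most recent n-gram and copying the tokens that followed it."""
    for n in range(max_ngram, 0, -1):        # prefer the longest match
        tail = tokens[-n:]
        # Scan backwards through the context, excluding the tail itself.
        for start in range(len(tokens) - n - 1, -1, -1):
            if tokens[start:start + n] == tail:
                continuation = tokens[start + n:start + n + num_draft]
                if continuation:
                    return continuation
    return []                                # no match: decode normally

text = "the cat sat on the mat and the cat".split()
print(propose_ngram(text, max_ngram=3, num_draft=2))
# → ['sat', 'on']  (copied from after the earlier "the cat")
```

This also illustrates why the mode shines on repetitive text: when recent tokens echo an earlier passage, the copied continuation is often exactly what the model would generate, so the drafted tokens are accepted for free.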
Last modified on April 13, 2026