Speculative Decoding with a Draft Model
Enables speculative decoding by pairing the target model with a pre-trained draft model. A fast draft model proposes several tokens that the larger target model then verifies in parallel, so multiple tokens can be accepted per target-model forward pass, increasing throughput compared with one-token-at-a-time autoregressive decoding. This feature is currently limited to a curated list of target models.
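The propose-then-verify loop can be sketched as follows. This is an illustrative toy, not the platform's actual implementation: `target_step` and `draft_step` are hypothetical stand-ins for real model forward passes, and verification here is greedy (accept the longest prefix on which the target agrees with the draft), whereas production systems score all drafted positions in a single batched target pass.

```python
def speculative_decode(target_step, draft_step, prompt, num_draft=4, max_new=16):
    """Greedy speculative-decoding sketch.

    target_step / draft_step: callables mapping a token sequence to the
    next token (stand-ins for target- and draft-model forward passes).
    """
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new:
        # 1) The cheap draft model proposes num_draft tokens autoregressively.
        ctx = list(tokens)
        draft = []
        for _ in range(num_draft):
            t = draft_step(ctx)
            draft.append(t)
            ctx.append(t)
        # 2) Verification: accept the longest prefix where the target's
        #    greedy choice matches the draft. (A real system computes all
        #    these target predictions in ONE parallel forward pass.)
        for t in draft:
            if target_step(tokens) != t:
                break
            tokens.append(t)
        # 3) The same target pass also yields one token of its own at the
        #    first rejected (or final) position, so progress is guaranteed
        #    and the output matches plain target-only greedy decoding.
        tokens.append(target_step(tokens))
    return tokens[len(prompt):len(prompt) + max_new]
```

Because rejected drafts fall back to the target's own token, the output is identical to running the target model alone; the draft model only changes how many tokens each target pass can commit.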
N-gram speculative decoding
You can toggle the switch to enable N-gram speculative decoding. When enabled, past tokens are leveraged to pre-generate future tokens. For predictable tasks, this can deliver substantial performance gains. You can also set the Maximum N-gram Size, which defines how many tokens are predicted in advance. We recommend keeping the default value of 3.
Higher values can further reduce latency when the predictions succeed. However, predicting too many tokens at once lowers the fraction of drafted tokens that are accepted and, in extreme cases, can even increase latency, since rejected drafts waste verification work.
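A minimal sketch of how N-gram drafting can work, assuming a prompt-lookup-style scheme: the most recent n-gram is matched against earlier occurrences in the sequence, and the tokens that followed the match become the draft. The function name and matching strategy are illustrative assumptions, not the platform's actual algorithm.

```python
def ngram_propose(tokens, max_ngram=3, num_draft=3):
    """Propose draft tokens by matching the latest n-gram against
    earlier context (no separate draft model required)."""
    for n in range(max_ngram, 0, -1):  # prefer the longest matching n-gram
        if len(tokens) < n + 1:
            continue
        tail = tokens[-n:]
        # Scan earlier positions, most recent first, for the same n-gram.
        for i in range(len(tokens) - n - 1, -1, -1):
            if tokens[i:i + n] == tail:
                # Tokens that followed the earlier match become the draft.
                return tokens[i + n:i + n + num_draft]
    return []  # no repetition found; fall back to normal decoding
```

This illustrates why the technique shines on predictable, repetitive tasks (code editing, extraction, summarization of quoted text): drafts are only produced when the sequence repeats itself, and a larger Maximum N-gram Size demands longer exact repeats before any tokens are proposed.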