---
license: mit
library_name: transformers
pipeline_tag: text-generation
tags:
- dflash
- speculative-decoding
- diffusion
- efficiency
- flash-decoding
- qwen
- kimi
- diffusion-language-model
---
# Kimi-K2.5-DFlash
[**Paper**](https://arxiv.org/abs/2602.06036) | [**GitHub**](https://github.com/z-lab/dflash) | [**Blog**](https://z-lab.ai/projects/dflash/)
**DFlash** is a speculative decoding method that uses a lightweight **block diffusion** model as the drafter, enabling efficient, high-quality parallel drafting that pushes the limits of inference speed.
This model is the **drafter** component. It must be used in conjunction with the target model `moonshotai/Kimi-K2.5`.
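To make the drafter/target split concrete, here is a minimal sketch of the generic draft-and-verify loop behind speculative decoding (greedy case). The `drafter` and `target_greedy` functions below are toy stand-ins, not DFlash or Kimi-K2.5: the drafter proposes a block of tokens in parallel, the target verifies them in one pass, and the longest prefix matching the target's own greedy choices is accepted.

```python
# Toy sketch of speculative decoding's draft-and-verify loop (greedy case).
# Both "models" are hypothetical stand-ins for illustration only.

def drafter(prefix, block_size):
    # Hypothetical drafter: proposes `block_size` tokens at once.
    return [(prefix[-1] + i + 1) % 100 for i in range(block_size)]

def target_greedy(prefix):
    # Hypothetical target model: returns its greedy next token.
    return (prefix[-1] + 1) % 100

def speculative_step(prefix, block_size=8):
    draft = drafter(prefix, block_size)
    accepted = []
    ctx = list(prefix)
    for tok in draft:
        expected = target_greedy(ctx)
        if tok != expected:
            # First mismatch: keep the target's own token and stop.
            accepted.append(expected)
            return accepted
        accepted.append(tok)
        ctx.append(tok)
    # All draft tokens accepted; the target contributes one bonus token.
    accepted.append(target_greedy(ctx))
    return accepted

tokens = speculative_step([0], block_size=8)
print(len(tokens))  # up to block_size + 1 tokens per target forward pass
```

The key property is that every committed token is one the target model would have produced itself, so output quality is unchanged; only the number of target forward passes shrinks.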
## Quick Start
### Installation
SGLang:
```bash
uv pip install "git+https://github.com/sgl-project/sglang.git@refs/pull/20547/head#subdirectory=python"
```
vLLM (stable release, or the nightly build for the latest features):
```bash
uv pip install vllm
# Or, for the nightly build:
uv pip install -U vllm --torch-backend=auto --extra-index-url https://wheels.vllm.ai/nightly
```
Please refer to [PR #39930](https://github.com/vllm-project/vllm/pull/39930) for instructions on using DFlash with Kimi-K2.5 on vLLM.
### Launch Server
SGLang:
```bash
# Optional: enable schedule overlapping (experimental, may not be stable)
# export SGLANG_ENABLE_SPEC_V2=1
# export SGLANG_ENABLE_DFLASH_SPEC_V2=1
# export SGLANG_ENABLE_OVERLAP_PLAN_STREAM=1
python -m sglang.launch_server \
--model-path moonshotai/Kimi-K2.5 \
--speculative-algorithm DFLASH \
--speculative-draft-model-path z-lab/Kimi-K2.5-DFlash \
--speculative-num-draft-tokens 8 \
--tp-size 8 \
--attention-backend trtllm_mla \
--speculative-draft-attention-backend fa4 \
--mem-fraction-static 0.9 \
--speculative-dflash-draft-window-size 4096 \
--trust-remote-code
```
> **Tip:** For long-context or agentic workloads, set `--speculative-dflash-draft-window-size` (4096 in the example above) to enable sliding-window attention for the drafter.
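As a toy illustration of what the sliding-window option does, the sketch below builds a causal attention mask where each position may attend only to the last `window` positions (including itself) instead of the full prefix. This is illustrative only; the flag above is SGLang's, and the drafter's actual attention implementation lives in the serving backend.

```python
# Toy sliding-window causal mask, assuming window size W: position i may
# attend to positions j with max(0, i - W + 1) <= j <= i. Illustrative
# sketch only, not the backend's kernel.

def sliding_window_mask(seq_len, window):
    # mask[i][j] is True when position i may attend to position j.
    return [
        [max(0, i - window + 1) <= j <= i for j in range(seq_len)]
        for i in range(seq_len)
    ]

for row in sliding_window_mask(seq_len=6, window=3):
    print("".join("x" if m else "." for m in row))
```

Capping the drafter's attention span keeps its per-step cost roughly constant as the context grows, which is why this helps long-context and agentic workloads.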
### Usage
```python
from openai import OpenAI
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
response = client.chat.completions.create(
model="moonshotai/Kimi-K2.5",
messages=[{"role": "user", "content": "Write a quicksort in Python."}],
max_tokens=4096,
)
print(response.choices[0].message.content)
```
## Benchmark Results
### Acceptance Length
- Thinking: enabled
- Max new tokens: 4096
- Block size: 8
- Framework: SGLang
| Dataset | Accept Length |
|-----------|---------------|
| GSM8K | 5.3 |
| Math500 | 5.5 |
| HumanEval | 5.3 |
| MBPP | 4.5 |
| MT-Bench | 3.7 |
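Acceptance length is the average number of tokens committed per target forward pass, so it translates directly into a rough speedup estimate. The sketch below computes this under an assumed relative drafting overhead `r` per step (the 0.1 default is an illustrative assumption, not a measured number).

```python
# Back-of-the-envelope speedup from acceptance length A: each
# draft-and-verify step commits ~A tokens and costs roughly (1 + r)
# target-equivalent passes, where r is the assumed relative drafting
# overhead. Speedup over plain autoregressive decoding ~ A / (1 + r).

accept_length = {
    "GSM8K": 5.3, "Math500": 5.5, "HumanEval": 5.3,
    "MBPP": 4.5, "MT-Bench": 3.7,
}

def estimated_speedup(A, draft_overhead=0.1):
    return A / (1 + draft_overhead)

for name, A in accept_length.items():
    print(f"{name}: ~{estimated_speedup(A):.1f}x")
```

In practice the realized speedup also depends on batch size, kernel efficiency, and memory bandwidth, so treat this purely as a sanity-check estimate.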
### Throughput
Throughput (tokens/s) at request concurrency C = 32:
| Dataset | C = 32 |
|-----------|------|
| GSM8K | 2015 |
| Math500 | 3096 |
| HumanEval | 3146 |
| MBPP | 2940 |
| MT-Bench | 2146 |
## Acknowledgements
Special thanks to [David Wang](https://davidwa.ng/) for his outstanding engineering support on this project. We are also grateful to [Modal](https://modal.com/), [InnoMatrix](https://innomatrix.ai), and [Yotta Labs](https://www.yottalabs.ai/) for providing the compute resources used to train this draft model.
## Citation
If you find DFlash useful, please cite our work. To share feedback on DFlash or request new model support, please fill out this form: [DFlash Feedback](https://forms.gle/4YNwfqb4nJdqn6hq9).
```bibtex
@article{chen2026dflash,
title = {{DFlash: Block Diffusion for Flash Speculative Decoding}},
author = {Chen, Jian and Liang, Yesheng and Liu, Zhijian},
journal = {arXiv preprint arXiv:2602.06036},
year = {2026}
}
```