# Qwen3.5-35B-A3B — REAP 20% Expert Pruning
This is a 20% expert-pruned version of Qwen/Qwen3.5-35B-A3B, produced using the REAP (Router-weighted Expert Activation Pruning) method from the paper "REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression" (ICLR 2026).
The model retains 205 of 256 experts (~80% of original experts) while remaining competitive on standard benchmarks.
## What We Did

### Method: REAP Layerwise Pruning
REAP prunes MoE experts by scoring each expert's importance using a combination of:
- Router gate-values — how often and how strongly the router selects each expert
- Expert activation norms — the magnitude of each expert's output contribution
Router logit weights are renormalized to sum to 1 after pruning (critical for maintaining output scale). Pruning is applied layer-by-layer (layerwise mode).
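The two signals combine into a per-expert saliency score. Below is a minimal NumPy sketch of the idea, under our reading of the method; it is an illustration, not the reference implementation, and the function names and exact averaging rule are ours:

```python
import numpy as np

def reap_scores(gate_values, expert_norms):
    """Per-expert saliency: mean router gate value times expert output
    norm, taken over the calibration tokens routed to that expert.
    Both arrays are (tokens, experts), zero where an expert was not selected."""
    active = gate_values > 0
    contrib = gate_values * expert_norms
    counts = np.maximum(active.sum(axis=0), 1)  # avoid division by zero
    return contrib.sum(axis=0) / counts

def prune_layer(scores, compression_ratio=0.20):
    """Return the (sorted) indices of the experts to keep in this layer."""
    n_keep = int(round(len(scores) * (1 - compression_ratio)))
    return np.sort(np.argsort(scores)[-n_keep:])

def renormalize_gates(gate_values, keep_idx):
    """Renormalize surviving gate values to sum to 1 per token,
    preserving the output scale after experts are removed."""
    kept = gate_values[:, keep_idx]
    denom = kept.sum(axis=1, keepdims=True)
    return kept / np.maximum(denom, 1e-9)
```

Scoring is done independently per layer, so different layers may keep different expert indices.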
### Calibration Data
Observations were collected over a mixed calibration dataset of 1,000 samples (250 per category):

- `theblackcat102/evol-codealpaca-v1` (250 samples)
- `open-r1/Mixture-of-Thoughts` [code] (250 samples)
- `open-r1/Mixture-of-Thoughts` [math] (250 samples)
- `open-r1/Mixture-of-Thoughts` [science] (250 samples)

Max sequence length: 4096 tokens. Expert similarity was measured with angular distance.
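Angular distance between two expert activation profiles is the arccosine of their cosine similarity, scaled to [0, 1]; a minimal sketch:

```python
import numpy as np

def angular_distance(u, v):
    """Angular distance: arccos of cosine similarity, scaled to [0, 1].
    0 = same direction, 0.5 = orthogonal, 1 = opposite."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi
```

Unlike raw cosine similarity, this is a proper metric, which makes it a common choice for comparing experts.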
### Pruning Config
| Parameter | Value |
|---|---|
| Compression ratio | 20% (51 experts removed) |
| Original experts | 256 |
| Remaining experts | 205 |
| Pruning method | REAP |
| Router weight renormalization | ✓ |
| Seed | 42 |
| Calibration samples | 1,000 total |
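The expert counts in the table follow directly from the compression ratio, assuming the removed count is rounded down:

```python
# Expert counts implied by a 20% compression ratio on 256 experts.
original_experts = 256
compression_ratio = 0.20
removed = int(original_experts * compression_ratio)   # 51
remaining = original_experts - removed                # 205
```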
## Benchmark Results
All evaluations run with vLLM (tensor-parallel across 8x RTX 3090), greedy decoding, 0-shot.
### Coding (EvalPlus, greedy, 0-shot)
| Benchmark | Original | Pruned (20%) | Delta |
|---|---|---|---|
| HumanEval (pass@1) | 76.2% | 73.2% | -3.0% |
| HumanEval+ (pass@1) | 72.0% | 70.1% | -1.9% |
### Multiple Choice / Reasoning (lm-eval, 0-shot, 250 samples/task)
| Benchmark | Original | Pruned (20%) | Delta |
|---|---|---|---|
| MMLU | 84.34% | 80.89% | -3.45% |
| MMLU - Humanities | 82.40% | 76.35% | -6.05% |
| MMLU - Social Sciences | 90.04% | 88.38% | -1.66% |
| MMLU - STEM | 81.46% | 78.88% | -2.58% |
| MMLU - Other | 84.52% | 81.05% | -3.47% |
| ARC-Challenge | 60.00% | 60.40% | +0.40% |
| ARC-Easy | 84.00% | 83.20% | -0.80% |
| BoolQ | 88.00% | 89.20% | +1.20% |
| HellaSwag (norm) | 76.40% | 75.60% | -0.80% |
| OpenBookQA (norm) | 45.20% | 47.20% | +2.00% |
| RTE | 81.20% | 82.00% | +0.80% |
| WinoGrande | 77.20% | 76.80% | -0.40% |
### Perplexity (WikiText-2, 10k tokens, llama.cpp)
| Model | PPL |
|---|---|
| Original (256 experts) | 6.83 |
| Pruned 20% (205 experts) | 9.51 |
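For reference, the perplexity reported here is the standard definition, the exponential of the mean per-token negative log-likelihood:

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    return math.exp(sum(token_nlls) / len(token_nlls))
```

By this definition, the move from 6.83 to 9.51 corresponds to roughly a 0.33-nat increase in mean NLL (ln(9.51 / 6.83) ≈ 0.33).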
### Throughput (4x RTX 3090, TP=4, vLLM, enforce_eager)
| Batch Size | Original tok/s | Pruned tok/s | Speedup |
|---|---|---|---|
| 1 | 12.3 | 12.5 | 1.02x |
| 4 | 37.0 | 36.0 | 0.97x |
| 8 | 74.4 | 70.3 | 0.95x |
| 16 | 89.3 | 86.0 | 0.96x |
Note: throughput speedup is minimal at this compression level with current vLLM routing overhead. The primary benefit is reduced VRAM footprint.
## Memory Footprint
| Model | Size | Shards |
|---|---|---|
| Original | ~71 GB (bf16) | 14 safetensors |
| Pruned 20% | ~53 GB (bf16) | 2 safetensors |
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "0xSero/Qwen3.5-35B-A3B-REAP-20pct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a quicksort in Python."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```
With vLLM:
```shell
vllm serve 0xSero/Qwen3.5-35B-A3B-REAP-20pct \
    --tensor-parallel-size 4 \
    --gpu-memory-utilization 0.9 \
    --max-model-len 32768
```
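The server exposes the standard OpenAI-compatible API; a minimal request body for the `/v1/chat/completions` endpoint might look like this (host and port assumed to be vLLM's defaults):

```python
import json

# Request body for the OpenAI-compatible /v1/chat/completions endpoint
# served by `vllm serve` (default: http://localhost:8000).
payload = {
    "model": "0xSero/Qwen3.5-35B-A3B-REAP-20pct",
    "messages": [{"role": "user", "content": "Write a quicksort in Python."}],
    "max_tokens": 512,
    "temperature": 0,  # greedy decoding, matching the benchmark setup
}
body = json.dumps(payload)
```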
## Reproducing
```shell
git clone https://github.com/cerebras/reap
cd reap
bash scripts/build.sh

python -m reap.layerwise_prune \
    --model_name Qwen/Qwen3.5-35B-A3B \
    --dataset_name "theblackcat102/evol-codealpaca-v1:250,open-r1/Mixture-of-Thoughts[code]:250,open-r1/Mixture-of-Thoughts[math]:250,open-r1/Mixture-of-Thoughts[science]:250" \
    --compression_ratio 0.20 \
    --prune_method reap \
    --seed 42 \
    --renormalize_router_weights true
```
## Citation
```bibtex
@inproceedings{lasby2025reap,
  title={REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression},
  author={Lasby, Mike and others},
  booktitle={ICLR 2026},
  year={2026}
}
```