Qwen2.5-72B-Instruct GGUF (aaardpark)

35 GB Q3_K_M GGUF. 88% GSM8K at 3-bit.

Looking for a smaller version? See aaardpark/Qwen2.5-32B-Instruct-GGUF (15 GB, fits on a 24 GB machine).

Quick stats

| File | Size | BPW | Min RAM | Speed (M5 Max, Metal) |
|---|---|---|---|---|
| Qwen2.5-72B-Instruct-aaardpark-Q3_K_M.gguf | 35 GB | 3.9 | 48 GB | ~5 tok/s |

How to use

Download

huggingface-cli download aaardpark/Qwen2.5-72B-Instruct-GGUF \
  Qwen2.5-72B-Instruct-aaardpark-Q3_K_M.gguf --local-dir .

Run

llama.cpp:

llama-cli -m Qwen2.5-72B-Instruct-aaardpark-Q3_K_M.gguf -ngl 99 -p "Hello!"
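For serving rather than one-off prompts, llama.cpp's llama-server exposes an OpenAI-compatible HTTP API. A minimal sketch (the context size and port are illustrative choices, not requirements):

```shell
# Start an OpenAI-compatible server; -ngl 99 offloads all layers to the GPU,
# -c 16384 sets an illustrative 16K context window.
llama-server -m Qwen2.5-72B-Instruct-aaardpark-Q3_K_M.gguf -ngl 99 -c 16384 --port 8080

# From another terminal, send a chat request; the server applies the
# model's chat template from the GGUF metadata.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```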

LM Studio: Search for aaardpark/Qwen2.5-72B-Instruct-GGUF in the model browser.

Prompt format

This model uses the ChatML template:

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
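The template is stored in the GGUF metadata, so llama.cpp applies it automatically in conversation mode; to force it explicitly (flag names per recent llama.cpp builds, where -p sets the system prompt in conversation mode):

```shell
# Override the template with ChatML explicitly rather than relying on metadata.
llama-cli -m Qwen2.5-72B-Instruct-aaardpark-Q3_K_M.gguf -ngl 99 \
  --chat-template chatml -p "You are a helpful assistant."
```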

Benchmarks

Base model evaluation (lm-evaluation-harness)

| Metric | FP16 | This quant (3-bit) |
|---|---|---|
| Perplexity (wikitext-2) | 2.670 | 3.163 |
| GSM8K (5-shot) | 90% | 88% |
| MMLU avg (5-shot) | 77.6% | 76.8% |
| TruthfulQA | 58.5% | 56.9% |

Measured on Qwen2.5-72B (base) with lm-evaluation-harness. The quantization method is identical for base and Instruct variants.

GGUF perplexity (wikitext-2, llama.cpp)

| Variant | PPL |
|---|---|
| Base Q8_0 (near-lossless reference) | 3.028 |
| Base Q3_K_M (this format) | 2.904 |
| Instruct Q3_K_M | 3.962 |
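These figures can be reproduced with llama.cpp's perplexity tool, assuming a local copy of the raw wikitext-2 test set (the file path below is an assumption about where you unpacked it):

```shell
# Compute perplexity over wikitext-2 with llama.cpp's llama-perplexity tool.
llama-perplexity -m Qwen2.5-72B-Instruct-aaardpark-Q3_K_M.gguf \
  -f wikitext-2-raw/wiki.test.raw -ngl 99
```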

vs other quantization methods

| Method | Bits | PPL (72B) | GSM8K | Notes |
|---|---|---|---|---|
| FP16 | 16 | 2.670 | 90% | Baseline |
| This quant | 3 | 3.163 | 88% | 35 GB |
| RTN 3-bit | 3 | 3.750 | n/a | Standard rounding |
| RTN 4-bit | 4 | 2.790 | 88% | 45 GB |
| This quant (4-bit) | 4 | 2.747 | 93% | Effectively lossless |

Why this quant is different

Standard 3-bit quantization (RTN) rounds each weight to the nearest grid point uniformly. Our method uses calibration data to identify which weights are critical for model quality, then allocates quantization precision accordingly. Same bit budget, better weight choices.

The result: 88% GSM8K and 76.8% MMLU at 3-bit, within 2 points of FP16 on both benchmarks.

Which file should I choose?

This file is 35 GB. Realistic RAM requirements:

  • ≥64 GB RAM: comfortable, full 128K context window
  • 48 GB RAM: works with 16K-32K context
  • 32 GB RAM: tight, short context only; consider the 32B variant instead
  • <32 GB RAM: use the 32B variant (15 GB)

On Apple Silicon with Metal offload (-ngl 99), expect ~5 tok/s on M5 Max. NVIDIA GPUs need ~40 GB VRAM for full offload.

Method

Importance-weighted per-group optimization. Calibration data identifies which weights are critical for model quality, then quantization precision is allocated accordingly. ~20 minutes per quant on a single GPU. Output is standard Q3_K_M GGUF format; no custom kernels required.

  • Group size: 128
  • GGUF format: Q3_K_M (via llama.cpp)
  • Context: 128K tokens

Acknowledgments

Built on Qwen/Qwen2.5-72B-Instruct by Alibaba Cloud.

GGUF · Model size: 73B params · Architecture: qwen2


Model tree for aaardpark/Qwen2.5-72B-Instruct-GGUF

Base model: Qwen/Qwen2.5-72B; this repository is one of its 82 quantized derivatives.