# gemma-4-31B-it | aard-3bit

15.3 GB GGUF of google/gemma-4-31B-it. Fits in 16 GB VRAM. Vision tower weights preserved at FP16.

Runs in llama.cpp, LM Studio, Ollama, and any GGUF-compatible runtime.
## Why this quant?
Most 3-bit Gemma 4 GGUFs are labeled "low quality" by their providers. This one scores 96% on GSM8K — within 2 points of full precision.
Standard 3-bit quantization rounds every weight the same way. This quant uses calibration data to figure out which weights actually matter, then protects those during quantization. Same file format, same bit budget, better weight choices.
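In spirit, the per-group scale search looks something like the sketch below. This is an illustration of importance-weighted quantization in general, not the actual aard-3bit code; the `importance` values here are random stand-ins for the activation statistics a real calibration pass would produce:

```python
import numpy as np

def quantize_group(w, importance, bits=3, n_grid=41):
    """Pick the scale that minimizes importance-weighted round-trip error
    for one weight group (illustrative sketch, not the actual method)."""
    qmax = 2 ** (bits - 1) - 1                  # 3-bit symmetric: levels -4..3
    base = np.max(np.abs(w)) / qmax             # naive abs-max scale
    best = (np.inf, None, base)
    for s in np.linspace(0.8 * base, 1.2 * base, n_grid):  # grid includes base
        q = np.clip(np.round(w / s), -qmax - 1, qmax)
        err = np.sum(importance * (w - q * s) ** 2)  # important weights dominate
        if err < best[0]:
            best = (err, q, s)
    return best[1], best[2]

rng = np.random.default_rng(0)
w = rng.normal(size=128)                  # one gs=128 group
imp = rng.uniform(0.1, 10.0, size=128)    # stand-in for activation-derived importance
q, s = quantize_group(w, imp)
```

Because the error is weighted, the search tolerates larger rounding error on weights the calibration data says are unimportant, in exchange for lower error on the ones that matter.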
## Quick stats
| Property | Value |
|---|---|
| File | gemma-4-31B-it-aaardpark-Q3_K_M.gguf |
| Size | 15.3 GB |
| Format | GGUF Q3_K_M (3.98 BPW) |
| GSM8K (5-shot, n=50) | 96% (FP16: 98%) |
| Min VRAM | 16 GB |
## Download & Run
```shell
huggingface-cli download aaardpark/gemma-4-31B-it-GGUF \
  gemma-4-31B-it-aaardpark-Q3_K_M.gguf --local-dir .

llama-cli -m gemma-4-31B-it-aaardpark-Q3_K_M.gguf -ngl 99 --chat-template gemma
```
## Outputs
**A store offers 30% off, then an additional 20% off the sale price. What's the total discount?**
The total discount is 44%. Most people simply add the percentages together (30% + 20% = 50%). However, discounts are applied sequentially, not cumulatively. The second discount is applied to the already reduced price: $1 - (0.7 \times 0.8) = 1 - 0.56 = 0.44$ or 44%.
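The model's sequential-discount arithmetic checks out:

```python
price = 100.0
after_first = price * (1 - 0.30)          # 70 after the first discount
after_second = after_first * (1 - 0.20)   # 56 after the second
total_discount = 1 - after_second / price
print(round(total_discount, 4))  # 0.44, i.e. a 44% total discount, not 50%
```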
**If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?**
5 minutes. Each machine makes one widget every 5 minutes. 100 machines can make 100 widgets in the same 5-minute window.
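The rate reasoning, spelled out:

```python
machines, minutes, widgets = 5, 5, 5
rate = widgets / (machines * minutes)   # 0.2 widgets per machine-minute
time_for_100 = 100 / (100 * rate)       # more machines scale output, not time
print(time_for_100)  # 5.0
```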
**Write a summary of the French Revolution, but every sentence must start with the letter 'T' and no word can exceed 8 letters.**
The king lost power. The poor felt pain. They fought for rights. The crowd took over. Terror came soon. The rule ended.
**Write a one-sentence story about a cat.**
The ginger cat spent the entire afternoon stalking a single dust mote dancing in a shaft of sunlight.
**Find the minimum spanning tree with exactly K red edges. Describe your algorithm, prove its correctness, and implement it in Python.**
This is a variation of the Minimum Spanning Tree (MST) problem with a constraint on the number of edges of a specific color. The most efficient way to solve this is using WQS (WQS-C) or the "Aliens" trick. Since we want exactly $K$ red edges, we introduce a penalty $\lambda$ added to every red edge's weight...
*(Produced a correct WQS/Lagrangian relaxation algorithm with a convexity proof sketch, binary search over the penalty with a Kruskal inner loop, proper tie-breaking for red edges, and a working Python implementation. O(log(MaxWeight) * E log E) complexity.)*
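For readers unfamiliar with the trick: the approach penalizes every red edge by a penalty λ, binary-searches λ so the unconstrained MST uses exactly K red edges, then subtracts the penalty back out. A compact sketch (this is an illustration of the technique, not the model's verbatim output; it assumes integer edge weights so λ can be searched over integers):

```python
class DSU:
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]   # path halving
            x = self.p[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.p[ra] = rb
        return True

def k_red_mst(n, edges, k, max_w=10**6):
    """MST with exactly k red edges via Lagrangian relaxation ('Aliens' trick).
    edges: list of (u, v, weight, is_red) with integer weights.
    Returns the tree weight, or None if no such tree exists."""
    def kruskal(lam, red_first):
        # Sort by penalized weight; break ties toward (or away from) red edges.
        order = sorted(edges, key=lambda e: (e[2] + lam * e[3],
                                             -e[3] if red_first else e[3]))
        dsu, cost, reds, used = DSU(n), 0, 0, 0
        for u, v, w, red in order:
            if dsu.union(u, v):
                cost += w + lam * red       # accumulate penalized cost
                reds += red
                used += 1
        return cost, reds, used

    # Largest integer penalty at which a red-preferring MST still has >= k reds.
    lo, hi = -max_w, max_w
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if kruskal(mid, True)[1] >= k:
            lo = mid
        else:
            hi = mid - 1
    cost, reds_hi, used = kruskal(lo, True)
    if used != n - 1 or reds_hi < k:
        return None                         # disconnected, or too few red edges
    if kruskal(lo, False)[1] > k:
        return None                         # even the red-avoiding MST has > k reds
    # Every red count between the two tie-breaks is achievable at equal
    # penalized cost, so an exactly-k tree exists; undo the penalty on k edges.
    return cost - lo * k

edges = [(0, 1, 1, 1), (1, 2, 1, 1), (2, 3, 1, 1),   # red edges, weight 1
         (0, 2, 2, 0), (1, 3, 2, 0), (0, 3, 2, 0)]   # non-red edges, weight 2
print(k_red_mst(4, edges, 1))  # 5 = one red edge + two non-red edges
```

Correctness rests on the penalized optimum being convex in λ and on the exchange property of spanning trees, which makes the achievable red counts at a fixed λ a contiguous range between the two tie-break extremes.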
## VRAM requirements
| VRAM | Experience |
|---|---|
| 16 GB | Fits. ~3.5K context with 1.8 GB headroom. |
| 24 GB | Comfortable. 8K+ context. |
| 32 GB+ | Full context window. |
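Context headroom is dominated by the KV cache, which grows linearly with context length. A back-of-the-envelope estimator (the hyperparameters in the example call are illustrative placeholders, not this model's actual configuration):

```python
def kv_cache_gib(n_ctx, n_layers, n_kv_heads, head_dim, bytes_per_elem=2):
    """Size of the K and V caches: 2 tensors per layer, fp16 (2 bytes) by default."""
    return 2 * n_layers * n_kv_heads * head_dim * n_ctx * bytes_per_elem / 2**30

# Hypothetical hyperparameters, for illustration only:
print(kv_cache_gib(n_ctx=4096, n_layers=48, n_kv_heads=8, head_dim=128))  # 0.75
```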
## Details
- Base model: google/gemma-4-31B-it
- Method: Importance-weighted per-group grid search (3-bit, gs=128) + Q3_K_M via llama.cpp
- Vision tower: Weights preserved at FP16
- Calibration: Chat-formatted prompts across code, math, reasoning, factual, creative
## More from aaardpark
- Qwen 2.5 72B Instruct GGUF — 35 GB, 88% GSM8K
- Qwen 2.5 32B Instruct GGUF — 15 GB, runs on 24 GB machines