GLM-4.7
What is this HuggingFace repository about?
This repository provides GGUF-quantized tensors for the GLM-4.7 model (official repo: https://huggingface.co/zai-org/GLM-4.7). These GGUF shards are designed to be used with Thireus' GGUF Tool Suite (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the GGUF Tool Suite, you can produce your own Dynamic 3.0 Quants recipes and achieve optimum accuracy and SOTA quantization performance.
- Read more: https://github.com/Thireus/GGUF-Tool-Suite
- Example GGUF recipes (the recipe format is sketched just below): https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- Download GGUF models from recipe files: https://gguf.thireus.com/quant_downloader.html
- Create your own recipes: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- Browse available models: https://gguf.thireus.com
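For a sense of what these recipe files contain: a recipe is a plain-text list of tensor-name-regex=quant mappings, one per line (the `.*=bf16` one-liner in the Pro Tips section below is the simplest possible case). A hedged sketch with placeholder regexes and quant types, not taken from a real GLM-4.7 recipe:

```bash
# Illustrative recipe sketch only - the regexes and quant types are placeholders;
# real recipes are generated by quant_assign.py (see recipe_examples for actual files).
cat > example.recipe <<'EOF'
blk\.[0-9]+\.ffn_.*_exps\.weight=iq4_ks
.*=q8_0
EOF
# A recipe like this is then fed to the downloader, e.g.:
# ./quant_downloader.sh example.recipe
```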
tl;dr: quick-start example below.

```bash
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
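# Note (assumption): to offload to CUDA GPUs as in the llama-server example below, you
# will likely also need -DGGML_CUDA=ON (with the CUDA toolkit installed); check the
# ik_llama.cpp README for the exact backend flags for your setup.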
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file - you can also try the web version: https://gguf.thireus.com/quant_downloader.html
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/GLM-4.7/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/ik_harmonized_recipes/GLM-4.7.ROOT-4.1636bpw-3.2647ppl.173GB-GGUF_12GB-GPU_160GB-CPU.90e3c2f_1ac651c.recipe
# Other recipe examples can be found at https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
# Launch ik_llama's llama-server:
ulimit -n 9999 # Lifts "too many open files" limitation on Linux
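# The -ot/--override-tensor flags below pin tensors whose names match each regex to a
# specific backend (CUDA0, CUDA1, CUDA2 or CPU); adjust the block ranges to your own GPUs.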
~/ik_llama.cpp/build/bin/llama-server \
-m GLM-4.7-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01762.gguf \
-fa auto -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.([0-9]|[1-2][0-9]|3[0-6])\.ffn_.*=CUDA0" \
-ot "blk\.(37|38|39|[4-6][0-9]|7[0-2])\.ffn_.*=CUDA1" \
-ot "blk\.(7[3-9])\.ffn_.*=CUDA2" \
-ot "blk\.(8[0-9]|90|91|92)\.ffn_.*=CPU" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0
```
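Once llama-server is running, you can sanity-check it over its OpenAI-compatible HTTP API. A minimal sketch, assuming the default listen address of 127.0.0.1:8080 (pass --host/--port to change it):

```bash
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Give me a one-sentence summary of GGUF."}],
        "max_tokens": 64
      }'
```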
Why does this Tool Suite exist?
- Compatibility & Speed - unsloth's dynamic quants may not always work optimally with ik_llama.cpp.
- Custom Rig Fit - No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
- Automated PPL-Optimal Quantization - To my knowledge, there was no open-source, flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target, so I created one with excellent results!
How does it compare to other GGUFs?
Here's how GLM-4.7 quantized with Thireus' GGUF Tool Suite stacks up against other quantizers (lower perplexity = better at equal or lower bpw):
Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal PPL/bpw curve for you - just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix.
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
All PPL benchmarks are computed with the parameters `-ctk f16 -c 512 -b 4096 -ub 4096`. Changing any of these parameters will alter the PPL. In particular, reducing `-b 4096 -ub 4096` increases the PPL, while increasing them decreases the PPL.
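For reference, a run along these lines should reproduce that setup. This is only a sketch: it assumes ik_llama.cpp's llama-perplexity binary and a wikitext-style test file at wiki.test.raw - substitute whichever corpus you benchmark against:

```bash
~/ik_llama.cpp/build/bin/llama-perplexity \
  -m GLM-4.7-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01762.gguf \
  -f wiki.test.raw \
  -ctk f16 -c 512 -b 4096 -ub 4096
```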
How do I get started?
Check out the GGUF Tool Suite README - focus on these sections:
- Requirements - Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile it.
  - Windows binaries (no patching needed): https://github.com/Thireus/ik_llama.cpp/releases
- Download Model Shards - Use `quant_downloader.sh` or quant_downloader.html to fetch GGUF shards from any recipe.
  - Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- Run a Downloaded Model - Sample usage with `llama-cli` (see the sketch after this list).
- Generate a Custom Recipe - Produce recipes tailored to your target VRAM/RAM usage for optimum perplexity.
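As a quick smoke test of a downloaded recipe, something like the following should work with ik_llama.cpp's llama-cli. This is a sketch reusing the first-shard filename and context size from the llama-server example above; add your own -ot/-ngl offload flags as needed:

```bash
~/ik_llama.cpp/build/bin/llama-cli \
  -m GLM-4.7-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01762.gguf \
  -ngl 99 -c 4096 \
  -p "Write a haiku about quantization."
```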
Supported Models
Supported models are listed under `models/` in the Tool Suite GitHub repo. The presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
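From a local checkout of the Tool Suite, a quick way to list them (a sketch relying only on the directory layout described above):

```bash
# Model folders that ship a ppl_results.csv are officially supported by quant_assign.py.
find models -maxdepth 2 -name ppl_results.csv | sort
```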
Will I release baked dynamic quant GGUFs?
No, because I believe in tailored quantization for each user's hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, request someone to publish them, or rely on generic GGUF dynamic quants such as unsloth's.
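If you do want a single-file GGUF, merging a downloaded set of shards locally is straightforward. A sketch, assuming the llama-gguf-split binary from your ik_llama.cpp (or llama.cpp) build and this repository's shard naming:

```bash
# Point --merge at the first shard; the remaining shards are picked up automatically.
~/ik_llama.cpp/build/bin/llama-gguf-split --merge \
  GLM-4.7-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01762.gguf \
  GLM-4.7-merged.gguf
```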
Instead, I prefer to share example recipes so users can see exactly how they were produced (the command is included inside each recipe file) and tweak them for their own rigs. The `quant_downloader.sh` script, or quant_downloader.html (its web port), handles automatic fetching and verification of each shard. Note that recipes provided by Ubergarm on his model cards are also compatible with `quant_downloader.sh` and quant_downloader.html, provided a "SPECIAL_SPLIT" version of these models exists (see https://gguf.thireus.com/).
Users who don't trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` (see example). Run `llama-quantize --help` to list the quants compatible with `quant_assign.py`. This approach is especially useful if you prefer llama.cpp over ik_llama.cpp.
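A rough sketch of what that can look like (the regex/quant pairs below are placeholders rather than a real recipe, and the exact --custom-q syntax should be checked against your `llama-quantize --help` output):

```bash
# Placeholder regex=quant pairs; in practice paste the lines from your generated recipe,
# joined with commas. The trailing Q8_0 is the fallback type for unmatched tensors.
~/ik_llama.cpp/build/bin/llama-quantize \
  --custom-q "blk\.[0-9]+\.ffn_.*_exps\.weight=iq4_ks,.*=q8_0" \
  GLM-4.7-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01762.gguf \
  GLM-4.7-custom.gguf Q8_0
```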
What's in this repository?
- 00001 GGUF header shard - Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- Tensor shards - Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hashes, shard IDs, etc.
- GPG-signed files - `tensors.map` and the header shard are signed with the key in `trusted-keys.asc` for tamper detection (a manual verification sketch follows this list).
- Security note - Papers on various ways to attack GGUFs and LLMs are available online (e.g. https://arxiv.org/abs/2505.23786), and there are also more classic security exploits such as CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors, or self-quantize, to avoid potential exploits.
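quant_downloader.sh already verifies shard hashes as it downloads, but you can also check things manually. A sketch, assuming the signature ships as a detached file next to tensors.map (check this repo's file list for the exact signature filename):

```bash
# Import the publisher's key and verify the signed map (the .sig filename is an assumption).
gpg --import trusted-keys.asc
gpg --verify tensors.map.sig tensors.map
# Spot-check one shard against the SHA-256 recorded in tensors.map.
sha256sum GLM-4.7-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01762.gguf
grep -i 00001 tensors.map
```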
Pro Tips
You can easily download the BF16 model version to quantize your own shards:
```bash
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization!