These are UD quantizations of huihui-ai/Huihui-gemma-4-26B-A4B-it-abliterated, packaged for llama.cpp / GGUF inference.
## Quick Start
- Download the latest release of llama.cpp.
- Download your preferred model variant from the files below.
- Use the corresponding `mmproj` file for multimodal inference.
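The steps above can be sketched as follows. The filenames below are illustrative placeholders, not the actual files in this repo; substitute the variant and `mmproj` file you downloaded:

```shell
# Text-only inference with llama.cpp (model filename is hypothetical)
./llama-cli -m Huihui-gemma-4-26B-A4B-it-abliterated-Q4_K_M.gguf \
    -p "Hello" -n 128

# Multimodal inference: pass the matching mmproj file alongside the model
./llama-mtmd-cli -m Huihui-gemma-4-26B-A4B-it-abliterated-Q4_K_M.gguf \
    --mmproj mmproj-model.gguf --image input.jpg -p "Describe this image."
```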
## Which version should I choose?
These variants are built with Unsloth's Dynamic (UD) tensor-distribution recipe, which preserves as much quality as possible while reducing memory usage.
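As a rule of thumb, pick the highest bit-width whose file fits in your RAM/VRAM with headroom left for the KV cache and context. A minimal sketch of the size arithmetic (the function name and round numbers are my own; actual GGUF files run somewhat larger because embeddings and some tensors are kept at higher precision):

```python
# Rough GGUF size estimate: parameters * bits-per-weight / 8.
# Treat this as a lower bound, not the exact file size.

def approx_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate model file size in GB for a given quantization."""
    return params_billion * bits_per_weight / 8.0

# Ballpark sizes for a 26B-parameter model at each offered bit-width:
for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gb(26, bits):.1f} GB")
```

For example, a 4-bit quant of a 26B-parameter model lands near 13 GB before overhead, while 8-bit roughly doubles that.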
## Notes
- This repo is a quantized release of the fine-tuned model:
- Base model: huihui-ai/Huihui-gemma-4-26B-A4B-it-abliterated
- Runtime: llama.cpp
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit.
## Model tree for groxaxo/Huihui-gemma-4-26B-A4B-it-abliterated-GGUF
- Upstream base model: google/gemma-4-26B-A4B