Notes

  • 04-12-2026: The Q4_K_M I uploaded seems to have some issues: the PPL/KLD run was producing NaN, so I'll remove the model for now and try to get a working quant up tomorrow.

Description

This repo contains specialized MoE quants for MiniMax-M2.7. The idea is that, because the FFN tensors are huge compared to the rest of the tensors in the model, it should be possible to achieve better quality at a smaller overall model size than a comparable naive quantization. To that end, the default quantization type is kept at high quality, while the FFN up and FFN gate tensors are quantized down more aggressively, along with the FFN down tensors.
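For illustration, here is a minimal sketch of how such a mixture could be produced with llama.cpp's llama-quantize, assuming a recent build that supports per-tensor overrides via `--tensor-type`. The tensor-name patterns and file paths are assumptions based on common GGUF MoE naming (e.g. `blk.N.ffn_up_exps.weight`), not the exact recipe used for these quants; the types shown mirror the IQ4_XS row of the table below.

```python
import subprocess

# Sketch: produce a mixture like the IQ4_XS row below.
# Assumes a llama.cpp build whose llama-quantize supports
# per-tensor overrides via --tensor-type; the patterns target
# the expert FFN tensors by their usual GGUF names.
cmd = [
    "./llama-quantize",
    "--tensor-type", "ffn_up_exps=iq3_s",     # expert FFN up: IQ3_S
    "--tensor-type", "ffn_gate_exps=iq3_s",   # expert FFN gate: IQ3_S
    "--tensor-type", "ffn_down_exps=iq4_xs",  # expert FFN down: IQ4_XS
    "MiniMax-M2.7-BF16.gguf",                 # hypothetical input path
    "MiniMax-M2.7-IQ4_XS.gguf",               # hypothetical output path
    "Q8_0",                                   # high-quality default for all other tensors
]
subprocess.run(cmd, check=True)
```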

| Quant | Size | Mixture (default / ffn_up / ffn_gate / ffn_down) | PPL | Mean PPL(Q)/PPL(base) − 1 | KLD |
|---|---|---|---|---|---|
| Q8_0 | 226.43 GiB (8.51 BPW) | Q8_0 | 7.880138 ± 0.060034 | +0.2412% | 0.029715 ± 0.000649 |
| Q5_K_M | 157.23 GiB (5.91 BPW) | Q8_0 / Q5_K / Q5_K / Q6_K | 7.871878 ± 0.059897 | +0.1361% | 0.038926 ± 0.000692 |
| IQ4_XS | 101.10 GiB (3.80 BPW) | Q8_0 / IQ3_S / IQ3_S / IQ4_XS | 8.290674 ± 0.063543 | +5.4635% | 0.128807 ± 0.001070 |
| IQ3_S | 77.86 GiB (2.92 BPW) | Q6_K / IQ2_S / IQ2_S / IQ3_S | 8.815764 ± 0.067859 | +12.1430% | 0.282740 ± 0.001687 |
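As a sanity check on the ratio column (reading it as PPL(Q)/PPL(base) − 1, which is why the header is written that way above), all four rows back out the same base-model perplexity, even though the card does not state it directly:

$$
\mathrm{PPL}_{\mathrm{base}} \approx \frac{\mathrm{PPL}(Q)}{1 + \Delta} = \frac{7.880138}{1.002412} \approx \frac{7.871878}{1.001361} \approx \frac{8.290674}{1.054635} \approx 7.861
$$

So the implied base (unquantized) perplexity on the evaluation text is roughly 7.861; this value is back-computed from the table, not measured here.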

(Figures: KLD comparison graph, PPL comparison graph)
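For reference, PPL and KLD numbers of this kind are typically produced with llama.cpp's llama-perplexity tool: a first pass on the base model saves its logits, and a second pass on each quant compares against them. A minimal sketch, assuming the `--kl-divergence` flags present in recent llama.cpp builds; the file names and evaluation text are assumptions.

```python
import subprocess

# Pass 1: run the base model and save its logits for later comparison.
# (Model, text, and logits file names are hypothetical.)
subprocess.run([
    "./llama-perplexity",
    "-m", "MiniMax-M2.7-BF16.gguf",
    "-f", "wiki.test.raw",
    "--kl-divergence-base", "base_logits.bin",
], check=True)

# Pass 2: run a quant against the saved logits; this reports both
# perplexity and the KL divergence to the base model's distribution.
subprocess.run([
    "./llama-perplexity",
    "-m", "MiniMax-M2.7-IQ4_XS.gguf",
    "-f", "wiki.test.raw",
    "--kl-divergence-base", "base_logits.bin",
    "--kl-divergence",
], check=True)
```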
