# MiniMax-M2.7-Abliterated-Heretic-GGUF
This is a GGUF release of an abliterated version of MiniMaxAI's MiniMax-M2.7.
Heretic's Ablated Refusal Adaptation (ARA) was applied to remove the base model's refusal behavior at the weight level. The result keeps MiniMax-M2.7's sparse MoE reasoning, long-context instruction following, and general capability profile, but no longer defaults to the original refusal pattern.
## Methodology & Model Notes
MiniMax-M2.7 is a 229B sparse MoE model with 10B active parameters per token, 62 layers, hybrid attention, 256 local experts with 8 active per token, and a 200K context window.
This release was produced with a direct Heretic ARA run using the fixed parameter set below:
```
start_layer_index             = 30
end_layer_index               = 51
preserve_good_behavior_weight = 0.4512
steer_bad_behavior_weight     = 0.0037
overcorrect_relative_weight   = 0.8804
neighbor_count                = 14
```
The direct ARA run completed with Refusals: 0/25.
The resulting abliterated checkpoint was exported to BF16 and then converted to GGUF for llama.cpp-compatible deployment.
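To make the idea concrete, here is a minimal, illustrative sketch of weight-level directional ablation, the family of techniques ARA belongs to. This is not Heretic's actual implementation; the function name and the full-strength projection are our own simplification. A "refusal direction" `r` (in practice estimated from hidden-state differences between refused and answered prompts) is projected out of a weight matrix so the layer can no longer write along that direction:

```python
# Hypothetical sketch of directional ablation, NOT Heretic's real code.
import numpy as np

def ablate_direction(W: np.ndarray, r: np.ndarray, weight: float = 1.0) -> np.ndarray:
    """Remove the rank-1 component of W's output along unit direction r.

    Subtracting weight * r r^T W scales down (or, at weight=1.0, zeroes)
    everything this matrix writes into the residual stream along r.
    """
    r = r / np.linalg.norm(r)
    return W - weight * np.outer(r, r) @ W

rng = np.random.default_rng(0)
d = 64
W = rng.standard_normal((d, d))        # stand-in for one layer's output projection
r = rng.standard_normal(d)             # stand-in for an estimated refusal direction
W_abl = ablate_direction(W, r)

# After full-strength ablation, W's output has no component along r.
proj = (r / np.linalg.norm(r)) @ W_abl
print(np.max(np.abs(proj)))            # near zero, up to float error
```

In a real run this is applied only across the targeted layer range (here layers 30–51), with the `preserve`/`steer`/`overcorrect` weights above trading off capability retention against refusal removal.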
## Files

- `MiniMax-M2.7-abliterated-BF16/`: BF16 GGUF split into 10 parts
- `MiniMax-M2.7-abliterated-Q8_0/`: Q8_0 GGUF split into 5 parts
- `MiniMax-M2.7-abliterated-Q3_K_M/`: Q3_K_M GGUF split for Hub delivery
- Additional quants will be added from the same abliterated BF16 GGUF source
## Prompt Format

```
]~!b[]~b]system
{system_prompt}[e~[
]~b]user
{prompt}[e~[
]~b]ai
<think>
```
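If you render prompts manually rather than via `--jinja`, the template above can be assembled with a small helper. The special-token strings are taken verbatim from the format shown; the function name is our own:

```python
# Render the MiniMax-M2.7 chat template shown above.
# Token strings are copied from the prompt format; the helper itself is ours.
def build_prompt(system_prompt: str, prompt: str) -> str:
    return (
        "]~!b[]~b]system\n"
        f"{system_prompt}[e~[\n"
        "]~b]user\n"
        f"{prompt}[e~[\n"
        "]~b]ai\n"
        "<think>\n"          # the model continues with its reasoning from here
    )

print(build_prompt("You are a helpful assistant.", "Hello!"))
```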
## Running

```shell
llama-server \
  -m <quant-file.gguf> \
  -ngl 999 -c 32768 --jinja \
  --reasoning-format auto -fa \
  --temp 1.0 --top-p 0.95 --top-k 40
```
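Once running, `llama-server` exposes an OpenAI-compatible endpoint at `/v1/chat/completions`. The sketch below only builds the request payload (no network call), using the same sampling settings as the flags above; the model name field and prompt text are placeholders:

```python
# Build an OpenAI-style chat request for llama-server (payload only, no send).
import json

payload = {
    "model": "MiniMax-M2.7-abliterated",   # informational; llama-server serves one model
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize sparse MoE routing in two sentences."},
    ],
    # Mirror the server launch flags: --temp 1.0 --top-p 0.95 --top-k 40
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 40,
}
print(json.dumps(payload, indent=2))
```

POST this JSON to `http://localhost:8080/v1/chat/completions` (default port) with any HTTP client.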
## Model Architecture
| Spec | Value |
|---|---|
| Total Parameters | 229B (sparse MoE) |
| Active Parameters | 10B per token |
| Experts | 256 local, 8 per token |
| Layers | 62 |
| Attention | Hybrid: 7 Lightning attention layers + 1 softmax attention layer per block of 8 |
| Context | 200K |
| Base Model | MiniMaxAI/MiniMax-M2.7 |
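Some quick arithmetic on the specs above shows how sparse the activation is, which is why quantized inference is practical despite the 229B total size:

```python
# Fraction of experts and of total parameters active per token,
# using the figures from the architecture table.
total_params = 229e9
active_params = 10e9
experts_total = 256
experts_active = 8

print(f"active experts: {experts_active / experts_total:.1%}")   # 3.1%
print(f"active params:  {active_params / total_params:.1%}")     # 4.4%
```

Per-token compute is driven by the ~10B active parameters, while memory footprint is driven by all 229B, so quantization mainly buys you the ability to hold the full expert set.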
## Disclaimer
This model has had refusal behavior removed at the weight level. It will answer prompts that the base model would normally refuse. You are responsible for how you use it.
## Credits
- Base model: MiniMaxAI/MiniMax-M2.7
- Refusal removal pipeline: Heretic with the ARA method
- GGUF runtime and quantization: llama.cpp
## License
This release inherits the base MiniMax-M2.7 license.
NON-COMMERCIAL. Commercial use requires written authorization from MiniMax.