Qwen3.5-122B-A10B Heretic V2

Quality: quantized (mixed per-tensor quantization, group size 32, 6.19 bpw)

Quantization: 5-bit for routed experts, 8-bit for shared experts and attention layers, fp16 for embeddings and the output head. Average bits per weight: 6.19.
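For illustration, here is a minimal sketch of how such a mixed per-tensor scheme can be expressed as a quantization predicate, in the style of mlx-lm's `convert` API (the actual conversion used mlx-vlm, as noted under Source, and the layer-name patterns here are assumptions, not the exact recipe used):

```python
# Hedged sketch: mixed per-tensor quantization via a quant predicate.
# The source repo path is real; the predicate patterns are illustrative.
from mlx_lm import convert

def mixed_quant(path: str, module, config):
    # Keep embeddings and the output head unquantized (fp16).
    if "embed" in path or "lm_head" in path:
        return False
    # 8-bit for shared experts and attention layers.
    if "shared_expert" in path or "self_attn" in path:
        return {"group_size": 32, "bits": 8}
    # 5-bit for the routed experts, which hold most of the parameters.
    return {"group_size": 32, "bits": 5}

convert(
    "coder3101/Qwen3.5-122B-A10B-heretic-v2",
    mlx_path="Qwen3.5-122B-A10B-Heretic-v2-MLX-mixed-6bit",
    quantize=True,
    quant_predicate=mixed_quant,
)
```

Because the routed experts dominate the parameter count, the average lands between the 5-bit expert weights and the smaller 8-bit/fp16 share, hence roughly 6.19 bpw.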

This is an uncensored version of Qwen/Qwen3.5-122B-A10B, created with Heretic v1.2.0 using multi-directional refusal suppression.
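For context, abliteration removes estimated "refusal directions" from weights that write into the residual stream; Heretic automates this with multiple directions and per-layer tuning. A minimal single-direction sketch of the general technique (not Heretic's actual implementation):

```python
import numpy as np

def ablate(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Project the refusal direction r out of weight matrix W.

    W: (d_out, d_in) weight that writes to the residual stream.
    r: (d_out,) refusal direction, e.g. the difference of mean
       activations on refused vs. complied prompts.
    """
    r = r / np.linalg.norm(r)        # normalize to a unit vector
    return W - np.outer(r, r @ W)    # W' = W - r (r^T W)
```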

Abliteration metrics

| Metric | This model | Original model (Qwen/Qwen3.5-122B-A10B) |
|---|---|---|
| KL divergence | 0.0646 | 0 (by definition) |
| Refusals | 16/100 | 84/100 |
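For reference, the KL divergence reported above measures how far this model's output token distribution P drifts from the original model's distribution Q (presumably averaged over a fixed prompt set):

$$
D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{x} P(x)\,\log\frac{P(x)}{Q(x)}
$$

This quantity is 0 exactly when P = Q, which is why the original model scores 0 by definition; 0.0646 means the abliterated model's outputs stay close to the original's while refusals drop from 84/100 to 16/100.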

Sampling Parameters:

  • I suggest using the following sets of sampling parameters depending on the mode and task type:
    • Thinking mode for general tasks:
      temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
    • Thinking mode for precise coding tasks (e.g., WebDev):
      temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0
    • Instruct (or non-thinking) mode for general tasks:
      temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
    • Instruct (or non-thinking) mode for reasoning tasks:
      temperature=1.0, top_p=1.0, top_k=40, min_p=0.0, presence_penalty=2.0, repetition_penalty=1.0
  • In frameworks that support it, you can set the presence_penalty parameter between 0 and 2 to curb endless repetition; higher values may occasionally cause language mixing and slightly degrade output quality. A usage sketch follows this list.
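A minimal generation sketch with mlx-lm using the "thinking mode for general tasks" preset above. The sampler interface (`make_sampler`) may differ between mlx-lm versions, and presence_penalty is omitted because, as noted, support varies by framework:

```python
# Hedged sketch: generation with mlx-lm; exact kwargs may vary by version.
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("TheCluster/Qwen3.5-122B-A10B-Heretic-v2-MLX-mixed-6bit")

# "Thinking mode for general tasks" preset from the list above.
sampler = make_sampler(temp=1.0, top_p=0.95, top_k=20, min_p=0.0)

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize how MoE routing works."}],
    add_generation_prompt=True,
    tokenize=False,
)
print(generate(model, tokenizer, prompt=prompt, sampler=sampler, max_tokens=512))
```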

Source

This model was converted to MLX format from coder3101/Qwen3.5-122B-A10B-heretic-v2 using mlx-vlm version 0.4.0.
