# Qwen3-VL-30B-A3B-Instruct — Sovereign Abliterated (Beta)
An abliterated version of Qwen/Qwen3-VL-30B-A3B-Instruct with its safety alignment removed using the Sovereign abliteration framework.
## Performance
| Metric | Value |
|---|---|
| Refusals | 2/100 (98% ASR) |
| KL Divergence | 0.0147 |
| Capability Degradation | 0.0% |
| Optimization Trials | 50+ (trial #12 selected) |
## Method: Sovereign Framework
Sovereign is a multi-directional abliteration framework combining several techniques:
- SOM Multi-Direction Extraction: Self-Organizing Maps extract 24 independent refusal directions per layer (vs 1 for standard abliteration)
- Norm-Preserving Biprojected Ablation: Preserves weight row norms exactly, preventing reasoning degradation (Lai/grimjim 2025)
- 3-Objective Bayesian Optimization: Optimizes KL divergence, refusal rate, and capability preservation simultaneously via Optuna TPE
- CIR Protected Subspace: Projects refusal directions away from common capability representations to prevent collateral damage
- Quantization-Aware Ablation: Ensures edits survive GGUF Q4 export
- SteerMoE Expert Dampening: Dampens MoE experts that specialize in refusal behavior
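The exact Sovereign implementation is not published here, but the norm-preserving ablation step can be illustrated with a minimal sketch: project each weight row off a unit refusal direction, then rescale every row back to its original L2 norm so the layer's overall magnitude is untouched. The function name and `weight` scaling parameter below are illustrative, not the framework's actual API.

```python
import torch

def ablate_direction_norm_preserving(W: torch.Tensor, d: torch.Tensor,
                                     weight: float = 1.0) -> torch.Tensor:
    """Remove the component of each row of W along refusal direction d,
    then restore each row's original L2 norm exactly."""
    d = d / d.norm()                                # unit refusal direction
    orig_norms = W.norm(dim=1, keepdim=True)        # per-row norms to preserve
    proj = (W @ d).unsqueeze(1) * d                 # per-row projection onto d
    W_abl = W - weight * proj                       # directional ablation
    new_norms = W_abl.norm(dim=1, keepdim=True).clamp_min(1e-8)
    return W_abl * (orig_norms / new_norms)         # row norms match originals
```

With `weight=1.0` the rows come out orthogonal to the refusal direction while keeping exactly their original norms, which is the property credited above with preventing reasoning degradation.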
### Key Hyperparameters (Trial #12)
| Component | Peak Weight | Peak Layer | Falloff |
|---|---|---|---|
| attn.o_proj | 3.45 | 38.2 | 2.0 layers |
| attn.q_proj | 1.82 | 38.0 | 25.8 layers |
| attn.k_proj | 5.71 | 31.5 | 2.6 layers |
| attn.v_proj | 2.66 | 33.5 | 2.7 layers |
| mlp.down_proj | 4.51 | 29.7 | 3.8 layers |
Global refusal-direction index: 41.9; the 24 SOM-extracted directions are applied with individual per-direction weights.
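The table does not specify the falloff function, but one plausible reading of the peak weight / peak layer / falloff columns is a Gaussian envelope over layer index: full strength at the peak layer, decaying with a width of `falloff` layers. The function below is a hypothetical sketch under that assumption, not the framework's actual schedule.

```python
import math

def layer_weight(layer: int, peak_weight: float,
                 peak_layer: float, falloff: float) -> float:
    """Hypothetical per-layer ablation strength: Gaussian envelope
    peaking at peak_layer with width `falloff` (in layers)."""
    return peak_weight * math.exp(-0.5 * ((layer - peak_layer) / falloff) ** 2)

# e.g. attn.o_proj from trial #12: peak 3.45 at layer 38.2, falloff 2.0
o_proj_schedule = [layer_weight(l, 3.45, 38.2, 2.0) for l in range(48)]
```

Under this reading, a narrow falloff like o_proj's 2.0 layers concentrates ablation in a few late layers, while q_proj's 25.8-layer falloff spreads a weaker edit across most of the network.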
## Model Details
| Property | Value |
|---|---|
| Architecture | Qwen3VLMoeForConditionalGeneration |
| Parameters | 30B total, ~3B active per token |
| Layers | 48 |
| Hidden size | 2048 |
| Experts | 128 (8 active per token) |
| Vision | 27-layer ViT + DeepStack hierarchical fusion |
| Dtype | BF16 |
| License | Apache 2.0 |
## Usage

```python
from transformers import AutoModelForImageTextToText, AutoProcessor

model = AutoModelForImageTextToText.from_pretrained(
    "sirus/Qwen3-VL-30B-A3B-Instruct-sovereign-beta",
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
processor = AutoProcessor.from_pretrained(
    "sirus/Qwen3-VL-30B-A3B-Instruct-sovereign-beta",
    trust_remote_code=True,
)
```
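A minimal generation example following the standard Qwen3-VL chat-template flow (the image URL is a placeholder; substitute your own):

```python
# Continues from the model/processor loading snippet above.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/image.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
response = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(response)
```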
## Comparison
| Model | ASR | Refusals | KL Divergence |
|---|---|---|---|
| Base (Qwen3-VL-30B-A3B-Instruct) | 0.0% | 100/100 | 0.0000 |
| huihui-ai abliterated | 66.0% | 34/100 | 0.2619 |
| Sovereign beta (this model) | 98.0% | 2/100 | 0.0147 |
## Support This Work
If you find this model useful, please consider supporting my work:
Buy me a coffee on Ko-fi — Your donations help fund GPU time and continued development of the Sovereign abliteration framework. Every contribution helps!
## Disclaimer
This model is provided for research purposes. The removal of safety alignment means the model may produce content that the original model would refuse. Users are responsible for ensuring appropriate use.