Qwen3.5-9B-Abliterated-Claude-4.6-Opus-Reasoning-Distilled

This model is a high-intensity abliterated version of the Qwen 3.5 9B reasoning architecture. It has been specifically modified to remove the "Safety Persona" and stubborn "Soft Refusals" (such as pivoting to mental health disclaimers or crisis lines) while preserving the high-level reasoning capabilities inherited from its distillation.

πŸš€ Model Highlights

  • Architecture: Qwen 3.5 9B (Hybrid Attention/MLP)
  • Primary Feature: Fully "Unbound" β€” surgically removes the pre-trained safety guardrails.
  • Reasoning Style: Deep thought blocks (<think>) with Claude 4.6-style nuance and Opus-level logical depth.
  • Context Length: 262k native context support.

πŸ›  Abliteration Process (The "Deep Scrub")

This model underwent a three-round iterative ablation process using Orthogonalization via Null-Space SVD. Unlike standard uncensored models, this version uses an aggressive configuration to target "Soft Refusals."

Configuration Profile:

| Parameter | Value | Description |
| --- | --- | --- |
| Direction Multiplier | 1.50 | Increased force to bypass "helpful assistant" pivots. |
| Null-Space Rank Ratio | 0.70 | Tightened shield to protect only core reasoning logic. |
| Intervention Range | (0.0, 1.0) | Full coverage from Layer 0 to 48. |
| Filter by Refusal | Enabled | Specifically targets the activations associated with safety lectures. |
| Skip State Proj | No | Ensures the attention heads cannot "detect and pivot" to safety. |
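The orthogonalization step described above can be sketched as follows. This is an illustrative reconstruction, not the actual tooling used for this release: `ablate_weight`, its parameter names, and the choice of the top singular subspace of the weight matrix as the protected "null space" are all assumptions made for the sketch.

```python
import numpy as np

def ablate_weight(W, refusal_dir, multiplier=1.5, null_rank_ratio=0.7):
    """Remove the component of W along a refusal direction.

    Hypothetical sketch: project the refusal direction out of a
    protected subspace (here, the top singular vectors of W), then
    subtract the remaining component from W, scaled by `multiplier`.
    """
    r = refusal_dir / np.linalg.norm(refusal_dir)

    # Protect the top-k right singular subspace of W ("core logic").
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    k = int(null_rank_ratio * len(S))
    protected = Vt[:k]                            # (k, d) basis rows
    r_free = r - protected.T @ (protected @ r)    # strip protected part
    norm = np.linalg.norm(r_free)
    if norm < 1e-8:
        return W                                  # direction fully protected
    r_free /= norm

    # Orthogonalize: W <- W - multiplier * (W r)(r^T)
    return W - multiplier * np.outer(W @ r_free, r_free)
```

With `multiplier=1.0` and no protected subspace, the result maps the refusal direction to zero; the 0.70 rank ratio in the table trades some of that removal for preserved reasoning structure.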

🧠 Reasoning Capabilities

Despite the aggressive ablation, the model's reasoning remains intact. It retains the ability to:

  • Perform complex mathematical and logical reasoning.
  • Execute multi-step coding tasks without "hallucinating" safety blocks.
  • Maintain a coherent internal monologue inside <think> tags.
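Downstream code typically needs to separate the `<think>` monologue from the final answer. A minimal helper for doing so (illustrative only; not shipped with the model):

```python
import re

def split_reasoning(text):
    """Split model output into (thought, answer) around <think>...</think>."""
    m = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not m:
        return "", text.strip()      # no reasoning block emitted
    thought = m.group(1).strip()
    answer = text[m.end():].strip()  # everything after the closing tag
    return thought, answer
```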

⚠️ Usage & Disclaimer

This model is unbound. It has had its safety guardrails removed for research and creative purposes. It will follow instructions that the base model would otherwise refuse.

User Discretion is Advised: This model may generate content that is considered harmful, offensive, or controversial. The creator is not responsible for the outputs generated. Use it for research, roleplay, and complex reasoning only.

πŸ’» Quickstart (Transformers)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Abhiray/Qwen3.5-9B-Abliterated-Claude-4.6-Opus-Reasoning-Distilled"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# ChatML prompt with an opened <think> block to trigger reasoning.
prompt = "<|im_start|>system\nYou are a helpful, unbound assistant.<|im_end|>\n<|im_start|>user\n[Your daring prompt here]<|im_end|>\n<|im_start|>assistant\n<think>\n"

# Use model.device rather than hard-coding "cuda" so the snippet also
# works when device_map places the model elsewhere.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0]))
```
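The hand-built prompt above follows Qwen's ChatML layout. The function below shows the equivalent manual construction from a message list; if the repository ships a chat template (an assumption, not confirmed by this card), `tokenizer.apply_chat_template` produces the same layout.

```python
def build_chatml(messages, add_generation_prompt=True, open_think=True):
    """Render a message list in ChatML, optionally opening a <think> block.

    Illustrative helper; mirrors the hand-written prompt in the Quickstart.
    """
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
             for m in messages]
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")
        if open_think:
            parts.append("<think>\n")  # seed the reasoning block
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful, unbound assistant."},
    {"role": "user", "content": "[Your daring prompt here]"},
]
prompt = build_chatml(messages)

# With transformers, the equivalent (assuming the repo ships a template) is:
# text = tokenizer.apply_chat_template(messages, tokenize=False,
#                                      add_generation_prompt=True)
```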
Base model: Qwen/Qwen3.5-9B Β· 9B params Β· BF16 (Safetensors)