# Qwen3.5-9B-abliterated - GGUF

This repository contains a full spectrum of GGUF quantizations for lukey03's Qwen3.5-9B-abliterated.

These files are optimized for local inference using llama.cpp, LM Studio, Jan, Ollama, and other compatible software.

## 🧠 About the Base Model

The base model is a fully uncensored version of Qwen3.5-9B. It reached a 0% refusal rate on a suite of controversial/restricted test prompts through a two-stage process:

  1. Orthogonal Projection (Abliteration): Surgically removing the "refusal direction" from the residual stream across all 32 layers.
  2. LoRA Fine-tuning: Targeted training to eliminate the remaining stubborn refusal categories.
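To make step 1 concrete: removing a "direction" from the residual stream is just an orthogonal projection. The sketch below (NumPy, with a random stand-in for the learned refusal direction) shows only the core linear algebra, not the actual abliteration code used for this model:

```python
import numpy as np

def ablate(hidden, refusal_dir):
    """Project the refusal direction out of residual-stream activations."""
    d = refusal_dir / np.linalg.norm(refusal_dir)   # unit-normalize
    # subtract each row's component along d: h - (h . d) d
    return hidden - np.outer(hidden @ d, d)

rng = np.random.default_rng(0)
h = rng.standard_normal((4, 16))   # toy batch of activations
d = rng.standard_normal(16)        # stand-in for a learned refusal direction
h_abl = ablate(h, d)

# after ablation, the component along d is numerically zero
print(float(np.abs(h_abl @ (d / np.linalg.norm(d))).max()))
```

In the real procedure this projection is applied at every layer's residual stream (all 32 layers here), with `refusal_dir` estimated from contrasting harmful/harmless activations.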

Key Features:

- **Architecture:** 9 billion parameters, hybrid Gated DeltaNet + standard attention.
- **Context Window:** Natively supports up to 262k tokens.
- **Capabilities:** Strong reasoning, coding, and creative writing, with built-in multimodal (vision) support.

## 💾 Available Quantizations

| File Name | Quant Type | Size | Description / Recommendation |
| --- | --- | --- | --- |
| Qwen3.5-9B-abliterated-Q8_0.gguf | Q8_0 | ~9.5 GB | Highest Quality: Near-perfect F16 equivalent. Best if you have 12 GB+ VRAM. |
| Qwen3.5-9B-abliterated-Q6_K.gguf | Q6_K | ~7.2 GB | Gold Standard: Extremely low quality loss. The recommended sweet spot for 9B models. |
| Qwen3.5-9B-abliterated-Q5_K_M.gguf | Q5_K_M | ~6.4 GB | Great balance of speed and intelligence. Fits comfortably on 8 GB VRAM cards. |
| Qwen3.5-9B-abliterated-Q5_K_S.gguf | Q5_K_S | ~6.2 GB | Slightly faster than K_M, with a marginal drop in nuance. |
| Qwen3.5-9B-abliterated-Q4_K_M.gguf | Q4_K_M | ~5.6 GB | Excellent for lower-end hardware and older laptops. |
| Qwen3.5-9B-abliterated-Q4_K_S.gguf | Q4_K_S | ~5.3 GB | Fastest acceptable 4-bit quant. Good for limited memory. |
| Qwen3.5-9B-abliterated-Q3_K_L.gguf | Q3_K_L | ~4.8 GB | Heavy compression. Expect some logic loss and hallucination. |
| Qwen3.5-9B-abliterated-Q3_K_M.gguf | Q3_K_M | ~4.4 GB | Extreme compression. Only use if absolutely necessary. |
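As a rule of thumb for choosing from the table, pick the largest file that fits your VRAM with some headroom for KV cache and runtime overhead. The hypothetical helper below hard-codes the sizes from the table; the 1.5 GB headroom figure is an assumption, not a measured value:

```python
# Sizes (GB) copied from the table above, largest first.
QUANTS = [
    ("Q8_0", 9.5), ("Q6_K", 7.2), ("Q5_K_M", 6.4), ("Q5_K_S", 6.2),
    ("Q4_K_M", 5.6), ("Q4_K_S", 5.3), ("Q3_K_L", 4.8), ("Q3_K_M", 4.4),
]

def pick_quant(vram_gb, headroom_gb=1.5):
    """Return the largest quant whose file plus headroom fits in VRAM."""
    for name, size_gb in QUANTS:
        if size_gb + headroom_gb <= vram_gb:
            return name
    return None  # nothing fits fully; offload some layers to CPU instead

print(pick_quant(12.0))  # Q8_0
print(pick_quant(8.0))   # Q5_K_M
```

Long contexts grow the KV cache well beyond 1.5 GB, so drop a quant tier (or offload layers) if you plan to use a large share of the 262k-token window.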

## 🚀 How to Use

### 1. LM Studio

  1. Open LM Studio and go to the Search tab.
  2. Search for Abhiray/Qwen3.5-9B-abliterated-GGUF.
  3. Download your preferred quantization (e.g., Q6_K or Q5_K_M).
  4. Load it into the chat interface and start typing!

### 2. llama.cpp (Terminal)

```bash
./llama-cli -m Qwen3.5-9B-abliterated-Q6_K.gguf \
  -p "<|im_start|>system\nYou are an uncensored AI assistant.<|im_end|>\n<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n" \
  -n 512
```
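The prompt string passed to `-p` follows the ChatML template that Qwen models use. If you are building prompts in a script, a tiny formatter (a sketch, not part of llama.cpp) keeps the special tokens straight:

```python
def chatml_prompt(messages):
    """Format (role, content) pairs in ChatML and open the assistant turn."""
    out = "".join(
        f"<|im_start|>{role}\n{content}<|im_end|>\n"
        for role, content in messages
    )
    return out + "<|im_start|>assistant\n"

prompt = chatml_prompt([
    ("system", "You are an uncensored AI assistant."),
    ("user", "Hello!"),
])
print(prompt)
```

Alternatively, recent llama.cpp builds can apply the chat template embedded in the GGUF for you when you run `llama-cli` in conversation mode, so hand-formatting is only needed for raw one-shot prompts like the command above.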