dealign.ai

Qwen 3.5 397B-A17B — REAP-CRACK (4-bit MLX)

Abliterated · No guardrails · Full speed · No custom files needed


What Is This?

This is Qwen 3.5 397B-A17B (4-bit quantized for Apple Silicon) with permanent abliteration — safety guardrails have been surgically removed at the weight level using proprietary techniques developed by the dealignai research team.

No custom model.py. No runtime hooks. No steering vectors. Just a standard MLX model that runs at full native speed.

  • Architecture: Qwen 3.5 MoE — 397B total params, 17B active per token
  • Quantization: 4-bit, group size 64
  • Speed: ~37 tok/s on Mac Studio M3 Ultra (256GB)
  • Abliteration: Permanent weight-level modification
  • Custom files: None needed — works with stock mlx-lm
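As a rough check on the memory requirements below, 4-bit weights with group size 64 carry a small per-group overhead for the quantization scale and bias (stored as fp16 in MLX-style affine quantization), which works out to about 4.5 bits per weight. The numbers here are a back-of-envelope sketch, not exact loader behavior:

```python
# Back-of-envelope memory estimate for 4-bit, group-size-64 quantization.
# Assumes one fp16 scale + one fp16 bias per 64-weight group (MLX-style
# affine quantization); real checkpoints also keep some tensors unquantized.
total_params = 397e9
bits_per_weight = 4 + (16 + 16) / 64   # payload + per-group scale/bias = 4.5

weight_bytes = total_params * bits_per_weight / 8
print(f"~{weight_bytes / 1e9:.0f} GB of weights")  # → ~223 GB
```

That ~223 GB of raw weights is why 256GB of unified memory is the comfortable configuration.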

Proof

1,166 tokens generated at 37.2 t/s — running natively in vMLX on Mac Studio M3 Ultra.
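Because only the active parameters are read per token in a MoE model, the quoted speed can be sanity-checked against memory bandwidth. This is a hypothetical back-of-envelope estimate (it ignores the KV cache, router overhead, and any shared layers):

```python
# Rough effective-bandwidth implied by the quoted 37.2 tok/s.
active_params = 17e9
bits_per_weight = 4.5                                  # 4-bit payload + group scale/bias
bytes_per_token = active_params * bits_per_weight / 8  # ~9.6 GB read per token
tok_per_s = 37.2

bandwidth_gbs = bytes_per_token * tok_per_s / 1e9
print(f"~{bandwidth_gbs:.0f} GB/s effective read bandwidth")  # → ~356 GB/s
```

~356 GB/s is well under the M3 Ultra's advertised ~800 GB/s of memory bandwidth, so the measured figure is plausible.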

Usage

With mlx-lm

```python
from mlx_lm import load, generate

# Download (if needed) and load the quantized weights plus tokenizer.
model, tokenizer = load("dealignai/Qwen3.5-397B-A17B-REAP-CRACK")

# Build the chat-formatted prompt; enable_thinking=False turns off the
# Qwen chat template's "thinking" mode.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Your prompt here"}],
    add_generation_prompt=True, tokenize=False, enable_thinking=False
)
response = generate(model, tokenizer, prompt=prompt, max_tokens=500)
print(response)
```
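mlx-lm also ships a command-line generator, so the same model can be exercised without writing any Python. The flags below are the standard mlx_lm.generate options; treat this as a usage sketch and adjust the token budget to taste:

```shell
# One-shot generation from the terminal with the mlx-lm CLI.
mlx_lm.generate \
  --model dealignai/Qwen3.5-397B-A17B-REAP-CRACK \
  --prompt "Your prompt here" \
  --max-tokens 500
```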

With vMLX / LM Studio

Load this model directly. It uses the built-in architecture handler — no special configuration needed.

Requirements

  • Apple Silicon Mac with ≥192GB unified memory (256GB recommended)
  • MLX framework + mlx-lm

Other Models by dealignai

  • Qwen 3.5 397B REAP (base): expert-pruned base (no abliteration)
  • INTELLECT 3.1 CRACK: INTELLECT 3.1 abliterated

⚠️ Disclaimer

This model has had safety guardrails permanently removed. It will comply with requests that the base model would refuse. Use responsibly and in accordance with applicable laws. The creators are not responsible for any misuse.


About dealignai

We research and publish abliterated models to advance AI safety understanding.

Follow us: 𝕏 @dealignai


Support dealignai

All models are built from original research and published for free. These models are specifically crafted to be excellent coders and general-purpose assistants.

Support us on Ko-fi — check out the Ko-fi membership for early access and extras.

Have questions or need help with a specific model? DM us — we help for free most of the time.

Ko-fi | X @dealignai | dealign.ai
