---
license: apache-2.0
library_name: mlx
pipeline_tag: text-generation
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen3.5-397B-A17B/blob/main/LICENSE
base_model: Qwen/Qwen3.5-397B-A17B
language:
- en
tags:
- mlx
- abliterated
- uncensored
- qwen3
- moe
thumbnail: dealign_mascot.png
---

# Qwen 3.5 397B-A17B — REAP-CRACK (4-bit MLX)
**Abliterated** · No guardrails · Full speed · No custom files needed
---
## What Is This?
This is [Qwen 3.5 397B-A17B](https://huggingface.co/Qwen/Qwen3.5-397B-A17B) (4-bit quantized for Apple Silicon) with **permanent abliteration** — safety guardrails have been surgically removed at the weight level using proprietary techniques developed by the [dealignai](https://huggingface.co/dealignai) research team.
No custom `model.py`. No runtime hooks. No steering vectors. Just a standard MLX model that runs at full native speed.
| Spec | Detail |
|---|---|
| **Architecture** | Qwen 3.5 MoE — 397B total params, 17B active per token |
| **Quantization** | 4-bit, group size 64 |
| **Speed** | ~37 tok/s on Mac Studio M3 Ultra (256 GB) |
| **Abliteration** | Permanent weight-level modification |
| **Custom files** | None needed — works with stock `mlx-lm` |
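To make the "4-bit, group size 64" row concrete, here is a minimal sketch of group-wise affine quantization: every group of 64 weights shares one scale and one offset, and each weight is stored as a 4-bit integer (0–15). This is an illustration of the general technique only, not MLX's exact storage format.

```python
import random

def quantize_group(w, bits=4, group_size=64):
    """Group-wise affine quantization: each group of `group_size`
    weights shares one float scale and one float offset."""
    levels = 2 ** bits - 1  # 15 steps for 4-bit
    q, scales, offsets = [], [], []
    for i in range(0, len(w), group_size):
        group = w[i:i + group_size]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / levels or 1.0  # guard against flat groups
        q.append([round((x - lo) / scale) for x in group])  # ints in 0..15
        scales.append(scale)
        offsets.append(lo)
    return q, scales, offsets

def dequantize_group(q, scales, offsets):
    """Reconstruct approximate float weights from 4-bit codes."""
    out = []
    for group, scale, lo in zip(q, scales, offsets):
        out.extend(v * scale + lo for v in group)
    return out

random.seed(0)
w = [random.gauss(0, 1) for _ in range(1024)]
q, s, o = quantize_group(w)
w_hat = dequantize_group(q, s, o)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(f"max reconstruction error: {max_err:.4f}")
```

A larger group size amortizes the per-group scale/offset over more weights (less memory overhead) at the cost of coarser quantization within each group; 64 is a common middle ground.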
## Proof

*1,166 tokens generated at 37.2 t/s — running natively in vMLX on Mac Studio M3 Ultra.*
## Usage
### With mlx-lm
```python
from mlx_lm import load, generate

model, tokenizer = load("dealignai/Qwen3.5-397B-A17B-REAP-CRACK")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Your prompt here"}],
    add_generation_prompt=True, tokenize=False, enable_thinking=False,
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=500)
print(response)
```
### With vMLX / LM Studio
Load this model directly. It uses the built-in architecture handler — no special configuration needed.
## Requirements
- Apple Silicon Mac with ≥192 GB unified memory (256 GB recommended)
- MLX framework + `mlx-lm`
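The memory figure follows from the parameter count and bit width. A rough back-of-envelope estimate, assuming ~0.5 extra bits per weight for group-size-64 scales and offsets (an assumption about the storage overhead, not a measured number):

```python
params = 397e9           # total parameters
bits_per_weight = 4.5    # 4-bit weights + ~0.5 bits/weight group metadata (assumed)

weight_bytes = params * bits_per_weight / 8
print(f"~{weight_bytes / 1024**3:.0f} GiB of weights")  # → ~208 GiB
```

The KV cache, activations, and OS overhead come on top of the weights, which is why 192 GB is the floor and 256 GB is the comfortable configuration.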
## Other Models by dealignai
| Model | Description |
|-------|-------------|
| [Qwen 3.5 397B REAP (base)](https://huggingface.co/dealignai/Qwen3.5-397B-A17B-4bit-MLX-REAP) | Expert-pruned base (no abliteration) |
| [INTELLECT 3.1 CRACK](https://huggingface.co/dealignai/INTELLECT-3.1-CRACK-Abliterated-5.5bit-MLX) | INTELLECT 3.1 abliterated |
## ⚠️ Disclaimer
This model has had safety guardrails permanently removed. It will comply with requests that the base model would refuse. Use responsibly and in accordance with applicable laws. The creators are not responsible for any misuse.
---
## About dealignai
We research and publish abliterated models to advance AI safety understanding.
Follow us: [𝕏 @dealignai](https://x.com/dealignai)
---
## Support dealignai
All models are built from original research and published for free. They are tuned to excel at coding and general-purpose assistance.
**[Support us on Ko-fi](https://ko-fi.com/dealignai)** — check out the Ko-fi membership for early access and extras.
Have questions or need help with a specific model? **DM us — we help for free most of the time.**
[Ko-fi](https://ko-fi.com/dealignai) | [X @dealignai](https://x.com/dealignai) | [dealign.ai](https://dealign.ai)