Qwopus 3.5 27B V3 "Stabbed" - GGUF

Quantized GGUF files for Stabhappy/Qwopus-3.5-27B-V3-Stabbed.

Disclaimer

This model is for research and educational purposes only. It was trained on a dataset that includes profanity, slurs, and edgy internet culture. It is not intended for use as a production system; its only current value is entertainment.

However, care has been taken to retain the model's original capabilities for long-horizon or complex requests.

This model will likely underperform when benchmarked against the base model.

Available Quantizations

Format    Use Case
Q8_0      Near-lossless; good balance of quality and size
Q6_K      High quality with moderate compression
Q5_K_M    Good quality with stronger compression
Q4_K_M    Maximum compression with acceptable quality loss
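
As a rough guide to which file fits your disk and RAM, GGUF size scales with average bits per weight. A minimal sketch of the estimate (the bits-per-weight figures below are approximate averages for these quant types, not measured values for these specific files):

```python
# Rough GGUF file-size estimate: params * bits-per-weight / 8 bytes.
# The bpw values are assumed approximate averages, not exact.
APPROX_BPW = {"Q8_0": 8.5, "Q6_K": 6.6, "Q5_K_M": 5.7, "Q4_K_M": 4.85}

def estimate_size_gb(n_params: float, quant: str) -> float:
    """Estimated on-disk size in decimal gigabytes for a quant type."""
    return n_params * APPROX_BPW[quant] / 8 / 1e9

for q in APPROX_BPW:
    print(f"{q}: ~{estimate_size_gb(27e9, q):.1f} GB")
```

For a 27B model this puts Q4_K_M around 16 GB and Q8_0 near 29 GB; leave headroom beyond the file size for the KV cache and runtime overhead.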

Usage

Compatible with llama.cpp and any GGUF-compatible inference engine (e.g., LM Studio, Ollama, text-generation-webui).

# Example with llama.cpp
./llama-cli -m Qwopus-3.5-27B-V3-Stabbed-Q4_K_M.gguf -p "Your prompt here"
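
For the Ollama route mentioned above, one common approach is a minimal Modelfile pointing at the local GGUF. A sketch (the filename, tag, and prompt are illustrative, not prescribed by this repo):

```shell
# Create an Ollama model from the local GGUF (paths and tag are illustrative)
echo 'FROM ./Qwopus-3.5-27B-V3-Stabbed-Q4_K_M.gguf' > Modelfile
ollama create qwopus-stabbed -f Modelfile
ollama run qwopus-stabbed "Your prompt here"
```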
Model Details

Model size: 27B params
Architecture: qwen35

Model tree for Stabhappy/Qwopus-3.5-27B-V3-Stabbed-GGUF

Base model: Qwen/Qwen3.5-27B
Quantized from: Stabhappy/Qwopus-3.5-27B-V3-Stabbed