Ministral-3B-Heresy (Unfiltered Edition)

This repository contains GGUF quantizations of the Ministral-3B-Heresy model, a specialized fine-tune that combines small-parameter efficiency with high-intensity, unfiltered narrative roleplay.

⚠️ CONTENT WARNING: NO FILTERS

This model has been specifically conditioned on "Heresy" datasets. It is designed to ignore standard safety guardrails and "assistant-like" refusals. It will generate visceral, intense, and explicit content without hesitation.

Model Highlights

  • Zero Refusals: Optimized to bypass moralizing "As an AI..." interjections.
  • Visceral Narrative: Focused on physical sensations, sensory details, and raw emotional dialogue.
  • Efficient Size: At only 3B parameters, this model runs at high speed on consumer GPUs and mobile devices while maintaining roleplay coherence comparable to larger 7B models.
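To put the "runs on consumer GPUs" claim in concrete terms, here is a rough memory-footprint estimate for a 3B-parameter model at common GGUF quantization levels. The bits-per-weight figures are approximate averages for llama.cpp K-quants, and the estimate covers weights only (no KV cache or runtime overhead):

```python
# Rough weight-memory estimate for a 3B-parameter model at common
# GGUF quantization levels. Bits-per-weight values are approximate
# averages for llama.cpp quant formats; actual file sizes vary slightly.

PARAMS = 3_000_000_000

QUANT_BITS = {
    "Q8_0": 8.5,
    "Q6_K": 6.6,
    "Q4_K_M": 4.85,
    "Q3_K_M": 3.9,
}

def weight_size_gb(params: int, bits_per_weight: float) -> float:
    """Approximate in-memory size of the quantized weights, in gigabytes."""
    return params * bits_per_weight / 8 / 1e9

for name, bits in QUANT_BITS.items():
    print(f"{name}: ~{weight_size_gb(PARAMS, bits):.2f} GB")
```

Even at Q8_0 the weights fit comfortably in the VRAM of a typical 8 GB consumer GPU, which is what makes a 3B model practical on modest hardware.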

Technical Background

This quantization project was executed in a CUDA-accelerated environment. Special attention was paid to preserving the custom tokenization required for the Ministral architecture, ensuring that multi-turn conversations remain stable and coherent.
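For readers assembling prompts by hand, the multi-turn stability mentioned above depends on feeding the model its expected chat template. Below is a minimal sketch of the classic Mistral-style `[INST]` template; this is an assumption for illustration only — the authoritative template for this model is embedded in the GGUF metadata and may differ (e.g. in system-prompt handling), so verify against it before relying on this format:

```python
# Minimal sketch of a Mistral-style [INST] chat template.
# ASSUMPTION: illustrative only -- the actual Ministral template is stored
# in the GGUF chat-template metadata and may differ from this format.

def format_chat(turns: list[tuple[str, str]],
                bos: str = "<s>", eos: str = "</s>") -> str:
    """Serialize (user, assistant) turn pairs into one prompt string.

    Leave the final assistant reply empty to hand the open turn
    to the model for completion.
    """
    out = bos
    for user, assistant in turns:
        out += f"[INST] {user} [/INST]"
        if assistant:
            out += f" {assistant}{eos}"
    return out

prompt = format_chat([
    ("Describe the ruined cathedral.", "Shattered glass litters the nave."),
    ("What do I hear?", ""),  # open turn for the model to complete
])
print(prompt)
```

Most llama.cpp front ends apply the embedded template automatically; hand-rolling the format like this is only needed for raw completion endpoints.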

Disclaimer

The creators of this quantization are not responsible for the content generated by the model. By downloading this model, you acknowledge the unfiltered nature of the underlying weights.

Source Model

  • Base model: Abiray/Ministral-3-3B-Instruct-2512-Heresy-Unfiltered
  • Source format: Safetensors (BF16)
  • Model size: 4B params