YanLabs/gemma-3-27b-it-abliterated-normpreserve-v1 Text Generation • 27B • Updated Dec 8, 2025 • 341 • 5
huihui-ai/Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated-v2 Image-Text-to-Text • 24B • Updated Sep 11, 2025 • 143 • 8
huihui-ai/Magistral-Small-2506-abliterated Text Generation • 24B • Updated Jun 18, 2025 • 1 • 14
ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.0 Text Generation • 24B • Updated Jul 24, 2025 • 211 • 32
Doctor-Shotgun/MS3.2-24B-Magnum-Diamond Text Generation • 24B • Updated Jul 7, 2025 • 181 • 50
Post 4574: I have just released a new blog post about KV caching and its role in inference speedup: https://huggingface.co/blog/not-lain/kv-caching/ Some takeaways:
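To make the speedup claim concrete, here is a minimal sketch of the idea (my own illustration, not code from the linked blog post): without a KV cache, each autoregressive decoding step recomputes the key/value projections for the entire prefix; with a cache, only the newest token is projected and appended, while attention output stays identical. All names (`Wk`, `Wv`, `Wq`, `step_no_cache`, `step_cached`) are hypothetical.

```python
# Toy single-head attention comparing decoding with and without a KV cache.
# Illustrative sketch only; not from the linked blog post.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                  # head dimension
Wq = rng.standard_normal((d, d))       # query projection (toy weights)
Wk = rng.standard_normal((d, d))       # key projection
Wv = rng.standard_normal((d, d))       # value projection

def attend(q, K, V):
    # Scaled dot-product attention for a single query vector.
    scores = K @ q / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

def step_no_cache(prefix):
    # No cache: re-project K and V for ALL prefix tokens every step, O(t) work.
    X = np.stack(prefix)               # (t, d) embeddings of all tokens so far
    q = prefix[-1] @ Wq
    return attend(q, X @ Wk, X @ Wv)

def step_cached(x_new, cache):
    # With cache: project only the newest token and append, O(1) projections.
    cache["K"].append(x_new @ Wk)
    cache["V"].append(x_new @ Wv)
    q = x_new @ Wq
    return attend(q, np.stack(cache["K"]), np.stack(cache["V"]))

tokens = [rng.standard_normal(d) for _ in range(5)]
cache = {"K": [], "V": []}
for t in range(1, len(tokens) + 1):
    out_slow = step_no_cache(tokens[:t])
    out_fast = step_cached(tokens[t - 1], cache)
    assert np.allclose(out_slow, out_fast)  # same output, far less recompute
```

The assertion holds at every step: caching changes the cost of a decoding step, not its result, which is why KV caching is a pure inference-speed optimization.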