Paulo Faria (Azoresman)
Recent Activity
Reacted to SeaWolf-AI's post (about 22 hours ago):
Introducing Darwin-9B-NEG: the first model with Native Entropy Gating (NEG)
Try it now: https://huggingface.co/FINAL-Bench/Darwin-9B-NEG
4-bit quantized (Q4): https://huggingface.co/FINAL-Bench/Darwin-9B-MFP4
We're thrilled to release Darwin-9B-NEG, a 9B-parameter reasoning model
that embeds an architecturally-internalised sense of self-confidence directly
into the transformer: our proprietary Native Entropy Gating (NEG) technology.
GPQA Diamond (198 PhD-level questions):
• Baseline Darwin-9B (no NEG): 51.01 %
• Pure NEG (greedy, 1× cost): 63.64 % (+12.63 pp)
• + Permutation (4× cost): 76.26 %
• + Ensemble Refinement (~20× cost): 84.34 %
With only 9 billion parameters and 1× inference cost, Pure NEG jumps
+12.63 pp over the same model without NEG. Going all-in with ensemble
refinement pushes it to 84.34 %, surpassing the published Qwen3.5-9B
leaderboard score (81.7 %) by +2.64 pp.
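The reported numbers check out as simple arithmetic: each accuracy corresponds to a whole number of correct answers out of the 198 GPQA Diamond questions, and the percentage-point deltas match. A quick sanity check (all figures quoted from the post, including the 81.7 % Qwen3.5-9B leaderboard score):

```python
questions = 198
scores = {"baseline": 51.01, "pure_neg": 63.64,
          "permutation": 76.26, "ensemble": 84.34}

# Each accuracy maps to a whole number of correct answers out of 198
# (101, 126, 151, and 167 respectively).
correct = {k: round(v / 100 * questions) for k, v in scores.items()}

# Gains in percentage points, as claimed in the post.
gain_neg = round(scores["pure_neg"] - scores["baseline"], 2)  # 12.63
gain_vs_qwen = round(scores["ensemble"] - 81.7, 2)            # 2.64
```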
What makes NEG different from Multi-Turn Iteration (MTI)?
Classical MTI needs 3-8× extra inference passes. NEG instead lives
inside the single decoding loop. Two tiny modules ride with the
transformer: NEG-Head predicts per-token entropy from the last hidden
state, and NEG-Gate conditionally restricts the top-k choice when
confidence is low. The gate activates on only 4.36 % of tokens, so it is
essentially free at inference time.
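The gating idea described above can be sketched in a few lines. This is a hypothetical re-implementation, not the released code: the real NEG-Head is a learned predictor over the hidden state, whereas here the entropy is computed directly from the logits, and the threshold `tau` and fallback `k` are made-up stand-ins for the trained gate:

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    # Shannon entropy in nats of a probability distribution
    return -sum(p * math.log(p) for p in probs if p > 0)

def neg_gated_sample(logits, tau=1.0, k=2, rng=random):
    """Entropy-gated sampling sketch: when the per-token entropy
    exceeds tau (low confidence), restrict sampling to the top-k
    logits; otherwise sample from the full distribution."""
    probs = softmax(logits)
    h = entropy(probs)
    if h > tau:  # gate fires: the model is uncertain about this token
        order = sorted(range(len(logits)), key=lambda i: -logits[i])
        keep = set(order[:k])
        probs = [p if i in keep else 0.0 for i, p in enumerate(probs)]
        z = sum(probs)
        probs = [p / z for p in probs]
    # sample a token index from the (possibly truncated) distribution
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i, h
    return len(probs) - 1, h

# A peaked distribution sails through ungated; a flat one trips the
# gate (entropy ln 4 ≈ 1.39 > tau) and falls back to the top-2 tokens.
neg_gated_sample([10.0, 0.0, 0.0, 0.0], rng=random.Random(0))
neg_gated_sample([0.0, 0.0, 0.0, 0.0], rng=random.Random(0))
```

In the released model the gate reportedly fires on only 4.36 % of tokens, which is why the post can claim effectively zero latency overhead: the extra work per gated token is a sort over the vocabulary tail, and it happens rarely.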
Key differentiators
• Architecturally internalised: the model file *is* the feature
• 1× inference cost (vs. 3-8× for MTI)
• Drop-in with vLLM / SGLang / TGI / transformers; no extra engine
• +12.63 pp reasoning gain at zero latency overhead
• Single-file deployment, Apache 2.0 licensed
Lineage
Qwen/Qwen3.5-9B → Darwin-9B-Opus (V7 evolutionary merge) → Darwin-9B-NEG (V8 + NEG training)
#Darwin #NEG #NativeEntropyGating #GPQA #Reasoning #LLM #OpenSource #Apache2

Liked a model (6 days ago): Ex0bit/Gemma4-26B-A4B-PRISM-PRO-DQ-GGUF

Reacted to danielhanchen's post (about 1 month ago):
Introducing Unsloth Studio
A new open-source web UI to train and run LLMs.
• Run models locally on Mac, Windows, Linux
• Train 500+ models 2x faster with 70% less VRAM
• Supports GGUF, vision, audio, embedding models
• Auto-create datasets from PDF, CSV, DOCX
• Self-healing tool calling and code execution
• Compare models side by side + export to GGUF
GitHub: https://github.com/unslothai/unsloth
Blog and Guide: https://unsloth.ai/docs/new/studio
Available now on Hugging Face, NVIDIA, Docker and Colab.