This is LFM2.5-1.2B-JP quantized to NVFP4 with llm-compressor. The model is compatible with vLLM (tested with v0.13.0 on an RTX 4090).
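As a minimal sketch, the model can be loaded with vLLM's offline Python API; vLLM picks up the quantization scheme from the checkpoint's config, so no extra flags are needed. The sampling settings and the Japanese prompt below are illustrative, not part of the original card.

```python
# Minimal sketch: offline inference with vLLM (assumes vLLM >= 0.13.0 and a GPU
# with NVFP4 kernel support, e.g., an RTX 4090).
from vllm import LLM, SamplingParams

# vLLM reads the quantization config from the checkpoint automatically.
llm = LLM(model="kaitchup/LFM2.5-1.2B-JP-NVFP4")

# Illustrative sampling parameters.
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["日本の首都はどこですか？"], params)  # "What is the capital of Japan?"
print(outputs[0].outputs[0].text)
```

For serving instead of offline inference, the equivalent would be `vllm serve kaitchup/LFM2.5-1.2B-JP-NVFP4`, which exposes an OpenAI-compatible endpoint.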

How to Support My Work

"buy me a kofi" Subscribe to The Kaitchup. This helps me a lot to continue quantizing and evaluating models for free.
