# Open4bits/LFM2.5-1.2B-Base-Quantized
This repository provides multiple quantized variants of the LFM 2.5 Base (1.2B parameters) model for efficient inference and deployment.
The original model was developed and released by LiquidAI: https://huggingface.co/LiquidAI/LFM2.5-1.2B-Base
These quantizations are maintained and published by ArkAiLab under the Open4bits organization to improve accessibility across a wide range of hardware.
## Available Quantization Formats
Each format is stored in a separate directory:
- FP16 – Baseline half-precision weights
- FP8 – High-performance low-precision format (requires GPU hardware support)
- INT8 – Balanced performance and memory usage (BitsAndBytes)
- NF4 (4-bit) – Maximum compression using BitsAndBytes double quantization
## Model Information
- Model Name: LFM 2.5 Base
- Parameters: ~1.2B
- Architecture: Custom LiquidAI architecture
- Original Author: LiquidAI
- Quantized By: ArkAiLab (Open4bits)
This model requires `trust_remote_code=True` when loading.
## Quantization Details
- Quantized using PyTorch and Hugging Face Transformers
- INT8 and NF4 formats use BitsAndBytes
- FP8 provided where hardware support allows
- No GPTQ, AWQ, or llama.cpp used
- Runs in free-tier environments such as Google Colab and Kaggle
## Usage Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Open4bits/LFM2.5-1.2B-Base-Quantized"

# The custom LiquidAI architecture requires trust_remote_code=True
tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    device_map="auto",
)

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Organization
This repository is maintained by ArkAiLab under the Open4bits initiative.
- ArkAiLab (Main Organization): https://huggingface.co/ArkAiLab-Adl
- Open4bits (Quantization Projects): https://huggingface.co/Open4bits
## License
This repository follows the same license as the original LiquidAI model.
Please refer to the original model repository for full licensing details.
## Disclaimer
This is an unofficial quantized release.
All credit for the original model architecture and training goes to LiquidAI.