Liquid AI
Try LFM • Documentation • LEAP

LFM2.5-1.2B-Instruct-bf16

MLX export of LFM2.5-1.2B-Instruct for Apple Silicon inference.

Model Details

| Property       | Value    |
|----------------|----------|
| Parameters     | 1.2B     |
| Precision      | bfloat16 |
| Size           | 2.2 GB   |
| Context length | 128K tokens |
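
The size follows directly from the parameter count and precision: 1.2 × 10⁹ parameters × 2 bytes per bf16 weight ≈ 2.4 × 10⁹ bytes, i.e. roughly 2.2 GiB on disk.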

Recommended Sampling Parameters

| Parameter            | Value |
|----------------------|-------|
| `temperature`        | 0.1   |
| `top_k`              | 50    |
| `top_p`              | 0.1   |
| `repetition_penalty` | 1.05  |
| `max_tokens`         | 512   |
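
These values can also be supplied on the command line. A minimal sketch, assuming your installed mlx-lm version exposes the `mlx_lm.generate` entry point with `--temp`, `--top-p`, and `--max-tokens` flags (top-k and repetition penalty are set through the Python API shown below):

```bash
mlx_lm.generate \
  --model LiquidAI/LFM2.5-1.2B-Instruct-bf16 \
  --prompt "What is the capital of France?" \
  --temp 0.1 \
  --top-p 0.1 \
  --max-tokens 512
```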

Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler, make_logits_processors

# Download (if necessary) and load the model and tokenizer
model, tokenizer = load("LiquidAI/LFM2.5-1.2B-Instruct-bf16")

prompt = "What is the capital of France?"

# Wrap the prompt in the model's chat template when one is defined
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

# Recommended sampling parameters from the table above
sampler = make_sampler(temp=0.1, top_k=50, top_p=0.1)
logits_processors = make_logits_processors(repetition_penalty=1.05)

response = generate(
    model,
    tokenizer,
    prompt=prompt,
    max_tokens=512,
    sampler=sampler,
    logits_processors=logits_processors,
    verbose=True,
)
```
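
For incremental token-by-token output, the same setup works with mlx-lm's streaming API. A minimal sketch, assuming `stream_generate` yields response chunks with a `.text` field as in recent mlx-lm releases:

```python
from mlx_lm import load, stream_generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("LiquidAI/LFM2.5-1.2B-Instruct-bf16")
sampler = make_sampler(temp=0.1, top_k=50, top_p=0.1)

# Apply the chat template as in the example above before streaming.
for chunk in stream_generate(
    model,
    tokenizer,
    prompt="What is the capital of France?",
    max_tokens=512,
    sampler=sampler,
):
    print(chunk.text, end="", flush=True)
print()
```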

License

This model is released under the LFM 1.0 License.
