---
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ja
- ko
- fr
- es
- de
- ar
- zh
pipeline_tag: image-text-to-text
tags:
- vision
- vlm
- liquid
- lfm2
- lfm2-vl
- edge
- llama.cpp
- lfm2.5
- lfm2.5-vl
- gguf
base_model:
- LiquidAI/LFM2.5-VL-450M
---
# LFM2.5-VL-450M-GGUF
LFM2.5-VL is a new generation of vision-language models developed by [Liquid AI](https://www.liquid.ai/), specifically designed for edge AI and on-device deployment. It sets a new standard for quality, speed, and memory efficiency.
Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2.5-VL-450M
## 🏃 How to run LFM2.5-VL
Example usage with [llama.cpp](https://github.com/ggml-org/llama.cpp):
Full precision (F16/F16):
```bash
llama-mtmd-cli -hf LiquidAI/LFM2.5-VL-450M-GGUF:F16
```
Fastest inference (Q4_0/Q8_0):
```bash
llama-mtmd-cli -hf LiquidAI/LFM2.5-VL-450M-GGUF:Q4_0
```
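To run inference on your own image, you can pass it with the `--image` flag together with a text prompt. A minimal sketch (the image path and prompt below are placeholders, not part of the original card):

```bash
# Describe a local image with the Q4_0 quantization.
# "photo.jpg" is a placeholder path; replace it with your own image file.
llama-mtmd-cli -hf LiquidAI/LFM2.5-VL-450M-GGUF:Q4_0 \
  --image photo.jpg \
  -p "Describe this image in one sentence."
```

Without `--image` and `-p`, the CLI starts an interactive chat session where images can be loaded with the `/image <path>` command.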
## 📬 Contact
- Got questions or want to connect? [Join our Discord community](https://discord.com/invite/liquid-ai)
- If you are interested in custom solutions with edge deployment, please contact [our sales team](https://www.liquid.ai/contact).