Converted from Qwen/Qwen3.5-27B using `mlx_lm.convert` with `--q-mode nvfp4 --q-group-size 16` on 2026-03-05 at 23:32.
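For reference, a minimal sketch of loading this checkpoint and sampling from it with the standard mlx-lm Python API. The prompt text and `max_tokens` value are illustrative, not part of the original card, and a recent mlx-lm release with nvfp4 support is assumed:

```python
# Minimal sketch: load the nvfp4-quantized checkpoint and generate with mlx-lm.
# Assumes a recent mlx-lm release that can dequantize the nvfp4 mode.
from mlx_lm import load, generate

model, tokenizer = load("dumtjul/Qwen3.5-27B-mlx-nvfp4")

# Illustrative prompt; formatted with the model's chat template.
messages = [{"role": "user", "content": "Explain nvfp4 quantization in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```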
Model size: 7B params (as reported by the Hub's tensor counter for the quantized weights; the base model is 27B)
Tensor types: U8 · U32 · BF16 · F32
Quantization: 4-bit
Model tree for dumtjul/Qwen3.5-27B-mlx-nvfp4
Base model: Qwen/Qwen3.5-27B