Qwen-Image 8-bit MLX

MLX-optimized 8-bit quantized version of Qwen-Image for Apple Silicon.

First MLX port of Qwen-Image! 🎉

Quick Start

pip install mflux

mflux-generate-qwen \
  --model mlx-community/Qwen-Image-2512-8bit \
  --prompt "A photorealistic cat wearing a tiny top hat" \
  --steps 20

Performance

  • Size: 34 GB (8-bit quantized)
  • Speed: ~8.5 s/step on M-series Macs
  • 20 steps: ~2:50 total
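The listed totals are consistent with each other; a quick sanity check of the table's arithmetic (pure Python, no mflux needed):

```python
# Cross-check the performance figures: 20 steps at ~8.5 s/step
steps = 20
sec_per_step = 8.5
total_seconds = steps * sec_per_step            # 170.0 s
minutes, seconds = divmod(int(total_seconds), 60)
print(f"~{minutes}:{seconds:02d} total")        # ~2:50 total
```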

Model Details

  • Base Model: Qwen/Qwen-Image (Dec 2025)
  • Quantization: 8-bit
  • Framework: MLX (Apple Silicon optimized)
  • Converted with: mflux 0.14.0

Hardware Requirements

  • Apple Silicon Mac (M1/M2/M3/M4/M5)
  • ~40GB unified memory recommended for 8-bit
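A minimal sketch for checking whether a machine meets the ~40 GB recommendation before downloading, using POSIX `sysconf` (available on macOS and Linux); the helper name and threshold handling are ours, not part of mflux:

```python
import os

REQUIRED_GB = 40  # recommended unified memory for this 8-bit model (from the card)

def total_memory_gb() -> float:
    """Total physical memory in GiB via POSIX sysconf (macOS and Linux)."""
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3

mem = total_memory_gb()
print(f"Detected {mem:.1f} GB of memory")
if mem < REQUIRED_GB:
    print("Below the ~40 GB recommendation: expect heavy swapping during generation")
```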

Sample Output

Generated with prompt: "A photorealistic cat wearing a tiny top hat, studio lighting"

License

Apache 2.0 (same as base model)
