Gemma 4 E4B - TurboQuant MLX 4-bit

A 4-bit weight-quantized MLX version of google/gemma-4-E4B with TurboQuant KV-cache quantization, optimized for Apple Silicon inference via the MLX framework. This variant offers a good balance between model quality and memory efficiency.

Approximate model size: 2.3 GB

Model Specifications

Property               Value
---------------------  --------------------------------------------
Base Model             google/gemma-4-E4B
Parameters             ~4 billion
Architecture           Dense transformer
Modality               Multimodal: image + text input, text output
License                Apache 2.0
Weight Quantization    4-bit (~2.3 GB)
KV-Cache Quantization  TurboQuant
Framework              MLX (Apple Silicon)

Quickstart

from mlx_lm import load, generate

# Download (if needed) and load the quantized model and tokenizer
model, tokenizer = load("majentik/gemma-4-E4B-TurboQuant-MLX-4bit")

prompt = "The history of artificial intelligence began"
response = generate(model, tokenizer, prompt=prompt, max_tokens=512)
print(response)
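
Instruction-tuned Gemma checkpoints generally expect a chat template rather than a bare completion prompt. A minimal sketch using the tokenizer's chat template (assuming this repo ships one in its tokenizer config, as MLX conversions of chat models usually do):

from mlx_lm import load, generate

model, tokenizer = load("majentik/gemma-4-E4B-TurboQuant-MLX-4bit")

# Wrap the user message in the model's chat template before generating
messages = [{"role": "user", "content": "Summarize the history of AI in two sentences."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)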

For multimodal usage with images:

from mlx_vlm import load, generate

# mlx_vlm returns a processor (tokenizer + image preprocessor) for vision models
model, processor = load("majentik/gemma-4-E4B-TurboQuant-MLX-4bit")

prompt = "Describe the contents of this image."
output = generate(model, processor, prompt=prompt, image="path/to/image.jpg", max_tokens=512)
print(output)

What is TurboQuant?

TurboQuant (arXiv:2504.19874) is a KV-cache quantization technique that compresses the key-value cache that grows with context length during autoregressive generation. Combined with MLX's 4-bit weight quantization, this gives a dual compression strategy: smaller weights reduce the disk and memory footprint, while the compressed KV cache keeps long-context generation memory-efficient.
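
For intuition, here is a minimal sketch of generic per-group 4-bit quantization applied to a KV tensor. This is not the TurboQuant algorithm itself (the paper describes an online vector-quantization scheme); it only illustrates the mechanics and memory arithmetic of caching keys and values at 4 bits. The function names and group size are illustrative.

import mlx.core as mx

def quantize_kv_4bit(x, group_size=64):
    # Generic asymmetric uniform quantization, NOT TurboQuant itself:
    # each group of `group_size` values shares one scale and offset.
    orig_shape = x.shape
    g = x.reshape(-1, group_size)
    lo = mx.min(g, axis=1, keepdims=True)
    hi = mx.max(g, axis=1, keepdims=True)
    scale = (hi - lo) / 15.0  # 4 bits -> 16 levels (0..15)
    q = mx.clip(mx.round((g - lo) / (scale + 1e-8)), 0, 15).astype(mx.uint8)
    return q, scale, lo, orig_shape

def dequantize_kv_4bit(q, scale, lo, orig_shape):
    # Reconstruct an approximation of the original fp16 tensor
    return (q.astype(mx.float16) * scale + lo).reshape(orig_shape)

# Example: a (batch, heads, seq, head_dim) key tensor
k = mx.random.normal((1, 8, 1024, 128)).astype(mx.float16)
q, scale, lo, shape = quantize_kv_4bit(k)
k_hat = dequantize_kv_4bit(q, scale, lo, shape)

At 4 bits per value plus an fp16 scale and offset per 64-value group, the cache shrinks to about 4.5/16 ≈ 28% of its fp16 size (a production cache would also pack two 4-bit values per byte, which this sketch skips).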

KV-Cache Quantization Comparison

Method      Prefill Speed   Decode Speed   Memory Savings   Reference
----------  --------------  -------------  ---------------  -----------------
TurboQuant  1x (baseline)   1x (baseline)  High             arXiv:2504.19874
RotorQuant  5.3x faster     28% faster     High             GitHub

Memory Estimates (Gemma 4 E4B)

Precision        Approximate Size   MLX Variant
---------------  -----------------  --------------------
FP16 (original)  ~8 GB              --
8-bit quantized  ~4 GB              TurboQuant-MLX-8bit
4-bit quantized  ~2.3 GB            This model
2-bit quantized  ~1.2 GB            TurboQuant-MLX-2bit
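
These figures follow directly from bits-per-weight arithmetic: size ≈ parameters × bits / 8, plus a small overhead for the per-group scales and offsets that group-wise quantization stores alongside the packed weights. A quick back-of-the-envelope check (the 32 bits of per-group metadata assume one fp16 scale and one fp16 offset per 64-weight group, as in the sketch above; exact sizes depend on which layers are quantized):

def quantized_size_gb(n_params, bits, group_size=64, meta_bits=32):
    # meta_bits: per-group metadata (e.g. one fp16 scale + one fp16 offset)
    bits_per_weight = bits + meta_bits / group_size
    return n_params * bits_per_weight / 8 / 1e9

n = 4e9  # ~4 billion parameters
print(f"fp16: ~{n * 16 / 8 / 1e9:.1f} GB")
for bits in (8, 4, 2):
    print(f"{bits}-bit: ~{quantized_size_gb(n, bits):.1f} GB")

This prints roughly 8.0, 4.2, 2.2, and 1.2 GB, in line with the table.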

Hardware Requirements

This model requires approximately 2.3 GB of unified memory for the weights, plus additional working memory for activations and the KV cache. You can check a machine's unified memory from Python before loading, as sketched below. Recommended hardware:

  • Any Apple Silicon Mac (M1, M2, M3, or M4) with 8 GB+ unified memory
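
A minimal check (macOS-only; the hw.memsize sysctl reports total physical memory, which equals unified memory on Apple Silicon):

import subprocess

# Total unified memory in bytes, queried from the OS
mem_bytes = int(subprocess.check_output(["sysctl", "-n", "hw.memsize"]).decode().strip())
print(f"Unified memory: {mem_bytes / 1e9:.0f} GB")
assert mem_bytes >= 8e9, "8 GB+ unified memory recommended for this model"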

See Also

  • TurboQuant paper: arXiv:2504.19874
  • Base model: google/gemma-4-E4B
