# MiniMax-M2.7 — 100GB (MLX)

A quantized build of MiniMaxAI/MiniMax-M2.7 produced by baa.ai.

| Property | Value |
| --- | --- |
| Size on disk | 100.1 GB |
| Format | MLX |
| Base model | MiniMaxAI/MiniMax-M2.7 |

## Usage

```python
from mlx_lm import load, generate

model, tokenizer = load("baa-ai/MiniMax-M2.7-RAM-100GB-MLX")
response = generate(
    model, tokenizer,
    prompt="Hello!",
    max_tokens=512,
    verbose=True,
)
```
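For chat-style prompting, a minimal sketch assuming the checkpoint ships a chat template (`apply_chat_template` is the standard Hugging Face tokenizer API; the repo name is taken from this card, and the `chat` helper is illustrative, not part of mlx-lm):

```python
# Example messages in the standard chat format.
MESSAGES = [{"role": "user", "content": "Write a haiku about the ocean."}]

def chat(repo="baa-ai/MiniMax-M2.7-RAM-100GB-MLX", max_tokens=512):
    """Illustrative helper: format MESSAGES with the tokenizer's chat
    template, then generate. Requires `pip install mlx-lm`."""
    from mlx_lm import load, generate
    model, tokenizer = load(repo)  # downloads the ~100 GB weights on first use
    prompt = tokenizer.apply_chat_template(MESSAGES, add_generation_prompt=True)
    return generate(model, tokenizer, prompt=prompt, max_tokens=max_tokens, verbose=True)
```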

## Variants

| Variant | Size |
| --- | --- |
| 90GB | 90.1 GB |
| 100GB | 100.1 GB |
| 120GB | 120.1 GB |
| 155GB | 155.2 GB |
| 203GB | 203.2 GB |
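Since the variants are named for their approximate RAM footprint, choosing one programmatically is straightforward. A sketch using the on-disk sizes from the table above (the 8 GB headroom default is an illustrative assumption, not a measured requirement):

```python
# On-disk sizes in GB, from the variants table on this card.
VARIANT_SIZES_GB = {
    "90GB": 90.1,
    "100GB": 100.1,
    "120GB": 120.1,
    "155GB": 155.2,
    "203GB": 203.2,
}

def pick_variant(ram_gb, headroom_gb=8.0):
    """Return the largest variant that fits in ram_gb, leaving headroom_gb
    free for the OS and KV cache; None if none fits."""
    budget = ram_gb - headroom_gb
    fitting = {name: size for name, size in VARIANT_SIZES_GB.items() if size <= budget}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

print(pick_variant(128))  # → "100GB"
```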

## License

The license is inherited from the upstream MiniMaxAI/MiniMax-M2.7 model: non-commercial use is permitted; commercial use requires written authorization from MiniMax.


## Model details

| Property | Value |
| --- | --- |
| Parameters | 229B |
| Tensor types | BF16, U32, F32 |
| Quantization | 4-bit |