Very awesome.

Loads and runs browser-use on a single R9700 (32 GB); I needed a GGUF to run it in LM Studio.

Larger quants work fine on a W6800 + R9700, though I didn't immediately notice a performance difference.
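For anyone wiring this up themselves: LM Studio exposes a local OpenAI-compatible server (by default at http://localhost:1234/v1), and browser-use can talk to any such endpoint. The sketch below only assembles the connection settings; the model identifier and API key are assumptions (LM Studio accepts any non-empty key for its local server; check the model id shown in the LM Studio UI).

```python
# Hedged sketch: connection settings for pointing browser-use at a
# locally served GGUF via LM Studio's OpenAI-compatible endpoint.
# Endpoint, key, and model id below are assumptions, not confirmed values.
from dataclasses import dataclass


@dataclass
class LMStudioConfig:
    # Default local endpoint LM Studio advertises when its server is running.
    base_url: str = "http://localhost:1234/v1"
    # LM Studio's local server does not validate keys; any string works.
    api_key: str = "lm-studio"
    # Hypothetical local model id; use whatever LM Studio shows for your load.
    model: str = "bu-30b-a3b-preview-quantized"


cfg = LMStudioConfig()
print(cfg.base_url)
```

From there you would typically construct an OpenAI-compatible chat client with these values (e.g. `ChatOpenAI(base_url=cfg.base_url, api_key=cfg.api_key, model=cfg.model)`) and hand it to browser-use's `Agent` as its `llm` — that part needs the server and a browser running, so it is left out of the runnable sketch.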

Format: GGUF
Model size: 31B params
Architecture: qwen3vlmoe

Available quantizations: 4-bit, 6-bit, 8-bit, 16-bit


Model tree for sky-mighty/bu-30b-a3b-preview-quantized: this model is one of 4 quantized variants.