
mudler/Qwen3.5-35B-A3B-APEX-GGUF

Tags: Text Generation · GGUF · quantized · Mixture of Experts · apex · mixed-precision · llama-cpp · layer-wise · qwen3 · apex-quant · conversational
Community (8 discussions)
Getting "///////" output with this and other quants

#8 opened 6 days ago by dmcleod97

Once NVFP4 fully lands in llama.cpp and can be used together with the APEX method, this model will become nearly perfect!

#7 opened 7 days ago by valikk123

Precision discrepancy between variants

#6 opened 9 days ago by represiv

Great work

#5 opened 9 days ago by rbestuar

MLX versions

#4 opened 12 days ago by pcomte

Qwopus/Opus distillation possible?

1 reply · #3 opened 15 days ago by Dogdjgift

Broken ollama links

#2 opened 15 days ago by jakeying

Are I-* variant models supposed to be better or worse than the non-I-* variants?

1 reply · #1 opened 15 days ago by j5IGNPff2vG