---
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
base_model: Qwen/Qwen3.5-35B-A3B
base_model_relation: quantized
tags:
- transformers
- safetensors
- qwen3_5_moe
- quantized
- nvfp4
- fp4
- 4-bit
- vllm
- llm-compressor
- image-text-to-text
- conversational
---

# Qwen3.5-35B-A3B-NVFP4

This is a quantized version of [Qwen/Qwen3.5-35B-A3B](https://huggingface.co/Qwen/Qwen3.5-35B-A3B). The model accepts text and images as inputs and generates text as output.

The weights and activations were quantized to NVFP4 (4-bit floating point) using [llm-compressor](https://github.com/vllm-project/llm-compressor), reducing the model size from 67.0 GB to 21.8 GB (~3.1x reduction) while maintaining 98.8% average accuracy recovery.

---

## Inference

As of 2/27/2026, this model is supported in vLLM nightly. To serve the model:

```bash
vllm serve Kbenkhaled/Qwen3.5-35B-A3B-NVFP4 \
  --reasoning-parser qwen3 \
  --enable-prefix-caching
```

---

## Evaluation

Evaluated with [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness), 0-shot, thinking mode ON.

| Benchmark | Qwen3.5-35B-A3B | Qwen3.5-35B-A3B-NVFP4 (this model) | Recovery |
|---|---|---|---|
| GPQA Diamond | 81.31% | 80.81% | 99.4% |
| IFEval | 95.56% | 92.93% | 97.2% |
| MMLU-Redux | 92.51% | 92.31% | 99.8% |
| **Average** | **89.79%** | **88.68%** | **98.8%** |
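Once the server is running, it exposes vLLM's OpenAI-compatible chat API, so multimodal requests can be sent as standard chat-completion payloads. Below is a minimal sketch of how such a request body could be assembled; the image URL and prompt are placeholders, and `max_tokens` is an arbitrary illustrative value.

```python
import json

def build_chat_request(model: str, prompt: str, image_url: str) -> dict:
    """Assemble an OpenAI-compatible chat-completion payload that
    combines an image and a text prompt in a single user turn."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    # Image part: the server fetches/decodes the referenced image.
                    {"type": "image_url", "image_url": {"url": image_url}},
                    # Text part: the instruction accompanying the image.
                    {"type": "text", "text": prompt},
                ],
            }
        ],
        "max_tokens": 512,  # illustrative cap on generated tokens
    }

payload = build_chat_request(
    "Kbenkhaled/Qwen3.5-35B-A3B-NVFP4",
    "Describe this image.",
    "https://example.com/sample.png",  # placeholder image URL
)
print(json.dumps(payload, indent=2))
```

This payload can then be POSTed to the server's `/v1/chat/completions` endpoint (for example with the `openai` Python client pointed at the local server's base URL).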