This is the official QAT FP-Quant checkpoint of `meta-llama/Llama-3.2-1B-Instruct`, produced as described in the [**"Bridging the Gap Between Promise and Performance for Microscaling FP4 Quantization"**](https://arxiv.org/abs/2509.23202) paper.
This model can be run on Blackwell-generation NVIDIA GPUs via [QuTLASS](https://github.com/IST-DASLab/qutlass) and [FP-Quant](https://github.com/IST-DASLab/FP-Quant) in either [transformers](https://huggingface.co/docs/transformers/main/en/quantization/fp_quant) or [vLLM](https://github.com/vllm-project/vllm/pull/24440).
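Below is a minimal sketch of loading this checkpoint for inference with transformers. The repo id is a placeholder (substitute this model's actual Hub id), and it assumes a Blackwell-generation GPU with QuTLASS and FP-Quant installed as linked above; the FP-Quant integration in transformers is expected to pick up the quantization config stored in the checkpoint automatically.

```python
# Minimal sketch: run this pre-quantized FP-Quant checkpoint with transformers.
# Assumes QuTLASS + FP-Quant are installed and a Blackwell GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-checkpoint-repo-id>"  # placeholder: replace with the actual Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)

# Simple chat-style generation to sanity-check the quantized model.
messages = [{"role": "user", "content": "Explain FP4 quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```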
The approximate recipe for training this model (up to local batch size and LR) is available [here](https://github.com/IST-DASLab/nanochat-qat/blob/qat/transformers_distill.py).
This checkpoint has the following performance relative to the original model and round-to-nearest (RTN) quantization:
| Model | MMLU | GSM8k | Hellaswag | Winogrande | Avg |
|-------|------|-------|-----------|------------|-----|
| `meta-llama/Llama-3.2-1B-Instruct` | 46.2 | 46.3 | 59.8 | 61.6 | 53.5 |
| RTN | 32.8 | 25.0 | 56.2 | 59.0 | 43.3 |
| QAT (this checkpoint) | 32.7 | 37.6 | 57.5 | 58.4 | 46.6 |