# open-llama-3b-openthought-sft-gguf
GGUF Q4_K_M quantization of OpenLLaMA 3B v2 fine-tuned on OpenThoughts-114k.
## Files
| File | Quant | Size | BPW |
|---|---|---|---|
| open-llama-3b-openthought-sft-q4km.gguf | Q4_K_M | 2.5 GB | 6.02 |
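The BPW (bits per weight) column can be sanity-checked against the listed file size. A minimal sketch, assuming a parameter count of roughly 3.43B for OpenLLaMA 3B v2 (an assumption; the count is not stated in this card):

```python
# Estimate the quantized file size implied by the BPW figure.
# Assumed parameter count for OpenLLaMA 3B v2 (~3.43B) -- not stated in this card.
n_params = 3.43e9
bpw = 6.02  # bits per weight, from the table above

size_bytes = n_params * bpw / 8   # bits -> bytes
size_gb = size_bytes / 1e9        # decimal gigabytes
print(f"{size_gb:.2f} GB")        # ~2.58 GB, consistent with the listed 2.5 GB
```

The small gap between 2.58 GB and the listed 2.5 GB comes from rounding in the table and from non-weight data (tokenizer, metadata) not scaling with BPW.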
## Source

Merged in two stages from the following LoRA adapters:
- ping98k/open-llama-3b-openthought-mid-lora (full-loss SFT)
- ping98k/open-llama-3b-openthought-sft-lora (assistant-only SFT)
## Base model

- openlm-research/open_llama_3b_v2
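One common way to run a GGUF file like this locally is llama.cpp's CLI. A hedged sketch (the filename comes from the table above; the prompt and token count are illustrative):

```shell
# Download the GGUF file from the Hub (requires the huggingface_hub CLI)
huggingface-cli download ping98k/open-llama-3b-openthought-sft-gguf \
  open-llama-3b-openthought-sft-q4km.gguf --local-dir .

# Generate up to 128 tokens with llama.cpp's CLI
llama-cli -m open-llama-3b-openthought-sft-q4km.gguf -p "Hello" -n 128
```

Any llama.cpp-compatible runtime (e.g. llama-cpp-python or Ollama with a Modelfile) should load the same file.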