open-llama-3b-openthought-sft-gguf

GGUF Q4_K_M quantization of OpenLLaMA 3B v2 fine-tuned on OpenThoughts-114k.

Files

| File | Quant | Size | BPW |
| --- | --- | --- | --- |
| open-llama-3b-openthought-sft-q4km.gguf | Q4_K_M | 2.5 GB | 6.02 |
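As a sanity check, the file size follows from the bits-per-weight figure. A quick sketch, assuming roughly 3.4B parameters for OpenLLaMA 3B v2 (an approximation, not stated in this card):

```python
# Estimate the GGUF file size from bits-per-weight.
# The 3.4B parameter count is an assumption for OpenLLaMA 3B v2.
params = 3.4e9   # approximate parameter count
bpw = 6.02       # bits per weight, from the table above
size_gb = params * bpw / 8 / 1e9  # bits -> bytes -> GB
print(f"{size_gb:.2f} GB")  # roughly matches the listed 2.5 GB
```

Small differences are expected: BPW is averaged over tensors quantized at different precisions, and metadata adds overhead.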

Source

GGUF conversion of a two-stage LoRA merge:

  1. ping98k/open-llama-3b-openthought-mid-lora (full-loss SFT)
  2. ping98k/open-llama-3b-openthought-sft-lora (assistant-only SFT)

See also: ping98k/open-llama-3b-openthought-sft-4bit
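One way to try the quantized file is with llama.cpp. A minimal sketch, assuming a local llama.cpp build that provides the `llama-cli` binary; the path and prompt are illustrative:

```shell
# Run the Q4_K_M file with llama.cpp (adjust paths to your setup)
./llama-cli \
  -m open-llama-3b-openthought-sft-q4km.gguf \
  -p "Explain step by step: what is 12 * 7?" \
  -n 128
```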
