
🧠 Fine-tuned LLaMA Model using QLoRA & LoRA (Supervised Fine-Tuning)

This model is a fine-tuned version of the model_name base model, trained with QLoRA (Quantized Low-Rank Adaptation) for efficient, memory-friendly fine-tuning. Supervised fine-tuning was performed with the Hugging Face trl library's SFTTrainer together with peft (LoRA adapters).


📌 Model Overview

  • Base Model: model_name
  • Fine-tuning Method: QLoRA + LoRA (PEFT)
  • Task: Causal Language Modeling
  • Quantization: 4-bit (bitsandbytes)
  • Frameworks: Transformers, PEFT, TRL
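
The QLoRA + LoRA setup listed above can be sketched roughly as follows. This is a minimal, illustrative configuration, not the exact recipe used for this model: the dataset name, LoRA hyperparameters (r, alpha, dropout, target_modules), and training arguments are all assumptions.

```python
# Hedged sketch of a QLoRA fine-tuning setup with trl's SFTTrainer.
# All hyperparameters, the dataset, and target_modules are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

base_model = "model_name"  # placeholder, as in this card

# 4-bit quantization (the "Q" in QLoRA): NF4 with bf16 compute is a common choice
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# LoRA adapters on the attention projections (typical defaults, assumed here)
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

dataset = load_dataset("your_dataset_here", split="train")  # hypothetical dataset

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="qlora-out", max_steps=100),
)
trainer.train()
```

Note that the SFTTrainer argument names have shifted between trl releases, so check the version you have installed before running a sketch like this.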

🧠 Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Henit007/Vivekanandao1_finetuned")
model = AutoModelForCausalLM.from_pretrained("Henit007/Vivekanandao1_finetuned", device_map="auto")

input_text = "Explain climate change in simple terms."
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
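
Since the model was trained under 4-bit quantization, it can also be loaded in 4-bit for inference on memory-constrained GPUs. This is an optional variation on the usage above, assuming bitsandbytes is installed; the NF4/bf16 settings are a common choice, not values confirmed by this card:

```python
# Optional: load the model in 4-bit to reduce GPU memory usage (assumes bitsandbytes)
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Henit007/Vivekanandao1_finetuned",
    quantization_config=bnb_config,
    device_map="auto",
)
```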