Model Card for RoleLLM_Ministral_8b

This model card describes RoleLLM_Ministral_8b, a role-play fine-tune of Ministral-8B. Sections marked [More Information Needed] have not yet been filled in.

Model Details

Model Description

  • Developed by: [More Information Needed]
  • Funded by [optional]: [More Information Needed]
  • Shared by [optional]: [More Information Needed]
  • Model type: [More Information Needed]
  • Language(s) (NLP): [More Information Needed]
  • License: [More Information Needed]
  • Finetuned from model [optional]: Ministral-8B

Model Sources [optional]

  • Repository: https://huggingface.co/lanlanlan123/RoleLLM_Ministral_8b
  • Paper [optional]: [More Information Needed]
  • Demo [optional]: [More Information Needed]

Uses

The example below loads the model with transformers, builds a role-play system prompt from a character profile, formats the conversation with [INST] tags, and generates an in-character reply.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

def create_system(prompt_data, bot_name, gender):
    # Build the role-play system prompt from the character profile fields.
    system_prompt = (
        f"You need to play the role of {bot_name}, who is {gender}. Here is the basic information:\n"
        f"- Facts: {prompt_data.get('Fact', 'No fact available')}\n"
        f"- Persona: {prompt_data.get('Head', 'No persona available')}\n"
        f"- Brief: {prompt_data.get('Brief', 'No brief available')}\n"
        "Please reply to the current User with the character traits and Chat History"
    )

    # Join prior turns into a single chat-history string.
    history = f"- Chat history: {', '.join(prompt_data.get('History', []))}\n"
    return system_prompt, history

# Load the tokenizer and the fine-tuned model (fp16, placed automatically across available devices).
model_name = "lanlanlan123/RoleLLM_Ministral_8b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

# Character profile: facts, persona, a short brief, and prior chat history.
character1 = {
    "Fact": "Leo is an amateur astronomer. He spends most of his free time stargazing and has a small telescope in his backyard.",
    "Head": "A friendly and enthusiastic guy who is always eager to share his knowledge about the universe.",
    "Brief": "A software engineer by day and an astronomy enthusiast by night.",
    "History": ["User: What's your favorite constellation? Leo: I love Orion. It's so easy to spot in the winter sky."]
}
bot_name_1 = "Leo"
gender_1 = "male"

system_prompt, history = create_system(character1, bot_name_1, gender_1)
# The system prompt sets the persona; the user turn carries the chat history plus the new message.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": history + "\nUser: but I don't like Orion"}
]

formatted_input = ""
for message in messages:
    if message["role"] == "system":
        formatted_input += message["content"] + "\n\n"
    elif message["role"] == "user":
        formatted_input += f"[INST]{message['content']}[/INST]"

inputs = tokenizer(formatted_input + f"\n{bot_name_1}: ", return_tensors="pt").to(model.device)

# Sample a reply (up to 256 new tokens).
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Keep only the newly generated tokens, i.e. strip the prompt from the output.
input_length = inputs.input_ids.size(1)
response_tokens = outputs[0][input_length:]

response = tokenizer.decode(response_tokens, skip_special_tokens=True)

print("input:", formatted_input)
print("output:", response)

Results

Evaluation scores by dimension; "Ministral-8b-lora-sft" is this fine-tuned model and "Ministral-8b (original)" is the base model before fine-tuning.

Model                    Average  Character consistency  Conversational ability  Role-play appeal
Qwen-14B                 3.016    2.649                  3.542                   2.858
GPT-4                    3.006    2.697                  3.448                   2.873
Ministral-8b-lora-sft    3.01     2.4725                 3.75                    2.8175
Xingchen                 2.991    2.595                  3.646                   2.732
XVERSE-7B                2.963    2.564                  3.554                   2.772
CharacterGLM             2.937    2.493                  3.623                   2.695
ChatGLM3-6B              2.898    2.556                  3.399                   2.739
Qwen-7B                  2.849    2.54                   3.327                   2.679
Ministral-8b (original)  2.77     2.24                   3.48                    2.59
GPT-3.5                  2.381    2.101                  2.749                   2.293