Dataset used to train: HuggingFaceFW/fineweb-edu
How to use raincandy-u/Rain-v2 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="raincandy-u/Rain-v2")
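Once loaded, the pipeline can be called directly on a prompt. A quick usage sketch (the prompt and sampling settings here are illustrative, not from the model card):
out = pipe("Once upon a time,", max_new_tokens=120, do_sample=True, temperature=0.8, top_p=0.9)
print(out[0]["generated_text"])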
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("raincandy-u/Rain-v2")
model = AutoModelForCausalLM.from_pretrained("raincandy-u/Rain-v2")

How to use raincandy-u/Rain-v2 with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "raincandy-u/Rain-v2"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "raincandy-u/Rain-v2",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
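The same endpoint can also be called from Python. A minimal sketch using the openai client package (pip install openai; the api_key value is a placeholder, since the local server does not check it):
from openai import OpenAI

# Point the client at the local vLLM server's OpenAI-compatible API
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="raincandy-u/Rain-v2",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)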
How to use raincandy-u/Rain-v2 with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "raincandy-u/Rain-v2" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "raincandy-u/Rain-v2",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
# Or launch the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "raincandy-u/Rain-v2" \
--host 0.0.0.0 \
--port 30000
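Either way, once the server is up it can be queried from Python as well. A minimal sketch using the requests package against the same OpenAI-compatible endpoint:
import requests

# Query the local SGLang server's OpenAI-compatible completions endpoint
resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "raincandy-u/Rain-v2",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(resp.json()["choices"][0]["text"])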
How to use raincandy-u/Rain-v2 with Docker Model Runner:
docker model run hf.co/raincandy-u/Rain-v2
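This pulls the model and starts an interactive chat session in the terminal; Docker Model Runner also exposes an OpenAI-compatible API (see its documentation for the endpoint details).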
Rain-v2 is an English autoregressive language model with roughly 100M parameters, pretrained in about two days on a single RTX 4090, demonstrating a complete data-to-model workflow under limited compute.
The training corpus totals roughly 10B tokens.
The model readily outputs factual errors or fabricated information. It has not been aligned, so it can generate biased, harmful, or illegal content; do not expose it directly to end users.
Quick generation example:
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load in bfloat16 and let accelerate place the weights automatically
model = AutoModelForCausalLM.from_pretrained("raincandy-u/Rain-v2", torch_dtype=torch.bfloat16, device_map="auto")
tok = AutoTokenizer.from_pretrained("raincandy-u/Rain-v2")
prompt = "Here's a fairy tale about a little pig. A long, long time ago, there was a little pig called "
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=120, do_sample=True, temperature=0.8, top_p=0.9)  # do_sample=True so temperature/top_p take effect
print(tok.decode(out[0], skip_special_tokens=True))
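As a quick sanity check on the parameter count mentioned above, you can count the weights of the loaded model directly (a minimal sketch; assumes model from the snippet above):
# Count parameters to verify the ~100M figure
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")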