# FRANKENSTALLM 3B v2 Korean ORPO - Ollama Modelfile (Q4_K_M)
#
# Usage:
#   ollama create frankenstallm-3b-v2:Q4_K_M -f Modelfile.3b-v2-Q4
#   ollama run frankenstallm-3b-v2:Q4_K_M
#
# Sampling config: best cell from the ORPO eval grid (t0.7_rep1.2):
#   3-gram repetition = 0%, EOS rate = 100%, avg tokens = 189.2

FROM ./outputs/gguf/frankenstallm-3b-v2-Q4_K_M.gguf

# --- Sampling config (ORPO eval grid, verified in Ollama) ---
PARAMETER temperature 0.7
PARAMETER repeat_penalty 1.2
PARAMETER top_p 0.9
PARAMETER top_k 50
PARAMETER num_predict 512
PARAMETER num_ctx 4096
PARAMETER stop ""

# --- System prompt ---
# (Korean: "You are FRANKENSTALLM, an AI assistant specialized in Korean.
#  Please answer in accurate and natural Korean.")
SYSTEM """당신은 FRANKENSTALLM, 한국어에 특화된 AI 어시스턴트입니다.
정확하고 자연스러운 한국어로 답변해주세요."""

# --- License ---
LICENSE """Apache-2.0"""
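
# A minimal sanity check after building the model (assumes a locally running
# Ollama daemon; the model tag matches the `ollama create` command above):
#
#   ollama show frankenstallm-3b-v2:Q4_K_M --modelfile   # confirm PARAMETER/SYSTEM were applied
#   ollama run frankenstallm-3b-v2:Q4_K_M "안녕하세요"    # expect a natural Korean reply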