Image-Text-to-Text
Transformers
Safetensors
English
gemma4
shining-valiant
shining-valiant-3
valiant
valiant-labs
gemma
gemma-4
gemma-4-E2B-it
reasoning
code
code-reasoning
science
science-reasoning
physics
biology
chemistry
earth-science
astronomy
machine-learning
artificial-intelligence
compsci
computer-science
information-theory
ML-Ops
math
cuda
deep-learning
agentic
LLM
neuromorphic
self-improvement
complex-systems
cognition
linguistics
philosophy
logic
epistemology
simulation
game-theory
knowledge-management
creativity
problem-solving
architect
engineer
developer
creative
analytical
expert
rationality
conversational
chat
instruct
Support our open-source dataset and model releases!
Shining Valiant 3: Qwen3-1.7B, Qwen3-4B, gemma-4-E2B-it, gemma-4-E4B-it, Qwen3-8B, Ministral-3-14B-Reasoning-2512, gpt-oss-20b
Shining Valiant 3 is a science, AI design, and general reasoning specialist built on Gemma 4.
- Finetuned on our newest science reasoning data generated with Deepseek R1 0528!
- AI to build AI: our high-difficulty AI reasoning data makes Shining Valiant 3 your friend for building with current AI tech and discovering new innovations and improvements!
- Improved general and creative reasoning to supplement problem-solving and general chat performance.
- Small model sizes allow running on local desktop and mobile, plus super-fast server inference!
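The small checkpoint also makes quantized local inference practical. The following is a minimal sketch, not part of the official release instructions: it assumes bitsandbytes is installed and that 4-bit quantization is supported for this architecture.
import torch
from transformers import AutoModelForCausalLM, AutoProcessor, BitsAndBytesConfig

MODEL_ID = "ValiantLabs/gemma-4-E2B-it-ShiningValiant3"

# Assumption: bitsandbytes 4-bit quantization works for this architecture.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=quant_config,
    device_map="auto",
)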
Prompting Guide
Shining Valiant 3 uses the gemma-4-E2B-it prompt format.
Example inference script to get started:
from transformers import AutoProcessor, AutoModelForCausalLM
MODEL_ID = "ValiantLabs/gemma-4-E2B-it-ShiningValiant3"
# Load model
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    dtype="auto",
    device_map="auto"
)
# Prepare the model input
prompt = "Propose a novel cognitive architecture where the primary memory component is a Graph Neural Network (GNN). How would this GNN represent working, declarative, and procedural memory? How would the \"cognitive cycle\" be implemented as operations on this graph?"
messages = [
    {"role": "user", "content": prompt},
]
# Process input
text = processor.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True
)
inputs = processor(text=text, return_tensors="pt").to(model.device)
input_len = inputs["input_ids"].shape[-1]
# Generate output
outputs = model.generate(**inputs, max_new_tokens=5000)
response = processor.decode(outputs[0][input_len:], skip_special_tokens=False)
# Parse output
processor.parse_response(response)
print(response)
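If you want tokens printed to the terminal as they are generated (handy for local desktop chat), here is a minimal streaming sketch using transformers' TextStreamer; it assumes the processor exposes its underlying tokenizer as processor.tokenizer and reuses the inputs prepared above.
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are produced, skipping the prompt.
streamer = TextStreamer(processor.tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, max_new_tokens=5000, streamer=streamer)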
Shining Valiant 3 is created by Valiant Labs.
Check out our HuggingFace page to see all of our models!
We care about open source. For everyone to use.
Model tree for ValiantLabs/gemma-4-E2B-it-ShiningValiant3
Base model: google/gemma-4-E2B-it
