# Math-MCQ-Generator-v1

## Model Description
This is a fine-tuned version of deepseek-ai/deepseek-math-7b-instruct specialized for generating high-quality mathematics multiple-choice questions (MCQs). The model was trained with QLoRA (Quantized Low-Rank Adaptation) to adapt the base model efficiently for educational content generation.
## Capabilities
- Subject: Mathematics
- Question Types: Multiple Choice Questions (MCQs)
- Topics: Applications of Trigonometry, Conic Sections, and more
- Difficulty Levels: Easy, Medium, Hard
- Cognitive Skills: Recall, Direct Application, Pattern Recognition, Strategic Reasoning, Trap Aware
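The attributes above map directly onto fields in the model's prompt template (shown under Usage). A small helper for assembling such prompts might look like the sketch below; the function name and signature are illustrative, not part of the model's API:

```python
def build_mcq_prompt(chapter, topics, difficulty, cognitive_skill):
    """Assemble an instruction prompt in the format the model was fine-tuned on."""
    return (
        "### Instruction:\n"
        "Generate a math MCQ similar in style to the provided examples.\n"
        "### Input:\n"
        f"chapter: {chapter}\n"
        f"topics: {topics}\n"
        f"Difficulty: {difficulty}\n"
        f"Cognitive Skill: {cognitive_skill}\n"
        "### Response:\n"
    )

prompt = build_mcq_prompt(
    "Conic Sections", ["Parabola"], "hard", "strategic_reasoning"
)
```

Any combination of the listed topics, difficulty levels, and cognitive skills can be passed in this way.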
## Training Information
- Base Model: deepseek-ai/deepseek-math-7b-instruct
- Training Method: QLoRA (4-bit quantization)
- Dataset Size: 1519 examples
- Training Epochs: 5
- Final Loss: ~0.20
- Training Date: 2025-09-03
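QLoRA keeps the frozen base weights in 4-bit precision while training small low-rank adapter matrices. A typical configuration for reproducing this kind of setup is sketched below; the quantization settings follow the standard QLoRA recipe, but the adapter hyperparameters (rank, alpha, target modules) are assumptions, not values published for this model:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization for the frozen base model (standard QLoRA recipe)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Low-rank adapter configuration; r, alpha, and target_modules are illustrative
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```

Both configs would be passed to `AutoModelForCausalLM.from_pretrained` and `get_peft_model` respectively when setting up training.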
## Usage

### Via Python API
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model and apply the fine-tuned LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-math-7b-instruct")
model = PeftModel.from_pretrained(base_model, "danxh/math-mcq-generator-v1")
tokenizer = AutoTokenizer.from_pretrained("danxh/math-mcq-generator-v1")

# Generate an MCQ
prompt = '''### Instruction:
Generate a math MCQ similar in style to the provided examples.
### Input:
chapter: Applications of Trigonometry
topics: ['Heights and Distances']
Difficulty: medium
Cognitive Skill: direct_application
### Response:
'''

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=300,
    do_sample=True,   # sampling must be enabled for temperature to take effect
    temperature=0.7,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
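Because decoding returns the prompt together with the completion, the generated MCQ itself can be recovered by splitting on the response marker. This is a small post-processing step done outside the model, sketched here with a mock decoded string:

```python
def extract_mcq(decoded_text):
    """Return only the text after the '### Response:' marker."""
    marker = "### Response:"
    _, _, completion = decoded_text.partition(marker)
    return completion.strip()

# Example with a mock decoded string standing in for the model output
decoded = "### Instruction:\n...\n### Response:\nQ: What is sin(30\u00b0)? ..."
mcq = extract_mcq(decoded)  # "Q: What is sin(30°)? ..."
```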
## Performance

The model reliably generates contextually appropriate mathematics MCQs, with:
- Proper question formatting
- Relevant multiple choice options
- Appropriate difficulty scaling
- Subject-matter accuracy
## License
MIT License - Feel free to use, modify, and distribute.