Add initial model card for Qwen2.5-Math-7B-Caco

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +49 -0
README.md ADDED
@@ -0,0 +1,49 @@
---
pipeline_tag: text-generation
library_name: transformers
license: apache-2.0
---

# Qwen2.5-Math-7B-Caco: Scaling Code-Assisted Chain-of-Thoughts and Instructions for Model Reasoning

This repository hosts the `Qwen2.5-Math-7B-Caco` model, fine-tuned with the Caco framework introduced in the paper [Scaling Code-Assisted Chain-of-Thoughts and Instructions for Model Reasoning](https://huggingface.co/papers/2510.04081). Caco (Code-Assisted Chain-of-ThOught) is a framework that automates the synthesis of high-quality, verifiable, and diverse instruction-CoT reasoning data through code-driven augmentation.

**Official Resources:**
* [Paper](https://huggingface.co/papers/2510.04081)
* [GitHub Repository](https://github.com/LHL3341/Caco)
* [Hugging Face Models Collection](https://huggingface.co/collections/LHL3341/caco-68e0cb7b8a5f0071fac1f611)

## Abstract
Reasoning capability is pivotal for Large Language Models (LLMs) to solve complex tasks, yet achieving reliable and scalable reasoning remains challenging. While Chain-of-Thought (CoT) prompting has become a mainstream approach, existing methods often suffer from uncontrolled generation, insufficient quality, and limited diversity in reasoning paths. Recent efforts leverage code to enhance CoT by grounding reasoning in executable steps, but such methods are typically constrained to predefined mathematical problems, hindering scalability and generalizability. In this work, we propose Caco (Code-Assisted Chain-of-ThOught), a novel framework that automates the synthesis of high-quality, verifiable, and diverse instruction-CoT reasoning data through code-driven augmentation. Unlike prior work, Caco first fine-tunes a code-based CoT generator on existing math and programming solutions in a unified code format, then scales the data generation to a large amount of diverse reasoning traces. Crucially, we introduce automated validation via code execution and rule-based filtering to ensure logical correctness and structural diversity, followed by reverse-engineering filtered outputs into natural language instructions and language CoTs to enrich task adaptability. This closed-loop process enables fully automated, scalable synthesis of reasoning data with guaranteed executability. Experiments on our created Caco-1.3M dataset demonstrate that Caco-trained models achieve strong competitive performance on mathematical reasoning benchmarks, outperforming existing strong baselines. Further analysis reveals that Caco's code-anchored verification and instruction diversity contribute to superior generalization across unseen tasks. Our work establishes a paradigm for building self-sustaining, trustworthy reasoning systems without human intervention.

## Caco Framework
Caco implements its approach in three key stages:
1. **Unifying Code CoT**: Collecting diverse seed reasoning traces from both mathematical and algorithmic problems and converting them into a standardized executable format.
2. **Scaling Code CoT**: Training a dedicated code generator that not only expands the dataset but also realizes pattern-level augmentation by restructuring reasoning logic (e.g., decomposition, reformulation, alternative solution paths).
3. **Instruction Reversing**: Back-translating code into natural-language problems with contextual and stylistic variations, followed by natural-language CoT solution generation and dual verification for correctness.

This closed-loop process enables fully automated, scalable synthesis of reasoning data with guaranteed executability.
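The execution-based validation step in this loop can be sketched as follows. This is a minimal illustration under assumptions, not the official pipeline: `validate_code_cot` and the toy trace are hypothetical names, and the real system adds sandboxing, timeouts, and rule-based structural filters on top of plain execution.

```python
import contextlib
import io


def validate_code_cot(code: str, expected_answer: str) -> bool:
    """Execute a candidate code-form CoT and compare its printed answer.

    Traces that raise an exception or print the wrong answer are discarded,
    mirroring the idea of filtering by guaranteed executability.
    """
    buf = io.StringIO()
    try:
        # NOTE: a real pipeline would run this in a sandbox with a timeout.
        with contextlib.redirect_stdout(buf):
            exec(code, {})
    except Exception:
        return False  # execution failure -> reject the trace
    return buf.getvalue().strip() == expected_answer


# Toy code CoT: three consecutive integers summing to 48.
trace = "x = 48 // 3\nprint((x - 1) + x + (x + 1))"
print(validate_code_cot(trace, "48"))        # valid trace passes
print(validate_code_cot("print(1/0)", "48"))  # crashing trace is rejected
```

Only traces that both execute cleanly and reproduce the expected answer survive, which is what makes the synthesized CoT data verifiable by construction.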

## Performance
Models trained on Caco data achieve consistent improvements across mathematics, logic puzzles, scientific QA, and code reasoning, surpassing strong baselines and demonstrating broad cross-domain generalization. The `Qwen2.5-Math-7B-Caco` model shows competitive performance:

| Model | MATH | Olympiad | Theorem-QA |
|:------------------|:----:|:--------:|:----------:|
| DeepSeekMath-7B-Caco | 68.2 | 29.5 | 33.8 |
| **Qwen2.5-7B-Caco** | **82.4** | **46.5** | **46.0** |
| Llama3-8B-Caco | 70.6 | 34.1 | 31.0 |

## Usage
For detailed instructions on installation, data generation, training, and evaluation, please refer to the "Quick Start" and "Evaluation" sections of the [official GitHub repository](https://github.com/LHL3341/Caco).
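For quick inference with the `transformers` library, a minimal sketch is shown below. The repository id `LHL3341/Qwen2.5-Math-7B-Caco` is an assumption inferred from the models collection linked above (verify it there), and the example assumes the checkpoint ships a chat template, as Qwen2.5-based chat models typically do.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LHL3341/Qwen2.5-Math-7B-Caco"  # assumed repo id; check the collection

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "user", "content": "What is the sum of the first 10 positive integers?"}
]
# Build the prompt with the model's chat template, then generate.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```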

## Citation
If you find our code, model, or data useful, please kindly cite our [paper](https://arxiv.org/abs/2510.04081):

```bibtex
@article{caco,
  title={Scaling Code-Assisted Chain-of-Thoughts and Instructions for Model Reasoning},
  author={Honglin Lin and Qizhi Pei and Xin Gao and Zhuoshi Pan and Yu Li and Juntao Li and Conghui He and Lijun Wu},
  journal={arXiv preprint arXiv:2510.04081},
  year={2025}
}
```