Update README.md
README.md CHANGED
@@ -136,13 +136,13 @@ Network Architecture: Qwen-7B-Instruct
 **Input Type(s):** Text <br>
 **Input Format(s):** String <br>
 **Input Parameters:** One-Dimensional (1D) <br>
-**Other Properties Related to Input:** Context length up to
+**Other Properties Related to Input:** Context length up to 65,536 tokens <br>
 
 ## Output: <br>
 **Output Type(s):** Text <br>
 **Output Format:** String <br>
 **Output Parameters:** One-Dimensional (1D) <br>
-**Other Properties Related to Output:** Context length up to
+**Other Properties Related to Output:** Context length up to 65,536 tokens <br>
 
 Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>
 
@@ -168,17 +168,17 @@ The training corpus for OpenCodeReasoning-Nemotron-7B-v1.1 is [OpenCodeReasoning
 
 Data Collection Method: Hybrid: Automated, Human, Synthetic <br>
 Labeling Method: Hybrid: Automated, Human, Synthetic <br>
-Properties:
+Properties: 1.165M samples from OpenCodeReasoning (https://huggingface.co/datasets/nvidia/OpenCodeReasoning)
 
 ## Evaluation Dataset:
-We used the datasets listed in the next section to evaluate OpenCodeReasoning-Nemotron-7B. <br>
+We used the datasets listed in the next section to evaluate OpenCodeReasoning-Nemotron-7B-v1.1. <br>
 **Data Collection Method: Hybrid: Automated, Human, Synthetic <br>**
 **Labeling Method: Hybrid: Automated, Human, Synthetic <br>**
 
 
 
 ### License/Terms of Use: <br>
-GOVERNING TERMS: Use of this model is governed by [Apache 2.0](https://huggingface.co/nvidia/
+GOVERNING TERMS: Use of this model is governed by [Apache 2.0](https://huggingface.co/nvidia/OpenCodeReasoning-Nemotron-7B-v1.1/blob/main/LICENSE).
 
 ### Deployment Geography:
 Global<br>
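For reference, the input/output spec and GPU note in the first hunk can be exercised with the standard Hugging Face `transformers` API. The sketch below is not taken from the model card: it assumes a CUDA-capable GPU and a checkpoint that ships a chat template, and the prompt, dtype, and generation budget are illustrative choices.

```python
# Hedged sketch, not an official snippet from this model card: load the
# checkpoint on an NVIDIA GPU with `transformers` and generate from a text prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/OpenCodeReasoning-Nemotron-7B-v1.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights fit a 7B model on one modern GPU
    device_map="auto",           # place weights on the available NVIDIA GPU(s)
)

# Input and output are plain strings; the card states the context
# (prompt plus generated reasoning) can span up to 65,536 tokens.
prompt = "Write a Python function that checks whether a string is a palindrome."
messages = [{"role": "user", "content": prompt}]  # assumes the checkpoint provides a chat template
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=1024)  # illustrative budget
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same code runs on a CPU-only host, just considerably slower, which is the contrast the GPU-acceleration note is drawing.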
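The training-data line in the second hunk cites roughly 1.165M samples from the OpenCodeReasoning dataset. A minimal, hedged sketch for inspecting that corpus with the `datasets` library follows; configuration and split names are not spelled out in this README, so the code discovers them at runtime rather than hard-coding them.

```python
# Hedged sketch: peek at the OpenCodeReasoning training corpus referenced above.
# Configuration and split names are queried at runtime because they are not
# listed in this README.
from datasets import get_dataset_config_names, load_dataset

configs = get_dataset_config_names("nvidia/OpenCodeReasoning")
print(configs)  # list the available configurations first

# Stream rather than download: the card cites on the order of 1.165M samples.
ds = load_dataset("nvidia/OpenCodeReasoning", configs[0], streaming=True)
split_name = next(iter(ds))           # first available split
example = next(iter(ds[split_name]))  # one record, fetched lazily
print(sorted(example.keys()))         # inspect the available fields
```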