---
language:
- en
---

Last update: 8th June 2025

# Introduction

We announce **Motif 2.6B**, a 2.6 billion parameter language model trained from scratch on AMD Instinct™ MI250X GPUs. Motif 2.6B marks our very first step toward building helpful, reliable AI aligned with human values. With this initial release, our goal is for Motif 2.6B to match the performance of well-known open-source models such as Gemma, Llama, and Phi — particularly those in the sLLM regime.
- Training time: 42 days
- Training data: 2.4T tokens

*Notice: A detailed technical report will be released at a later time.*
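The figures above imply an average training throughput. A quick back-of-the-envelope estimate (taking the 2.4T tokens and 42 days at face value, and ignoring checkpointing, restarts, and warmup):

```python
# Average throughput implied by the training figures above.
total_tokens = 2.4e12   # 2.4T tokens
days = 42

tokens_per_day = total_tokens / days
tokens_per_second = tokens_per_day / (24 * 60 * 60)

print(f"{tokens_per_day / 1e9:.1f}B tokens/day")   # 57.1B tokens/day
print(f"{tokens_per_second / 1e6:.2f}M tokens/s")  # 0.66M tokens/s
```

This is a rough aggregate figure; the actual sustained rate per GPU depends on the cluster size, which is not stated here.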
# Evaluation