Improve dataset card: Add task categories, license, language, paper, code, and project page links
This pull request improves the dataset card for the Finnish SQuAD dataset (part of FIN-bench-v2) by:
- Adding `task_categories: ['text-classification', 'question-answering', 'text-generation']` to the YAML metadata.
- Adding the license (`cc-by-sa-4.0`) and language (`fi`) to the YAML metadata.
- Linking to the paper: https://huggingface.co/papers/2512.13330
- Linking to the GitHub repository: https://github.com/LumiOpen/lm-evaluation-harness
- Linking to the project page: https://huggingface.co/TurkuNLP
These changes enhance the discoverability and completeness of the dataset card on the Hugging Face Hub.
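
For reference, the same metadata block can be generated and sanity-checked programmatically with the `huggingface_hub` library. This is only an illustrative sketch (it assumes a recent `huggingface_hub` release), not part of the PR itself:

```python
# Minimal sketch: build the card metadata added in this PR with huggingface_hub
# (assumes a recent huggingface_hub release; illustrative only).
from huggingface_hub import DatasetCardData

card_data = DatasetCardData(
    license="cc-by-sa-4.0",
    language=["fi"],
    task_categories=[
        "text-classification",
        "question-answering",
        "text-generation",
    ],
)

# Prints the YAML front matter corresponding to the fields added in this PR.
print(card_data.to_yaml())
```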
README.md CHANGED

@@ -1,4 +1,11 @@
 ---
+license: cc-by-sa-4.0
+language:
+- fi
+task_categories:
+- text-classification
+- question-answering
+- text-generation
 dataset_info:
   features:
   - name: id

@@ -32,21 +39,19 @@ configs:
 - split: validation
   path: data/validation-*
 ---
+
 ### Dataset Summary
 
-This is a Finnish SQuAD question answering dataset used in
+This is a Finnish SQuAD question answering dataset used in [FIN-bench-v2: A Unified and Robust Benchmark Suite for Evaluating Finnish Large Language Models](https://huggingface.co/papers/2512.13330). It is a DeepL-based machine translation of the English SQuAD2.0 dataset which combines the 100,000 questions in
 SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones.
 To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported
 by the paragraph and abstain from answering.
 
+Project page: https://huggingface.co/TurkuNLP
+Code: https://github.com/LumiOpen/lm-evaluation-harness
+
 ### Considerations for Using the Data
 
 Due to DeepL terms and conditions, this dataset **must not be used for any machine translation work**, namely machine translation
 system development and evaluation of any kind. In general, we wish you do not pair the original English data with the translations
-except when working on research unrelated to machine translation, so as not to infringe on the terms and conditions.
-
-### Licensing Information
-
-Contents of this repository are distributed under the
-[Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
-Copyright of the dataset contents belongs to the original copyright holders.
+except when working on research unrelated to machine translation, so as not to infringe on the terms and conditions.
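
With these metadata changes merged, the dataset is loaded exactly as before; the sketch below shows typical usage with the `datasets` library. The repository id is a placeholder, since the card excerpt above does not state it:

```python
# Minimal loading sketch, assuming the `datasets` library is installed.
from datasets import load_dataset

# Hypothetical repository id -- replace with the dataset's actual Hub id.
repo_id = "TurkuNLP/squad2-fi"

ds = load_dataset(repo_id)

# The card's configs list a validation split stored under data/validation-*,
# and `id` is one of the declared features.
print(ds)
print(ds["validation"][0]["id"])
```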