---
license: cc-by-4.0
configs:
- config_name: COCOtext
data_files:
- split: train
path: COCOtext/train-*
- split: val
path: COCOtext/val-*
- config_name: TextOCR
data_files:
- split: train
path: TextOCR/train-*
- split: val
path: TextOCR/val-*
- config_name: pubtables-1m
data_files:
- split: train
path: pubtables-1m/train-*
- split: val
path: pubtables-1m/val-*
- split: test
path: pubtables-1m/test-*
dataset_info:
- config_name: COCOtext
features:
- name: sample_id
dtype: string
- name: dataset_name
dtype: string
- name: task_name
dtype: string
- name: query
sequence: string
- name: annotations
sequence: string
- name: img_id
dtype: string
- name: query_info
struct:
- name: notes
dtype: string
- name: source_attribution
sequence: string
- name: source_license
dtype: string
- name: source_url
dtype: string
- name: annotations_info
struct:
- name: notes
dtype: string
- name: source_attribution
sequence: string
- name: source_license
dtype: string
- name: source_url
dtype: string
- name: image_info
struct:
- name: notes
dtype: string
- name: image_sha256
dtype: string
splits:
- name: train
num_bytes: 66261559
num_examples: 46970
- name: val
num_bytes: 12498555
num_examples: 8892
download_size: 15280439
dataset_size: 78760114
- config_name: TextOCR
features:
- name: sample_id
dtype: string
- name: dataset_name
dtype: string
- name: task_name
dtype: string
- name: query
sequence: string
- name: annotations
sequence: string
- name: img_id
dtype: string
- name: query_info
struct:
- name: notes
dtype: string
- name: source_license
dtype: string
- name: source_url
dtype: string
- name: annotations_info
struct:
- name: notes
dtype: string
- name: source_license
dtype: string
- name: source_url
dtype: string
- name: image_info
struct:
- name: notes
dtype: string
- name: image_sha256
dtype: string
splits:
- name: train
num_bytes: 123664898
num_examples: 43484
- name: val
num_bytes: 17736515
num_examples: 6232
download_size: 35928078
dataset_size: 141401413
- config_name: pubtables-1m
features:
- name: sample_id
dtype: string
- name: dataset_name
dtype: string
- name: task_name
dtype: string
- name: query
sequence: string
- name: annotations
sequence: string
- name: img_id
dtype: string
- name: query_info
struct:
- name: notes
dtype: string
- name: source_license
dtype: string
- name: source_url
dtype: string
- name: annotations_info
struct:
- name: notes
dtype: string
- name: source_license
dtype: string
- name: source_url
dtype: string
- name: image_info
struct:
- name: notes
dtype: string
- name: image_sha256
dtype: string
splits:
- name: train
num_bytes: 4515375098
num_examples: 1371294
- name: val
num_bytes: 568892238
num_examples: 172887
- name: test
num_bytes: 561231801
num_examples: 170466
download_size: 1398777256
dataset_size: 5645499137
---

# BigDocs-7.5M

Training data for the paper *BigDocs: An Open and Permissively-Licensed Dataset for Training Multimodal Models on Document and Code Tasks*.
## News
- [2025-04-23]: Initial release of the BigDocs-7.5M data.
## Guide on Data Loading
Some parts of BigDocs-7.5M are distributed without their `image` column and instead carry an `img_id` column. The file `get_bigdocs_75m.py`, part of this repository, provides tooling to substitute those images back in.
```python
from get_bigdocs_75m import get_bigdocs_75m

cocotext = get_bigdocs_75m("COCOtext", user_local_path=".../train2014")
pubtables1m = get_bigdocs_75m("pubtables-1m", user_local_path=".../PubTables-1M-Detection/images")
textocr = get_bigdocs_75m("TextOCR", user_local_path=".../train")
```
When specified, `user_local_path` must point to a local copy of the matching third-party dataset listed below:
- COCOtext: http://images.cocodataset.org/zips/train2014.zip
- pubtables-1m: https://www.microsoft.com/en-us/research/publication/pubtables-1m
- TextOCR: https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip
See the docstring in `get_bigdocs_75m.py` for more details.
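Conceptually, the substitution amounts to joining each sample's `img_id` against your local image directory, and the per-sample `image_info.image_sha256` field lets you verify the files you supply. The sketch below illustrates that idea; the repo id and the flat `<img_id>.jpg` naming scheme are assumptions made for illustration (actual file layouts differ per source dataset, so prefer the provided helper):

```python
import hashlib
import os

from datasets import load_dataset


def attach_images(config_name, split, user_local_path,
                  repo_id="ServiceNow/BigDocs-7.5M"):  # hypothetical repo id
    """Re-attach local image bytes by img_id, checking each file's SHA-256."""
    ds = load_dataset(repo_id, config_name, split=split)

    def add_image(sample):
        # Assumed naming scheme: one <img_id>.jpg per sample under user_local_path.
        path = os.path.join(user_local_path, f"{sample['img_id']}.jpg")
        with open(path, "rb") as f:
            data = f.read()
        # The card metadata records a SHA-256 per image; a mismatch means a
        # wrong, missing, or altered local file.
        if hashlib.sha256(data).hexdigest() != sample["image_info"]["image_sha256"]:
            raise ValueError(f"Checksum mismatch for img_id {sample['img_id']}")
        return {"image": data}

    return ds.map(add_image)
```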
## Licensing
The part of this repository generated by us is Copyright © ServiceNow 2024 and is licensed under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
Multiple datasets, documents, and tools were involved in the generation of BigDocs-7.5M. We document these dependencies on a per-sample basis through the `query_info`, `annotations_info`, and `image_info` fields, which respectively document the `query`, `annotations`, and `image` fields of our datasets.
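Because this provenance is recorded per sample, you can audit which upstream licenses a loaded split depends on by scanning the `source_license` entries. A minimal sketch, assuming a split loaded as above (`license_summary` is an illustrative helper, not part of this repository):

```python
from collections import Counter


def license_summary(ds):
    """Tally the upstream licenses recorded for the query and annotations fields."""
    counts = {"query_info": Counter(), "annotations_info": Counter()}
    for sample in ds:
        for field, counter in counts.items():
            info = sample[field] or {}
            if info.get("source_license"):
                counter[info["source_license"]] += 1
    return counts


# Usage: license_summary(split) for any loaded split, e.g. a split
# returned by get_bigdocs_75m above.
```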