Publish 2 shards CC-MAIN-2026-12/61848–61849

- README.md +10 -10
- charts/timing_chart.png +2 -2
- charts/totals_chart.png +2 -2
- data/CC-MAIN-2026-12/06/18/061848.parquet +3 -0
- data/CC-MAIN-2026-12/06/18/061849.parquet +3 -0
- stats.csv +5 -3
README.md CHANGED

````diff
@@ -32,15 +32,15 @@ configs:
 
 **Open Markdown** is a large-scale web text dataset built from [Common Crawl](https://commoncrawl.org). Common Crawl is a non-profit that crawls the web and freely provides its archives and datasets to the public — see [their latest crawl announcement](https://commoncrawl.org/blog/march-2026-crawl-archive-now-available) for details on the source data. Every page goes through a pipeline that extracts the main content from raw HTML, converts it to clean Markdown, and packages the result into Parquet files with useful WARC metadata for traceability.
 
-The dataset currently includes crawl **CC-MAIN-2026-12** with **657,
+The dataset currently includes crawl **CC-MAIN-2026-12** with **657,540,364 documents across 38,043 shards**. We processed 85.2 TB of raw HTML into 5.5 TB of clean Markdown — a **93.6% reduction**. We plan to add more snapshots over time.
 
 ### Live Progress
 
-Processing at **71.2 shards/hour** — 38,
+Processing at **71.2 shards/hour** — 38,043 of 100,000 done (**38.04%**)
 
 Estimated completion: **May 25, 2026** (36 days)
 
-**Current server:** 6 CPU cores, 12 GB RAM (3.
+**Current server:** 6 CPU cores, 12 GB RAM (3.0 GB available), 45 GB disk free
 
 **Memory per session:** avg 575 MB, peak 799 MB (measured via VmRSS)
 
@@ -182,9 +182,9 @@ No intermediate files are created — the pipeline streams from compressed WARC
 
 ### Compression Ratios
 
-Numbers below are actual measurements summed across all
+Numbers below are actual measurements summed across all 38,043 files of CC-MAIN-2026-12 (657,540,364 pages total), projected to the full crawl of 100,000 WARC files.
 
-| Stage |
+| Stage | 38,043 files (measured) | 100,000 files (projected) | Reduction |
 |---|---|---|---|
 | Raw WARC (.warc.gz, downloaded) | ~30.1 TB | ~79.2 TB | — |
 | HTML extracted (uncompressed) | 85.2 TB | ~224.0 TB | — |
@@ -193,16 +193,16 @@ Numbers below are actual measurements summed across all 38040 files of CC-MAIN-2
 
 The big win is HTML → Markdown conversion: the tokenizer strips all tags, scripts, styles, navigation, and ads, keeping only the main content. This cuts 85.2 TB of uncompressed HTML down to 5.5 TB of markdown — a **93.6% reduction**. Parquet with Zstd then compresses the markdown a further 68.9%.
 
-End to end: ~30.1 TB of raw gzipped WARCs becomes **1.7 TB of Parquet** — a **94.3% total reduction** — containing 657,
+End to end: ~30.1 TB of raw gzipped WARCs becomes **1.7 TB of Parquet** — a **94.3% total reduction** — containing 657,540,364 clean markdown documents.
 
 ### Processing Times
 
-Pipeline timings across
+Pipeline timings across the 38,043 completed shards of CC-MAIN-2026-12:
 
 ```
-Download (raw WARC)                 ████████████░░░░░░░░░░░░ 173h
+Download (raw WARC)                 ████████████░░░░░░░░░░░░ 173h 39m 29s
-Convert (HTML → Markdown → Parquet) ████████████████████████ 341h
+Convert (HTML → Markdown → Parquet) ████████████████████████ 341h 37m 15s
-Publish (HuggingFace)               ███████░░░░░░░░░░░░░░░░░ 108h
+Publish (HuggingFace)               ███████░░░░░░░░░░░░░░░░░ 108h 52m 46s
 ```
 
 ### Dataset Charts
````
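As a quick sanity check of the README's figures, the percentages follow directly from the stated sizes. The TB values in the table are rounded, so the recomputed reductions land within a few tenths of the published 93.6% / 68.9% / 94.3%; the ETA likewise falls out of the shard rate. A sketch:

```python
# Rounded sizes from the README's compression table (TB).
raw_warc_tb = 30.1    # .warc.gz as downloaded
html_tb = 85.2        # extracted HTML, uncompressed
markdown_tb = 5.5     # after HTML -> Markdown conversion
parquet_tb = 1.7      # final Parquet with Zstd

html_to_md = (1 - markdown_tb / html_tb) * 100        # ~93.5 (README: 93.6)
md_to_parquet = (1 - parquet_tb / markdown_tb) * 100  # ~69.1 (README: 68.9)
end_to_end = (1 - parquet_tb / raw_warc_tb) * 100     # ~94.4 (README: 94.3)

# Progress arithmetic: shards remaining at the stated throughput.
remaining = 100_000 - 38_043
eta_days = remaining / 71.2 / 24                      # ~36.3 days, i.e. "36 days"

print(f"{html_to_md:.1f}% {md_to_parquet:.1f}% {end_to_end:.1f}% {eta_days:.1f}d")
```

The small discrepancies come from rounding the sizes to one decimal of a TB before dividing; the README presumably computes from exact byte counts.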
charts/timing_chart.png CHANGED (Git LFS)

charts/totals_chart.png CHANGED (Git LFS)
data/CC-MAIN-2026-12/06/18/061848.parquet ADDED

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d2355e9dd1e048078e237ff94ed69a927cb333528a22f4be29b6cc1c48b6102a
+size 49297568
```

data/CC-MAIN-2026-12/06/18/061849.parquet ADDED

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3e5f427b2637875e89162c8015ef9b7581770d8719fb7cf0a8b50add497743b8
+size 47809576
```
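The two added files are Git LFS pointer stubs, not the Parquet bytes themselves: the actual data lives in LFS storage, addressed by the sha256 `oid`, with `size` recording the real file size. A minimal sketch of parsing such a pointer (the helper name is ours):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its space-separated key/value fields."""
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:d2355e9dd1e048078e237ff94ed69a927cb333528a22f4be29b6cc1c48b6102a
size 49297568
"""

info = parse_lfs_pointer(pointer)
algo, digest = info["oid"].split(":", 1)
print(algo, int(info["size"]))  # sha256 49297568
```

Note that the `size` here matches the `parquet_bytes`-style column of the corresponding shard row added to stats.csv below, which is how the two changes can be cross-checked.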
stats.csv CHANGED

```diff
@@ -26072,6 +26072,8 @@ CC-MAIN-2026-12,61841,17377,2459639939,165166839,50769657,2026-04-18T20:49:32Z,2
 CC-MAIN-2026-12,61842,17229,2462480991,158973388,48801882,2026-04-18T20:52:11Z,23,40,0,14,567
 CC-MAIN-2026-12,61843,17057,2419179547,157144324,47664243,2026-04-18T20:52:11Z,24,52,0,14,567
 CC-MAIN-2026-12,61844,17305,2476644712,158687405,50250616,2026-04-18T20:52:11Z,25,44,0,14,569
-CC-MAIN-2026-12,61845,17354,2461373422,157995127,49992377,2026-04-18T20:54:32Z,35,41,0,
-CC-MAIN-2026-12,61846,17085,2408740843,151869444,48724225,2026-04-18T20:54:32Z,41,43,0,
-CC-MAIN-2026-12,61847,17159,2455784984,158635515,49639582,2026-04-18T20:54:32Z,42,43,0,
+CC-MAIN-2026-12,61845,17354,2461373422,157995127,49992377,2026-04-18T20:54:32Z,35,41,0,18,538
+CC-MAIN-2026-12,61846,17085,2408740843,151869444,48724225,2026-04-18T20:54:32Z,41,43,0,18,534
+CC-MAIN-2026-12,61847,17159,2455784984,158635515,49639582,2026-04-18T20:54:32Z,42,43,0,18,536
+CC-MAIN-2026-12,61848,17362,2488834317,159153213,49297568,2026-04-18T20:56:11Z,25,52,0,0,547
+CC-MAIN-2026-12,61849,17089,2436051760,154620125,47809576,2026-04-18T20:56:11Z,24,50,0,0,527
```