# FinePDFs 100BT
A ~100 billion token English subset of FinePDFs (eng_Latn split), created for efficient pretraining experiments.
Part of the Smol-Data collection of tried-and-tested data mixes for strong pretraining.
## Dataset Description
This dataset was created by randomly sampling from the English split of FinePDFs (~726B tokens) to produce a ~100B token subset. Sampling was performed with a fixed seed (42) and a slight 1.05× oversampling factor to account for variance.
A pre-shuffled version is available at HuggingFaceFW/finepdfs_100BT-shuffled.
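The sampling rate implied by these numbers can be worked out directly. A minimal sketch of the arithmetic (the exact rate used in the script is an assumption; only the token counts and the 1.05× factor come from the description above):

```python
# Back-of-the-envelope sampling rate: draw ~100B tokens from ~726B,
# with the 1.05x oversampling factor mentioned above.
target_tokens = 100e9
source_tokens = 726e9
oversample = 1.05

rate = target_tokens / source_tokens * oversample
print(f"{rate:.4f}")  # -> 0.1446, i.e. keep roughly 14.5% of documents
```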
## How It Was Created
The dataset was generated using datatrove with the smol_data.py script. The pipeline reads from the source dataset in streaming mode, applies a SamplerFilter to downsample, and writes the result back to the Hugging Face Hub.
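The core idea behind the downsampling step can be sketched in plain Python. This is a simplified stand-in for datatrove's `SamplerFilter`, not its actual implementation: each document is kept independently with a fixed probability, using a seeded RNG so the result is reproducible.

```python
import random

def sampler_filter(docs, rate, seed=42):
    """Keep each document independently with probability `rate`.

    Simplified stand-in for datatrove's SamplerFilter; the real pipeline
    streams from the source dataset and writes the survivors back to the Hub.
    """
    rng = random.Random(seed)
    for doc in docs:
        if rng.random() < rate:
            yield doc

# Toy demonstration on fake documents:
docs = [{"id": i, "text": f"doc {i}"} for i in range(10_000)]
kept = list(sampler_filter(docs, rate=0.1446))
print(len(kept))  # roughly 1,450 of 10,000 documents survive
```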
## Usage
```python
from datasets import load_dataset

ds = load_dataset("HuggingFaceFW/finepdfs_100BT", split="train", streaming=True)
for sample in ds:
    print(sample["text"][:200])
    break
```
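Each record carries a `token_count` field, so you can also stream until a fixed token budget is reached, e.g. to carve out a smaller slice for a quick ablation. The helper below is a hypothetical sketch, not part of the dataset card:

```python
def take_token_budget(stream, budget):
    """Yield samples until their cumulative token_count reaches `budget`."""
    total = 0
    for sample in stream:
        if total >= budget:
            break
        yield sample
        total += sample["token_count"]

# With the streaming dataset, e.g. a ~1B-token slice:
#   subset = take_token_budget(ds, budget=1_000_000_000)
# Toy demonstration with fake samples of 400 tokens each:
samples = [{"token_count": 400} for _ in range(10)]
print(len(list(take_token_budget(samples, budget=1000))))  # -> 3
```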
## Citation
```bibtex
@misc{niklaus2026smoldata,
  title={SmolData},
  author={Joel Niklaus and Hynek Kydl{\'\i}{\v{c}}ek},
  year={2026},
  publisher={Hugging Face},
  journal={Hugging Face repository},
  howpublished={\url{https://huggingface.co/collections/HuggingFaceFW/smol-data}}
}
```