# Dataset Card for image-text_kurrent-xix
This dataset was created using the pagexml-hf converter from Transkribus PageXML data.
## Dataset Summary

This dataset contains 158525 samples in a single split.
## Projects Included
- MM_1_001
- MM_1_002
- MM_1_003
- MM_1_004
- MM_1_005
- MM_1_006
- MM_1_007
- MM_1_008
- MM_1_009
- MM_1_010
- MM_1_011
- MM_1_012
- TEST_CITlab_Bassermann_0_4
- TEST_CITlab_Bassermann_Manuscripts
- TEST_CITlab_Bassermann_Manuscripts_0_2
- TEST_CITlab_Binder_Kochbuch_2
- TEST_CITlab_Escher_M1
- TEST_CITlab_Gusbeth
- TEST_CITlab_Handschriftliche_Archivquellen
- TEST_CITlab_Konzilsprotkolle_B_Schwartz_5_2018
- TEST_CITlab_MargareteSick
- TEST_CITlab_MargareteSick20180731
- TEST_CITlab_MargareteSick20180731a
- TEST_CITlab_MÜLLER
- TEST_CITlab_Protokoll_Hoftheater_1806
- TEST_CITlab_Rehlen_1834_a
- TEST_CITlab_RvE_Barlaam_HS_D_
- TEST_CITlab_Steiner
- TEST_CITlab_Suppes
- TEST_CITlab_Suppes_3500
- TEST_CITlab_Tagebuch_Arnold_v1
- TEST_CITlab_umkc_Roland_M1
- TEST_CITlab_umkc_Roland_M2
- TRAIN_CITlab_Bassermann_0_4
- TRAIN_CITlab_Bassermann_Manuscripts
- TRAIN_CITlab_Bassermann_Manuscripts_0_2
- TRAIN_CITlab_Binder_Kochbuch_2
- TRAIN_CITlab_Escher_M1
- TRAIN_CITlab_Gusbeth
- TRAIN_CITlab_Handschriftliche_Archivquellen
- TRAIN_CITlab_Konzilsprotkolle_B_Schwartz_5_2018
- TRAIN_CITlab_MargareteSick
- TRAIN_CITlab_MargareteSick20180731
- TRAIN_CITlab_MargareteSick20180731a
- TRAIN_CITlab_MÜLLER
- TRAIN_CITlab_Protokoll_Hoftheater_1806
- TRAIN_CITlab_Rehlen_1834_a
- TRAIN_CITlab_RvE_Barlaam_HS_D_
- TRAIN_CITlab_Steiner
- TRAIN_CITlab_Suppes
- TRAIN_CITlab_Suppes_3500
- TRAIN_CITlab_Tagebuch_Arnold_v1
- TRAIN_CITlab_umkc_Roland_M1
- TRAIN_CITlab_umkc_Roland_M2
- hufeland_privatbesitz_1829
- nn_msgermqu2124_1827
- nn_msgermqu2345_1827
- parthey
## Dataset Structure

### Data Splits
- train: 158525 samples
### Dataset Size

- Approximate total size: 14,843,467.55 MB (≈14.8 TB)
- Total samples: 158525
### Features
- image: `Image(mode=None, decode=False)`
- xml_content: `Value('string')`
- filename: `Value('string')`
- project_name: `Value('string')`
### Data Organization

Data is organized as parquet shards by split and project:

```
data/
├── <split>/
│   └── <project_name>/
│       └── <timestamp>-<shard>.parquet
```
The HuggingFace Hub automatically merges all parquet files when loading the dataset.
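Because the shards are plain parquet files under this layout, a subset (for example, a single project) can be selected with a glob pattern over the shard paths. A minimal offline sketch, using `fnmatch` on hypothetical shard paths to illustrate what such a pattern matches:

```python
from fnmatch import fnmatch

# Hypothetical shard paths following the data/<split>/<project_name>/ layout
# described above; the timestamps and shard indices are made up.
paths = [
    "data/train/MM_1_001/20240101-0000.parquet",
    "data/train/MM_1_001/20240101-0001.parquet",
    "data/train/parthey/20240101-0000.parquet",
]

# Select only the shards belonging to the MM_1_001 project.
pattern = "data/train/MM_1_001/*.parquet"
matched = [p for p in paths if fnmatch(p, pattern)]
print(matched)  # only the two MM_1_001 shards
```

A pattern like this can also be passed via the `data_files` argument of `datasets.load_dataset` to download only the matching shards instead of the full dataset.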
## Usage

```python
from datasets import load_dataset

# Load entire dataset
dataset = load_dataset("dh-unibe/image-text_kurrent-xix")

# Load specific split
train_dataset = load_dataset("dh-unibe/image-text_kurrent-xix", split="train")
```
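The `xml_content` column holds the PageXML for each page, so the line transcriptions can be pulled out with the standard library's `xml.etree.ElementTree`. A minimal sketch; the sample document below is a trimmed, hypothetical illustration (not a real record), and the code reads the PAGE namespace from the root tag rather than hardcoding a schema version:

```python
import xml.etree.ElementTree as ET

def extract_lines(xml_content: str) -> list[str]:
    """Return the Unicode transcription of every TextLine, in document order."""
    root = ET.fromstring(xml_content)
    # PAGE schema versions differ in their namespace URI, so take it
    # from the root element instead of assuming a fixed version.
    ns_uri = root.tag[1:].partition("}")[0] if root.tag.startswith("{") else ""
    ns = {"pc": ns_uri}
    lines = []
    for text_line in root.iterfind(".//pc:TextLine", ns):
        unicode_el = text_line.find("pc:TextEquiv/pc:Unicode", ns)
        if unicode_el is not None and unicode_el.text:
            lines.append(unicode_el.text)
    return lines

# Hypothetical, heavily trimmed PageXML sample for illustration only.
sample_xml = """<?xml version="1.0" encoding="UTF-8"?>
<PcGts xmlns="http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15">
  <Page imageFilename="example.jpg">
    <TextRegion id="r1">
      <TextLine id="l1"><TextEquiv><Unicode>Erste Zeile</Unicode></TextEquiv></TextLine>
      <TextLine id="l2"><TextEquiv><Unicode>Zweite Zeile</Unicode></TextEquiv></TextLine>
    </TextRegion>
  </Page>
</PcGts>"""

print(extract_lines(sample_xml))
```

In practice you would call `extract_lines(sample["xml_content"])` on a sample loaded from the dataset; the real records may carry additional attributes (coordinates, reading order) not shown in the trimmed sample.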