---
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 1784778472
      num_examples: 2005712
  download_size: 1106679567
  dataset_size: 1784778472
tags:
  - turkish
  - pretraining
  - masked-language-modeling
  - diffusion
  - wikipedia
  - oscar
  - news
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - text-generation
language:
  - tr
---

# DiffutronLM-Pretraining-Corpus

**DiffutronLM-Pretraining-Corpus** is a comprehensive, filtered Turkish text dataset used during the Continual Pre-training (CPT) phase of the Diffutron language models.

The primary goal of this dataset is to align the cross-lingual representations of a multilingual base encoder (`jhu-clsp/mmBERT-base`) with the agglutinative complexity and morphological nuances of Turkish, without inducing catastrophic forgetting.

## 📊 Dataset Composition

To balance structured encyclopedic knowledge with natural, diverse web and news usage, the corpus combines three primary open-source collections, totaling approximately 2 million sequences.

- **Turkish Wikipedia** (~406,000 sequences): sourced from the Wikimedia Foundation's standard encyclopedic subset. It provides high-quality, factual, and structurally sound Turkish text.
- **Havadis & Temiz-OSCAR** (~1,600,000 sequences):
  - **Havadis**: a robust dataset of Turkish news articles providing formal, contemporary language usage.
  - **Temiz-OSCAR**: a heavily filtered and cleaned version of the Common Crawl-based Turkish OSCAR corpus, representing diverse internet text.
  - These two sources were merged, filtered, and uniformly sampled to extract 1.6 million high-quality sequences.
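The card does not release the exact curation script; a minimal sketch of the merge-shuffle-sample step, with toy in-memory lists standing in for the two full collections, might look like:

```python
import random

def merge_and_sample(sources, n_samples, seed=42):
    """Merge several text collections, shuffle, and draw a uniform sample."""
    merged = [seq for source in sources for seq in source]
    rng = random.Random(seed)
    rng.shuffle(merged)  # shuffle before sampling for distributional uniformity
    return merged[:n_samples]

# Toy stand-ins for the Havadis and Temiz-OSCAR collections:
havadis = [f"haber {i}" for i in range(10)]
temiz_oscar = [f"web {i}" for i in range(30)]

sample = merge_and_sample([havadis, temiz_oscar], n_samples=16)
```

Because the shuffle runs over the merged pool, each source contributes to the sample roughly in proportion to its size.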

## ⚙️ Preprocessing & Curation Strategy

The data was strictly curated to match the architectural constraints of the base Masked Diffusion Language Model (MDLM):

1. **Length Filtering**: to ensure compatibility and training stability, a strict length constraint was applied across all data sources; any sequence exceeding 512 tokens was filtered out.
2. **Tokenization Alignment**: the text was tokenized with the `jhu-clsp/mmBERT-base` tokenizer, a crucial step for maintaining exact alignment with the pre-trained embedding space of the frozen backbone.
3. **Shuffling & Distribution**: the web and news subsets were thoroughly shuffled prior to sampling to ensure distributional uniformity during training.
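The length-filtering step above can be sketched with a pluggable tokenizer. A whitespace tokenizer stands in here so the snippet is self-contained; the actual pipeline would use the mmBERT tokenizer, e.g. `AutoTokenizer.from_pretrained("jhu-clsp/mmBERT-base")` from `transformers`:

```python
def filter_by_length(texts, tokenize, max_len=512):
    """Keep only sequences whose tokenized length is within the cap."""
    return [t for t in texts if len(tokenize(t)) <= max_len]

# Stand-in tokenizer for illustration; token counts depend entirely on
# the tokenizer used, which is why the cap was applied with mmBERT's.
tokenize = str.split

texts = ["kısa bir cümle", "çok " * 600]  # the second yields 600 tokens
kept = filter_by_length(texts, tokenize, max_len=512)
```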

## 🚀 Intended Use

This corpus is optimized for:

- **Continual Pre-Training (CPT)**: adapting existing multilingual or general-purpose encoders to the Turkish language.
- **Masked Language Modeling (MLM)**: training models to predict masked or corrupted tokens (the foundational mechanism of discrete diffusion models).
- **Domain Adaptation**: serving as a baseline corpus for general Turkish language modeling before task-specific instruction tuning.
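The masked-corruption mechanism referenced above can be illustrated with a minimal sketch (token-level masking with a fixed seed; the real objective operates on subword IDs, not strings):

```python
import random

MASK = "[MASK]"

def corrupt(tokens, mask_prob, rng):
    """Independently replace each token with [MASK] with probability mask_prob."""
    return [MASK if rng.random() < mask_prob else tok for tok in tokens]

rng = random.Random(0)
tokens = "dil modeli eğitimi için örnek".split()
noisy = corrupt(tokens, mask_prob=0.5, rng=rng)
# The MLM / diffusion objective is then to reconstruct the original
# tokens at the masked positions.
```

In a discrete diffusion setup, `mask_prob` would be drawn per-example from a noise schedule rather than held fixed.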

## ⚠️ Limitations

- **Length Constraint**: the dataset inherently lacks long-form document structure, as all sequences are hard-capped at 512 tokens. It is not suitable for training long-context models without additional data.
- **Tokenization**: although the data is provided as raw text, the length filter was applied using the specific subword tokenization of `jhu-clsp/mmBERT-base`. Re-tokenizing with a different tokenizer (such as LLaMA's or a custom BPE) may yield different sequence lengths.
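The tokenizer dependence is easy to demonstrate with two stand-in tokenizers applied to the same string; real subword vocabularies would likewise disagree, just less dramatically:

```python
# Whitespace vs. character-level tokenization of the same sentence,
# illustrating that a 512-token cap is tokenizer-specific.
def ws_tokenize(text):
    return text.split()

def char_tokenize(text):
    return list(text)

text = "Türkçe sondan eklemeli bir dildir"
ws_len = len(ws_tokenize(text))      # 5 tokens
char_len = len(char_tokenize(text))  # 33 tokens
```

A sequence that fits under the cap with one tokenizer may exceed it with another, so any re-tokenized use of this corpus should re-check lengths.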

## 📝 Citation

If you use this dataset in your research, please cite the Diffutron paper:

```bibtex
@misc{diffutron2026,
      title={Diffutron: A Masked Diffusion Language Model for Turkish Language},
      author={Şuayp Talha Kocabay and Talha Rüzgar Akkuş},
      year={2026},
      eprint={2603.20466},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2603.20466},
}
```