# Dataset Card for HuggingFace-CJK-Metadata

## Dataset Summary
This dataset provides structured metadata and documentation extracted from the top 700 most downloaded datasets per language on the Hugging Face Hub for Chinese (zh), Japanese (ja), Korean (ko), and English (en, as a reference). The collection includes both high-level metadata (e.g., size, license, task type) and raw dataset card contents, enabling large-scale, cross-linguistic analysis of data curation, documentation quality, and cultural development patterns in East Asian NLP communities. All download statistics were recorded on January 28, 2025, and the dataset includes a total of 3,300+ entries spanning metadata and documentation fields.
## Supported Tasks

This dataset is intended for:
- Dataset ecosystem analysis
- Meta-evaluation of documentation quality
- Cultural and institutional analysis of NLP practices across languages
- Visualization and benchmarking of dataset trends
## Languages
- zh – Chinese
- ja – Japanese
- ko – Korean
- en – English (reference baseline)
## Dataset Structure

The dataset contains two main components, structured metadata and full dataset card content, each organized by language (English, Chinese, Japanese, Korean):
```
huggingface-cjk-metadata/
└── data/
    ├── dataset_card/
    │   ├── dataset_cards_en.csv
    │   ├── dataset_cards_ko.csv
    │   ├── dataset_cards_ja.csv
    │   └── dataset_cards_zh.csv
    └── dataset_meta/
        ├── dataset_meta_en.csv
        ├── dataset_meta_ko.csv
        ├── dataset_meta_ja.csv
        └── dataset_meta_zh.csv
```
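Because the metadata files and the dataset-card files have different column sets, the CSVs are easiest to read file by file. The following is a minimal sketch, not part of the dataset itself, assuming `pandas` and `huggingface_hub` are installed; the file paths follow the tree above.

```python
# Sketch: download one language split of each component and load it with pandas.
import pandas as pd
from huggingface_hub import hf_hub_download

REPO_ID = "Dasool/huggingface-cjk-metadata"

def load_csv(relative_path: str) -> pd.DataFrame:
    """Fetch a CSV from the dataset repo and read it into a DataFrame."""
    local_path = hf_hub_download(
        repo_id=REPO_ID,
        filename=relative_path,
        repo_type="dataset",
    )
    return pd.read_csv(local_path)

meta_ko = load_csv("data/dataset_meta/dataset_meta_ko.csv")
cards_ko = load_csv("data/dataset_card/dataset_cards_ko.csv")
print(meta_ko.shape, cards_ko.shape)
```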
### 📘 dataset_meta files

Each row corresponds to a Hugging Face dataset and includes structured metadata fields:

| Field | Description |
|---|---|
| `id` | Hugging Face dataset ID (e.g., `skt/kogpt2`) |
| `author` | Dataset creator (user or organization) |
| `created_at` | Timestamp when the dataset repo was created |
| `lastModified` | Timestamp of the latest commit |
| `sha` | Git commit SHA |
| `downloads_30` | Number of downloads in the past 30 days |
| `downloads_alltime` | Total number of downloads |
| `likes` | Number of likes on the dataset page |
| `tags` | Associated tags |
| `tasks` | NLP tasks associated with the dataset |
| `description` | Short dataset summary |
| `citation` | Citation information |
| `languages` | Languages covered (e.g., `ko`, `en`, `zh`) |
| `language_category` | One of: monolingual, en-paired, multilingual |
| `size_categories` | Estimated dataset size (e.g., `10K<n<100K`) |
| `paperswithcode_id` | Linked PapersWithCode ID (if any) |
| `private` | Boolean indicating if the repo is private |
| `gated` | Boolean for gated access |
| `disabled` | Boolean for deactivated datasets |
| `license` | License name (e.g., `apache-2.0`, `cc-by-nc-4.0`) |
| `arxiv_id` | arXiv paper ID (if applicable) |
| `url` | Hugging Face dataset URL |
| `task_ids` | Internal Hugging Face task identifiers |
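As an illustration of how these fields can be used for ecosystem analysis, the sketch below runs a few simple queries over a loaded metadata DataFrame. It assumes `meta_ko` is the DataFrame produced by the loading example above; column names are the ones listed in the table.

```python
# Sketch: simple queries over the structured metadata fields.
# `meta_ko` is assumed to be loaded from dataset_meta_ko.csv.

# Datasets released under the Apache-2.0 license
apache_sets = meta_ko[meta_ko["license"] == "apache-2.0"]
print(len(apache_sets), "Apache-2.0 datasets")

# Ten most-downloaded datasets in this language split
top10 = (
    meta_ko.sort_values("downloads_alltime", ascending=False)
           .loc[:, ["id", "author", "downloads_alltime", "likes", "license"]]
           .head(10)
)
print(top10.to_string(index=False))

# Share of monolingual vs. en-paired vs. multilingual datasets
print(meta_ko["language_category"].value_counts(normalize=True))
```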
### 📄 dataset_card files

Each file contains raw Hugging Face dataset card contents in two fields:

| Field | Description |
|---|---|
| `dataset_id` | Dataset identifier (same as `id` in the metadata files) |
| `yaml_metadata` | Structured YAML block from the top of the README |
| `markdown_content` | The full free-text markdown body of the dataset card |
These cards enable deeper qualitative analyses of documentation quality, structure, and cultural content across languages.
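For example, the `yaml_metadata` string can be parsed back into a Python dictionary to recover structured tags, and the length of `markdown_content` can serve as a rough documentation-quality proxy. A minimal sketch, assuming PyYAML is installed and `cards_ko` is a DataFrame loaded from `dataset_cards_ko.csv`:

```python
# Sketch: parse the YAML front matter and measure card length per dataset.
# Rows with missing or malformed YAML blocks fall back to an empty dict.
import pandas as pd
import yaml

def parse_card_row(row) -> dict:
    raw = row["yaml_metadata"]
    meta = {}
    if isinstance(raw, str):
        try:
            meta = yaml.safe_load(raw) or {}
        except yaml.YAMLError:
            meta = {}
    return {
        "dataset_id": row["dataset_id"],
        "languages": meta.get("language", []),
        "license": meta.get("license", []),
        "card_length_chars": len(str(row["markdown_content"])),
    }

card_stats = pd.DataFrame(parse_card_row(row) for _, row in cards_ko.iterrows())
print(card_stats.sort_values("card_length_chars", ascending=False).head())
```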
## Repository

All scraping code and analysis notebooks can be found at:
👉 GitHub: https://github.com/Dasol-Choi/cjk-huggingface-analysis
## Citation

```bibtex
@misc{choi2025languagedataleftbehind,
      title={No Language Data Left Behind: A Comparative Study of CJK Language Datasets in the Hugging Face Ecosystem},
      author={Dasol Choi and Woomyoung Park and Youngsook Song},
      year={2025},
      eprint={2507.04329},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.04329},
}
```
## Contact