YakugakuQA
YakugakuQA is a question answering dataset, consisting of 13 years (2012-2024) of past questions and answers from the Japanese National License Examination for Pharmacists. It contains over 4K pairs of questions, answers, and commentaries.
2025-6-9: Our dataset is included in the KokushiMD-10 dataset!
2025-5-29: Leaderboard added.
2025-2-17: Image data added.
2024-12-10: Dataset release.
Leaderboard
3-shot Accuracy (%)
| Model | YakugakuQA | IgakuQA |
|---|---|---|
| o1-preview | 87.9 | |
| GPT-4o | 83.6 | 86.6 |
| pfnet/Preferred-MedLLM-Qwen-72B | 77.2 | |
| Qwen/Qwen2.5-72B-Instruct | 73.6 | |
| google/medgemma-27b-text-it | 62.2 (*) | |
| EQUES/JPharmatron-7B | 62.0 | 64.7 |
| Qwen/Qwen3-14B (**) | 59.9 | |
(*) Exhibited several instruction-following issues, e.g., reasoning at such length that it hit the token limit.
(**) Evaluated with enable_thinking=False for a fair comparison.
Dataset Details
Dataset Description
- Curated by: EQUES Inc.
- Funded by: GENIAC Project
- Language(s) (NLP): Japanese
- License: cc-by-sa-4.0
Uses
Direct Use
YakugakuQA is intended to be used as a benchmark for evaluating the knowledge of large language models (LLMs) in the field of pharmacy.
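As an illustrative sketch of benchmark use (not the official evaluation protocol), accuracy over the multiple-choice format could be computed with exact-match scoring of predicted choice sets. The field name "answer" and the "解なし" sentinel come from this card; the function name and the decision to exclude "解なし" questions from scoring are assumptions.

```python
def score_exact_match(predictions, records):
    """Return accuracy, counting a prediction correct only if the
    predicted choice set equals the gold answer set exactly."""
    correct = scored = 0
    for pred, rec in zip(predictions, records):
        gold = rec["answer"]  # e.g. [2] or [1, 3]; choices are 1-indexed
        if gold == "解なし":  # no correct choice; excluded here by assumption
            continue
        scored += 1
        if set(pred) == set(gold):
            correct += 1
    return correct / scored

# Toy example with fabricated records:
records = [{"answer": [2]}, {"answer": [1, 3]}]
preds = [[2], [3, 1]]
print(score_exact_match(preds, records))  # 1.0 (order of choices does not matter)
```

Because some questions have multiple correct choices, comparing sets rather than lists avoids penalizing order differences.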
Out-of-Scope Use
Any use other than the above.
Dataset Structure
YakugakuQA consists of two files: data.jsonl, which contains the questions, answers, and commentaries, and metadata.jsonl, which holds supplementary information about the question categories and additional details related to the answers.
data.jsonl
- "problem_id" : unique ID, represented by a six-digit integer. The higher three digits indicate the exam number, while the lower three digits represent the question number within that specific exam.
- "problem_text" : problem statement.
- "choices" : choices corresponding to each question. Note that the Japanese National License Examination for Pharmacists is a multiple-choice format examination.
- "text_only" : whether the question includes images or tables. The corresponding images or tables are not included in this dataset, even if `text_only` is marked as `false`.
- "answer" : list of indices of the correct choices. Note the following points:
  - the choices are 1-indexed.
  - multiple choices may be included, depending on the question format.
  - "解なし" (no valid answer) indicates there is no correct choice. The reason for this is documented in `metadata.jsonl` in most cases.
- "comment" : commentary text.
- "num_images" : number of images included in the question.
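A record from data.jsonl can be loaded line by line with the standard json module. The field names and the problem_id encoding (higher three digits = exam number, lower three digits = question number) follow this card; the sample line below is fabricated for illustration.

```python
import json

# Fabricated sample line in the schema described above.
sample = ('{"problem_id": 107001, "problem_text": "...", "choices": ["A", "B"], '
          '"text_only": true, "answer": [1], "comment": "...", "num_images": 0}')

record = json.loads(sample)

# problem_id is a six-digit integer: the higher three digits encode the
# exam number, the lower three the question number within that exam.
exam_no, question_no = divmod(record["problem_id"], 1000)
print(exam_no, question_no)  # 107 1
```

In a real script, each line of data.jsonl would be parsed the same way inside a loop over the open file.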
metadata.jsonl
- "problem_id" : see above.
- "category" : question category. One of `["Physics", "Chemistry", "Biology", "Hygiene", "Pharmacology", "Pharmacy", "Pathology", "Law", "Practice"]`.
- "note" : additional information about the question.
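Since both files share "problem_id", metadata can be joined onto the question records with a simple lookup table. A minimal sketch with fabricated in-memory lines (a real script would read metadata.jsonl from disk):

```python
import json
from collections import Counter

# Fabricated metadata.jsonl lines, following the schema in this card.
meta_lines = [
    '{"problem_id": 107001, "category": "Physics", "note": ""}',
    '{"problem_id": 107002, "category": "Law", "note": ""}',
]

# Index by problem_id so data.jsonl records can look up their metadata.
metadata = {m["problem_id"]: m for m in map(json.loads, meta_lines)}

# Tally questions per category, e.g. for a per-category accuracy breakdown.
counts = Counter(m["category"] for m in metadata.values())
print(counts["Physics"])  # 1
```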
images
The image filenames follow the format: `problem_id_{image_id}.png`
Dataset Creation
Curation Rationale
YakugakuQA aims to provide a Japanese-language evaluation benchmark for assessing the domain knowledge of LLMs.
Source Data
Data Collection and Processing
All questions, answers, and commentaries for the target years were collected; parsing was performed automatically.
Who are the source data producers?
All questions, answers, and commentaries have been obtained from yakugaku lab. All metadata has been obtained from the website of the Ministry of Health, Labour and Welfare. Note that the original questions and answers are also sourced from materials published by the Ministry of Health, Labour and Welfare.
Citation
This paper has been accepted to IJCNLP-AACL 2025.
BibTeX:
@inproceedings{ono-etal-2025-japanese,
title = "A {J}apanese Language Model and Three New Evaluation Benchmarks for Pharmaceutical {NLP}",
author = "Ono, Shinnosuke and
Sukeda, Issey and
Fujii, Takuro and
Buma, Kosei and
Sasaki, Shunsuke",
editor = "Inui, Kentaro and
Sakti, Sakriani and
Wang, Haofen and
Wong, Derek F. and
Bhattacharyya, Pushpak and
Banerjee, Biplab and
Ekbal, Asif and
Chakraborty, Tanmoy and
Singh, Dhirendra Pratap",
booktitle = "Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics",
month = dec,
year = "2025",
address = "Mumbai, India",
publisher = "The Asian Federation of Natural Language Processing and The Association for Computational Linguistics",
url = "https://aclanthology.org/2025.ijcnlp-long.72/",
pages = "1316--1332",
ISBN = "979-8-89176-298-5",
    abstract = "We present JPharmatron, a Japanese domain-specific large language model (LLM) for the pharmaceutical field, developed through continual pre-training on two billion Japanese pharmaceutical tokens and eight billion English biomedical tokens. For rigorous evaluation, we introduce JPharmaBench, a benchmark suite consisting of three new benchmarks: YakugakuQA, based on national pharmacist licensing exams; NayoseQA, which tests cross-lingual synonym and terminology normalization; and SogoCheck, a novel task involving cross-document consistency checking. We evaluate our model against open-source medical LLMs and commercial models, including GPT-4o. Experimental results show that JPharmatron outperforms existing open models and achieves competitive performance with commercial ones. Interestingly, even GPT-4o performs poorly on SogoCheck, suggesting that cross-sentence consistency reasoning remains an open challenge. JPharmatron enables secure and local model deployment for pharmaceutical tasks, where privacy and legal constraints limit the use of closed models. Besides, JPharmaBench offers a reproducible framework for evaluating Japanese pharmaceutical natural language processing. Together, they demonstrate the feasibility of practical and cost-efficient language models for Japanese healthcare and pharmaceutical sectors. Our model, codes, and datasets are available on HuggingFace: https://huggingface.co/collections/EQUES/jpharmatron and https://huggingface.co/collections/EQUES/jpharmabench."
}
Contributions
Thanks to @shinnosukeono for adding this dataset.
Acknowledgement
This dataset is part of the results of work supported by the GENIAC project (Generative AI Development Strengthening Project) of the Ministry of Economy, Trade and Industry (METI) and the New Energy and Industrial Technology Development Organization (NEDO).