---
license: cc-by-nc-sa-4.0
task_categories:
- visual-question-answering
- image-classification
- text-generation
language:
- zh
tags:
- education
- math
- error-analysis
- handwritten
- multimodal
- scratchwork
pretty_name: ScratchMath
size_categories:
- 1K<n<10K
configs:
- config_name: primary
  data_files: primary/data-*.parquet
- config_name: middle
  data_files: middle/data-*.parquet
dataset_info:
- config_name: primary
  features:
  - name: question_id
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: solution
    dtype: string
  - name: student_answer
    dtype: string
  - name: student_scratchwork
    dtype: image
  - name: error_category
    dtype:
      class_label:
        names:
          '0': 计算错误
          '1': 题目理解错误
          '2': 知识点错误
          '3': 答题技巧错误
          '4': 手写誊抄错误
          '5': 逻辑推理错误
          '6': 注意力与细节错误
  - name: error_explanation
    dtype: string
  splits:
  - name: train
    num_examples: 1479
- config_name: middle
  features:
  - name: question_id
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: solution
    dtype: string
  - name: student_answer
    dtype: string
  - name: student_scratchwork
    dtype: image
  - name: error_category
    dtype:
      class_label:
        names:
          '0': 计算错误
          '1': 题目理解错误
          '2': 知识点错误
          '3': 答题技巧错误
          '4': 手写誊抄错误
          '5': 逻辑推理错误
          '6': 注意力与细节错误
  - name: error_explanation
    dtype: string
  splits:
  - name: train
    num_examples: 241
---
# ScratchMath

**Can MLLMs Read Students' Minds? Unpacking Multimodal Error Analysis in Handwritten Math**

AIED 2026 — 27th International Conference on Artificial Intelligence in Education

## Overview
ScratchMath is a multimodal benchmark for evaluating whether MLLMs can analyze handwritten mathematical scratchwork produced by real students. Unlike existing math benchmarks that focus on problem-solving accuracy, ScratchMath targets error diagnosis — identifying what type of mistake a student made and explaining why.
- 1,720 authentic student scratchwork samples from Chinese primary & middle schools
- 7 expert-defined error categories with detailed explanations
- 2 complementary tasks: Error Cause Explanation (ECE) & Error Cause Classification (ECC)
- 16 leading MLLMs benchmarked; best model reaches 57.2% vs. human experts at 83.9%
## Dataset Structure

### Subsets

| Subset | Grade Level | Samples |
|---|---|---|
| `primary` | Grades 1–6 | 1,479 |
| `middle` | Grades 7–9 | 241 |
### Error Categories
| Category (zh) | Category (en) | Primary | Middle |
|---|---|---|---|
| 计算错误 | Calculation Error | 453 | 113 |
| 题目理解错误 | Problem Comprehension Error | 499 | 20 |
| 知识点错误 | Conceptual Knowledge Error | 174 | 45 |
| 答题技巧错误 | Procedural Error | 118 | 17 |
| 手写誊抄错误 | Transcription Error | 95 | 29 |
| 逻辑推理错误 | Logical Reasoning Error | 73 | 2 |
| 注意力与细节错误 | Attention & Detail Error | 67 | 15 |
### Fields

| Field | Type | Description |
|---|---|---|
| `question_id` | string | Unique identifier |
| `question` | string | Math problem text (may contain LaTeX) |
| `answer` | string | Correct answer |
| `solution` | string | Step-by-step reference solution |
| `student_answer` | string | Student's incorrect answer |
| `student_scratchwork` | image | Photo of handwritten work |
| `error_category` | ClassLabel | One of 7 error types |
| `error_explanation` | string | Expert explanation of the error |
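For the Error Cause Explanation (ECE) task, these fields can be assembled into a text prompt while the scratchwork image is passed through the model's vision input. The sketch below illustrates one possible prompt layout; the function name and prompt wording are our own, not part of the dataset or the paper:

```python
def build_ece_prompt(sample: dict) -> str:
    """Assemble the text portion of a hypothetical ECE prompt from a
    ScratchMath sample; the student_scratchwork image is sent separately
    as the model's visual input."""
    return (
        f"Problem: {sample['question']}\n"
        f"Correct answer: {sample['answer']}\n"
        f"Student's answer: {sample['student_answer']}\n"
        "The attached image shows the student's handwritten scratchwork. "
        "Explain what kind of mistake the student made and why."
    )

# Toy sample for illustration only (not drawn from the dataset)
toy = {
    "question": "What is 37 + 48?",
    "answer": "85",
    "student_answer": "75",
}
print(build_ece_prompt(toy))
```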
## Quick Start

```python
from datasets import load_dataset

# Load the primary school subset
ds_primary = load_dataset("songdj/ScratchMath", "primary")

# Load the middle school subset
ds_middle = load_dataset("songdj/ScratchMath", "middle")

# Access a sample
sample = ds_primary["train"][0]
print(sample["question"])
print(sample["error_category"])
sample["student_scratchwork"].show()  # opens the scratchwork image
```
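Note that `error_category` is stored as an integer `ClassLabel` index. A small lookup table (the English glosses follow the category table above; the helper itself is ours, not part of the dataset) can map indices back to readable names:

```python
# ClassLabel index -> (Chinese, English) category name,
# in the order declared in the dataset card.
ERROR_CATEGORIES = {
    0: ("计算错误", "Calculation Error"),
    1: ("题目理解错误", "Problem Comprehension Error"),
    2: ("知识点错误", "Conceptual Knowledge Error"),
    3: ("答题技巧错误", "Procedural Error"),
    4: ("手写誊抄错误", "Transcription Error"),
    5: ("逻辑推理错误", "Logical Reasoning Error"),
    6: ("注意力与细节错误", "Attention & Detail Error"),
}

def category_name(index: int, lang: str = "en") -> str:
    """Return the human-readable category name for a ClassLabel index."""
    zh, en = ERROR_CATEGORIES[index]
    return en if lang == "en" else zh

print(category_name(0))        # English name of category 0
print(category_name(0, "zh"))  # Chinese name of category 0
```

Alternatively, the original Chinese label string can be recovered directly with `ds_primary["train"].features["error_category"].int2str(index)` from the `datasets` library.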
## Citation

If you use this dataset, please cite:

```bibtex
@inproceedings{song2026scratchmath,
  title     = {Can MLLMs Read Students' Minds? Unpacking Multimodal Error Analysis in Handwritten Math},
  author    = {Song, Dingjie and Xu, Tianlong and Zhang, Yi-Fan and Li, Hang and Yan, Zhiling and Fan, Xing and Li, Haoyang and Sun, Lichao and Wen, Qingsong},
  booktitle = {Proceedings of the 27th International Conference on Artificial Intelligence in Education (AIED)},
  year      = {2026}
}
```
## License

This dataset is released under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.