---
license: cc-by-nc-sa-4.0
task_categories:
  - visual-question-answering
  - image-classification
  - text-generation
language:
  - zh
tags:
  - education
  - math
  - error-analysis
  - handwritten
  - multimodal
  - scratchwork
pretty_name: ScratchMath
size_categories:
  - 1K<n<10K
---

# ScratchMath

### *Can MLLMs Read Students' Minds?* Unpacking Multimodal Error Analysis in Handwritten Math

**AIED 2026** — 27th International Conference on Artificial Intelligence in Education

[![Project Page](https://img.shields.io/badge/Project-Page-blue?style=for-the-badge&logo=googlechrome&logoColor=white)](https://bbsngg.github.io/ScratchMath/)
[![Paper](https://img.shields.io/badge/Paper-PDF-red?style=for-the-badge&logo=adobeacrobatreader&logoColor=white)](https://bbsngg.github.io/ScratchMath/paper/ScratchMath_AIED2026.pdf)
[![Code](https://img.shields.io/badge/Code-GitHub-black?style=for-the-badge&logo=github&logoColor=white)](https://github.com/ai-for-edu/ScratchMath)
[![License](https://img.shields.io/badge/License-CC_BY--NC--SA_4.0-green?style=for-the-badge)](https://creativecommons.org/licenses/by-nc-sa/4.0/)

---

## Overview

**ScratchMath** is a multimodal benchmark for evaluating whether MLLMs can analyze handwritten mathematical scratchwork produced by real students. Unlike existing math benchmarks that focus on problem-solving accuracy, ScratchMath targets **error diagnosis** — identifying what type of mistake a student made and explaining why.

- **1,720** authentic student scratchwork samples from Chinese primary & middle schools
- **7** expert-defined error categories with detailed explanations
- **2** complementary tasks: Error Cause Explanation (ECE) & Error Cause Classification (ECC)
- **16** leading MLLMs benchmarked; the best model reaches **57.2%** vs. human experts at **83.9%**

---

## Dataset Structure

### Subsets

| Subset | Grade Level | Samples |
|:------:|:-----------:|:-------:|
| `primary` | Grades 1–6 | 1,479 |
| `middle` | Grades 7–9 | 241 |

### Error Categories

| Category (zh) | Category (en) | Primary | Middle |
|:-:|:-:|:-:|:-:|
| 计算错误 | Calculation Error | 453 | 113 |
| 题目理解错误 | Problem Comprehension Error | 499 | 20 |
| 知识点错误 | Conceptual Knowledge Error | 174 | 45 |
| 答题技巧错误 | Procedural Error | 118 | 17 |
| 手写誊抄错误 | Transcription Error | 95 | 29 |
| 逻辑推理错误 | Logical Reasoning Error | 73 | 2 |
| 注意力与细节错误 | Attention & Detail Error | 67 | 15 |

### Fields

| Field | Type | Description |
|:------|:----:|:------------|
| `question_id` | string | Unique identifier |
| `question` | string | Math problem text (may contain LaTeX) |
| `answer` | string | Correct answer |
| `solution` | string | Step-by-step reference solution |
| `student_answer` | string | Student's incorrect answer |
| `student_scratchwork` | image | Photo of handwritten work |
| `error_category` | ClassLabel | One of 7 error types |
| `error_explanation` | string | Expert explanation of the error |

---

## Quick Start

```python
from datasets import load_dataset

# Load primary school subset
ds_primary = load_dataset("songdj/ScratchMath", "primary")

# Load middle school subset
ds_middle = load_dataset("songdj/ScratchMath", "middle")

# Access a sample
sample = ds_primary["train"][0]
print(sample["question"])
print(sample["error_category"])
sample["student_scratchwork"].show()
```

---

## Citation

If you use this dataset, please cite:

```bibtex
@inproceedings{song2026scratchmath,
  title     = {Can MLLMs Read Students' Minds? Unpacking Multimodal Error Analysis in Handwritten Math},
  author    = {Song, Dingjie and Xu, Tianlong and Zhang, Yi-Fan and Li, Hang and Yan, Zhiling and Fan, Xing and Li, Haoyang and Sun, Lichao and Wen, Qingsong},
  booktitle = {Proceedings of the 27th International Conference on Artificial Intelligence in Education (AIED)},
  year      = {2026}
}
```

---

## License

This dataset is released under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.
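
Since `error_category` is a ClassLabel, it arrives as an integer index rather than a readable name. The sketch below shows one way to map indices back to names and tally samples per category. It runs on mock samples instead of the hosted dataset so it needs no download, and the label ordering here is an illustrative assumption, not necessarily the dataset's canonical order (with a loaded split, `ds.features["error_category"].int2str(i)` gives the authoritative mapping).

```python
from collections import Counter

# Illustrative label order -- an assumption; the real order is defined by
# the ClassLabel feature of the hosted dataset.
ERROR_CATEGORIES = [
    "Calculation Error",
    "Problem Comprehension Error",
    "Conceptual Knowledge Error",
    "Procedural Error",
    "Transcription Error",
    "Logical Reasoning Error",
    "Attention & Detail Error",
]

# Mock rows standing in for ds_primary["train"]; in the real dataset
# `error_category` is the integer ClassLabel index.
samples = [
    {"question_id": "p-0001", "error_category": 0},
    {"question_id": "p-0002", "error_category": 1},
    {"question_id": "p-0003", "error_category": 0},
]

def category_name(sample):
    """Map an integer ClassLabel index back to its string name."""
    return ERROR_CATEGORIES[sample["error_category"]]

# Count how many samples fall into each error category.
counts = Counter(category_name(s) for s in samples)
print(counts)
```

The same pattern scales to the full splits: replace the mock `samples` list with a loaded split and the `Counter` gives the per-category distribution shown in the tables above.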