---
license: apache-2.0
language:
- en
task_categories:
- question-answering
- text-generation
pretty_name: DeepSynth Bench
annotations_creators:
- expert-annotators
source_datasets:
- original
paper:
  title: "A Benchmark for Deep Information Synthesis"
  conference: "ICLR 2026"
---

# DEEPSYNTH: A Benchmark for Deep Information Synthesis

<div align="center">
<img src="assets/octopus_logo.png" alt="DEEPSYNTH Bench" width="400"/>
</div>

<div align="center">
<strong>Published at ICLR 2026</strong> |
<a href="https://openreview.net/pdf?id=0Dhpt9aY3n">📄 Paper</a> |
<a href="https://github.com/agentdeepsynthesis/deepsynth-bench">💻 Code</a> |
<a href="https://agentdeepsynthesis.github.io/deepsynth.github.io/">🌐 Project Page</a>
</div>

## Overview

**DEEPSYNTH-Bench** is a challenging benchmark for evaluating *deep information synthesis* — the ability of AI systems to integrate, reason over, and consolidate multi-source information into precise, structured answers.

Unlike benchmarks focused on retrieval or single-hop reasoning, DEEPSYNTH-Bench requires models to:
- Chain multiple reasoning steps across heterogeneous sources
- Produce structured JSON outputs with specific keys and values
- Demonstrate analytical depth, not just surface-level extraction

The benchmark includes a public **dev set of 40 tasks** with gold answers, full decompositions, and intermediate steps for iterative development, and a **test set of 80 tasks** (questions only) for clean evaluation — **120 tasks in total**.

---

## Repository Structure

```
deepsynth-bench/
├── README.md                    # This dataset card
├── data/
│   ├── test.json                # Full test set (80 tasks)
│   └── dev.json                 # Dev/Lite split for prototyping (40 tasks)
├── evaluation/
│   ├── evaluate.py              # Evaluation script (F1, EM, LLM-Judge)
│   └── llm_judge_prompt.txt     # Prompt used for LLM-as-a-judge metric
├── assets/
│   └── octopus_logo.png         # Project logo
└── LICENSE                      # Apache-2.0
```

---

## Dataset Files

| File | Split | Size | Description |
|------|-------|------|-------------|
| `dev.json` | Dev | 40 tasks | Questions, gold answers, reasoning plans, and full decompositions with intermediate steps |
| `test.json` | Test | 80 tasks | Questions only — submit answers for evaluation |

---

## Loading the Data

```python
import json
from huggingface_hub import hf_hub_download

# Dev set — includes gold answers
dev_path = hf_hub_download(
    repo_id="DeepSynthesisTeam/deepsynth-bench",
    filename="data/dev.json",
    repo_type="dataset"
)
with open(dev_path, "r") as f:
    dev_set = json.load(f)

# Test set — questions only
test_path = hf_hub_download(
    repo_id="DeepSynthesisTeam/deepsynth-bench",
    filename="data/test.json",
    repo_type="dataset"
)
with open(test_path, "r") as f:
    test_set = json.load(f)
```
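
As a quick sanity check after loading (the exact record structure depends on the release, so this snippet only relies on `len`):

```python
# Quick sanity check on the loaded splits: 40 dev tasks and 80 test tasks are expected.
# The top-level container may be a list of task records or a dict keyed by task ID;
# len() works for both.
print(f"Dev tasks:  {len(dev_set)}")
print(f"Test tasks: {len(test_set)}")
```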

---

## Prediction Format

Model predictions should be a JSON file mapping task IDs to answer dictionaries:

```json
{
  "001": {"Sweden": 1.2, "Finland": 0.8},
  "002": {"Brunei": -0.67, "Singapore": -0.34}
}
```
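
As a rough sketch, predictions in this format can be assembled and saved as follows; `answer_with_your_model` and the `"id"`/`"question"` field names are placeholders, so check the actual keys in `test.json` before using this.

```python
import json

def answer_with_your_model(question: str) -> dict:
    """Placeholder: run your system and return a {key: value} answer dict."""
    raise NotImplementedError

# Map every test task ID to its predicted answer dictionary.
# NOTE: "id" and "question" are assumed field names; inspect test.json for the real ones.
predictions = {task["id"]: answer_with_your_model(task["question"]) for task in test_set}

with open("your_predictions.json", "w") as f:
    json.dump(predictions, f, indent=2)
```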

---

## Evaluation

Evaluation scripts are available in the [GitHub repository](https://github.com/agentdeepsynthesis/deepsynth-bench).

| Metric | Description |
|--------|-------------|
| **Exact Match (EM)** | All keys and values must be exactly correct |
| **F1 Score** | Partial credit for correct key-value pairs |
| **LLM Judge** | Semantic equivalence; allows small numerical margins (1–5.5%) |

```bash
# Clone the repository to access evaluation scripts
git clone https://github.com/agentdeepsynthesis/deepsynth-bench.git
cd deepsynth-bench

# Run EM + F1 evaluation
python scripts/evaluation/eval_static_score.py your_predictions.json

# Run LLM-as-judge evaluation
python scripts/evaluation/llm_judge.py your_predictions.json
```
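
For intuition only, the sketch below shows one way exact match and key-value F1 can be computed for a single task; it is not the official scorer, so treat the evaluation scripts in the repository as the reference implementation.

```python
def score_task(pred: dict, gold: dict) -> tuple[int, float]:
    """Illustrative EM and key-value F1 for one task; not the official scorer."""
    # A pair counts as correct only if both the key and its exact value match.
    correct = sum(1 for k, v in pred.items() if k in gold and gold[k] == v)
    em = int(pred == gold)
    precision = correct / len(pred) if pred else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return em, f1

# Example using the prediction format shown above:
em, f1 = score_task({"Sweden": 1.2, "Finland": 0.9}, {"Sweden": 1.2, "Finland": 0.8})
print(em, round(f1, 2))  # -> 0 0.5
```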

---

## 🧩 Decompositions & Validation Schemas

### Decomposition Files (`decompositions/*.json`)

Each file (e.g., `001.json`) maps the logical sub-steps required to solve the corresponding question. These decompositions support step-by-step evaluation and can be used to guide or audit model reasoning chains.

### Validation Schemas (`intermediate_answers_schemas/`)

Each decomposition has a matching JSON Schema (e.g., `001.schema.json`) that defines the expected format for intermediate answer fields. Use these to programmatically validate whether a model's intermediate outputs conform to the expected structure.
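
As an illustration, a schema can be checked against a model's intermediate outputs with the `jsonschema` package (`pip install jsonschema`); the local paths below follow the naming described above, and `intermediate_answers` is a placeholder for your model's output.

```python
import json
from jsonschema import ValidationError, validate

# Load the decomposition and its matching schema for task 001
# (paths follow the naming above; adjust them to wherever you downloaded the files).
with open("decompositions/001.json") as f:
    decomposition = json.load(f)
with open("intermediate_answers_schemas/001.schema.json") as f:
    schema = json.load(f)

print(json.dumps(decomposition, indent=2))  # inspect the sub-steps for this task

# Placeholder: replace with your model's intermediate answers for this task.
intermediate_answers = {"step_1": "..."}

try:
    validate(instance=intermediate_answers, schema=schema)
    print("Intermediate answers conform to the schema.")
except ValidationError as err:
    print(f"Schema violation: {err.message}")
```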

---

## Citation

If you use DEEPSYNTH-Bench in your research, please cite:

```bibtex
@inproceedings{paul-etal-2026-deepinfosynth,
  title     = {A Benchmark for Deep Information Synthesis},
  author    = {Paul, Debjit and Murphy, Daniel and Gritta, Milan and Cardenas, Ronald and Prokhorov, Victor and Bolliger, Lena Sophia and Toker, Aysim and Miles, Roy and Oncescu, Andreea-Maria and Sivakumar, Jasivan Alex and Borchert, Philipp and Elezi, Ismail and Zhang, Meiru and Lee, Ka Yiu and Zhang, Guchun and Wang, Jun and Lampouras, Gerasimos},
  booktitle = {The Fourteenth International Conference on Learning Representations},
  month     = apr,
  year      = {2026},
}
```

### License

We follow the Apache License, Version 2.0. Please see the [LICENSE](LICENSE) file for more information.

Disclaimer: This open-source project is not an official Huawei product, and Huawei is not expected to provide support for it.