---
task_categories:
  - text-to-3d
  - other
pretty_name: SciVisAgentBench
license: other
---

# SciVisAgentBench Tasks

Paper | Project Page | GitHub

This repository contains scientific data analysis and visualization datasets and tasks for benchmarking scientific visualization agents, as presented in the paper "SciVisAgentBench: A Benchmark for Evaluating Scientific Data Analysis and Visualization Agents".

## Sample Usage

You can download the benchmark tasks using the `huggingface_hub` CLI:

```shell
pip install huggingface_hub
hf download SciVisAgentBench/SciVisAgentBench-tasks \
  --repo-type dataset \
  --local-dir ~/SciVisAgentBench/SciVisAgentBench-tasks
```

## Data Organization

All the volume datasets from http://klacansky.com/open-scivis-datasets/ have been organized into a consistent structure.

### Directory Structure

The datasets and tasks for ParaView-based visualizations are organized into the main, sci_volume_data, and chatvis_bench folders. The bioimage_data folder holds tasks for bioimage visualization, and the molecular_vis folder holds tasks for molecular visualization. The chatvis_bench folder contains 20 test cases from the official ChatVis benchmark.

Each dataset in the main, sci_volume_data, and chatvis_bench folders follows this structure:

```
dataset_name/
├── data/
│   ├── dataset_file.raw         # The actual data file
│   └── dataset_name.txt         # Metadata about the dataset
├── GS/                          # Ground truth folder (ParaView state + pvpython code)
├── task_description.txt         # ParaView visualization task
└── visualization_goals.txt      # Evaluation criteria for the visualization
```
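As a minimal sketch of how an agent might ingest the `data/` folder: raw volumes from open-scivis-datasets are headerless scalar arrays, so loading is a flat read followed by a reshape. The dimensions, dtype, and axis order below are assumptions standing in for what a real `dataset_name.txt` would specify, and the `.raw` file is synthesized for the example.

```python
import numpy as np

# Assumed metadata: a real dataset's dims/dtype come from its .txt file.
dims = (64, 64, 64)  # (z, y, x); axis order is an assumption
np.arange(np.prod(dims), dtype=np.uint8).reshape(dims).tofile("demo.raw")

# Headerless raw volume: flat read, then reshape to the metadata's dims.
volume = np.fromfile("demo.raw", dtype=np.uint8).reshape(dims)
print(volume.shape, int(volume.min()), int(volume.max()))
```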

### Available Volume Datasets

- 37 datasets under 512 MB are recommended for download
- 18 datasets over 512 MB are listed but not downloaded

See `datasets_list.md` for a complete list with specifications; `datasets_info.json` contains the complete metadata for all datasets in JSON form.
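A metadata file like this makes it easy to select only the sub-512 MB datasets programmatically. The schema below (`name` and `size_bytes` fields) is a hypothetical stand-in, since the actual fields of `datasets_info.json` are not documented here, and the file is synthesized for the example.

```python
import json

# Hypothetical schema: the real datasets_info.json fields may differ.
info = [
    {"name": "bonsai", "size_bytes": 16 * 1024**2},    # 16 MiB
    {"name": "big_sim", "size_bytes": 900 * 1024**2},  # 900 MiB
]
with open("datasets_info.json", "w") as f:
    json.dump(info, f)

# Keep only datasets under the 512 MB download threshold.
LIMIT = 512 * 1024**2
with open("datasets_info.json") as f:
    entries = json.load(f)
small = [d["name"] for d in entries if d["size_bytes"] < LIMIT]
print(small)
```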

## Task Descriptions

Each dataset has:

1. **Task description** - based on the dataset type (medical, simulation, molecular, etc.)
2. **Visualization goals** - evaluation criteria tailored to the dataset's characteristics
3. **Ground truth** - pvpython code, ParaView state files, and screenshots
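To illustrate how ground-truth screenshots could support evaluation, here is a sketch that compares an agent-rendered image against a reference image with per-pixel mean squared error. This is not the benchmark's actual scoring method (the real criteria live in each `visualization_goals.txt`); both images are synthesized, and MSE is only an assumed illustrative metric.

```python
import numpy as np

# Synthesized stand-ins for a ground-truth screenshot and an agent render.
ground_truth = np.zeros((32, 32, 3), dtype=np.float64)
agent_render = ground_truth.copy()
agent_render[:16] += 0.5  # simulate the agent getting the top half wrong

# Per-pixel MSE: 0.0 means a pixel-perfect match to the reference.
mse = float(np.mean((agent_render - ground_truth) ** 2))
print(f"MSE: {mse:.4f}")
```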

## Acknowledgement

SciVisAgentBench was mainly created by Kuangshi Ai (kai@nd.edu), Shusen Liu (liu42@llnl.gov), and Haichao Miao (miao1@llnl.gov). Some of the test cases are provided by Kaiyuan Tang (ktang2@nd.edu) and Jianxin Sun (sunjianxin66@gmail.com). We sincerely thank the open-source community for their invaluable contributions. This project is made possible thanks to the following outstanding projects:

## License

© 2026 University of Notre Dame.
Released under the License.