---
license: cc-by-nc-sa-4.0
task_categories:
- text-generation
language:
- en
tags:
- agent
size_categories:
- n<1K
configs:
- config_name: webpage
data_files:
- split: test
path: "webpage/test.parquet"
- config_name: frontend
data_files:
- split: test
path: "frontend/test.parquet"
- config_name: website
data_files:
- split: test
path: "website/test.parquet"
---
# Vision2Web: A Hierarchical Benchmark for Visual Website Development with Agent Verification



[[🏠 Project Page](https://vision2web-bench.github.io/)] [[📖 arXiv Paper](https://arxiv.org/abs/2603.26648)] [[🏆 Leaderboard](https://vision2web-bench.github.io/#leaderboard)] [[📮 Submit Results](https://huggingface.co/datasets/zai-org/Vision2Web-Leaderboard)]

Vision2Web is a comprehensive benchmark designed to evaluate multimodal coding agents on **visual website development tasks spanning the full software development lifecycle**.
This dataset repository contains the **benchmark tasks, UI prototypes, test workflows, and resources** used to evaluate agent performance.

---
# 👀 Introduction

Vision2Web is a hierarchical benchmark for evaluating multimodal coding agents on **end-to-end visual website development**, measuring their ability to integrate the following capabilities in **long-horizon development scenarios**:

- UI understanding
- requirements reasoning
- interactive logic
- full-stack implementation

The benchmark is organized into three progressive levels:

### Level 1 – Static Webpage

Generate responsive, executable webpages from multi-device UI prototypes (desktop / tablet / mobile).

**Metric**
- Visual Score (VS)
---
### Level 2 – Interactive Frontend

Develop multi-page interactive frontends from multiple prototypes and textual specifications.

**Metrics**
- Visual Score (VS)
- Functional Score (FS)
---
### Level 3 – Full-Stack Website

Build complete full-stack web systems from requirement documents and UI prototypes. Agents must implement:

- backend logic
- state management
- frontend interactions

**Metrics**
- Visual Score (VS)
- Functional Score (FS)
---
Evaluation uses a **workflow-based agent verification paradigm** combining:

- **GUI Agent verifiers** for functional correctness
- **VLM-based judges** for visual fidelity

This enables **scalable and implementation-agnostic evaluation** across increasing levels of complexity.

---
# 📊 Benchmark Statistics

Vision2Web contains:

- **193 tasks**
- **16 subcategories**
- **4 major domains**

Domains include:

- E-Commerce
- SaaS
- Content Platforms
- Public Service

The dataset includes:

- **918 prototype images**
- **1,255 functional test cases**

---
# 📥 Using the Dataset
The dataset can be downloaded directly from Hugging Face. After downloading, extract it into your project directory with the following structure:
```
datasets/
├── webpage/ # Level 1: Static Webpage (100 tasks)
├── frontend/ # Level 2: Interactive Frontend (66 tasks)
└── website/ # Level 3: Full-Stack Website (27 tasks)
```
Each task directory contains the following components:

| File / Folder | Description |
|---|---|
| `prototypes/` | UI prototype images (desktop / tablet / mobile) |
| `resources/` | Multimedia assets used in tasks |
| `workflow.json` | Functional test workflow specification |
| `prompt.txt` | Textual requirements (Level 2 only) |
| `prd.md` | Requirement document (Level 3 only) |

Once extracted, ensure the dataset directory is placed at the root of the Vision2Web project so that the evaluation pipeline can locate the benchmark tasks correctly.

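As an illustration, the extracted layout can be traversed with a short helper. This is a minimal sketch, not part of the official evaluation pipeline: the level names and per-task files come from the table above, while the task-directory naming inside each level is an assumption.

```python
import json
from pathlib import Path

# Level names match the extracted layout shown above; per-task files follow
# the component table. Task-directory names inside each level are assumed.
LEVELS = ("webpage", "frontend", "website")

def list_tasks(root="datasets"):
    """Map each benchmark level to the sorted task directories it contains."""
    base = Path(root)
    return {
        level: sorted(p.name for p in (base / level).iterdir() if p.is_dir())
        for level in LEVELS
        if (base / level).is_dir()
    }

def load_workflow(task_dir):
    """Parse a single task's functional test workflow (workflow.json)."""
    with open(Path(task_dir) / "workflow.json", encoding="utf-8") as f:
        return json.load(f)
```

With a complete download, `list_tasks()["webpage"]` should contain 100 entries, matching the Level 1 task count above.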
---
# ⚠️ License
Vision2Web is released under the **CC-BY-NC-SA-4.0 license**.

---
# ✒️ Citation
If you find Vision2Web useful in your research, please cite:
```bibtex
@misc{he2026vision2webhierarchicalbenchmarkvisual,
      title={Vision2Web: A Hierarchical Benchmark for Visual Website Development with Agent Verification},
      author={Zehai He and Wenyi Hong and Zhen Yang and Ziyang Pan and Mingdao Liu and Xiaotao Gu and Jie Tang},
      year={2026},
      eprint={2603.26648},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2603.26648},
}
```