nielsr (HF Staff) committed
Commit 8f849fc · verified · 1 Parent(s): ab50c38

Update dataset card with paper link, task categories and citation


Hi! I'm Niels, part of the community science team at Hugging Face.

This PR improves the dataset card for **Bench2Drive-VL** by:
- Adding a link to the [research paper](https://huggingface.co/papers/2604.01259).
- Updating the `task_categories` to `image-text-to-text` to better reflect the nature of the benchmark.
- Providing links to the official GitHub repository and project page.
- Including the BibTeX citation for researchers using this data.
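As a quick sanity check on the metadata change, the sketch below parses the card's updated YAML frontmatter and confirms the new `task_categories` value. `parse_frontmatter` is a deliberately minimal hand-rolled helper written for this illustration (the Hub itself uses a full YAML parser), not part of this PR:

```python
def parse_frontmatter(text: str) -> dict:
    """Minimal parser for the flat `key: value` / `- item` YAML subset
    used in this dataset card (illustration only, not full YAML)."""
    meta, key = {}, None
    for line in text.splitlines():
        if line.startswith("- ") and key is not None:
            # List item belonging to the most recent key.
            meta[key].append(line[2:].strip())
        elif ":" in line:
            key, _, value = line.partition(":")
            key = key.strip()
            # A bare `key:` starts a list; `key: value` stores a scalar.
            meta[key] = value.strip() or []
    return meta

# Frontmatter as updated by this PR.
CARD_FRONTMATTER = """\
language:
- en
license: apache-2.0
size_categories:
- 10M<n<100M
task_categories:
- image-text-to-text
pretty_name: Bench2Drive-VL-base1000
tags:
- autonomous-driving
"""

meta = parse_frontmatter(CARD_FRONTMATTER)
assert meta["task_categories"] == ["image-text-to-text"]
assert meta["tags"] == ["autonomous-driving"]
```

This only covers the two YAML shapes the card actually uses; anything fancier should go through a real YAML library.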

Files changed (1)
  1. README.md +33 -10
README.md CHANGED
@@ -1,21 +1,44 @@
  ---
- license: apache-2.0
- task_categories:
- - question-answering
  language:
  - en
- tags:
- - autonomous_driving
- pretty_name: Bench2Drive-VL-base1000
+ license: apache-2.0
  size_categories:
  - 10M<n<100M
+ task_categories:
+ - image-text-to-text
+ pretty_name: Bench2Drive-VL-base1000
+ tags:
+ - autonomous-driving
  ---
+
  # Bench2Drive-VL: Full-Stack Software for Closed-Loop Autonomous Driving with Vision Language Models

- [Github](https://huggingface.co/datasets/rethinklab/Bench2Drive) | [Website](https://thinklab-sjtu.github.io/Bench2Drive-VL/)
-
- This is a natural language annotation of [https://huggingface.co/datasets/rethinklab/Bench2Drive](Bench2Drive-Base1000) dataset, generated by expert model DriveCommenter. This dataset provides full-stack VQAs covering perception, prediction, planning and behaviour tasks with 50 different questions.
-
- ## License and Citation
-
- All assets and code are under the Apache 2.0 license unless specified otherwise.
+ [**Project Page**](https://thinklab-sjtu.github.io/Bench2Drive-VL/) | [**GitHub**](https://github.com/Thinklab-SJTU/Bench2Drive-VL) | [**Paper**](https://huggingface.co/papers/2604.01259)
+
+ **Bench2Drive-VL** is a comprehensive closed-loop benchmark for Vision-Language Models in Autonomous Driving (VLM4AD). It extends the Bench2Drive benchmark by introducing closed-loop evaluation and the `DriveCommenter` expert model for automated annotation.
+
+ This repository contains the natural language annotations for the [Bench2Drive-Base1000](https://huggingface.co/datasets/rethinklab/Bench2Drive) dataset. These annotations were generated by the expert model `DriveCommenter` and provide full-stack VQA pairs covering perception, prediction, planning, and behavior tasks across diverse driving situations in CARLA.
+
+ ## Key Features
+
+ - **DriveCommenter**: A closed-loop generator that automatically generates diverse, behavior-grounded question-answer pairs for all driving situations in CARLA.
+ - **Unified Protocol**: An interface that allows modern VLMs to be directly plugged into the Bench2Drive closed-loop environment for comparison.
+ - **Full-Stack VQA**: Annotations covering low-level perception (objects, signs, lanes) and high-level reasoning for planning and behavior.
+
+ ## License
+
+ All assets and code are under the Apache 2.0 license unless specified otherwise.
+
+ ## Citation
+
+ If you use this dataset in your research, please cite:
+
+ ```bibtex
+ @article{Bench2DriveSpeed,
+   title={Bench2Drive-VL: Benchmarks for Closed-Loop Autonomous Driving with Vision-Language Models},
+   author={Xiaosong Jia and Yuqian Shao and Zhenjie Yang and Qifeng Li and Zhiyuan Zhang and Junchi Yan},
+   year={2026},
+   eprint={2604.01259},
+   archivePrefix={arXiv},
+ }
+ ```