---
dataset_info:
  features:
  - name: video_source
    dtype: string
  - name: video_id
    dtype: string
  - name: duration_sec
    dtype: float64
  - name: fps
    dtype: float64
  - name: question_id
    dtype: string
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: correct_answer
    dtype: string
  - name: time_reference
    sequence: float64
  - name: question_type
    dtype: string
  - name: question_time
    dtype: float64
  splits:
  - name: train
    num_bytes: 291464
    num_examples: 900
  download_size: 98308
  dataset_size: 291464
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
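To illustrate the schema in the `dataset_info` block above, the sketch below builds one hypothetical record with the same fields and runs a lightweight type check against the declared dtypes. All field values here are invented for illustration; they are not taken from the real dataset.

```python
# A hypothetical RIVER record mirroring the dataset_info schema above.
# The values are illustrative placeholders, not real benchmark data.
record = {
    "video_source": "Ego4D",
    "video_id": "abc123",
    "duration_sec": 180.0,
    "fps": 30.0,
    "question_id": "q_0001",
    "question": "What object did the person just pick up?",
    "choices": ["A. a cup", "B. a phone", "C. a book", "D. a pen"],
    "correct_answer": "A",
    "time_reference": [42.5, 45.0],  # sequence of float64 timestamps (seconds)
    "question_type": "Live-Perception",
    "question_time": 45.0,           # when the question is issued in the stream
}

def check_record(r: dict) -> bool:
    """Lightweight check that a record matches the feature list above."""
    string_fields = ["video_source", "video_id", "question_id",
                     "question", "correct_answer", "question_type"]
    float_fields = ["duration_sec", "fps", "question_time"]
    return (all(isinstance(r[f], str) for f in string_fields)
            and all(isinstance(r[f], float) for f in float_fields)
            and isinstance(r["choices"], list)
            and all(isinstance(t, float) for t in r["time_reference"]))

print(check_record(record))  # True
```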
<div align="center">

<h2>
RIVER: A Real-Time Interaction Benchmark for Video LLMs
</h2>

<img src="assets/RIVER logo.png" width="80" alt="RIVER logo">

[Yansong Shi<sup>*</sup>](https://scholar.google.com/citations?user=R7J57vQAAAAJ),
[Qingsong Zhao<sup>*</sup>](https://scholar.google.com/citations?user=ux-dlywAAAAJ),
[Tianxiang Jiang<sup>*</sup>](https://github.com/Arsiuuu),
[Xiangyu Zeng](https://scholar.google.com/citations?user=jS13DXkAAAAJ&hl),
[Yi Wang](https://scholar.google.com/citations?user=Xm2M8UwAAAAJ),
[Limin Wang<sup>†</sup>](https://scholar.google.com/citations?user=HEuN8PcAAAAJ)

[[💻 GitHub]](https://github.com/OpenGVLab/RIVER),
[[🤗 Dataset on HF]](https://huggingface.co/datasets/OpenGVLab/RIVER),
[[📄 ArXiv]](https://arxiv.org/abs/2603.03985)

</div>

## Introduction
This project introduces **RIVER Bench**, a benchmark designed to evaluate the real-time interactive capabilities of Video Large Language Models (Video LLMs) on streaming video, featuring novel tasks for memory, live perception, and proactive response.

![RIVER](assets/river.jpg)

Based on the frequency and timing of reference events, questions, and answers, we further categorize online interaction tasks into four distinct subclasses, as depicted in the figure. For Retro-Memory, the clue is drawn from the past; for Live-Perception, it comes from the present. Both demand an immediate response. For Pro-Response, Video LLMs must wait until the corresponding clue appears and then respond as quickly as possible; the released annotations split this task into instant and streaming variants.

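The timing relationship described above can be sketched as a simple rule on each record's `time_reference` interval and `question_time`. This is an illustrative mapping only, written from the prose description; the function name and interval logic are assumptions, not the benchmark's official taxonomy code.

```python
def expected_behavior(time_reference: list[float], question_time: float) -> str:
    """Illustrative mapping from clue timing to task subclass
    (an assumption based on the description above, not official code).

    - clue entirely before the question  -> Retro-Memory (answer now, from memory)
    - clue overlapping the question time -> Live-Perception (answer now, from the present)
    - clue only after the question       -> Pro-Response (wait for the clue, then answer)
    """
    clue_start, clue_end = min(time_reference), max(time_reference)
    if clue_end < question_time:
        return "Retro-Memory"
    if clue_start <= question_time <= clue_end:
        return "Live-Perception"
    return "Pro-Response"

print(expected_behavior([10.0, 15.0], 60.0))  # Retro-Memory
print(expected_behavior([58.0, 62.0], 60.0))  # Live-Perception
print(expected_behavior([70.0, 75.0], 60.0))  # Pro-Response
```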
## Dataset Preparation

| Dataset | URL |
|----------------|---|
| LongVideoBench | https://github.com/longvideobench/LongVideoBench |
| Vript-RR | https://github.com/mutonix/Vript |
| LVBench | https://github.com/zai-org/LVBench |
| Ego4D | https://github.com/facebookresearch/Ego4d |
| QVHighlights | https://github.com/jayleicn/moment_detr |

## Citation

If you find this project useful in your research, please consider citing:

```BibTeX
@misc{shi2026riverrealtimeinteractionbenchmark,
      title={RIVER: A Real-Time Interaction Benchmark for Video LLMs},
      author={Yansong Shi and Qingsong Zhao and Tianxiang Jiang and Xiangyu Zeng and Yi Wang and Limin Wang},
      year={2026},
      eprint={2603.03985},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2603.03985},
}
```