赵智轩 committed on
Commit 3dd0dfd · 1 Parent(s): 8bb5680

Align HF README workflow with GitHub instructions

Files changed (1)
  1. README.md +78 -6
README.md CHANGED
@@ -33,7 +33,7 @@ PerceptionComp is a benchmark for complex perception-centric video reasoning. It
 
  ### Dataset Description
 
- PerceptionComp contains 1,114 manually annotated five-choice questions associated with 273 videos. The benchmark covers seven categories: outdoor tour, shopping, sport, variety show, home tour, game, and movie.
 
  This Hugging Face dataset repository is intended to host the benchmark videos together with a viewer-friendly annotation file, `questions.json`, for Dataset Preview and Data Studio. The canonical annotation source, evaluation code, and model integration examples are maintained in the official GitHub repository:
 
@@ -68,6 +68,71 @@ PerceptionComp is not intended for:
  - surveillance, identity inference, or sensitive attribute prediction
  - modifying the benchmark protocol and reporting those results as directly comparable official scores
 
  ## Dataset Structure
 
  ### Data Instances
@@ -94,12 +159,17 @@ Core fields in each annotation item:
 
  ### Data Files
 
- This upload bundle contains:
 
  - `questions.json`: root-level annotation file used by Hugging Face Dataset Preview and Data Studio
  - `README.md`: Hugging Face dataset card
  - `LICENSE`: custom research-use terms for the benchmark materials
 
  The official evaluation code prepares videos into the following local layout:
 
  ```text
@@ -112,14 +182,16 @@ Use the official download script from the GitHub repository:
  git clone https://github.com/hrinnnn/PerceptionComp.git
  cd PerceptionComp
  pip install -r requirements.txt
- python scripts/download_data.py --repo-id hrinnnn/PerceptionComp
  ```
 
  ### Data Splits
 
  The current public release uses one official evaluation split:
 
- - `test`: 1,114 multiple-choice questions over 273 videos, exposed through `questions.json`
 
  ## Dataset Creation
 
@@ -198,8 +270,8 @@ Example evaluation workflow:
  git clone https://github.com/hrinnnn/PerceptionComp.git
  cd PerceptionComp
  pip install -r requirements.txt
- python scripts/download_data.py --repo-id hrinnnn/PerceptionComp
- python evaluate/evaluate.py \
    --model YOUR_MODEL_NAME \
    --provider api \
    --api-key YOUR_API_KEY \
 
 
  ### Dataset Description
 
+ PerceptionComp contains 1,114 manually annotated five-choice questions associated with 273 referenced video IDs. The benchmark covers seven categories: outdoor tour, shopping, sport, variety show, home tour, game, and movie.
 
  This Hugging Face dataset repository is intended to host the benchmark videos together with a viewer-friendly annotation file, `questions.json`, for Dataset Preview and Data Studio. The canonical annotation source, evaluation code, and model integration examples are maintained in the official GitHub repository:
 
  - surveillance, identity inference, or sensitive attribute prediction
  - modifying the benchmark protocol and reporting those results as directly comparable official scores
 
+ ## Evaluation Workflow
+
+ The Hugging Face repository hosts the benchmark videos and the viewer-friendly test annotations. The evaluation code lives in the GitHub repository and follows this workflow:
+
+ ### Step 1. Clone the Repository
+
+ ```bash
+ git clone https://github.com/hrinnnn/PerceptionComp.git
+ cd PerceptionComp
+ ```
+
+ ### Step 2. Install Dependencies
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ ### Step 3. Download the Benchmark Videos
+
+ ```bash
+ python3 scripts/download_data.py --repo-id hrinnnn/PerceptionComp
+ ```
+
+ If the Hugging Face dataset requires authentication:
+
+ ```bash
+ python3 scripts/download_data.py \
+   --repo-id hrinnnn/PerceptionComp \
+   --hf-token YOUR_HF_TOKEN
+ ```
+
+ The download helper fetches video files from the Hugging Face `data/` directory, flattens them into `benchmark/videos/`, and validates the required `video_id` set against `benchmark/annotations/1-1114.json`.
+
+ ### Step 4. Run Evaluation
+
+ OpenAI-compatible API example:
+
+ ```bash
+ python3 evaluate/evaluate.py \
+   --model YOUR_MODEL_NAME \
+   --provider api \
+   --api-key YOUR_API_KEY \
+   --base-url YOUR_BASE_URL \
+   --video-dir benchmark/videos
+ ```
+
+ Gemini example:
+
+ ```bash
+ python3 evaluate/evaluate.py \
+   --model YOUR_GEMINI_MODEL_NAME \
+   --provider gemini \
+   --api-key YOUR_GEMINI_API_KEY \
+   --video-dir benchmark/videos
+ ```
+
+ ### Step 5. Check the Outputs
+
+ Evaluation outputs are written to:
+
+ ```text
+ evaluate/results/Results-<model>.json
+ evaluate/results/Results-<model>.csv
+ ```
+
  ## Dataset Structure
 
  ### Data Instances
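The validation that Step 3 describes (checking downloaded files against the required `video_id` set) can be sketched as a standalone shell check. This is a mock demonstration, not the helper's actual code: the file names and layout below are invented, and a real run would point at `benchmark/videos/` and the GitHub annotation file instead.

```bash
# Mock stand-ins for the annotation file and the downloaded video directory.
mkdir -p mockcheck/videos
printf '[{"video_id": "vid_001"}, {"video_id": "vid_002"}]' > mockcheck/annotations.json
touch mockcheck/videos/vid_001.mp4

# Report any annotation-referenced video IDs with no matching file on disk.
python3 - mockcheck <<'EOF'
import json, os, sys

root = sys.argv[1]
with open(os.path.join(root, "annotations.json")) as f:
    items = json.load(f)

referenced = {item["video_id"] for item in items}        # IDs the benchmark needs
files = os.listdir(os.path.join(root, "videos"))
present = {os.path.splitext(name)[0] for name in files}  # IDs actually on disk

missing = sorted(referenced - present)
print("missing:", missing)
EOF
```

For the mock layout above, the check reports `vid_002` as missing.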
 
 
  ### Data Files
 
+ This Hugging Face dataset repository contains:
 
  - `questions.json`: root-level annotation file used by Hugging Face Dataset Preview and Data Studio
+ - `data/<video_id>.<ext>`: benchmark video files downloaded by the official helper script
  - `README.md`: Hugging Face dataset card
  - `LICENSE`: custom research-use terms for the benchmark materials
 
+ The canonical annotation file used by the evaluator remains:
+
+ - `benchmark/annotations/1-1114.json` in the GitHub repository
+
  The official evaluation code prepares videos into the following local layout:
 
  ```text
 
  git clone https://github.com/hrinnnn/PerceptionComp.git
  cd PerceptionComp
  pip install -r requirements.txt
+ python3 scripts/download_data.py --repo-id hrinnnn/PerceptionComp
  ```
 
+ If your environment provides `python` instead of `python3`, use that alias consistently for the commands below.
+
  ### Data Splits
 
  The current public release uses one official evaluation split:
 
+ - `test`: 1,114 multiple-choice questions over 273 referenced video IDs, exposed through `questions.json`
 
  ## Dataset Creation
 
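The `download_data.py` step above is described elsewhere in the card as flattening the Hugging Face `data/` tree into `benchmark/videos/`. That flattening behavior could be approximated with standard tools; the sketch below runs over an invented mock tree and is not the helper's actual implementation.

```bash
# Mock nested layout standing in for the downloaded data/ directory.
mkdir -p mockflat/data/shard_a mockflat/data/shard_b mockflat/benchmark/videos
touch mockflat/data/shard_a/vid_101.mp4 mockflat/data/shard_b/vid_102.mkv

# Flatten: copy every regular file into benchmark/videos/, discarding subdirs.
find mockflat/data -type f -exec cp {} mockflat/benchmark/videos/ \;

ls mockflat/benchmark/videos
```

Both mock files end up directly under `benchmark/videos/`, which is the layout the evaluator expects.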
 
  git clone https://github.com/hrinnnn/PerceptionComp.git
  cd PerceptionComp
  pip install -r requirements.txt
+ python3 scripts/download_data.py --repo-id hrinnnn/PerceptionComp
+ python3 evaluate/evaluate.py \
    --model YOUR_MODEL_NAME \
    --provider api \
    --api-key YOUR_API_KEY \
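The `Results-<model>.json` and `Results-<model>.csv` files from Step 5 are produced by `evaluate/evaluate.py`, and their exact schema is defined there. Purely as an illustration, a per-question CSV with hypothetical `question_id,correct` columns could be summarized like this:

```bash
# Hypothetical results file; the real Results-<model>.csv schema is whatever
# evaluate.py emits, not necessarily these columns.
cat > mockresults.csv <<'EOF'
question_id,correct
q1,1
q2,0
q3,1
EOF

# Accuracy = mean of the "correct" column (skip the header line).
awk -F, 'NR > 1 { total += 1; right += $2 } END { printf "accuracy: %.2f\n", right / total }' mockresults.csv
```

For the mock file this prints `accuracy: 0.67`.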