
Video-MME-v2 logo

🍎 Project Page | 📖 Paper | 🤗 Dataset | 🏆 Leaderboard


🤗 About This Repo

This repository contains annotation data for "Video-MME-v2: Towards the Next Stage in Benchmarks for Comprehensive Video Understanding". It mainly consists of three parts: videos/, test.parquet, and subtitle.zip.

  • videos/ contains 800 1080p MP4 files, organized sequentially into 40 zip archives. For example, 001.mp4 to 020.mp4 are stored in 001.zip.

  • test.parquet contains 3,200 QA instances, with each video paired with 4 questions. Each instance includes the question, options, answer, and auxiliary metadata such as the video id and task type.

  • subtitle.zip contains 800 JSONL files, each corresponding to a unique video id, with word-level entries and timestamps.


🩷 About This Benchmark

In 2024, our Video-MME benchmark became a standard evaluation set for frontier models such as Gemini and GPT. However, as model capabilities rapidly evolve, scores on existing benchmarks are saturating, yet a clear gap remains between leaderboard performance and actual user experience. This indicates that current evaluation paradigms fail to capture true video understanding ability. To address this, we spent a year redesigning the evaluation system from first principles and now introduce Video-MME v2: a progressive and robust benchmark designed to drive the next generation of video understanding models.

Teaser

  • Dataset Size

    The dataset consists of 800 videos and 3,200 QA pairs, with each video associated with four MCQ-based questions.

  • Multi-level Evaluation Hierarchy

    • πŸ” Level 1: Retrieval & Aggregation
    • ⏱️ Level 2: Level 1 + Temporal Understanding
    • 🧠 Level 3: Level 2 + Complex Reasoning.
  • Group-based Evaluation Strategy

    • Capability consistency groups examine the breadth of a specific fundamental perception skill.
    • Reasoning coherence groups assess the depth of a model’s reasoning ability.
  • Video Sources

    All videos are collected from YouTube. Over 80% were published in 2025 or later, with nearly 40% published after October 2025.

  • Video Categories

    The dataset includes four top-level domains, further divided into 31 fine-grained subcategories.

  • Metrics

    A non-linear scoring mechanism is applied to all question groups, and a first-error truncation mechanism is used for reasoning coherence groups.
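To make these two mechanisms concrete, here is a minimal sketch. The truncation rule follows the description above; the quadratic scoring curve, however, is an illustrative assumption, not the paper's exact formula:

```python
def truncated_correct(results: list[bool]) -> int:
    """First-error truncation for a reasoning coherence group:
    answers after the first mistake earn no credit, because the
    deeper questions build on the earlier ones."""
    n = 0
    for ok in results:
        if not ok:
            break
        n += 1
    return n


def group_score(num_correct: int, group_size: int) -> float:
    """Illustrative non-linear group score: credit grows
    superlinearly with the number of correct answers, so a fully
    consistent group is worth more than the sum of its parts.
    The exact curve used by the benchmark may differ."""
    return (num_correct / group_size) ** 2
```

Under this sketch, a model that answers the first two of four chained questions correctly and then errs receives a group score of 0.25 rather than 0.5, rewarding coherent reasoning over lucky guesses.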


🍺 About a Concrete Case

💡 Why does this example matter? This video QA group demonstrates our Reasoning Coherence evaluation strategy and Multi-level Hierarchy. To answer the initial state correctly, a model must track the ball backwards through the temporal swaps. If a model guesses the initial state correctly but fails the intermediate swaps, our first-error truncation mechanism will accurately penalize it for flawed reasoning.

Demo video cover

👆 Click the cover image to view the demo video.

Q1: Did the ball exist underneath any of the shells?
A. No.
B. Yes. ✅
C. Cannot be determined.

Q2: Underneath which shell was the ball located at the end?
A. There is no ball under any shell.
B. The third shell.
C. The sixth shell.
D. The second shell.
E. The seventh shell.
F. The fifth shell.
G. The fourth shell. ✅
H. The first shell.

Q3: The host performed a total of two shell swaps (defining a single swap as an instance where all shells return to an approximately straight line). Underneath which shell was the ball located after the first swap?
A. There is no ball under any shell.
B. The seventh shell.
C. The fourth shell. ✅
D. The fifth shell.
E. The sixth shell.
F. The second shell.
G. The third shell.
H. The first shell.

Q4: The host performed a total of two shell swaps (defining a single swap as an instance where all shells return to an approximately straight line). Underneath which shell was the ball located initially?
A. The seventh shell.
B. The fourth shell.
C. The fifth shell.
D. The third shell. ✅
E. The second shell.
F. There is no ball under any shell.
G. The first shell.
H. The sixth shell.
