BodyCam-VQA: BWC-VideoText-359

Dataset Summary

BodyCam-VQA (BWC-VideoText-359) is a specialized multimodal dataset designed for research in legal transparency, forensic linguistics, and automated video analysis. It consists of 359 one-minute video segments released by the City of Chicago's Civilian Office of Police Accountability (COPA).

This dataset is unique in its focus on Body-Worn Camera (BWC) footage, which presents distinct challenges such as occlusions, rapid motion, varied lighting, and high-ambient-noise audio environments.

Dataset Structure

Data Statistics

The dataset is split into training and evaluation subsets. The evaluation set is accompanied by a human-verified "gold standard" ground truth file for benchmarking.

Component               Split        Count   Format
----------------------  -----------  ------  ------
Videos                  Train        288     .mp4
Transcripts (WhisperX)  Train        288     .json
Videos                  Evaluation   71      .mp4
Transcripts (WhisperX)  Evaluation   71      .json
Ground Truth (Human)    Evaluation   1       .json

File Organization & Naming

  • Video Files: Located in train_videos.zip and eval_videos.zip. Filename format: Video[index].mp4.
  • Machine Transcripts: Located in train_transcripts.zip and eval_transcripts.zip. These can be mapped to videos via the index.
  • Evaluation Ground Truth: The file human_annotated_ground_truth_eval_set.json contains the human-evaluated labels for the 71 evaluation videos, serving as the definitive benchmark for model performance.
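The index-based pairing of videos and transcripts can be sketched as follows. Note that the transcript filename pattern Video[index].json is an assumption for illustration; the card only states that transcripts map to videos via the shared index.

```python
import re
from pathlib import Path

def video_index(filename: str) -> int:
    """Extract the numeric index from a filename such as 'Video12.mp4'."""
    match = re.match(r"Video(\d+)\.(mp4|json)$", filename)
    if match is None:
        raise ValueError(f"Unexpected filename: {filename}")
    return int(match.group(1))

def pair_videos_with_transcripts(video_dir: str, transcript_dir: str) -> dict:
    """Map each extracted video path to its transcript path via the index.

    Assumes both archives have been unzipped into plain directories and
    that transcripts follow the (hypothetical) Video[index].json pattern.
    """
    transcripts = {
        video_index(p.name): p for p in Path(transcript_dir).glob("Video*.json")
    }
    return {
        p: transcripts.get(video_index(p.name))
        for p in Path(video_dir).glob("Video*.mp4")
    }
```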

Data Generation & Quality Control

Automated Transcription

Initial transcripts were generated using WhisperX. This tool was selected for its robust performance in noisy environments and its ability to provide precise word-level timestamps, essential for temporal alignment in video-text tasks.
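A transcript file can be parsed with the standard library alone. The sketch below assumes the common WhisperX output layout, in which a top-level "segments" list carries "start"/"end" times in seconds plus the transcribed "text"; verify against the actual files before relying on it.

```python
import json

def segments_from_whisperx(data: dict) -> list:
    """Extract (start, end, text) tuples from a loaded WhisperX result.

    Assumes the common WhisperX schema: data["segments"] is a list of
    dicts, each with "start"/"end" seconds and a "text" string.
    """
    return [
        (seg["start"], seg["end"], seg["text"].strip())
        for seg in data.get("segments", [])
    ]

def load_segments(transcript_path: str) -> list:
    """Load and parse one transcript file from the transcripts archive."""
    with open(transcript_path, encoding="utf-8") as f:
        return segments_from_whisperx(json.load(f))
```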

Human Annotation (Evaluation Set)

To ensure the reliability of the evaluation benchmark, the 71-video evaluation subset was manually reviewed and annotated. This Ground Truth file corrects potential machine-transcription errors and provides verified labels for Visual Question Answering (VQA) tasks, ensuring that model assessments reflect real-world accuracy.


Usage & Benchmarking

Benchmarking Protocol

Researchers should use the train split for model fine-tuning and the eval split for testing. Performance should be measured against the human_annotated_ground_truth_eval_set.json to ensure results are compared against a human-verified baseline.
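A minimal scoring sketch for this protocol is shown below. The card does not specify the schema of human_annotated_ground_truth_eval_set.json, so this assumes a flat mapping from video identifier to answer string; adapt the comparison to the actual label format.

```python
def exact_match_accuracy(predictions: dict, ground_truth: dict) -> float:
    """Fraction of evaluation items whose predicted answer matches the
    human-verified answer, ignoring case and surrounding whitespace.

    Both arguments are assumed to map a video identifier to an answer
    string (a hypothetical schema, for illustration only).
    """
    if not ground_truth:
        return 0.0
    hits = sum(
        predictions.get(key, "").strip().lower() == answer.strip().lower()
        for key, answer in ground_truth.items()
    )
    return hits / len(ground_truth)
```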

Intended Use

  • Visual Question Answering (VQA): Answering natural-language questions about events depicted in the video segments.
  • Legal & Forensic Analysis: Training models to assist in the review of law enforcement public records.
  • Multimodal Robustness: Stress-testing model performance on real-world footage with occlusions, rapid motion, and noisy audio.

Ethical Considerations & Licensing

Ethics: This dataset contains real-world law enforcement interactions. While these are public records released in the interest of transparency, researchers are expected to use this data ethically and responsibly. Avoid using this dataset for person identification or unauthorized surveillance applications.

License: This data is sourced from public records provided by the City of Chicago. It is subject to the terms and conditions of the Civilian Office of Police Accountability (COPA) public disclosure policies.
