---
configs:
  - config_name: default
    data_files:
      - split: dev
        path: dev.jsonl
license: apache-2.0
---

# DCASE 2026 Task 5: Audio-Dependent Question Answering (ADQA) Development Set


This is the official Development Set for DCASE 2026 Challenge Task 5: Audio-Dependent Question Answering (ADQA).

The ADQA task focuses on addressing "Textual Hallucination" in Large Audio-Language Models (LALMs) — where models pass audio understanding benchmarks by relying on text prompts and internal linguistic priors rather than actual audio perception. ADQA introduces a rigorous evaluation framework using Audio-Dependency Filtering (ADF) to ensure questions cannot be answered through common sense or text-only reasoning.

## Audio-Dependency Filtering (ADF)

All samples in this development set pass a four-step ADF hard-filtering process that guarantees genuine audio dependence:

  1. **Silent Audio Filtering**: questions that LALMs answer correctly without the audio are removed.
  2. **LLM Common-sense Check**: ensures the question cannot be solved from common sense or external knowledge alone.
  3. **Perplexity-based Soft Filtering**: eliminates samples with text-based statistical shortcuts.
  4. **Manual Verification**: a final human-in-the-loop check of ground-truth accuracy.
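The four steps above can be sketched as a chain of survival predicates. This is a hypothetical illustration, not the official pipeline: in practice each check queries an LALM, a text-only LLM, or a human annotator, whereas here each predicate simply reads a precomputed flag on the sample.

```python
# Minimal sketch of the four-step ADF hard-filtering (illustrative only).
# Field names (correct_on_silence, ppl_gap, etc.) are assumptions, not
# part of the released dataset format.

def survives_silent_audio(sample):
    # Step 1: drop questions an LALM answers correctly given silent audio.
    return not sample["correct_on_silence"]

def survives_commonsense(sample):
    # Step 2: drop questions a text-only LLM solves from common sense alone.
    return not sample["correct_text_only"]

def survives_perplexity(sample, threshold=0.5):
    # Step 3: drop samples where the gold choice is a statistical shortcut,
    # e.g. its perplexity gap to the distractors exceeds a threshold.
    return sample["ppl_gap"] < threshold

def survives_manual_check(sample):
    # Step 4: keep only samples whose ground truth humans have verified.
    return sample["human_verified"]

def adf_filter(samples):
    steps = (survives_silent_audio, survives_commonsense,
             survives_perplexity, survives_manual_check)
    return [s for s in samples if all(step(s) for step in steps)]

if __name__ == "__main__":
    samples = [
        {"id": "a", "correct_on_silence": False, "correct_text_only": False,
         "ppl_gap": 0.1, "human_verified": True},
        {"id": "b", "correct_on_silence": True, "correct_text_only": False,
         "ppl_gap": 0.1, "human_verified": True},
    ]
    print([s["id"] for s in adf_filter(samples)])  # only "a" survives
```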

## Statistics

| Metric             | Count |
|--------------------|-------|
| Total Samples      | 1,607 |
| Unique Audio Files | 1,607 |

## Data Sources

The development set is composed of two parts:

  • Existing Benchmarks: A portion of the samples is derived from established audio understanding benchmarks, including MMAU, MMAR, and MMSU. These samples cover a wide range of audio understanding tasks such as speech, music, and sound perception.
  • Human-Annotated Questions: The remaining majority consists of newly constructed, human-annotated multiple-choice questions based on diverse audio sources, designed to further challenge models on real-world audio comprehension.

All samples undergo the four-step Audio-Dependency Filtering (ADF) process described above.

## Directory Structure

```
DCASE2026-Task5-DevSet/
├── dev.jsonl                # Main data file (1,607 samples, shuffled)
├── dev_audios/              # Audio files (1,607 .wav files)
└── README.md
```

## Data Format

Each entry in dev.jsonl is a JSON object with the following fields:

| Field           | Type         | Description                               |
|-----------------|--------------|-------------------------------------------|
| `id`            | string       | Unique sample identifier (e.g., `dev_0001`) |
| `audio_path`    | string       | Relative path to audio file               |
| `question_text` | string       | Question text                             |
| `answer`        | string       | Correct answer                            |
| `multi_choice`  | list[string] | Answer choices                            |

### Example

```json
{
  "id": "dev_0001",
  "audio_path": "dev_audios/dev_0001.wav",
  "question_text": "What is the speaker's primary emotion in this audio?",
  "answer": "Happiness",
  "multi_choice": ["Sadness", "Happiness", "Anger", "Fear"]
}
```
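Since each line of `dev.jsonl` is one JSON object, the file can be loaded with a few lines of standard-library Python. A minimal sketch, assuming the directory layout shown above:

```python
# Load the JSON-Lines development file into a list of sample dicts.
import json

def load_dev_set(path="dev.jsonl"):
    samples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                samples.append(json.loads(line))
    return samples

# Usage: each sample exposes the fields listed in the table above.
# for s in load_dev_set():
#     print(s["id"], s["question_text"], s["multi_choice"])
```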

## Submission Format

The system output file should be a .csv file with the following two columns:

| Column     | Description                                                |
|------------|------------------------------------------------------------|
| `question` | The question ID (e.g., `dev_0001`)                         |
| `answer`   | The system's answer; must match one of the given choices   |
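Writing the two-column file is straightforward with the standard `csv` module. In this sketch, `predictions` (a dict mapping question IDs to answer strings) is a hypothetical stand-in for your system's output, not part of any official tooling:

```python
# Write predictions as the two-column submission CSV described above.
import csv

def write_submission(predictions, path="submission.csv"):
    # predictions: {question_id: answer_string}, e.g. {"dev_0001": "Happiness"}
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["question", "answer"])  # required header
        for qid, ans in predictions.items():
            writer.writerow([qid, ans])
```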

## License

This dataset is distributed under the Apache-2.0 license.

## Citation

If you use this development set or participate in DCASE 2026 Task 5, please cite:

```bibtex
@inproceedings{he2025audiomcq,
  title={Measuring Audio's Impact on Correctness: Audio-Contribution-Aware Post-Training of Large Audio Language Models},
  author={He, Haolin and others},
  booktitle={Proceedings of the International Conference on Learning Representations (ICLR)},
  year={2026}
}
```