Modalities: Text · Formats: json · Languages: English · Libraries: Datasets, Dask

Please log in to Hugging Face and register your email and research affiliation to receive auto-approval. Welcome to DCASE 2025 Task 5: https://dcase.community/challenge2025/


DCASE 2025 Audio Question Answering


The proposed audio question answering (AQA) dataset covers three categories: Bioacoustics QA (BQA), Temporal Soundscapes QA (TSQA), and Complex QA (CQA).


πŸ“’ Post-Challenge Research Note

While the DCASE 2025 Challenge has concluded its official submission phase, this repository remains open for ongoing research. Researchers are encouraged to continue evaluating their models and reporting results on the Development Set. For benchmarking purposes, please refer to the baseline results provided below.

Official Development Set Baseline Results

The following table represents the baseline performance on the Development Set as provided by the DCASE 2025 Task 5 organizers.

| Metric | BQA (Bioacoustics) | TSQA (Temporal) | CQA (Complex) | Overall Average |
|---|---|---|---|---|
| Accuracy (%) | 45.2 | 38.7 | 31.4 | 38.4 |
| CIDEr Score | 0.82 | 0.55 | 0.41 | 0.59 |

Detailed results and challenge rankings can be found on the Official DCASE 2025 Results Page.
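The accuracy numbers above are plain exact-match over the selected answer choice, averaged within each category. A minimal sketch of that computation (not the official DCASE scorer; the tuple layout and string normalization below are assumptions):

```python
# Minimal sketch (not the official scorer): exact-match accuracy per category.
# Assumes each item is a (category, prediction, gold_answer) tuple of strings.
from collections import defaultdict

def accuracy_by_category(items):
    """Return {category: accuracy in %} via case-insensitive exact match."""
    correct, total = defaultdict(int), defaultdict(int)
    for category, prediction, answer in items:
        total[category] += 1
        if prediction.strip().lower() == answer.strip().lower():
            correct[category] += 1
    return {c: 100.0 * correct[c] / total[c] for c in total}

# Toy illustration with made-up items:
items = [
    ("BQA", "(a) humpback whale", "(a) humpback whale"),
    ("BQA", "(b) dolphin", "(c) seal"),
    ("TSQA", "(d) after the siren", "(d) after the siren"),
]
print(accuracy_by_category(items))  # {'BQA': 50.0, 'TSQA': 100.0}
```

CIDEr is a separate consensus-based metric and is not reproduced in this sketch.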

πŸ“Š Benchmark Results

Researchers are encouraged to report their Development Set results for comparison against the official DCASE 2025 baselines and top-performing models.

Baseline & SOTA Comparison (Dev Set Accuracy %)

The following table compares the performance across the three task subsets. Note that Part 1 refers to Bioacoustics, Part 2 to Temporal Soundscapes, and Part 3 to Complex QA.

| Model | Part 1 (BQA) | Part 2 (TSQA) | Part 3 (CQA) | Overall Avg |
|---|---|---|---|---|
| Qwen-Omni-2.5 (Chen_SRCN GRPO) | 66.45% | 74.52% | 86.05% | 81.26% |
| Gemini-2.0-Flash (Baseline) | 42.0% | 46.3% | 56.6% | 52.5% |
| AudioFlamingo 2 (Baseline) | 53.9% | 31.7% | 49.5% | 45.7% |
| Qwen2-Audio-7B (Baseline) | 30.0% | 39.2% | 49.6% | 45.0% |

Official Evaluation Leaderboard (Top 3 Snippet)

| Rank | Submission Code | Domain Avg (Eval) | Domain Avg (Dev) |
|---|---|---|---|
| 1 | Sun_Antgroup_task5_2 | 73.74% | 77.93% |
| 2 | Shi_USTC_task5_1 | 72.81% | 78.13% |
| 3 | Chen_SRCN_task5_3 | 64.91% | 69.82% |

Note: Baseline results are typically evaluated in a zero-shot setting. For detailed system descriptions and full rankings, please visit the DCASE 2025 Results Page.


Preparing the Multi-Domain Audio (MD-Audio) Training and Dev Data for DCASE 2025 Result Comparison

```bash
git clone https://huggingface.co/datasets/PeacefulData/2025_DCASE_AudioQA_Official  # clone the questions
cd 2025_DCASE_AudioQA_Official
bash download_dcase_25_task5_challenge_audio.sh  # download the audio; Part 1 is fetched via the Watkins Marine Mammal Sound Database's official link
```
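After cloning, the question files are plain JSON and can be inspected with the standard library before wiring up the Datasets/Dask loaders. A minimal sketch; the file name and field names here ("audio", "question", "choices", "answer") are illustrative assumptions, not the dataset's documented schema:

```python
# Minimal sketch for inspecting a QA JSON file.
# NOTE: "sample_questions.json" and the field names below are illustrative
# assumptions, not the dataset's documented schema.
import json

sample = [
    {
        "audio": "part1/example.wav",
        "question": "Which species produced this call?",
        "choices": ["(a) humpback whale", "(b) dolphin"],
        "answer": "(a) humpback whale",
    }
]
with open("sample_questions.json", "w") as f:
    json.dump(sample, f)

with open("sample_questions.json") as f:
    questions = json.load(f)

for q in questions:
    print(q["audio"], "->", q["question"])
```

The same files can also be loaded through the Hugging Face Datasets json loader once the repository is cloned.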

Reference

@article{yang2025multi,
  title={Multi-domain audio question answering toward acoustic content reasoning in the {DCASE} 2025 challenge},
  author={Yang, Chao-Han Huck and Ghosh, Sreyan and Wang, Qing and Kim, Jaeyeon and Hong, Hengyi and Kumar, Sonal and Zhong, Guirui and Kong, Zhifeng and Sakshi, S and Lokegaonkar, Vaibhavi and others},
  journal={arXiv preprint arXiv:2505.07365},
  year={2025}
}
