
ODAQ is a dataset addressing the scarcity of openly available collections of audio signals accompanied by corresponding subjective scores of perceived quality. ODAQ contains 240 audio samples and their quality scores, obtained via a MUSHRA listening test carried out in parallel at Fraunhofer IIS (Germany) and at Netflix, Inc. (USA). The quality-rated audio samples are processed versions of the original audio material, which is also made available.

The original audio material consists of:

- stereo audio with 44.1 or 48 kHz sampling frequency;
- 14 music excerpts (8 of which are solo recordings);
- 11 excerpts from movie-like soundtracks with dialogues mixed with music and effects (separate stems and transcripts are also provided).

## Highlights

- Each of the 240 audio samples is rated by 26 expert listeners (after post-screening).
- The audio samples are processed by a total of 6 method classes, each operating at 5 different quality levels, plus anchor conditions.
- The processing methods are designed to generate quality degradations possibly encountered during audio coding and source separation.
- The quality levels for each processing method span the entire quality range.

The diversity of the processing conditions, the large span of quality levels, the high sampling frequency of the audio signals, and the pool of international listeners make ODAQ particularly well suited for further research into the prediction and analysis of perceived audio quality.

The dataset is released under permissive licenses; please refer to _license_disclaimer.txt for full details.

## Package Structure

The top-level folder contains:

- _license_disclaimer.txt and _detailed_license.csv, detailing the license agreement;
- DE_systems_info.xls, detailing the separation systems used for generating part of the dataset;
- the subfolders listed below.

### ODAQ_unprocessed

This folder contains the original "unprocessed" audio material.
### ODAQ_listening_test

This folder contains the audio samples used in the listening test and the listening test results, both as individual result files (.xml) and as an aggregated .csv table.

### ODAQ_training

This folder contains the audio samples used during the training phase preceding the main phase of the listening test.

### listening_test_instructions

This folder contains the instructions provided to the participants in the listening test.

### ODAQ_DE_raw_outputs

This folder contains the raw dialogue estimates output by the separation systems used for the Dialogue Enhancement (DE) scenario.

## ICASSP 2024

Please refer to our ICASSP 2024 paper for full details about the listening test, and please cite it if you find this dataset useful:

```bibtex
@inproceedings{Torcoli2024ODAQ,
  author    = {Torcoli, M. and Wu, C. W. and Dick, S. and Williams, P. A. and Halimeh, M. M. and Wolcott, W. and Habets, E. A. P.},
  year      = {2024},
  month     = {April},
  title     = {{ODAQ}: Open Dataset of Audio Quality},
  address   = {Seoul, Korea},
  booktitle = {IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP)}
}
```

## Useful Links

- Paper: https://arxiv.org/abs/2401.00197
- GitHub project page: https://github.com/Fraunhofer-IIS/ODAQ/
- Listening test app: https://github.com/Netflix-Skunkworks/listening-test-app

## Call for Contributions

We make this data available to the community, and we welcome contributions and extensions!
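As a starting point for working with the aggregated .csv results table, the sketch below groups MUSHRA scores by processing condition and averages them across listeners. The column names (`condition`, `score`, etc.) and the inline sample rows are hypothetical stand-ins; check the actual header of the ODAQ .csv before adapting this.

```python
import csv
import io
from collections import defaultdict
from statistics import mean

# Hypothetical rows mimicking an aggregated MUSHRA results table.
# The real ODAQ .csv columns may differ; these names are assumptions.
CSV_TEXT = """sample,condition,listener,score
item01,codec_q1,L01,34.0
item01,codec_q1,L02,40.0
item01,hidden_ref,L01,100.0
item01,hidden_ref,L02,98.0
"""

def mean_scores_per_condition(csv_text):
    """Group MUSHRA scores by processing condition and average over listeners."""
    scores = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        scores[row["condition"]].append(float(row["score"]))
    return {cond: mean(vals) for cond, vals in scores.items()}

print(mean_scores_per_condition(CSV_TEXT))
# → {'codec_q1': 37.0, 'hidden_ref': 99.0}
```

The same grouping pattern extends naturally to per-sample averages or confidence intervals once the real column names are substituted.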
