Introduction:
Divide and Remaster (DnR) is a source separation dataset for training and testing algorithms that separate a monaural audio signal into speech, music, and sound-effects/background stems. The dataset is composed of artificial mixtures built from audio in LibriSpeech, the Free Music Archive (FMA), and the Freesound Dataset 50k (FSD50k). We introduce it as part of the Cocktail Fork Problem paper.

At a Glance:
- The unzipped dataset is ~174 GB.
- Each mixture is 60 seconds long, and the sources do not fully overlap.
- Audio is encoded as 16-bit .wav files at a sampling rate of 44.1 kHz.
- The data is split into training tr (3295 mixtures), validation cv (440 mixtures), and testing tt (652 mixtures) subsets.
- The directory for each mixture contains four .wav files (mix.wav, music.wav, speech.wav, sfx.wav) and annots.csv, which holds the metadata of the original audio used to compose the mixture (transcriptions for speech, sound classes for sfx, and genre labels for music). A minimal loading sketch is shown after the citation below.

Other Resources:
- Demo examples and additional information are available at: https://cocktail-fork.github.io/
- For more details about the data generation process, the code used to generate our dataset can be found at: https://github.com/darius522/dnr-utils

Contact and Support:
Have an issue, concern, or question about DnR? If so, please open an issue here. For any other inquiries, feel free to send an email to firstname.lastname@gmail.com; my name is Darius Petermann ;)

Citation:
If you use DnR, please cite [our paper](https://arxiv.org/abs/2110.09958), in which we introduce the dataset as part of the Cocktail Fork Problem:

    @article{Petermann2021cocktail,
      title={The Cocktail Fork Problem: Three-Stem Audio Separation for Real-World Soundtracks},
      author={Darius Petermann and Gordon Wichern and Zhong-Qiu Wang and Jonathan {Le Roux}},
      year={2021},
      journal={arXiv preprint arXiv:2110.09958},
      archivePrefix={arXiv},
      primaryClass={eess.AS}
    }
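
The sketch below illustrates how the per-mixture directory layout described above could be read in Python. It is only an illustration, not part of the official dnr-utils tooling: the root path, the mixture ID "1002", and the use of the soundfile and pandas libraries are assumptions made for the example.

```python
# Minimal sketch of loading one DnR mixture and its stems.
# Assumes the dataset was unzipped under DNR_ROOT with the tr/cv/tt
# split directories described above; "1002" is a placeholder mixture ID.
from pathlib import Path

import pandas as pd
import soundfile as sf

DNR_ROOT = Path("path/to/dnr")  # assumption: local unzip location


def load_mixture(split: str, mixture_id: str):
    """Return the mixture, its three stems, and the annotation table."""
    mix_dir = DNR_ROOT / split / mixture_id
    audio = {}
    for stem in ("mix", "speech", "music", "sfx"):
        # Each file is a 60-second, 16-bit .wav at 44.1 kHz.
        data, sr = sf.read(mix_dir / f"{stem}.wav")
        assert sr == 44100
        audio[stem] = data
    # annots.csv holds the metadata of the source clips
    # (speech transcriptions, sfx sound classes, music genre labels).
    annotations = pd.read_csv(mix_dir / "annots.csv")
    return audio, annotations


audio, annots = load_mixture("tr", "1002")  # placeholder split/ID
print({stem: waveform.shape for stem, waveform in audio.items()})
```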
sound event detection, audio classification, audio, speech recognition, audio source separation, music genre recognition, soundtrack separation