
Labeled songs of domestic canary M1-2016-spring (Serinus canaria)

J. Giraudon*123, N. Trouvain*123, A. Cazala4, C. Del Negro4, X. Hinaut123

1 Inria Bordeaux Sud-Ouest, France
2 LaBRI, Bordeaux INP, CNRS, UMR 5800, France
3 Institut des Maladies Neurodégénératives, Université de Bordeaux, CNRS, UMR 5293, France
4 Paris-Saclay University, UMR 9197 CNRS, Paris-Saclay Institute of Neuroscience, France
* These authors contributed equally to this work.

General information

This dataset contains ~3 h of labeled songs (459 songs) of one male canary (called M1) recorded between May 24th and June 15th, 2016. Songs were recorded in a sound-isolation chamber using a RODE M3 microphone, an external sound card for microphone amplification (M-Audio Fast Track Ultra 8R), and the software Sound Analysis Pro 2011 (SAP). SAP parameters were set with conservative thresholds (software threshold set to 4-6) in order to capture the initiation of canary songs, which can be low in volume. Songs were hand-labelled by one human expert using Audacity, then checked and corrected by another human expert assisted by an automated program based on recurrent neural networks (see References).

Dataset description

Canary songs are labeled using 30 classes: 27 identified syllable classes, one "call" class for simple off-song calls, one "TRASH" class for irrelevant sounds (very rare vocalizations or non-bird sounds), and one "SIL" class for silence between vocalizations. Songs are annotated at the phrase level: a phrase consists of a repetition of a single syllable type, and each phrase type is assigned a label.

Annotations are provided in CSV format in the "M1-2016-spring_csv_annotations.zip" archive. There is one file per song, containing: a "wave" column with the song's audio filename; "start" and "end" columns giving the temporal delimitation of each label, in seconds from the beginning of the song; and a "syll" column with the labels.

Annotations are also provided in Audacity TXT format in the "M1-2016-spring_audacity_annotations.zip" archive. There is one file per song, containing three tab-separated columns: the first two give the start and end of the phrase, in seconds from the beginning of the song, and the third contains the associated label. Annotation filenames match the corresponding song audio filenames.

Songs are provided in WAV format (44 kHz sampling rate) in the "M1-2016-spring_audio.zip" archive. There is one file per song, and audio filenames match the corresponding annotation filenames. A minimal Python loading example is sketched below, after the references.

References

This dataset was used in:

N. Trouvain, X. Hinaut (2021) Canary Song Decoder: Transduction and Implicit Segmentation with ESNs and LTSMs. HAL preprint ⟨hal-03203374⟩
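
Loading example

The following sketch illustrates how the archives fit together, assuming they have been extracted next to the script. The directory names are placeholders (not part of the dataset), and pandas/scipy are only one possible choice of tooling.

    # Minimal sketch: read one CSV annotation file and slice the matching WAV
    # into labeled phrases. Directory names are assumptions about where the
    # archives were extracted; adjust them to your own layout.
    from pathlib import Path

    import pandas as pd
    from scipy.io import wavfile

    CSV_DIR = Path("M1-2016-spring_csv_annotations")   # extracted CSV archive
    AUDIO_DIR = Path("M1-2016-spring_audio")           # extracted audio archive

    csv_path = next(CSV_DIR.glob("*.csv"))             # pick any annotation file

    # One row per phrase: "wave" (audio filename), "start"/"end" (seconds
    # from the beginning of the song), "syll" (phrase label).
    annotations = pd.read_csv(csv_path)

    rate, audio = wavfile.read(AUDIO_DIR / annotations["wave"].iloc[0])

    # Cut the waveform into one array per labeled phrase.
    phrases = [
        (row["syll"], audio[int(row["start"] * rate):int(row["end"] * rate)])
        for _, row in annotations.iterrows()
    ]
    print(len(phrases), "phrases, labels:", sorted({label for label, _ in phrases}))

The Audacity TXT files carry the same information and can be read in a similar way, e.g. with pandas.read_csv(txt_path, sep="\t", names=["start", "end", "syll"]), since they contain three tab-separated columns (start, end, label).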
{"references": ["N. Trouvain, X. Hinaut (2021) Canary Song Decoder: Transduction and Implicit Segmentation with ESNs and LTSMs. HAL preprint \u27e8hal-03203374\u27e9"]}
Keywords: canary, animal vocalizations, birdsong, audio, Serinus canaria