ZENODO
Dataset, 2023
License: CC BY
Data sources: ZENODO
Manually labeled Bird song dataset of 22 species from Xeno-canto to enhance deep learning acoustic classifiers with contextual information.

Authors: Jeantet Lorene; Dufourq Emmanuel

Abstract

Data accompanying the paper: Jeantet and Dufourq (2023). Empowering Deep Learning Acoustic Classifiers with Human-like Ability to Utilize Contextual Information for Wildlife Monitoring. Ecological Informatics, 77, 102256. DOI: 10.1016/j.ecoinf.2023.102256

Our investigation contributes to the fields of deep learning and bioacoustics by highlighting the potential for improved classification performance through the incorporation of contextual information such as time and location. To test whether spatio-temporal information can enhance deep learning classifiers, we built a subset of the Xeno-canto database that includes location metadata as an input alongside the spectrogram. This dataset was designed primarily to create a bird song classification task in which the species were carefully selected to share similar vocal characteristics while having distinct geographical distributions. We only considered recordings of category `A', the best quality score in the database. The dataset contains songs of 22 bird species from 5 different genera.

The recordings were downloaded from the Xeno-canto database in .wav format, and each recording was manually annotated by labelling the start and stop time of every vocalisation occurrence using Sonic Visualiser. In total, the database contains 6537 bird song occurrences of various lengths from 967 recordings. A precise description of the distribution by species and country can be found in the associated article.

The audio files are provided in "Audio.zip" and the manually verified annotations in "Annotations.zip". The name of each file follows the nomenclature: Family_genus_species_country of recording_date of recording_ID Xeno-canto_type of song.wav/svl. The metadata for each file can be found in the provided CSV file (Xenocanto_metadata_qualityA_selection), keyed by the Xeno-canto ID. The annotations can be viewed using the Sonic Visualiser software.
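The filename nomenclature above can be split into its fields with a few lines of Python. This is a minimal sketch: the field names and the example filename are our own illustration, not part of the dataset, and multi-word fields (e.g. a country name containing a space or underscore) may need extra handling — the provided metadata CSV remains the authoritative source.

```python
from pathlib import Path

def parse_recording_name(path):
    """Split a dataset filename into its metadata fields.

    Assumed nomenclature (per the dataset description):
    Family_genus_species_country_date_XenocantoID_songtype.{wav,svl}
    """
    fields = Path(path).stem.split("_")
    if len(fields) != 7:
        raise ValueError(f"unexpected filename structure: {path}")
    keys = ("family", "genus", "species", "country", "date", "xc_id", "song_type")
    return dict(zip(keys, fields))

# Hypothetical example filename following the stated pattern:
info = parse_recording_name(
    "Fringillidae_Fringilla_coelebs_France_2021-05-01_XC123456_song.wav"
)
```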
The Python code to process these files and train neural networks can be found here: github. The files were divided into a training folder and a validation folder to train and evaluate the efficiency of each method. For each species and country, we randomly selected 70% of the downloaded recordings for the training dataset and kept the remaining 30% for validation.

Species selection process: We selected the ten most recorded families in the Passeriformes order, the most represented order in the Xeno-canto database. From each of these ten families, we again sub-sampled the ten most recorded genera. For each genus, we examined the countries of the recordings and the number of available recordings per species and country. From these observations, we manually selected genera containing species with similar songs that were recorded in different regions, with enough recordings per species and country to form a dataset. In the end, 5 genera containing 22 species were selected. We considered only recordings associated with bird songs; specifically, within Xeno-canto we selected the `song' type. To balance the number of recordings between species of the same genus, we reduced the number of recordings for the most represented species: for each genus, we calculated the average number of recordings available per species and country, and capped the species/country pairs that exceeded this value at the average plus two.
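The per-genus cap (average plus two) and the stratified 70/30 split described above can be sketched as follows. This is our own illustration, not the authors' code: the data structures, the rounding of the average, and the fixed random seed are all assumptions.

```python
import random
from collections import defaultdict
from statistics import mean

def cap_per_genus(counts):
    """Cap the recordings kept per (species, country) pair at the
    genus-level average plus two, as described in the text.

    `counts` maps (genus, species, country) -> number of downloaded
    recordings; rounding the average to an integer is our assumption.
    """
    per_genus = defaultdict(list)
    for (genus, _species, _country), n in counts.items():
        per_genus[genus].append(n)
    caps = {g: round(mean(ns)) + 2 for g, ns in per_genus.items()}
    return {key: min(n, caps[key[0]]) for key, n in counts.items()}

def split_recordings(recordings, train_frac=0.70, seed=0):
    """Randomly assign ~70% of the recordings of each
    (species, country) pair to training and the rest to validation.

    `recordings` is a list of (species, country, filename) tuples.
    """
    rng = random.Random(seed)
    groups = defaultdict(list)
    for species, country, filename in recordings:
        groups[(species, country)].append(filename)
    train, val = [], []
    for files in groups.values():
        files = sorted(files)
        rng.shuffle(files)
        k = round(train_frac * len(files))
        train.extend(files[:k])
        val.extend(files[k:])
    return train, val
```

Grouping by (species, country) before splitting keeps both folders representative of every species/region pair, which matters here because geographical distribution is itself a classification cue.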

Acknowledgments ED is supported by a research chairship from the African Institute for Mathematical Sciences South Africa. This work was carried out with the aid of a grant from the International Development Research Centre, Ottawa, Canada, www.idrc.ca, and with financial support from the Government of Canada, provided through Global Affairs Canada (GAC), www.international.gc.ca. We thank the School for Data Science and Computational Thinking at Stellenbosch University for providing computational resources for certain aspects of this study. Computations were performed using the University of Stellenbosch's HPC2: http://www.sun.ac.za/hpc

Keywords

bioacoustics, birds, deep learning, xeno-canto, bird song classifier
