
Abstract

Camera trapping is widely used to monitor mammalian wildlife but creates large image datasets that must be classified. In response, there is a trend towards crowdsourcing image classification. For high‐profile studies of charismatic faunas, many classifications can be obtained per image, enabling consensus assessments of the image contents. For more local‐scale or less charismatic communities, however, demand may outstrip the supply of crowdsourced classifications. Here, we consider MammalWeb, a local‐scale project in North East England, which involves citizen scientists in both the capture and classification of sequences of camera trap images. We show that, for our global pool of image sequences, the probability of correct classification exceeds 99% with about nine concordant crowdsourced classifications per sequence. However, there is high variation among species. For highly recognizable species, species‐specific consensus algorithms could be even more efficient; for difficult‐to‐spot or easily confused taxa, expert classifications might be preferable. We show that two types of incorrect classifications – misidentification of species and overlooking the presence of animals – have different impacts on the confidence of consensus classifications, depending on the true species pictured. Our results have implications for data capture and classification in increasingly numerous, local‐scale citizen science projects. The species‐specific nature of our findings suggests that the performance of crowdsourcing projects is likely to be highly sensitive to the local fauna and context. The generality of consensus algorithms will, thus, be an important consideration for ecologists interested in harnessing the power of the crowd to assist with camera trapping studies.
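The intuition behind consensus classification – that confidence in a label grows rapidly with the number of concordant volunteer classifications – can be illustrated with a toy Bayesian model. The sketch below is an assumption for illustration only, not the paper's actual algorithm: it treats volunteer classifications as independent, with hypothetical per-classification accuracy (`p_correct`) and a hypothetical probability that a volunteer reports the same wrong label (`p_same_error`). Real values vary strongly by species, as the abstract emphasizes.

```python
def consensus_confidence(n, p_correct=0.8, p_same_error=0.1, prior=0.5):
    """Posterior probability that n concordant classifications are correct,
    under a toy independence model with hypothetical parameters.

    n            -- number of volunteers who all gave the same label
    p_correct    -- assumed chance a single volunteer labels correctly
    p_same_error -- assumed chance a single volunteer gives this same
                    wrong label (errors that coincide)
    prior        -- prior probability that the concordant label is correct
    """
    evidence_correct = prior * p_correct ** n
    evidence_wrong = (1 - prior) * p_same_error ** n
    return evidence_correct / (evidence_correct + evidence_wrong)


# Under these illustrative parameters, confidence climbs steeply with n,
# mirroring the qualitative pattern reported for the global pool.
for n in (1, 3, 9):
    print(n, round(consensus_confidence(n), 4))
```

Because `p_correct` and `p_same_error` differ between easily recognized and easily confused taxa, the number of classifications needed to reach a fixed confidence threshold is species-specific, which is the crux of the abstract's argument about algorithm generality.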
Keywords: camera traps, citizen science, MammalWeb, 006, crowdsourcing, data classification, data science
