The importance of high-quality ground truth annotations for a variety of multimedia applications is widely recognised. Indeed, one of the most time-consuming steps in method development is the generation of accurate ground truth and its comparison against the output of applications, in order to provide evidence that the devised methods perform well in the targeted domain. However, the cost of creating labelled data, which requires a human to examine multimedia data thoroughly and provide labels, becomes impractical as the datasets to be labelled grow. This can lead to the creation of disparate datasets that are often too small for either learning or evaluating the underlying data distribution. To build large-scale datasets, methods exploiting the collaborative effort of a large population of user annotators (e.g. Labelme, CalTech, Pascal VOC, Trecvid) have recently been devised. Nevertheless, the creation of common, large-scale ground truth data to train, test and evaluate algorithms for multimedia processing remains a major concern. In particular, research in ground truth labelling still lacks both user-oriented tools and automatic methods for supporting annotators in accomplishing their labelling tasks. Tools for ground truth annotation must be user-oriented, providing visual interfaces and methods able to guide and speed up the process of ground truth creation. Under this scenario, multimedia processing methods and collaborative methods play a crucial role. Furthermore, setting up requirements and standards for the creation of multimedia datasets allows other researchers in the field to continue these efforts and to contribute to the creation and annotation of multimedia data. This allows researchers to share and extend each other's work, which is beneficial for the research community.