
Because of the popularity of online multimedia videos, there has been much recent interest in acoustic concept detection and classification to improve online video search. In this paper, we present our effort to annotate acoustic concepts in user-submitted videos. We choose the acoustic concept categories so that their labels can serve as low-level features for distinguishing higher-level video events in a multimedia event detection (MED) task. Using these thorough acoustic concept annotations, we train acoustic concept models and use them to generate features for an MED task in the form of what we call a segmental GMM-based co-occurrence feature vector.
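The abstract does not define the segmental GMM-based co-occurrence feature precisely, but one plausible reading can be sketched as follows. This is a hypothetical illustration, not the paper's method: it assumes one GMM per acoustic concept, segments labeled by their highest-scoring concept, and a co-occurrence vector counted over pairs of adjacent segment labels; all concept names and data here are invented.

```python
# Hypothetical sketch of a segmental GMM-based co-occurrence feature.
# Assumptions (not from the paper): one GMM per acoustic concept,
# segments scored by average log-likelihood, and co-occurrence counted
# over the top-scoring concepts of adjacent segments.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
concepts = ["speech", "music", "crowd"]  # invented concept labels

# Train one small GMM per concept on toy frame-level features
# (standing in for, e.g., MFCC frames from annotated segments).
models = {}
for i, name in enumerate(concepts):
    frames = rng.normal(loc=i * 3.0, scale=1.0, size=(200, 13))
    models[name] = GaussianMixture(n_components=2, random_state=0).fit(frames)

def top_concept(segment_frames):
    """Label a segment with the concept whose GMM scores it highest."""
    scores = {n: m.score(segment_frames) for n, m in models.items()}
    return max(scores, key=scores.get)

# Treat a toy video as a sequence of segments, label each segment,
# then accumulate a co-occurrence matrix over adjacent labels.
segments = [rng.normal(loc=rng.integers(0, 3) * 3.0, size=(50, 13))
            for _ in range(10)]
labels = [top_concept(s) for s in segments]

K = len(concepts)
cooc = np.zeros((K, K))
for a, b in zip(labels, labels[1:]):
    cooc[concepts.index(a), concepts.index(b)] += 1
feature_vector = cooc.flatten()  # one fixed-length feature per video
```

The flattened matrix gives each video a fixed-length vector regardless of its duration, which is what an MED classifier would need as input.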
