
Many approaches to accurately classifying speech and music have been investigated over the years. This paper presents modulation features for effective speech and music classification. A Gammatone filter bank is used as the front-end of the classification system: amplitude modulation (AM) and frequency modulation (FM) features are extracted from the critical-band outputs of the Gammatone filters. In addition, cepstral coefficients are calculated from the energies of the filter bank outputs. The cepstral coefficients and the AM and FM components are given as input feature vectors to Gaussian Mixture Models (GMMs), which act as the speech-music classifier; the output probabilities of all GMMs are combined before a decision is made. The error rate for different types of music has also been compared. Low-frequency musical instruments such as the electric bass guitar were found to be more difficult to discriminate from speech; however, the proposed features were able to reduce such errors significantly.
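The AM/FM decomposition described above is commonly realized with the Hilbert analytic signal: the envelope of each Gammatone band output gives the AM component, and the derivative of the unwrapped instantaneous phase gives the FM component. The sketch below illustrates that pipeline under those assumptions; the paper does not specify its exact demodulation method, so this is a plausible reconstruction, not the authors' implementation, and the function and variable names are illustrative.

```python
import numpy as np
from scipy.signal import gammatone, lfilter, hilbert

def am_fm_features(x, fs, center_freqs):
    """Extract per-band AM and FM components from signal x.

    For each center frequency, the signal is passed through a
    Gammatone band-pass filter; the Hilbert analytic signal of the
    band output then yields the amplitude envelope (AM) and the
    instantaneous frequency in Hz (FM). These per-band trajectories
    would be summarized into a feature vector for the GMM classifier.
    """
    feats = []
    for fc in center_freqs:
        # 4th-order IIR Gammatone filter centered at fc (scipy >= 1.6)
        b, a = gammatone(fc, 'iir', fs=fs)
        y = lfilter(b, a, x)            # critical-band output
        z = hilbert(y)                  # analytic signal
        am = np.abs(z)                  # AM: amplitude envelope
        phase = np.unwrap(np.angle(z))
        fm = np.diff(phase) * fs / (2.0 * np.pi)  # FM: inst. frequency (Hz)
        feats.append((am, fm))
    return feats
```

As a sanity check, feeding a pure 1 kHz tone through a band centered at 1 kHz should produce a nearly flat FM track at 1000 Hz and a roughly constant envelope once the filter transient has died out.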
