
Music is one of the fastest-evolving media industries, currently undergoing a transformation at the nexus of music streaming, social media and convergence technologies. As a result, the music industry has become a mixed economy of diverse consumer channels and revenue streams, as well as disruptive innovations based on new services and content distribution models. In this setting, music companies face daunting challenges in navigating the transition to a field shaped by streaming music, social media and media convergence. The availability of huge music catalogues and choices has made recommendation and discovery key problems in the competition for audience, while continuous access to multiple sources of music consumption has produced a dynamic audience, characterized by highly diverse tastes and volatile preferences that also depend on the context of music consumption. To serve the increasingly complex needs of the music ecosystem, FuturePulse will develop and pilot-test a novel, close-to-market music platform in three high-impact use cases: a) Record Labels, b) Live Music, c) Online Music Platforms. The project will help music companies leverage a variety of music data and content, ranging from broadcast (TV, radio) and music streaming data to sales statistics and streams of music-focused social media discussions, interactions and content, through sophisticated analytics and predictive modelling services, in order to make highly informed business decisions, better understand their audience and the music trends of the future, and ultimately make music distribution more effective and profitable. FuturePulse will offer these capabilities through a user-friendly, highly intuitive and visual web solution that immerses music professionals in the realm of music data and supports them in making effective business decisions.
At present, an estimated 10% or more of individuals with clinically normal audiograms have significant difficulty understanding speech-in-noise (SPiN). In particular, synaptopathy - the loss of synapses connecting the cochlea to the auditory nerve, caused by aging or noise exposure - is thought to be an important contributing factor. Yet recent attempts to assess synaptopathy in humans have produced mixed results. Because of its pivotal role in SPiN understanding, we hypothesize that assessing temporal fine-structure (TFS) coding fidelity should provide a more direct estimate of the impact of synaptopathy on these deficits. The INSPECTSYN project will thus focus on the coding of signals with different spectral shapes, such as vowels, to develop and test new tools for assessing the contribution of synaptopathy to TFS coding. The project relies on an integrated multidisciplinary approach combining computational modelling, psychophysics and electrophysiology. In Axis #1, we will use state-of-the-art physiological models to deepen our theoretical understanding of how the peripheral auditory system encodes spectral shapes, and to disentangle the specific distortions of neural representations caused by outer-hair-cell loss versus synaptopathy. In Axis #2, we will conduct psychoacoustical and electrophysiological measurements with stimuli specifically designed to be sensitive to the impact of synaptopathy on TFS coding, in various groups of listeners (young/old, with/without sensorineural hearing loss). In Axis #3, we will study how well these measurements account for differences in SPiN intelligibility across listeners. Overall, the project will provide novel audiological tools to better characterize hidden hearing deficits in humans.