
handle: 2318/1906081
In this paper, we describe our intuitions about how language technologies can contribute to creating new ways to enhance the accessibility of exhibits in cultural contexts by exploiting knowledge about the history of our senses and the link between perception and language. We evaluate the performance of five multi-class classification models on the task of sensory recognition and introduce the DEEP Sensorium (Deep Engaging Experiences and Practices - Sensorium), a multidimensional dataset that combines cognitive and affective features to inform systematic methodologies for augmenting exhibits with multi-sensory stimuli. For each model, trained on different feature sets, we show that features expressing the affective dimension of words, combined with sub-lexical features, perform better than uni-dimensional training sets.
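The abstract describes training multi-class classifiers for sensory recognition on feature sets that combine the affective dimension of words with sub-lexical features. The sketch below illustrates that kind of setup with scikit-learn; it is a minimal illustration only, assuming toy data, hypothetical valence/arousal columns, and stand-in classifiers, none of which are taken from the paper or the DEEP Sensorium dataset.

```python
# Minimal sketch, not the paper's actual pipeline: the DEEP Sensorium data,
# its feature definitions, and the five evaluated models are not specified
# here, so the toy rows, column names, and classifiers are assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical lexicon rows: a word, two affective ratings, a sensory label.
data = pd.DataFrame({
    "word":    ["velvet", "siren", "lemon", "glitter", "cinnamon", "thunder"],
    "valence": [0.71, 0.22, 0.55, 0.68, 0.74, 0.35],
    "arousal": [0.30, 0.85, 0.48, 0.52, 0.41, 0.80],
    "sense":   ["touch", "sound", "taste", "sight", "smell", "sound"],
})

# Multidimensional feature set: sub-lexical features (character n-grams of
# the word form) combined with affective features (numeric valence/arousal
# passed through unchanged).
features = ColumnTransformer([
    ("sublexical", TfidfVectorizer(analyzer="char", ngram_range=(2, 4)), "word"),
    ("affective", "passthrough", ["valence", "arousal"]),
])

# Five example multi-class classifiers (stand-ins; the paper's five models
# may differ).
models = {
    "logreg": LogisticRegression(max_iter=1000),
    "linear_svm": LinearSVC(),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "naive_bayes": MultinomialNB(),
    "knn": KNeighborsClassifier(n_neighbors=3),
}

X, y = data[["word", "valence", "arousal"]], data["sense"]
for name, clf in models.items():
    pipe = make_pipeline(features, clf)
    pipe.fit(X, y)
    # Training accuracy only, for illustration; a real comparison would use
    # cross-validation on the full lexicon.
    print(f"{name}: {pipe.score(X, y):.2f}")
```

In a comparison like the one the abstract reports, the uni-dimensional baselines would keep only one branch of the ColumnTransformer (affective-only or sub-lexical-only), so each model's scores on single and combined feature sets can be contrasted directly.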
keywords: multidimensional lexicon, accessibility, affect, machine learning, multi-sensory design, museums
| Indicator | Description | Value |
|---|---|---|
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 0 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Average |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
