Deep Predictive Models in Interactive Music

Preprint, English, Open Access
Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim (2018)
  • Subject: Computer Science - Sound | Electrical Engineering and Systems Science - Audio and Speech Processing | Computer Science - Artificial Intelligence | Computer Science - Human-Computer Interaction | Computer Science - Neural and Evolutionary Computing

Musical performance requires prediction to operate instruments, to perform in groups, and to improvise. We argue, with reference to a number of digital musical instruments (DMIs), including two of our own, that predictive machine learning models can help interactive systems understand their temporal context and ensemble behaviour. We also discuss how recent advances in deep learning highlight the role of prediction in DMIs by allowing data-driven predictive models with a long memory of past states. We advocate for predictive musical interaction, where a predictive model is embedded in a musical interface, assisting users by predicting unknown states of musical processes. We propose a framework for characterising prediction as relating to the instrumental sound, the ongoing musical process, or interaction between members of an ensemble. Our framework shows that different musical interface design configurations lead to different types of prediction. We show that our framework accommodates deep generative models as well as models that predict gestural states or other high-level musical information. We apply our framework to examples from our recent work and the literature, and discuss the benefits and challenges revealed by these systems, as well as musical use cases where prediction is a necessary component.
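The core idea above, a data-driven model with a long memory of past states predicting unknown future states of a musical process, maps naturally onto a recurrent neural network. The sketch below is a minimal illustration in PyTorch, not the authors' implementation: the class name GesturePredictor, the 2-D touch-position state representation, and all layer sizes are illustrative assumptions, and a fuller system might replace the plain regression head with a mixture density output.

    import torch
    import torch.nn as nn

    class GesturePredictor(nn.Module):
        # Minimal recurrent next-state predictor for a 2-D gestural signal
        # (hypothetical sketch; names and dimensions are assumptions).
        def __init__(self, state_dim=2, hidden_dim=64):
            super().__init__()
            self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, state_dim)

        def forward(self, states, hidden=None):
            # states: (batch, time, state_dim) sequence of past gestures.
            out, hidden = self.lstm(states, hidden)
            # Regress the state one step ahead of each input step; the
            # returned hidden state carries the long memory forward.
            return self.head(out), hidden

    # Usage: feed the performer's recent gesture history and read off the
    # predicted next state; in an interactive setting the hidden state is
    # kept between calls so memory spans the whole performance.
    model = GesturePredictor()
    history = torch.randn(1, 32, 2)   # 32 past (x, y) touch positions
    pred, hidden = model(history)
    next_state = pred[:, -1, :]       # model's guess at the next (x, y)

In a predictive musical interface, such a predicted state could drive sound synthesis directly or fill in for a missing ensemble member, which is one of the design configurations the framework distinguishes.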