RoboJam: A musical mixture density network for collaborative touchscreen interaction

Preprint, Article (Open Access)
Martin, Charles P.; Tørresen, Jim
(2017)
  • Publisher: Springer Verlag
  • Related identifiers: doi: 10.1007/978-3-319-77583-8_11
  • Subject: Computer Science - Sound | Electrical Engineering and Systems Science - Audio and Speech Processing | Computer Science - Human-Computer Interaction | Computer Science - Neural and Evolutionary Computing
    ACM: Information Systems - Information Interfaces and Presentation (e.g., HCI)

RoboJam is a machine-learning system for generating music that assists users of a touchscreen music app by performing responses to their short improvisations. This system uses a recurrent artificial neural network to generate sequences of touchscreen interactions and ab…
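The abstract's core idea, a recurrent network whose mixture density output layer is sampled to produce continuous touch positions, can be illustrated with the sampling step alone. This is a minimal sketch assuming a diagonal-covariance Gaussian mixture over 2-D touch coordinates; the function and parameter names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sample_mdn(pi, mu, sigma, rng=None):
    """Draw one 2-D point (e.g., a touch x/y position) from a Gaussian
    mixture parameterised by a mixture density network's output layer.

    pi:    (K,) mixture weights, summing to 1
    mu:    (K, 2) component means
    sigma: (K, 2) per-dimension standard deviations (diagonal covariance)
    """
    rng = rng or np.random.default_rng()
    k = rng.choice(len(pi), p=pi)       # pick a mixture component
    return rng.normal(mu[k], sigma[k])  # draw from that component's Gaussian

# Toy parameters: two components, one near (0.2, 0.2), one near (0.8, 0.8).
pi = np.array([0.5, 0.5])
mu = np.array([[0.2, 0.2], [0.8, 0.8]])
sigma = np.full((2, 2), 0.05)

point = sample_mdn(pi, mu, sigma, rng=np.random.default_rng(0))
```

In a full model such as the one the abstract describes, an RNN would emit `pi`, `mu`, and `sigma` at every timestep, and repeated sampling would yield a sequence of touch interactions rather than a single point.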