Musical Audio Synthesis Using Autoencoding Neural Nets
Conference object, Article
Casey, Michael A.
- Publisher: International Society for Music Information Retrieval
arxiv: Computer Science::Sound | Computer Science::Neural and Evolutionary Computation
With an optimal network topology and tuning of hyperparameters, artificial neural networks (ANNs) may be trained to learn a mapping from low-level audio features to one or more higher-level representations. Such networks are commonly used in classification and regression settings to perform arbitrary tasks. In this work we suggest repurposing autoencoding neural networks as musical audio synthesizers. We present an interactive musical audio synthesis system that uses feedforward artificial neural networks for synthesis rather than for discriminative or regression tasks. In our system, an ANN is trained on frames of low-level features, and a high-level representation of the musical audio is learned through an autoencoding neural net. Our real-time synthesis system allows one to interact directly with the parameters of the model and generate musical audio in real time. This work therefore proposes the exploitation of neural networks for creative musical applications.
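The pipeline the abstract describes — an autoencoder trained on frames of low-level audio features, whose decoder can then be driven directly to generate new frames — can be sketched as follows. The feature choice (synthetic magnitude-spectrum frames), layer sizes, activation, and training details below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for low-level audio features: magnitude-spectrum
# frames of simple harmonic tones (hypothetical data, for illustration).
n_bins, n_frames = 64, 200
fundamentals = rng.integers(2, 16, size=n_frames)
X = np.zeros((n_frames, n_bins))
for i, f in enumerate(fundamentals):
    for h in (1, 2, 3):                     # a few harmonics per tone
        if f * h < n_bins:
            X[i, f * h] = 1.0 / h
X += 0.01 * rng.standard_normal(X.shape)

# One-hidden-layer autoencoder: 64-dim frames -> 8-dim code -> 64-dim frames.
n_hidden = 8
W1 = 0.1 * rng.standard_normal((n_bins, n_hidden))
b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal((n_hidden, n_bins))
b2 = np.zeros(n_bins)

def encode(x):
    return np.tanh(x @ W1 + b1)             # hidden code (the "high-level" layer)

def decode(h):
    return h @ W2 + b2                      # linear reconstruction of the frame

def loss(A, B):
    return float(((A - B) ** 2).mean())     # mean squared reconstruction error

initial_loss = loss(decode(encode(X)), X)

# Plain full-batch gradient descent on the reconstruction error.
lr = 0.01
for epoch in range(500):
    H = encode(X)
    err = decode(H) - X
    gW2 = H.T @ err / n_frames
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)          # backprop through tanh
    gW1 = X.T @ dH / n_frames
    gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

final_loss = loss(decode(encode(X)), X)

# "Synthesis": rather than reconstructing an input, drive the decoder
# with a code vector directly, as the interactive system manipulates
# the model's learned parameters to produce new audio frames.
code = encode(X[:1])
frame = decode(code)                        # one synthesized feature frame
```

In a real-time setting, the decoded feature frames would still need an inverse transform (e.g. overlap-add resynthesis) to become audio; the sketch stops at the frame level, which is where the autoencoder itself operates.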