
doi: 10.2139/ssrn.3558389
Recent advances in neural networks have enabled algorithms to generate musical sound comparable to music composed by humans. In this project, polyphonic music is generated using a deep-learning neural network. Unlike generating images or videos, generating music is time dependent. The algorithm used for this purpose is the Variational Auto-encoder (VAE). By generating sequential measures of notes over a predefined chord progression, our model can produce musical passages with convincing long-term structure. These algorithms have tunable parameters, and such algorithms yield better results and are practically useful to artists, filmmakers, and many others in their creative tasks. In the future, users could provide a track containing all the instruments and receive a similar track as output; for example, given a specific track composed by a human, the algorithm could create additional tracks similar to it.
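The two ingredients that distinguish a Variational Auto-encoder from a plain auto-encoder are the reparameterization trick (sampling a latent code while keeping the sampling step differentiable) and the KL-divergence regularizer that pulls the encoder's distribution toward a standard normal prior. The sketch below illustrates both in isolation; it is not the paper's implementation, and the 16-dimensional latent size for one measure of notes is an assumed, illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # Sample z = mu + sigma * eps; eps carries the randomness, so
    # gradients can flow through mu and log_var during training.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal-Gaussian encoder.
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

# Hypothetical 16-dimensional latent code for one measure of notes.
mu = np.zeros(16)
log_var = np.zeros(16)
z = reparameterize(mu, log_var)
print(z.shape)                      # latent sample has shape (16,)
print(kl_divergence(mu, log_var))   # 0.0 when q(z|x) equals the prior
```

During training, the KL term would be added to a reconstruction loss over the decoded note sequence, trading fidelity against a smooth latent space that can later be sampled to generate new measures.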
