
We propose a novel technique for algorithm selection which adopts a deep-learning approach, specifically a Recurrent Neural Network with Long Short-Term Memory (RNN-LSTM). In contrast to the majority of work in algorithm selection, the approach does not need any features to be extracted from the data but instead relies on the temporal data sequence as input. A large case study in the domain of 1-d bin packing is undertaken in which instances can be solved by one of four heuristics. We first evolve a large set of new problem instances that each have a clear "best solver" in terms of the heuristics considered. An RNN-LSTM is trained directly on the sequence data describing each instance to predict the best-performing heuristic. Experiments conducted on small and large problem instances with item sizes generated from two different probability distributions show that the approach achieves between 7% and 11% improvement over the single best solver (SBS), i.e. the single heuristic that achieves the best performance over the instance set, and performs between 0% and 2% worse than the virtual best solver (VBS), i.e. the perfect mapping.
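To make the architecture concrete, the following is a minimal sketch of an LSTM-based heuristic selector of the kind described above: a network that consumes the raw sequence of item sizes of a bin-packing instance, with no hand-crafted features, and outputs one of four heuristic labels. The framework (PyTorch), layer sizes, and class name `HeuristicSelector` are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch (assumptions: PyTorch, hidden size 64, one item size per time step).
import torch
import torch.nn as nn

class HeuristicSelector(nn.Module):
    def __init__(self, hidden_size=64, num_heuristics=4):
        super().__init__()
        # Each time step carries a single value: the next item size in the instance.
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_heuristics)

    def forward(self, item_sizes):
        # item_sizes: (batch, sequence_length, 1) -- raw item sizes, no extracted features.
        _, (h_n, _) = self.lstm(item_sizes)
        # The final hidden state summarises the whole instance; classify it into one of four heuristics.
        return self.classifier(h_n[-1])

# Usage: predict the best heuristic for two instances of 120 items each (sizes scaled to (0, 1)).
model = HeuristicSelector()
instances = torch.rand(2, 120, 1)
logits = model(instances)                   # shape (2, 4)
predicted_heuristic = logits.argmax(dim=1)  # index of the predicted best-performing heuristic
```

In this sketch the per-instance label would be the heuristic that performed best on that instance, so training reduces to standard cross-entropy classification over the evolved instance set.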
