
DOI: 10.1109/dsaa.2016.12
Algorithm selection and hyperparameter tuning are omnipresent problems for researchers and practitioners. Hence, it is not surprising that efforts to automate this process with various meta-learning approaches have increased. Sequential model-based optimization (SMBO) is one of the most popular frameworks for finding optimal hyperparameter configurations. Although SMBO was originally designed for black-box optimization, researchers have contributed different meta-learning approaches to speed up the optimization process. We create a generalized framework of SMBO and its recent additions, which gives access to adaptive hyperparameter transfer learning with simple surrogates (AHT), a new class of hyperparameter optimization strategies. AHT reduces the time overhead of the optimization process by replacing time- and space-consuming transfer surrogate models with simple surrogates that employ adaptive transfer learning. In an empirical comparison on two different meta-data sets, we show that AHT outperforms various instances of the SMBO framework in the scenarios of hyperparameter tuning and algorithm selection.
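SMBO's core loop is short: fit a surrogate model to the configurations evaluated so far, maximize an acquisition function to pick the next configuration, evaluate the black box there, and repeat. The sketch below illustrates that generic loop under assumptions of my own: a 1-D toy objective, a Gaussian-process surrogate from scikit-learn, and expected improvement as the acquisition function. It is not the paper's AHT method or its simple-surrogate transfer variants.

```python
# Minimal SMBO loop sketch (illustrative, not the paper's implementation).
# Assumed for illustration: a 1-D toy objective, a GP surrogate, and
# expected improvement (EI) as the acquisition function.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    # Hypothetical black-box validation loss to be minimized.
    return (x - 0.3) ** 2 + 0.1 * np.sin(15 * x)

rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)

# Start from a few random observations of the black box.
X = rng.uniform(0.0, 1.0, size=(3, 1))
y = objective(X).ravel()

for _ in range(20):
    # 1. Fit the surrogate model to the observation history.
    surrogate = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    # 2. Score candidates by expected improvement over the incumbent best.
    mu, sigma = surrogate.predict(candidates, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    # 3. Evaluate the black box at the acquisition maximizer and record it.
    x_next = candidates[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best configuration:", X[np.argmin(y)], "loss:", y.min())
```

Transfer-learning extensions of SMBO, including the ones the paper generalizes, differ mainly in step 1: the surrogate is informed by meta-data from previous data sets rather than fit to the current observation history alone.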
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator. | 21 |
| Popularity | Reflects the current impact/attention (the "hype") of the article in the research community at large, based on the underlying citation network. | Top 10% |
| Influence | Reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | Top 10% |
| Impulse | Reflects the initial momentum of the article directly after its publication, based on the underlying citation network. | Average |
