
Abstract The usual approach in runtime analysis is to derive estimates on the number of fitness function evaluations a method requires until a suitable element of the search space is found. One justification for this is that in real applications, fitness evaluation often contributes the most computational effort. A tacit assumption in this approach is that this effort is uniform and static across the search space. However, this assumption often does not hold in practice: some candidates may be far more expensive to evaluate than others. This can occur, for example, when fitness evaluation requires running a simulation or training a machine learning model. Despite the availability of a wide range of benchmark functions coupled with various runtime performance guarantees, the runtime analysis community currently lacks a solid perspective on handling variable fitness costs. Our goal with this paper is to argue for incorporating this perspective into our theoretical toolbox. We introduce two models for handling variable cost: a simple non-adaptive model together with a more general adaptive model. We prove cost bounds in these scenarios and discuss the implications of taking into account costly regions in the search space.
Evolutionary algorithms, genetic algorithms (computational aspects), cost functions, analysis of algorithms, runtime analysis
