
In evolutionary computation, it is common practice to use sets of instances as test-beds for evaluating and comparing the performance of new optimisation algorithms. In some cases, real-world instances are available and can be used to form the experimental benchmark. Unfortunately, this is not generally the case. Because real-world instances are difficult to obtain, or because the optimisation problems defined in the literature do not exactly match those found in industry, practitioners are forced to create artificial instances. In this paper, we study some aspects of the random generation of artificial instances. In particular, we examine the assumption that sampling uniformly at random in the space of parameters is equivalent to sampling uniformly at random in the space of functions. Supported by experiments, we show that for some types of algorithms this assumption does not hold.
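To see why uniform sampling in parameter space need not induce uniform sampling in function space, consider the following minimal sketch. The toy setting is hypothetical and not taken from the paper: instances over three solutions are defined by two parameters (a, b), and, for a comparison-based algorithm, two instances are equivalent when they induce the same ranking of the solutions.

```python
import random
from collections import Counter

# Hypothetical toy instance family (illustration only, not the paper's setup):
# three solutions s0, s1, s2 with objective values
#   f(s0) = a,  f(s1) = b,  f(s2) = a + b,
# where (a, b) are the instance parameters. A comparison-based algorithm only
# "sees" the ranking of the solutions, so the space of functions it can
# distinguish is the set of 3! = 6 possible rankings.

def induced_ranking(a: float, b: float) -> tuple:
    """Return the solutions sorted by increasing objective value."""
    values = {"s0": a, "s1": b, "s2": a + b}
    return tuple(sorted(values, key=values.get))

random.seed(42)
counts = Counter()
n_samples = 100_000
for _ in range(n_samples):
    a, b = random.random(), random.random()  # uniform in parameter space
    counts[induced_ranking(a, b)] += 1

# If uniform parameters implied uniform functions, each of the 6 rankings
# would appear with frequency ~1/6. Instead, since a + b >= max(a, b) for
# a, b >= 0, s2 is always ranked last: only 2 of the 6 rankings ever occur.
for ranking, n in counts.most_common():
    print(ranking, round(n / n_samples, 3))
```

Under these assumptions, only two of the six rankings are ever generated, each with frequency close to 1/2, so the induced distribution over functions is far from uniform even though the parameters are sampled uniformly.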
