# Reproducibility files for the paper: MA-BBOB extension

This document details the reproduction steps for the paper "MA-BBOB: A Problem Generator for Black-Box Optimization through Affine Combinations and Shifts".

## Dependencies

This project relies on several libraries to achieve the presented results. These are as follows:

### IOHexperimenter

Part of the [IOHprofiler](https://iohprofiler.github.io/) environment, [IOHexperimenter](https://iohprofiler.github.io/IOHexp/) provides an interface between optimization algorithms and problems, and adds detailed logging functionality to this pipeline. We use the Python version of IOHexperimenter, available on [pip as 'ioh'](https://pypi.org/project/ioh/) (we used version 0.3.14). From this package, we use the logging component, as well as the interface to the [BBOB problem suite](https://bee22.com/resources/bbob%20functions.pdf) and the MA-BBOB problem generator.

### PFlacco

To analyze the functions' low-level properties, we make use of Exploratory Landscape Analysis (ELA), which gives access to a wide range of features. To calculate these, we use the Python-based [pflacco](https://github.com/Reiyan/pflacco) library (version 1.2.2; note that this requires Python 3.8 or higher). We use only the features from the 'classical_ela_features' module which don't require sampling additional points from the function.

### Nevergrad

To access a variety of optimization algorithms, we make use of [Nevergrad](https://github.com/facebookresearch/nevergrad) (version 0.4.3.post8). We use the following algorithms from Nevergrad's optimizers module: 'DifferentialEvolution', 'DiagonalCMA', 'RCobyla'.

### Modular CMA-ES + DE

In addition to the Nevergrad algorithms, we make use of two modular algorithm frameworks in our portfolio. The first is [Modular CMA-ES](https://github.com/IOHprofiler/ModularCMAES), 'modcma' on pip (version 1.0.2).
The second is [Modular DE](https://github.com/Dvermetten/ModDE), 'modde' on pip (version 0.0.1).

### IOHanalyzer

As a final requirement, we make use of the [IOHanalyzer](https://github.com/IOHprofiler/IOHanalyzer). This is an R-based library for analyzing and visualizing optimization algorithm performance. We use version 0.1.8.4.

## Accessing the MA-BBOB generator

Since it is integrated in IOHexperimenter, accessing the MA-BBOB functions can be done easily from Python. By default, the generator uses the function creation procedure described in Section 3 of the paper to sample new functions. To ensure the functions are reusable, the 'instance id' is used to seed the generation procedure, so using the same id multiple times leads to the same function.

```python
import ioh

f = ioh.problem.ManyAffine(1, n_variables=5)
```

However, the code used in the remainder of the repository uses specific settings of weights, instances and optima to compare different settings. These are specified instead of the instance id as follows:

```python
import numpy as np
import ioh

xopt = np.random.uniform(size=(2), low=-5, high=5)
weights = np.ones(24) / 24
iids = list(np.repeat(1, 24))

f2 = ioh.problem.ManyAffine(xopt=list(xopt), weights=list(weights), instances=iids, n_variables=2)
```

## Reproducing the paper's results

The core file for reproducing the results from this project is the notebook 'Visualization.ipynb'. This notebook is interrupted at times to run scripts and collect data, which is explained both in the notebook and in the sections below. The data collection is related to Section 4, since the results of Section 3 are entirely contained in the notebook, and Section 5 uses available data from [this Zenodo](https://zenodo.org/records/7826036) (data included here as well for convenience).

### Determine the settings used

Within the notebook, the section 'Setup data collection' is used to generate the used weights, instance numbers and optima locations for all pairwise experiments.
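To illustrate how such per-instance settings can be generated reproducibly from a single seed, here is a hypothetical, self-contained sketch (this is not the notebook's actual code; the function name, ranges, and sampling scheme are illustrative assumptions):

```python
import random

def sample_instance_settings(instance_id, n_variables=2, n_components=24):
    # Hypothetical sketch (not the notebook's actual code): derive weights,
    # per-component instance ids and an optimum location from one seed.
    rng = random.Random(instance_id)
    raw = [rng.random() for _ in range(n_components)]
    total = sum(raw)
    weights = [w / total for w in raw]          # normalize weights to sum to 1
    iids = [rng.randint(1, 100) for _ in range(n_components)]  # assumed id range
    xopt = [rng.uniform(-5, 5) for _ in range(n_variables)]    # optimum in [-5, 5]^d
    return weights, iids, xopt

# The same instance id always yields the same settings:
assert sample_instance_settings(42) == sample_instance_settings(42)
```

This mirrors the reproducibility property described above: seeding the generation procedure with the instance id means the same id always produces the same function.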
Each of these is stored to its corresponding csv-file. These files fully specify the pairwise instances of the MA-BBOB suite we use throughout the paper, and are used in the scripts for the following parts (ELA + performance).

### Calculating ELA features

The ELA-feature computation is based on pflacco, as described in the dependencies. The script 'ela_calculation.py' runs the computation and stores the results as csv files (note: the 'dirname' parameter should be changed before running this script). The file loops over all selected BBOB and MA-BBOB instances and computes the following sets of ELA features:

* meta data
* distribution
* level set
* principal component analysis
* linear model
* nbc
* dispersion
* information content

For more information on these feature sets, please see "Mersmann et al. (2011), 'Exploratory Landscape Analysis', in Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, pp. 829–836. ACM (http://dx.doi.org/10.1145/2001576.2001690)".

### Performance data

#### Collect performance data

The performance data collection script is 'collect_performance.py', which makes use of Nevergrad and IOHprofiler to benchmark the selected algorithms. Note that the 'rootname' parameter should be modified when running this script. The data from running this script is available in the Zenodo repository as 'data.zip'.

#### Process performance data

The previous script generates IOHanalyzer-compatible data. This can be processed via the script 'auc_calculation.py', resulting in the file 'auc.csv', if the dirname parameter is set to the one used in the previous script.

### Visualizations

All remaining analysis and visualization is part of the notebook. Note that the corresponding directory and filenames should be updated according to the ones used in the respective scripts. Parts of the notebook make use of data from other repositories, which is included in this repository as well for convenience.
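As background for the AUC processing step above: the area under the ECDF curve measures, averaged over the evaluation budget, the fraction of target precisions a run has already reached. The following is a simplified, self-contained sketch of that idea (the repository's 'auc_calculation.py' operates on the IOHanalyzer-format logs and may differ in its exact targets and normalization):

```python
def auc_from_trajectory(best_so_far, targets, budget):
    # Normalized area under the ECDF: at each evaluation, count the fraction
    # of target precisions already reached, then average over the budget.
    # Assumes best_so_far is a non-empty, non-increasing list of best
    # precision values per evaluation (minimization).
    traj = (best_so_far + [best_so_far[-1]] * budget)[:budget]  # pad to budget
    hit_fractions = [
        sum(1 for t in targets if val <= t) / len(targets) for val in traj
    ]
    return sum(hit_fractions) / budget

# Example: a run that reaches precision 1e-2 halfway through a budget of 4
targets = [1e0, 1e-1, 1e-2]
print(auc_from_trajectory([1.0, 0.5, 0.005, 0.005], targets, 4))  # prints 0.6666666666666666
```

Higher AUC values correspond to hitting more targets earlier in the run, which is why this measure is convenient for comparing algorithms across a fixed budget.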