
We carry out a detailed performance assessment of two interactive evolutionary multi-objective algorithms (EMOAs) using a machine decision maker that enables us to repeat experiments and to study specific behaviours modelled after human decision makers (DMs). Using the same set of benchmark test problems as in the original papers on these interactive EMOAs (in up to 10 objectives), we bring to light interesting effects when we use a machine DM based on sigmoidal utility functions, which have support from the psychology literature (replacing the simpler utility functions used in the original papers). Our machine DM further enables us to simulate human biases and inconsistencies. Our results from this study, the most comprehensive assessment of multiple interactive EMOAs conducted so far, suggest that current well-known algorithms have shortcomings that need addressing. These results further demonstrate the value of improving the benchmarking of interactive EMOAs.
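To illustrate the idea of a machine DM driven by sigmoidal utilities, the following is a minimal sketch, not the paper's actual implementation: per-objective utilities follow an S-shaped curve (here a logistic function with hypothetical `midpoint` and `steepness` parameters), and the simulated DM prefers whichever solution attains the higher aggregated utility.

```python
import math

def sigmoid_utility(value, midpoint, steepness=5.0):
    # S-shaped (logistic) per-objective utility for a minimisation
    # objective: values well below `midpoint` yield utility near 1,
    # values well above it yield utility near 0. Both parameters are
    # hypothetical, chosen for illustration only.
    return 1.0 / (1.0 + math.exp(steepness * (value - midpoint)))

def machine_dm_prefers(a, b, midpoints, steepness=5.0):
    # A toy machine DM: aggregate per-objective sigmoidal utilities
    # (here by a simple average) and report whether solution `a`
    # is preferred to solution `b`.
    u_a = sum(sigmoid_utility(v, m, steepness)
              for v, m in zip(a, midpoints)) / len(a)
    u_b = sum(sigmoid_utility(v, m, steepness)
              for v, m in zip(b, midpoints)) / len(b)
    return u_a > u_b

# A solution dominating on both (minimised) objectives is preferred.
print(machine_dm_prefers([0.1, 0.1], [0.9, 0.9], midpoints=[0.5, 0.5]))
```

The sigmoidal shape matters because, unlike a linear utility, it saturates: improvements far from the DM's reference region barely change the utility, mimicking the diminishing sensitivity reported in the psychology literature.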
Performance assessment, Design of Experiments, Interactive Evolutionary Multi-Objective Optimization, Machine Decision Maker
