Plump, Christina; Berger, Bernhard Johannes; Drechsler, Rolf
2022-10-12
2022-07
IEEE Congress on Evolutionary Computation (CEC 2022)
http://hdl.handle.net/11420/13749

Evolutionary algorithms are a well-known optimisation technique, especially for non-convex, multi-modal optimisation problems. Their ability to adapt to different search spaces and tasks through a suitable choice of encoding and operators has led to their widespread use in various application domains. However, application domains sometimes pose difficulties, such as fitness functions that cannot be evaluated at all, or only a few times. In these situations, surrogate or approximative fitness functions allow the evolutionary algorithm to work despite this complication. Still, using approximative fitness functions comes at a price: the fitness value is no longer correct for every individual, and the algorithm cannot know which values to trust. However, statistical methods yield knowledge about the precision of the approximation. We propose using this knowledge to adapt the fitness value and thereby mitigate the effects of the approximation. We choose to use the information contained in the density of the training data, which has computational advantages over other techniques such as cross-validation or prediction intervals. We evaluate our method on four well-known benchmark functions and achieve good results in both optimisation success and computation time.

en
Using density of training data to improve evolutionary algorithms with approximative fitness functions
Conference Paper
10.1109/CEC55065.2022.9870352
Other
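The abstract describes adjusting a surrogate's fitness prediction using the density of the surrogate's training data. The following is a minimal sketch of that general idea, not the paper's exact adaptation rule: it assumes a Gaussian process surrogate (scikit-learn) and a kernel density estimate (SciPy), and the density-based penalty term is a hypothetical stand-in chosen only for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.gaussian_process import GaussianProcessRegressor

# Scarce true evaluations used to fit the surrogate (hypothetical example data).
X_train = np.random.uniform(-5.0, 5.0, size=(40, 2))
y_train = np.sum(X_train ** 2, axis=1)  # true fitness, known only at these points

# Surrogate fitness function fitted on the available evaluations.
surrogate = GaussianProcessRegressor().fit(X_train, y_train)

# Density of the training data, estimated with a Gaussian kernel density estimate.
kde = gaussian_kde(X_train.T)
max_train_density = kde(X_train.T).max()

def adapted_fitness(x, penalty=1.0):
    """Surrogate prediction adjusted by training-data density (minimisation).

    Individuals in sparsely covered regions receive a pessimistic penalty,
    so the evolutionary algorithm trusts the surrogate less there.
    The linear penalty is a hypothetical weighting, for illustration only.
    """
    x = np.atleast_2d(x)
    prediction = surrogate.predict(x)[0]
    # Relative density in [0, 1] compared to the densest training region.
    relative_density = kde(x.T)[0] / max_train_density
    return prediction + penalty * (1.0 - relative_density)

# Example: evaluate an individual proposed by the evolutionary algorithm.
print(adapted_fitness(np.array([0.5, -0.3])))
```

In such a scheme, the density estimate is computed once per surrogate refit, which is why the abstract notes computational advantages over per-individual techniques such as cross-validation or prediction intervals.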