Small ensembles of kriging models for optimization





1 Ecole Nationale Supérieure des Mines de Saint-Etienne; 2 LIMOS - Laboratoire d'Informatique, de Modélisation et d'Optimisation des Systèmes

Abstract: The Efficient Global Optimization (EGO) algorithm uses a conditional Gaussian Process (GP) to approximate an objective function known at a finite number of observation points, and sequentially adds new points that maximize the Expected Improvement criterion according to the GP. The key factor controlling the efficiency of EGO is the GP covariance function, or kernel, which should be chosen according to the objective function. Traditionally, a parameterized family of covariance functions is considered, whose parameters are learned through statistical procedures such as maximum likelihood or cross-validation. However, it may be questioned whether statistical procedures for learning covariance functions are the most efficient for optimization, as they target a global agreement between the GP and the observations, which is not the ultimate goal of optimization. Furthermore, statistical learning procedures are computationally expensive. The main alternative to the statistical learning of the GP is self-adaptation, where the algorithm tunes the kernel parameters based on their contribution to objective function improvement. After questioning the possibility of self-adaptation for kriging-based optimizers, this paper proposes a novel approach for tuning the length-scale of the GP in EGO: at each iteration, a small ensemble of kriging models structured by their length-scales is created. All of the models contribute to an iterate in an EGO-like fashion. Then, the set of models is densified around the model whose length-scale yielded the best iterate, and further points are produced. Numerical experiments are provided which motivate the use of many length-scales. The tested implementation does not perform better than the classical EGO algorithm in a sequential context, but it shows the potential of the approach for parallel implementations.
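To make the Expected Improvement criterion at the core of EGO concrete, here is a minimal, illustrative sketch in Python/NumPy. It is not the authors' implementation: it assumes a one-dimensional squared-exponential kernel with unit process variance, and the `length_scale` argument plays the role of the parameter that is varied across the small ensemble of kriging models. All function names and test values below are hypothetical.

```python
import numpy as np
from math import erf, sqrt, pi

def sq_exp_kernel(a, b, length_scale):
    # Squared-exponential covariance between 1-D point sets a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_new, X, y, length_scale, nugget=1e-10):
    # Conditional (kriging) mean and standard deviation at x_new given (X, y).
    K = sq_exp_kernel(X, X, length_scale) + nugget * np.eye(len(X))
    k = sq_exp_kernel(x_new, X, length_scale)
    mu = k @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, k.T)
    var = np.clip(1.0 - np.sum(k * v.T, axis=1), 0.0, None)
    return mu, np.sqrt(var)

def expected_improvement(x_new, X, y, length_scale):
    # EI(x) = (f_min - mu) * Phi(z) + sigma * phi(z), with z = (f_min - mu) / sigma.
    mu, sigma = gp_posterior(x_new, X, y, length_scale)
    f_min = y.min()
    ei = np.zeros_like(mu)
    mask = sigma > 0
    z = (f_min - mu[mask]) / sigma[mask]
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))   # standard normal CDF
    phi = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)           # standard normal PDF
    ei[mask] = (f_min - mu[mask]) * Phi + sigma[mask] * phi
    return ei

# Ensemble flavor of the paper's idea (hypothetical values): each length-scale
# in a small set proposes its own EI-maximizing candidate on a grid.
X = np.array([0.0, 0.5, 1.0])
y = np.array([1.0, -0.5, 0.8])
grid = np.linspace(0.0, 1.0, 101)
candidates = {ls: grid[np.argmax(expected_improvement(grid, X, y, ls))]
              for ls in (0.05, 0.2, 0.8)}
```

In a sequential EGO run only one candidate would be evaluated per iteration; the ensemble view makes all candidates available at once, which is what suggests the parallel potential mentioned in the abstract.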

Keywords: optimization based on surrogate ensembles; kernel parameters; continuous global optimization; EGO; Gaussian processes





Authors: Hossein Mohammadi, Rodolphe Le Riche, Eric Touboul

Source: https://hal.archives-ouvertes.fr/






