Hybrid Multi-Gradient Explorer Algorithm for Global Multi-Objective Optimization

Hybrid Multi-Gradient Explorer (HMGE) is designed to combine the strengths and avoid the weaknesses of two major optimization approaches: gradient-based techniques and Genetic Algorithms (GAs).

It is known that:

  • Gradient-based techniques converge quickly to a local Pareto front, but have a low ability to find the global Pareto frontier.
  • Genetic Algorithms have a strong ability to find globally optimal solutions, but converge slowly.

Genetic Algorithms provide two mechanisms that are important for multi-objective optimization: (a) a crossover operator, which keeps Pareto solutions uniformly distributed along the Pareto frontier, and (b) random mutation, which helps find dominating Pareto fronts. The question is how to incorporate a gradient-based technique into a GA framework to improve convergence while avoiding the overhead of gradient estimation. HMGE improves convergence by introducing a gradient mutation operator. Thus, HMGE has three algorithmic elements: crossover, random mutation, and gradient mutation operators.
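The loop below is a minimal sketch of how these three operators could interact within one GA generation. The operator forms (blend crossover, uniform random mutation, a fixed-step gradient move) and all names and parameters are assumptions made for illustration; this is not the actual HMGE implementation.

```python
import random

def hybrid_generation(population, grad_estimate, bounds,
                      p_cross=0.9, p_rmut=0.1, p_gmut=0.2, step=0.05):
    """One generation of a hybrid GA in the spirit of HMGE (illustrative
    sketch only).  Crossover maintains spread along the front, random
    mutation enables escape from local Pareto fronts, and gradient
    mutation drives fast local convergence."""
    offspring = []
    for _ in range(len(population)):
        a, b = random.sample(population, 2)
        # Crossover: blend two parents (keeps children between them).
        if random.random() < p_cross:
            w = random.random()
            child = [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]
        else:
            child = list(a)
        # Random mutation: re-sample the whole design within its bounds.
        if random.random() < p_rmut:
            child = [random.uniform(lo, hi) for lo, hi in bounds]
        # Gradient mutation: step against an estimated gradient
        # (HMGE obtains the gradient from DDRSM; here it is any callable).
        if random.random() < p_gmut:
            g = grad_estimate(child)
            child = [min(hi, max(lo, xi - step * gi))
                     for xi, gi, (lo, hi) in zip(child, g, bounds)]
        offspring.append(child)
    return offspring
```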

A final, equally important question is how to estimate gradients efficiently.

The Dynamically Dimensioned Response Surface Method (DDRSM) is a fast and scalable algorithm (patent pending) for estimating gradients and performing gradient mutation. It requires just 0-7 model evaluations to estimate gradients regardless of task dimension (up to hundreds or thousands of design variables), and provides high convergence and optimization accuracy even for highly non-linear models.

HMGE Features

HMGE combines (a) a Genetic Algorithm framework, (b) random mutation, and (c) gradient mutation based on DDRSM. The synergy of these features brings HMGE to an unparalleled level of efficiency and scalability when compared to state-of-the-art commercial multi-objective optimization algorithms. HMGE is believed to be the first global multi-objective optimization algorithm that provides:

  • High convergence, typical of gradient-based methods;
  • Efficiency in finding the global Pareto frontier;
  • Equal efficiency when optimizing models with dozens, hundreds, or even thousands of design variables.

The following two benchmark problems have multiple Pareto frontiers and demonstrate the capability of the HMGE algorithm to find the global Pareto frontier. HMGE is compared with the state-of-the-art commercial multi-objective optimization algorithm NSGA-II.

The following benchmark problem ZDT4 has 10 design variables and multiple local Pareto fronts.

F(X):

  f_1(X) = x_1
  f_2(X) = g(X) \left( 1 - \sqrt{x_1 / g(X)} \right)
  g(X) = 1 + 10(n - 1) + \sum_{i=2}^{n} \left( x_i^2 - 10 \cos(4 \pi x_i) \right)

where n = 10, x_1 \in [0, 1], and x_i \in [-5, 5] for i = 2, ..., n.

The global Pareto-optimal front for the problem ZDT4 corresponds to:

g(X) = 1, i.e., x_i = 0 for i = 2, ..., 10.
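For reference, the following is a direct Python transcription of the standard ZDT4 definition above (the function name and code organization are mine, not from the source):

```python
import math

def zdt4(x):
    """Standard ZDT4 benchmark: x[0] in [0, 1], x[1:] in [-5, 5], n = 10.
    The many local optima of g produce multiple local Pareto fronts."""
    f1 = x[0]
    g = 1 + 10 * (len(x) - 1) + sum(xi ** 2 - 10 * math.cos(4 * math.pi * xi)
                                    for xi in x[1:])
    f2 = g * (1 - math.sqrt(f1 / g))
    return f1, f2
```

On the global front (x[1:] all zero) g = 1, so the returned points satisfy f_2 = 1 - sqrt(f_1).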

HMGE covered the global Pareto frontier completely and evenly after 1,000 evaluations.

NSGA-II found only partial and inaccurate solutions after 5,000 evaluations.
[Figures: Pareto fronts found by HMGE and NSGA-II on ZDT4]
The following benchmark problem ZDT6 has 10 design variables and multiple local Pareto fronts. The most challenging obstacle is that the density of solutions across the Pareto-optimal region is highly non-uniform.

F(X):

  f_1(X) = 1 - \exp(-4 x_1) \sin^6(6 \pi x_1)
  f_2(X) = g(X) \left( 1 - (f_1(X) / g(X))^2 \right)
  g(X) = 1 + 9 \left( \sum_{i=2}^{n} x_i / (n - 1) \right)^{0.25}

where n = 10 and x_i \in [0, 1] for i = 1, ..., n.

The global Pareto-optimal front for the problem ZDT6 corresponds to:

g(X) = 1, i.e., x_i = 0 for i = 2, ..., 10.
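As with ZDT4, the following is a direct Python transcription of the standard ZDT6 definition above (again, naming and organization are mine):

```python
import math

def zdt6(x):
    """Standard ZDT6 benchmark: all x[i] in [0, 1], n = 10.
    Solutions are distributed highly non-uniformly along the front."""
    f1 = 1 - math.exp(-4 * x[0]) * math.sin(6 * math.pi * x[0]) ** 6
    g = 1 + 9 * (sum(x[1:]) / (len(x) - 1)) ** 0.25
    f2 = g * (1 - (f1 / g) ** 2)
    return f1, f2
```

On the global front (x[1:] all zero) g = 1 and f_2 = 1 - f_1^2.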

HMGE covered the global Pareto frontier completely and evenly after 1,000 evaluations.

NSGA-II found only partial and inaccurate solutions after 5,000 evaluations.
[Figures: Pareto fronts found by HMGE and NSGA-II on ZDT6]

DDRSM is a critical part of the HMGE algorithm. At each step, DDRSM dynamically identifies the most significant design variables and builds local approximations based only on those variables. In effect, DDRSM performs a dynamic sensitivity analysis based on a patent-pending technology. The analysis is performed in the small sub-region where the gradient needs to be estimated, and requires just 0-7 model evaluations within that sub-region. DDRSM is also very fast: it typically needs just 15-30 milliseconds to build the local approximations and estimate a gradient.
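The published details of DDRSM are limited (the method is patent pending), so the sketch below is only a generic illustration of the idea described above: probe the model a few times in a small sub-region, rank variables by apparent influence, and fit a local linear model over the top-ranked variables only. Every name, parameter, and formula here is an assumption made for illustration, not the actual DDRSM algorithm.

```python
import random

def local_gradient(f, x, n_probes=6, radius=0.05, n_keep=4):
    """Generic illustration of a DDRSM-like gradient estimate (the actual
    patent-pending method is not public).  f is a single objective; x is
    the point where the gradient is needed."""
    n = len(x)
    base = f(x)
    # Probe the model a handful of times in a small sub-region around x.
    probes = []
    for _ in range(n_probes):
        d = [random.uniform(-radius, radius) for _ in range(n)]
        probes.append((d, f([xi + di for xi, di in zip(x, d)]) - base))
    # Crude influence score per variable: |sum of step * response change|.
    score = [abs(sum(d[i] * df for d, df in probes)) for i in range(n)]
    significant = sorted(range(n), key=lambda i: -score[i])[:n_keep]
    # One-variable least-squares slope over the significant variables only;
    # all other gradient components are treated as zero.
    grad = [0.0] * n
    for i in significant:
        num = sum(d[i] * df for d, df in probes)
        den = sum(d[i] ** 2 for d, _ in probes)
        grad[i] = num / den if den else 0.0
    return grad
```

With n_probes = 6 plus the base point, this sketch uses 7 model evaluations regardless of the number of design variables, mirroring the evaluation budget quoted above.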

Comparison of HMGE with state-of-the-art commercial multi-objective optimization algorithms on a number of challenging benchmarks and real-life problems has shown that HMGE finds global Pareto frontiers more accurately and 2-10 times faster.

This creates the following benefits to users:

  • HMGE eliminates the need for DOE and surrogate models for global approximation
  • HMGE eliminates the need to perform a sensitivity analysis because reducing the number of design variables is not necessary
  • HMGE can be applied directly for optimization of computationally expensive simulation models
  • HMGE can be used for optimizing low- and high-dimensional models

The HMGE algorithm also supports parallelization, which reduces optimization time by an additional factor of 4-8.
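As a generic illustration of population-level parallelism (not the HMGE scheduler itself): candidate designs within one generation are independent, so they can be evaluated concurrently.

```python
from concurrent.futures import ProcessPoolExecutor

def evaluate_population(model, population, workers=8):
    """Evaluate independent candidate designs concurrently.  For expensive
    simulation models the wall-clock speedup approaches the worker count.
    (Call from under `if __name__ == "__main__":` on platforms that spawn.)"""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(model, population))
```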

HMGE is the best choice for solving global multi-objective optimization tasks for simulation models with low to moderate evaluation times, where 200-1,000 model evaluations are considered a reasonable budget for finding Pareto-optimal solutions.
