Multi-Gradient Explorer Algorithm

MGE follows an approach that is conventional in optimization practice: it starts from an initial point and iterates toward the Pareto frontier until a Pareto optimal point is found. It then takes another initial point, iterates again, and so on.

The MGE algorithm can be used in two modes: (a) improvement of a given initial point, and (b) approximation of the entire Pareto frontier.

In mode (a), MGE usually performs about 4-7 steps and finds several Pareto optimal points that improve the given initial design (see FIG.1). Assuming that the DDRSM response surface method is used for estimating gradients, it usually takes only about 15-30 model evaluations to approach the Pareto frontier, regardless of task dimension. This makes MGE the best choice for computationally expensive simulation models when covering the entire Pareto frontier is prohibitively expensive.
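
The exact MGE step and the DDRSM gradient estimator are not disclosed here, so the Python sketch below only illustrates one common way to build such an improvement step: finite-difference gradients stand in for DDRSM, and the shared descent direction is taken as the negated minimum-norm combination of the objective gradients, found with a short Frank-Wolfe loop. All function names and parameters are illustrative assumptions, not part of MGE.

    import numpy as np

    def finite_diff_gradients(objectives, x, eps=1e-6):
        """Estimate each objective's gradient at x (a stand-in for DDRSM)."""
        f0 = np.array([f(x) for f in objectives])
        grads = np.zeros((len(objectives), x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += eps
            grads[:, j] = (np.array([f(xp) for f in objectives]) - f0) / eps
        return grads

    def common_descent_direction(grads, iters=50):
        """Minimum-norm convex combination of the gradients (Frank-Wolfe loop)."""
        m = grads.shape[0]
        alpha = np.full(m, 1.0 / m)
        for _ in range(iters):
            v = alpha @ grads                  # current point in the convex hull
            i = int(np.argmin(grads @ v))      # vertex minimizing <g_i, v>
            s = grads[i] - v
            denom = float(s @ s)
            if denom < 1e-18:
                break
            gamma = float(np.clip(-(v @ s) / denom, 0.0, 1.0))
            alpha *= (1.0 - gamma)
            alpha[i] += gamma
        # Negated min-norm combination: a direction that locally improves all
        # objectives whenever such a direction exists.
        return -(alpha @ grads)

    def improvement_step(objectives, x, lr=0.1):
        """One hypothetical step moving a design toward the Pareto frontier."""
        grads = finite_diff_gradients(objectives, x)
        return x + lr * common_descent_direction(grads)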

In mode (b), MGE starts sequentially from randomly distributed initial points. Since the initial points are uniformly distributed in the design space, the Pareto optimal points found over multiple runs are expected to cover the entire Pareto frontier (see FIG.2).
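
As a rough illustration of mode (b), the sketch below restarts a single-point improvement routine (for example, the hypothetical improvement_step above, iterated until no further progress) from initial points sampled uniformly within box bounds. The routine is passed in as a callable; none of the names below come from the actual MGE implementation.

    import numpy as np

    def approximate_pareto_frontier(improve_to_pareto, objectives,
                                    lower, upper, n_starts=50, seed=0):
        """Collect candidate Pareto points from uniformly sampled starts (mode (b))."""
        rng = np.random.default_rng(seed)
        lower = np.asarray(lower, dtype=float)
        upper = np.asarray(upper, dtype=float)
        candidates = []
        for _ in range(n_starts):
            # Uniformly distributed initial point inside the design-space box.
            x0 = lower + rng.random(lower.size) * (upper - lower)
            # Iterate from this start until a (locally) Pareto optimal point is reached.
            candidates.append(improve_to_pareto(objectives, x0))
        return candidates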

Table 1 and FIG.1 illustrate the MGE algorithm in the mode of improvement of a given initial point.

Table 1 Improvement of a given design by MGE optimization algorithm

                        Evaluation #      f1      f2      f3
  Initial Point                    1   12.26   5.394   14.05
  Pareto Optimal Point             9    3.65    1.38    2.84

As Table 1 shows, the initial point has been significantly improved with respect to all objective functions. The target Pareto optimal point was found after 9 model evaluations. After that, MGE spent 26 additional model evaluations estimating gradients via the DDRSM method and trying to improve point #9. MGE stopped because further improvement of point #9 was not possible, and the point was declared Pareto optimal. Next, all evaluated points were compared against each other with respect to all objectives: dominated points were declared transitional, and the remaining points were declared Pareto optimal (see FIG.1). The majority of the evaluated points from #10 to #35 turned out to be Pareto optimal in this optimization run. Thus, the user obtains 15 Pareto optimal points from 35 model evaluations.
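
The dominance comparison that separates transitional points from Pareto optimal points can be written down directly. The sketch below is a straightforward (quadratic-time) version of that filter, assuming all objectives are minimized; it is an illustration, not code taken from MGE.

    import numpy as np

    def dominates(fa, fb):
        """True if design a dominates design b: no worse in every objective, better in at least one."""
        return bool(np.all(fa <= fb) and np.any(fa < fb))

    def classify_points(objective_values):
        """Label each evaluated point as 'Pareto optimal' or 'transitional'."""
        F = np.asarray(objective_values, dtype=float)
        labels = []
        for i in range(len(F)):
            dominated = any(dominates(F[j], F[i]) for j in range(len(F)) if j != i)
            labels.append("transitional" if dominated else "Pareto optimal")
        return labels

    # The two designs from Table 1: point #9 dominates the initial point.
    print(classify_points([[12.26, 5.394, 14.05],   # initial point (evaluation #1)
                           [3.65, 1.38, 2.84]]))    # improved point (evaluation #9)
    # ['transitional', 'Pareto optimal']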

[FIG.1: two image panels]

FIG.1 shows the results of improving a given point with the MGE algorithm. MGE started from the initial point (orange triangle marker on the diagrams) and performed a few steps toward the Pareto frontier; it found 15 Pareto optimal points at the cost of 35 model evaluations.

FIG.2 illustrates the ability of the MGE algorithm to cover the entire Pareto frontier. In this scenario, MGE starts sequentially from randomly distributed initial points and iterates toward the Pareto frontier.

[FIG.2: two image panels]

FIG.2 shows the Pareto optimal points found by the MGE algorithm for benchmark (1). MGE started optimization sequentially from randomly distributed initial points and covered the entire Pareto frontier evenly.

FIG.2 shows that the MGE algorithm approximates the entire Pareto frontier and covers it evenly. MGE is computationally efficient: it spent 2420 model evaluations and found 1156 Pareto optimal points, i.e. 2420/1156 ≈ 2.1 model evaluations per Pareto optimal point.

Constrained Optimization

MGE uses a technique similar to the Modified Method of Feasible Directions (MMFD) for constrained optimization. Since MMFD was designed for constrained single-objective optimization, it could not be used as-is in the MGE algorithm and has been adapted to the needs of multi-objective optimization.

The current implementation of the MGE algorithm uses the MMFD-like constrained optimization approach described above for tasks with a relatively small number of constraints, and automatically switches to the Hybrid Multi-Gradient Explorer (HMGE) optimization algorithm for tasks with a larger number of constraints. MGE employs the hybrid HMGE code only in the infeasible region, and switches back to the pure gradient-based MGE technique as soon as a feasible point has been found, as sketched below.
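
The sub-algorithms themselves are not public, so the sketch below only mirrors the switching policy described above: the MMFD-like gradient step and the HMGE step are passed in as callables, and the constraint-count threshold is an illustrative assumption rather than a documented MGE parameter.

    def is_feasible(constraints, x, tol=1e-9):
        """Constraints are assumed to be given in the form g(x) <= 0."""
        return all(g(x) <= tol for g in constraints)

    def constrained_step(x, constraints, mmfd_like_step, hmge_step,
                         many_constraints=20):
        """Pick the sub-algorithm for the next step, following the strategy above."""
        if len(constraints) > many_constraints and not is_feasible(constraints, x):
            # Many constraints and an infeasible point: use the hybrid HMGE step.
            return hmge_step(x)
        # Feasible point (or only a few constraints): pure gradient-based step.
        return mmfd_like_step(x)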

The HMGE algorithm has demonstrated high efficiency and reliability on the most challenging real-life constrained optimization tasks. It finds feasible regions faster and more reliably than pure gradient-based techniques. Thus, the combination of MGE and HMGE is a powerful design optimization tool for real-life tasks with up to thousands of design variables and up to hundreds of constraints.
