Abstract

Bayesian optimization is a metamodel-based global optimization approach that balances exploration and exploitation. It has been widely used to solve single-objective optimization problems. In engineering design, making trade-offs between multiple conflicting objectives is common. In this work, a multi-objective Bayesian optimization approach is proposed to obtain Pareto solutions. A novel acquisition function is proposed to determine the next sample point, which helps improve the diversity and convergence of the Pareto solutions. The proposed approach is compared with several state-of-the-art metamodel-based multi-objective optimization approaches on four numerical examples and one engineering case. The results show that the proposed approach can obtain satisfactory Pareto solutions at a significantly reduced computational cost.

1 Introduction

In simulation-based design optimization, the objective functions are usually expensive to evaluate [1]. To reduce the number of function evaluations during the search, metamodels or surrogates, including Kriging models [2,3], radial basis functions [4], artificial neural networks [5], and polynomial-based approximations [6], are commonly used to assist design optimization. In particular, Kriging is a metamodeling approach that supports sequential sampling to reduce the overall number of samples and has been extensively applied to design optimization under uncertainty [7–12].

Recently, a new metamodel-assisted global optimization approach, Bayesian optimization (BO), has attracted increasing research attention. The uniqueness of BO is that its sequential sampling strategies are designed to strike a balance between exploration and exploitation for robust global optimization under uncertainty. In addition to the surrogate model that approximates the objective function, an acquisition function is constructed in the same solution space to guide the sampling procedure in BO. In each iteration, the new sample is taken to maximize the acquisition function, instead of searching directly on the objective surrogate. Commonly used acquisition functions include expected improvement (EI) [13,14], probability of improvement (PI) [15,16], and lower confidence bound (LCB) [17,18], all of which account for the uncertainty associated with the objective values predicted by the surrogates. BO has been studied for a wide range of applications, such as robotics [19,20], combinatorial optimization [21], machine learning [22], simulation-based design optimization [23,24], and others.

Most of the existing BO studies have focused on single-objective optimization. In many real-world problems, multiple objectives need to be considered, and they are likely to conflict with each other. The purpose of multi-objective optimization is to find a set of optimum solutions in the Pareto sense, such that no solution is superior to another with respect to all objectives [25,26]. Heuristic algorithms, such as multi-objective particle swarm optimization and the multi-objective genetic algorithm, are commonly used to obtain the Pareto solutions of multi-objective optimization problems because of their robustness in convergence [27–29]. Since these population-based algorithms require a large number of function evaluations to converge to Pareto solutions [26,30,31], a common strategy to improve efficiency is to use metamodels to replace the objectives [32–37]. In metamodel-assisted multi-objective optimization approaches, one metamodel is usually constructed separately for each objective. The cost of evaluating the surrogates is much lower than that of evaluating the original objective functions.

Very limited work has been done on how to formulate BO for efficient multi-objective optimization. The research focus of multi-objective BO (MOBO) is how to design acquisition functions for solving multi-objective problems efficiently. In the existing work, the EI function has been extended to different forms in MOBO. A straightforward extension is to define the multi-objective EI as the expected improvement based on the joint probability of improvement from individual objectives, under the assumption of independence between objectives [38–41]. Knowles [38] proposed the ParEGO approach, an extension of the single-objective efficient global optimization or EI approach. Ponweiser et al. [40] introduced a new EI-based multi-objective optimization approach, which uses the S-metric, or hypervolume contribution, to decide which sample point to evaluate next. Keane [42] combined design of experiment methods with Kriging models to enable the parallel evolution of multi-objective Pareto sets. Feliot et al. [43] proposed a Bayesian multi-objective optimization (BMOO) approach that handles multiple objectives and nonlinear constraints simultaneously using a one-step look-ahead Bayesian strategy. Forrester et al. [44] defined the parental improvement based on the Euclidean distance. Bautista [45] defined it as the minimum improvement among all objectives. Pandita et al. [46] provided a reformulation of the expected improvement over the dominated hypervolume to solve stochastic multi-objective optimization problems. Calandra et al. [47] proposed exploiting the metamodels to improve the estimation of the final Pareto front. However, in these approaches, calculating the expected improvement requires computing multi-dimensional integrals, which usually relies on Monte Carlo sampling and is computationally expensive. To address this problem, Zhan et al. [48] proposed the EI matrix (EIM) for multi-objective problems, in which the elements of the EIM are the single-objective EIs of each sample point with respect to each objective, and an aggregated scalar value based on all elements of the EIM is used to guide the sampling.

In this paper, a new MOBO approach with a new acquisition function is proposed to solve multi-objective optimization problems. In the proposed approach, two metrics, the relative hyperarea difference (RHD) and the overall spread (OS) [49,50], are modified to better measure the convergence and diversity of Pareto solutions. The new acquisition function is defined to simultaneously improve the convergence and diversity of the Pareto solutions: it combines the modified RHD and OS, and employs LCB functions to account for the metamodel uncertainty.

The rest of this paper is organized as follows: in Sec. 2, the background of multi-objective optimization and BO is introduced. In Sec. 3, the proposed MOBO approach is presented. In Sec. 4, the performance and efficiency of the proposed MOBO approach are demonstrated using four numerical examples and an engineering case. The proposed approach is also compared with existing metamodel-based approaches. The advantages of the proposed approach are analyzed and summarized. Section 5 concludes the paper.

2 Background

2.1 Multi-Objective Optimization.

In general, multi-objective optimization problems are formulated as
$$\begin{aligned}
\min_{\mathbf{x}}\; & \mathbf{F}(\mathbf{x}) = \{f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_i(\mathbf{x}), \ldots, f_M(\mathbf{x})\}\\
\text{s.t.}\; & g_j(\mathbf{x}) \le 0, \quad j = 1, 2, \ldots, J\\
& \mathbf{x}_{lb} \le \mathbf{x} \le \mathbf{x}_{ub}
\end{aligned}$$
(1)
where F(x) is a vector of M objective functions, among which at least two are in conflict, x = (x1, x2, …, xN)T is the design variable vector, xlb and xub are the lower and upper bounds, respectively, and g = (g1, g2, …, gJ) are the constraints. Since there are trade-offs among the objective functions, the problem in Eq. (1) generally has a set of optimum solutions in the Pareto sense. That is, there is no optimum that is superior to all others in terms of every objective [25]. These solutions are termed the Pareto set or Pareto frontier.

2.2 Kriging Model.

Kriging, or the Gaussian process regression model, is usually used as the surrogate in BO. It provides predictions of not only the mean value at an unobserved design location but also the associated uncertainty as a variance.

Generally, a Kriging model treats the expensive function f(x) as a realization of a stochastic process Y(x) [51,52]
$$Y(\mathbf{x}) = \mu + Z(\mathbf{x})$$
(2)
where μ and Z(x) are the mean and the error term of the Gaussian process, respectively. In this paper, simple Kriging is used, and μ is a constant; Z(x) follows a normal distribution with zero mean and nonzero covariance. The Gaussian spatial correlation function between the values of function f at two locations x and x′ is given by
$$\mathrm{cov}\big(Z(\mathbf{x}), Z(\mathbf{x}')\big) = \sigma^2 \rho(\mathbf{x}, \mathbf{x}') = \sigma^2 \exp\left\{ -\sum_{j=1}^{u} \theta_j \big(x_j - x'_j\big)^2 \right\}$$
(3)
where σ2 is the process variance that determines the overall magnitude of the prediction uncertainty, and θ1, …, θu are the roughness parameters. The hyperparameters σ2 and θ1, …, θu are obtained by maximum likelihood estimation.
Given some sample points X = {x1, x2,…, xn} and their responses f(X) = {f(x1), f(x2),…, f(xn)}, the predicted mean value and the prediction error (mean squared error) at an unobserved location xp can be calculated as
$$\hat{f}(\mathbf{x}_p) = \hat{\mu} + \mathbf{r}_D(\mathbf{x}_p)^{T} \mathbf{R}_D^{-1} \big(f(\mathbf{X}) - \mathbf{J}_n \hat{\mu}\big)$$
(4)
$$s^2(\mathbf{x}_p) = \sigma^2 \left( 1 - \mathbf{r}_D(\mathbf{x}_p)^{T} \mathbf{R}_D^{-1} \mathbf{r}_D(\mathbf{x}_p) + \frac{\big(1 - \mathbf{J}_n^{T} \mathbf{R}_D^{-1} \mathbf{r}_D(\mathbf{x}_p)\big)^2}{\mathbf{J}_n^{T} \mathbf{R}_D^{-1} \mathbf{J}_n} \right)$$
(5)
respectively, where μ^ is the maximum likelihood estimation of μ, Jn is an n-dimensional vector with all elements as ones, RD is the covariance matrix with elements RD(i, j) = ρ(xi, xj), xi, xjX, and rD(xp) is the vector of correlations between xp and sample points, i.e., rDi(xp) = ρ(xp, xi), xiX. More details of Kriging models can be found in Refs. [51,53].
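The predictor in Eqs. (4) and (5) can be sketched in a few lines of NumPy. This is an illustrative sketch under simplifying assumptions: the roughness parameters θ are fixed rather than fitted by maximum likelihood, and a small nugget is added to the correlation matrix for numerical stability. The names `corr` and `kriging_predict` are hypothetical, not from the paper.

```python
import numpy as np

def corr(X1, X2, theta):
    """Gaussian correlation rho(x, x') = exp(-sum_j theta_j (x_j - x'_j)^2), Eq. (3)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2 * theta).sum(axis=-1)
    return np.exp(-d2)

def kriging_predict(X, y, xp, theta=None, nugget=1e-10):
    """Kriging mean, Eq. (4), and mean squared error, Eq. (5), at points xp.

    Illustrative only: theta is fixed instead of being estimated by
    maximum likelihood, and a nugget stabilizes the matrix inverse.
    """
    X, y, xp = np.atleast_2d(X), np.asarray(y, float), np.atleast_2d(xp)
    n = X.shape[0]
    theta = np.full(X.shape[1], 1.0) if theta is None else np.asarray(theta, float)
    R = corr(X, X, theta) + nugget * np.eye(n)        # R_D
    Rinv = np.linalg.inv(R)
    J = np.ones(n)                                    # J_n, vector of ones
    mu = (J @ Rinv @ y) / (J @ Rinv @ J)              # estimate of the constant mean
    sigma2 = (y - mu) @ Rinv @ (y - mu) / n           # process variance estimate
    r = corr(X, xp, theta)                            # r_D(x_p), shape (n, m)
    mean = mu + r.T @ Rinv @ (y - mu * J)             # Eq. (4)
    u = 1.0 - J @ Rinv @ r                            # 1 - J_n^T R_D^-1 r_D(x_p)
    mse = sigma2 * (1.0 - np.sum(r * (Rinv @ r), axis=0)
                    + u ** 2 / (J @ Rinv @ J))        # Eq. (5)
    return mean, np.maximum(mse, 0.0)
```

At the observed sample points the predictor interpolates the data and the mean squared error vanishes, which is the behavior the sequential sampling in BO relies on.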

2.3 Bayesian Optimization.

Bayesian optimization is a metamodel-based approach for global optimization under uncertainty with sequential sampling. The most commonly used metamodel for approximating the unknown objective function f(x) is Kriging. An acquisition function is defined to make a trade-off between exploration (sampling in the most uncertain region) and exploitation (sampling in the region with the best predictions) during the sampling or searching process. Commonly used acquisition functions include the EI, PI, and LCB functions. Here, the LCB function is introduced. More details about the EI and PI functions can be found in Ref. [24].

Consider the optimization problem
$$\min_{\mathbf{x}} f(\mathbf{x})$$
(6)
Given some samples X = {x1, x2,…, xn} and their responses f(X) = {f(x1), f(x2),…, f(xn)}, the LCB acquisition function is expressed as [17,18]
$$f_{LCB}(\mathbf{x}) = \hat{f}(\mathbf{x}) - k\, s(\mathbf{x})$$
(7)
where f^(x) represents the Kriging model, s(x) is the square root of the mean squared error, and k dictates the exploitation–exploration balance. A new sample point is selected by minimizing the LCB function. With the new sample point, the Kriging model is updated. The iteration continues until a stop criterion is met. The advantage of using LCB as the acquisition function is that there is no need to calculate integrals as in the EI and PI functions; the calculation of high-dimensional integrals is expensive for multi-objective problems.
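As a minimal sketch of Eq. (7), the LCB and the selection of the next sample over a discrete candidate set can be written as follows. The surrogate's predicted mean and standard deviation are assumed to be given, and the names `lcb` and `next_sample` are illustrative:

```python
import numpy as np

def lcb(mean, std, k=2.0):
    """Lower confidence bound, Eq. (7): f_LCB(x) = fhat(x) - k * s(x)."""
    return np.asarray(mean, float) - k * np.asarray(std, float)

def next_sample(cands, mean, std, k=2.0):
    """Pick the candidate minimizing the LCB: a low predicted mean
    (exploitation) is traded off against a large prediction error
    (exploration), weighted by k."""
    return cands[int(np.argmin(lcb(mean, std, k)))]
```

For two candidates with equal predicted means, the one with the larger prediction error has the smaller LCB and is sampled first, which is exactly the exploration preference discussed above.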

3 The Proposed MOBO Formulation

In the proposed multi-objective Bayesian optimization formulation, two metrics for the quality of the Pareto set, the RHD and the OS, are modified and used to define the new acquisition function of MOBO.

3.1 Modified Quality Metrics.

The RHD and OS are commonly used to measure the quality of Pareto frontiers [49,50]. As illustrated in Fig. 1, RHD and OS measure the convergence and diversity of the obtained Pareto frontier, respectively. The RHD is based on the area of the polygon formed by the Pareto frontier and an ideal solution that is best in all objectives: the smaller the RHD, the better the convergence of the Pareto frontier. The OS is based on the area of the bounding box of the frontier: the larger the OS, the better the diversity of the Pareto frontier.

Fig. 1
Quality metrics: (a) relative hyperarea difference and (b) overall spread
More formally, if P = {a, b, c, d} is the current Pareto set, the RHD and OS are calculated as
$$\mathrm{RHD} = \frac{HA(p_{bad}, p_{good}) - HA(p_{bad}, a, b, c, d)}{HA(p_{bad}, p_{good})}$$
(8)
$$\mathrm{OS} = \frac{HA[\mathrm{extremes}(P)]}{HA(p_{bad}, p_{good})}$$
(9)
where HA[extremes(P)] represents the hyperarea bounded by the two extreme points of the Pareto frontier, pgood is an ideal and hypothetically good solution, pbad is a known and extremely bad solution that is much worse than the frontier, and HA(pbad, pgood) represents the area of the bounding box formed by pgood and pbad.

The main disadvantages of the RHD and OS are that (1) the extremely good and bad solutions pgood and pbad need to be defined before the two metrics can be calculated and (2) when a new Pareto solution is found between two existing Pareto solutions, the value of RHD for the Pareto front will increase instead of monotonically decreasing. This contradicts the original intention that, when two Pareto frontiers are compared, the one with the smaller RHD should be the better one.

To overcome the above two shortcomings in calculating RHD and OS and using them to measure the improvement, the modified hyperarea difference (MHD) and modified overall spread (MOS) are proposed here, as shown in Fig. 2. The shaded areas in Figs. 2(a) and 2(b) represent the values of the MHD and MOS, respectively. MHD and MOS are not ratios as in the original RHD and OS; thus, their calculations do not rely on the hypothetically good and bad solutions. The areas are calculated based on the existing frontier. Furthermore, the value of MHD decreases monotonically when a new Pareto point is found between two existing Pareto solutions. As illustrated in Fig. 3(a), the area for MHD monotonically reduces as new Pareto solutions (e.g., point e) are found and included in the Pareto set, which is different from RHD as in Fig. 3(b). Therefore, when MHD is used in the acquisition function, it encourages finding more Pareto solutions, which provides designers with more choices.

Fig. 2
Modified quality metrics: (a) MHD and (b) MOS
Fig. 3
The changes in the values of MHD and RHD when a new Pareto solution is found. (a) Change in the value of MHD and (b) change in the value of RHD.

The effects of new Pareto solutions on the new metrics are further illustrated with a simple example in Fig. 4. Based on the four Pareto solutions in Fig. 4(a), the MHD and MOS for the two-objective minimization problem are calculated to be 3.0 and 6.25, respectively. When a new Pareto solution outside the range of the existing ones is inserted, either at the upper left or the lower right of the region as shown in Fig. 4(b), the diversity is improved with a larger MOS value. As shown in Fig. 4(c), a new solution that dominates some existing solutions improves the convergence with a smaller MHD value. As shown in Fig. 4(d), a new solution that does not dominate any existing solution also improves the convergence and reduces the MHD value. Therefore, a new Pareto solution always improves either diversity (increasing MOS) or convergence (reducing MHD).
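For two objectives, one plausible reading of Fig. 2 can be sketched as follows: MOS as the area of the bounding box spanned by the frontier's extreme points, and MHD as the area inside that box not yet dominated by the frontier, which shrinks monotonically as new Pareto points are added. Since the paper defines both metrics graphically, this sketch is an assumption rather than the authors' exact formulas:

```python
import numpy as np

def pareto_filter(F):
    """Keep the non-dominated rows of F (all objectives minimized)."""
    F = np.asarray(F, float)
    keep = [i for i, p in enumerate(F)
            if not any(np.all(q <= p) and np.any(q < p)
                       for j, q in enumerate(F) if j != i)]
    return F[keep]

def mhd_mos(F):
    """Assumed 2-objective reading of the modified metrics:
    MOS = area of the box spanned by the frontier's extremes;
    MHD = area inside that box not dominated by the frontier, so it
          decreases monotonically as new Pareto solutions are inserted."""
    P = pareto_filter(F)
    P = P[np.argsort(P[:, 0])]            # ascending f1, hence descending f2
    lo, hi = P.min(axis=0), P.max(axis=0)
    mos = (hi[0] - lo[0]) * (hi[1] - lo[1])
    dominated = 0.0                       # staircase area dominated by P
    for i in range(len(P)):
        right = P[i + 1, 0] if i + 1 < len(P) else hi[0]
        dominated += (right - P[i, 0]) * (hi[1] - P[i, 1])
    return mos - dominated, mos
```

Under this reading, inserting a point between two existing Pareto solutions enlarges the dominated staircase and reduces MHD, while a new extreme point widens the bounding box and increases MOS, matching the behavior described for Fig. 4.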

Fig. 4
The effect of a new Pareto solution on the new metrics

3.2 The Proposed Acquisition Function.

Based on the new Pareto quality metrics MHD and MOS, a new acquisition function is proposed to find new Pareto solutions in the unsearched space for MOBO. To consider the uncertainty associated with the objective surrogates, LCB functions are used in the search to improve robustness and computational efficiency, since the LCB function does not require calculating integrals as in the EI function. The LCB functions can be expressed as
$$\mathbf{f}_{LCB}(\mathbf{x}) = \left[f_{LCB,1}(\mathbf{x}), \ldots, f_{LCB,M}(\mathbf{x})\right], \quad \text{where } f_{LCB,i}(\mathbf{x}) = \hat{f}_i(\mathbf{x}) - k_i\, s_i(\mathbf{x}),\; i = 1, 2, \ldots, M$$
(10)
where M is the number of objectives that are approximated by Kriging models, and f^i(x) and si(x) are the predicted mean and standard deviation of the Kriging model for the ith objective at x, respectively. Here, the coefficients ki are chosen to be 1. Larger coefficient values encourage the search in uncertain regions for exploration, whereas smaller ones encourage exploitation. The effect of the LCB functions on searching the solutions is illustrated in Fig. 5. Suppose that there are four candidate solutions xi (i = 1, 2, 3, 4). The predicted means at x1 and x2 are almost the same, but the predicted variance at x1 is larger. Hence, x1 has a higher priority to be sampled than x2. x3 and x4 have the same prediction variance, while x4 has a better predicted mean. Therefore, x4 has a higher priority to be sampled than x3. For other cases where sample points cannot be directly compared, the acquisition function based on the MHD and MOS is used to compare their priorities with a balance between exploitation and exploration.
Fig. 5
The LCB acquisition functions incorporate the uncertainty of Pareto solutions
The purpose of the proposed acquisition function is to find a solution that maximizes the quality of the Pareto solutions. As illustrated in Sec. 3.1, a new Pareto solution can always improve either the diversity or convergence of the Pareto set. Thus, the proposed acquisition function is defined as
$$a(\mathbf{x}) = \max\big(I_{MHD}(\mathbf{x}),\, I_{MOS}(\mathbf{x})\big)$$
(11)
where IMHD(x) and IMOS(x) represent the relative improvements in MHD and MOS, respectively, as
$$I_{MHD}(\mathbf{x}) = \left| \frac{\mathrm{MHD}(\{D_n, \mathbf{x}\}) - \mathrm{MHD}(D_n)}{\mathrm{MHD}(D_n)} \right|, \quad I_{MOS}(\mathbf{x}) = \left| \frac{\mathrm{MOS}(\{D_n, \mathbf{x}\}) - \mathrm{MOS}(D_n)}{\mathrm{MOS}(D_n)} \right|, \quad \mathbf{x}_{lb} \le \mathbf{x} \le \mathbf{x}_{ub}$$
(12)
where Dn represents the existing n sample points, MHD(Dn) and MOS(Dn) represent the MHD and MOS based on Dn, respectively, and MHD({Dn, x}) and MOS({Dn, x}) represent the updated MHD and MOS when x is added to the sample set. In calculating MHD({Dn, x}) and MOS({Dn, x}), the LCB functions are used as the objectives. By maximizing the acquisition function in Eq. (11), a new sample point is selected to update the Kriging models.

The basic requirement of multi-objective optimization is to obtain a sufficient number of Pareto solutions with good convergence and diversity. Since a new Pareto solution always brings an improvement of either MHD or MOS, our acquisition function is defined to simultaneously and rapidly improve both the convergence and diversity of the Pareto solutions.

There is an additional issue with the definition of the acquisition function in Eq. (11). As illustrated in Fig. 6, the dashed line represents the true Pareto frontier and the triangles represent the existing Pareto solutions. It can be seen that a new Pareto solution can only be found in the non-dominated shaded region, denoted by Ω. From Eqs. (11) and (12), it can be concluded that the value of a(x) is positive when fLCB(x) is in the non-dominated region Ω. Otherwise, the value of a(x) is equal to zero and provides no guidance for sequential sampling. When there are already a large number of Pareto solutions in the current sample set, the non-dominated region becomes very small, which makes it difficult to find the optimal solution of the acquisition function. To resolve this efficiency issue, the acquisition function is further modified to
$$a(\mathbf{x}) = \begin{cases}
\max\big(I_{MHD}(\mathbf{x}),\, I_{MOS}(\mathbf{x})\big), & \text{when } \mathbf{f}_{LCB}(\mathbf{x}) \text{ is in } \Omega\\[4pt]
-\min\limits_{j=1,\ldots,K} \left\| \mathbf{f}_{LCB}(\mathbf{x}) - \mathbf{f}(\mathbf{X}_j) \right\|_2, & \text{when } \mathbf{f}_{LCB}(\mathbf{x}) \text{ is not in } \Omega
\end{cases}$$
(13)
where the Xj (j = 1, 2, …, K) represent the current Pareto solutions, and ‖fLCB(x) − f(Xj)‖2 denotes the Euclidean distance in the output space between fLCB(x) and the original objective values of the existing Pareto solution Xj. The additional component in Eq. (13) provides guidance for finding the optimal solution of the acquisition function when the non-dominated region becomes smaller as the search proceeds. It should be noted that the additional component in Eq. (13) does not change the optimal solution of the acquisition function.
Fig. 6
The non-dominated region in the objective space

3.3 The New MOBO Procedure.

The proposed MOBO approach begins with initial sampling. The basic requirement is that the initial sample points are distributed uniformly in the design space so that every region has an equal opportunity to be explored. Hence, Latin hypercube sampling (LHS) [54,55] can be used to generate the initial sample set.

Fig. 7
The flowchart of the proposed MOBO approach

The MOBO procedure is summarized in Fig. 7, which includes the following steps:

  • Step 1: Generate an initial sample set by LHS and obtain the true responses from the original objectives.

  • Step 2: Construct the initial Kriging models based on the initial sample points, one model for each objective. The design and analysis of computer experiments (DACE) toolbox [56] is used to construct the Kriging models.

  • Step 3: Sort the sample points to obtain the current Pareto solutions. Here, a fast non-dominated sorting approach is used [57]. In this study, the sorting approach is simplified in two ways to save the sorting time: (1) the simplified sorting approach only divides the sample points into dominated and non-dominated solutions, while the original approach classifies the sample points into multiple dominance levels and (2) the sample points that have been classified as dominated solutions are not sorted in the subsequent iterations.

  • Steps 4 and 5: The next sample point is determined by maximizing the proposed acquisition function. The new sample point is used to update the Kriging models and the acquisition function.

  • Step 6: If the stop criterion is not met, go to Step 3. Otherwise stop and output the Pareto solutions.

The optimization algorithm terminates when the following two conditions are met simultaneously:

  1. When the difference between the current quality metrics and the ones in the previous iteration is less than a specific value (e.g., 1% in this study), or the number of iterations reaches a predetermined maximum level, which is set as 200 in this study.

  2. When the optimization procedure finds at least k solutions so that a sufficient number of Pareto solutions give the decision maker enough choices. In this study, k is set as 20.
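The six-step procedure above can be condensed into a skeleton like the following. It is a sketch under stated simplifications: plain uniform sampling stands in for LHS, a random candidate set stands in for the inner acquisition optimizer, the surrogate fitting of Step 2 is folded into the user-supplied `acquisition` callable, and the 1% metric-convergence check is omitted; `mobo` and `pareto_mask` are illustrative names.

```python
import numpy as np

def pareto_mask(Y):
    """Boolean mask of the non-dominated rows of Y: the simplified
    dominated/non-dominated split used in Step 3."""
    Y = np.asarray(Y, float)
    mask = np.ones(len(Y), bool)
    for i in range(len(Y)):
        dom = np.all(Y <= Y[i], axis=1) & np.any(Y < Y[i], axis=1)
        mask[i] = not dom.any()
    return mask

def mobo(objectives, bounds, acquisition, n_init=30, max_iter=200,
         min_pareto=20, seed=0):
    """Skeleton of the Fig. 7 procedure (Steps 1-6). `objectives` maps an
    (n, d) array to an (n, M) array of objective values; `acquisition(X,
    Y, cand)` scores candidate points given the data so far."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(bounds, float)
    # Step 1: initial design (plain uniform here; the paper uses LHS)
    X = lb + (ub - lb) * rng.random((n_init, lb.size))
    Y = objectives(X)
    for _ in range(max_iter):
        # Steps 3-5: score random candidates and evaluate the best one
        cand = lb + (ub - lb) * rng.random((256, lb.size))
        x_new = cand[int(np.argmax(acquisition(X, Y, cand)))]
        X = np.vstack([X, x_new])
        Y = np.vstack([Y, objectives(x_new[None, :])])
        # Step 6 (simplified): stop once enough Pareto solutions exist;
        # the paper additionally checks a 1% change in the quality metrics
        if pareto_mask(Y).sum() >= min_pareto:
            break
    m = pareto_mask(Y)
    return X[m], Y[m]
```

In a faithful implementation, `acquisition` would refit the Kriging models, evaluate the LCB vector of Eq. (10), and score candidates with Eq. (13).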

4 Examples and Results

In this section, several commonly used numerical examples (ZDT1, ZDT2, ZDT3, and FON) adapted from Refs. [32,58] and an engineering case adapted from Ref. [59] are used to demonstrate the applicability and efficiency of the proposed MOBO approach.

Since the EIM [48] and BMOO [43] approaches are the most relevant work, the proposed approach is compared with them on the test examples. For the EIM approaches, three multi-objective criteria were developed by combining the elements of the EIM into scalar functions in three different ways: (1) Euclidean distance-based EIM (EIMe), (2) maximin distance-based EIM (EIMm), and (3) hypervolume-based EIM (EIMh). For all approaches, the same initial sampling strategy is used and the initial sample size is set as 30. The same stop criterion described in Sec. 3.3 is also used for all approaches.

4.1 Numerical Examples.

Each numerical example is solved 30 times with each approach to account for the influence of randomness. The formulations of the four examples are listed in Table 1. For ZDT1, ZDT2, and ZDT3, a Kriging model is only constructed for f2(x) because f1(x) is very simple. For the FON problem, two Kriging models are constructed, one for each objective.

Table 1

The formulations of the numerical examples

ZDT1:
$$\text{minimize}\quad f_1(\mathbf{x}) = x_1, \quad f_2(\mathbf{x}) = g(\mathbf{x}) \times h(\mathbf{x})$$
$$\text{where}\quad g(\mathbf{x}) = 1 + \frac{9}{n-1} \sum_{i=2}^{n} x_i, \quad h(\mathbf{x}) = 1 - \sqrt{f_1(\mathbf{x})/g(\mathbf{x})}, \quad 0 \le x_i \le 1,\; i = 1, \ldots, n$$

ZDT2:
$$\text{minimize}\quad f_1(\mathbf{x}) = x_1, \quad f_2(\mathbf{x}) = g(\mathbf{x}) \times h(\mathbf{x})$$
$$\text{where}\quad g(\mathbf{x}) = 1 + \frac{9}{n-1} \sum_{i=2}^{n} x_i, \quad h(\mathbf{x}) = 1 - \big(f_1(\mathbf{x})/g(\mathbf{x})\big)^2, \quad 0 \le x_i \le 1,\; i = 1, \ldots, n$$

ZDT3:
$$\text{minimize}\quad f_1(\mathbf{x}) = x_1, \quad f_2(\mathbf{x}) = g(\mathbf{x}) \times h(\mathbf{x})$$
$$\text{where}\quad g(\mathbf{x}) = 1 + \frac{9}{n-1} \sum_{i=2}^{n} x_i, \quad h(\mathbf{x}) = 1 - \sqrt{f_1(\mathbf{x})/g(\mathbf{x})} - \big(f_1(\mathbf{x})/g(\mathbf{x})\big) \sin(10\pi f_1), \quad 0 \le x_i \le 1,\; i = 1, \ldots, n$$

FON:
$$\text{minimize}\quad f_1(\mathbf{x}) = 1 - \exp\left(-\sum_{i=1}^{3}\Big(x_i - \tfrac{1}{\sqrt{3}}\Big)^2\right), \quad f_2(\mathbf{x}) = 1 - \exp\left(-\sum_{i=1}^{3}\Big(x_i + \tfrac{1}{\sqrt{3}}\Big)^2\right), \quad -4 \le x_i \le 4,\; i = 1, \ldots, 3$$

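The four test problems in Table 1 follow the standard ZDT and Fonseca–Fleming (FON) definitions and can be written in NumPy as below; the function names are illustrative:

```python
import numpy as np

def _g(x):
    """Shared ZDT term g(x) = 1 + 9/(n-1) * sum_{i=2}^{n} x_i."""
    return 1.0 + 9.0 * x[:, 1:].sum(axis=1) / (x.shape[1] - 1)

def zdt1(x):
    x = np.atleast_2d(x)
    f1, g = x[:, 0], _g(x)
    return np.column_stack([f1, g * (1.0 - np.sqrt(f1 / g))])

def zdt2(x):
    x = np.atleast_2d(x)
    f1, g = x[:, 0], _g(x)
    return np.column_stack([f1, g * (1.0 - (f1 / g) ** 2)])

def zdt3(x):
    x = np.atleast_2d(x)
    f1, g = x[:, 0], _g(x)
    h = 1.0 - np.sqrt(f1 / g) - (f1 / g) * np.sin(10.0 * np.pi * f1)
    return np.column_stack([f1, g * h])

def fon(x):
    """Fonseca-Fleming problem with three variables on [-4, 4]."""
    x = np.atleast_2d(x)
    c = 1.0 / np.sqrt(3.0)
    f1 = 1.0 - np.exp(-np.sum((x - c) ** 2, axis=1))
    f2 = 1.0 - np.exp(-np.sum((x + c) ** 2, axis=1))
    return np.column_stack([f1, f2])
```

On the true ZDT1 front (x2 = … = xn = 0, hence g = 1), these definitions reduce to f2 = 1 − √f1, matching Eq. (14) below.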

As a demonstration, we use the first example, the ZDT1 problem (dimension n = 3), to present a detailed comparison of the different approaches. Figure 8 shows a typical set of Pareto frontiers obtained from one of the 30 runs for each approach. The true Pareto frontier of the ZDT1 problem can be analytically expressed as [60]
$$f_2 = 1 - \sqrt{f_1}$$
(14)
which is also plotted in Fig. 8. Based on Eq. (14), the ideal quality metrics of the Pareto frontier are calculated as MHD = 0.34 and MOS = 1.00.
Fig. 8
The obtained Pareto frontiers of the ZDT1 problem using different approaches

As shown in Fig. 8, the Pareto frontiers from the EIM’s, BMOO, and the proposed MOBO approaches are in good agreement with the true Pareto frontier. This indicates that these metamodel-based approaches can obtain satisfactory Pareto solutions.

The comparisons of the quality metrics and computational efficiency of the different approaches are summarized in Table 2, where FC denotes the number of original function evaluations. The ranges, means, and STDs of the MHD, MOS, and FC over the 30 runs are listed. Box charts of the MHD, MOS, and FC obtained by the different approaches for the ZDT1 problem are shown in Fig. 9.

Fig. 9
The box charts of the MHD, MOS, and FC obtained by different approaches. (a) Box chart of MHD, (b) box chart of MOS, and (c) box chart of FC.
Table 2

Comparison of the results from different approaches for the ZDT1 problem (range, mean, and STD over 30 runs)

Approach   MHD range      MHD mean (STD)   MOS range      MOS mean (STD)   FC range    FC mean (STD)
EIMe       [0.35, 0.38]   0.37 (0.01)      [1.00, 1.56]   1.13 (0.15)      [53, 131]   89.93 (18.63)
EIMm       [0.34, 0.37]   0.36 (0.01)      [1.01, 1.46]   1.11 (0.11)      [60, 150]   91.20 (19.54)
EIMh       [0.36, 0.37]   0.36 (0.00)      [1.00, 1.57]   1.11 (0.16)      [67, 207]   110.00 (33.12)
BMOO       [0.36, 0.43]   0.38 (0.02)      [0.95, 2.17]   1.25 (0.34)      [61, 175]   105.27 (28.45)
Proposed   [0.36, 0.37]   0.36 (0.00)      [0.99, 1.01]   1.01 (0.00)      [50, 80]    52.73 (6.60)

From Table 2 and Fig. 9, it can be seen that the mean quality metrics of the Pareto solutions obtained by all approaches are close to the ideal values. The proposed approach, however, shows the best stability with the minimum standard deviations. In terms of computational cost, the number of function evaluations of the proposed approach is significantly lower than those of the other approaches. The proposed approach also has the smallest standard deviation of FC. The results show that the proposed approach is more efficient and stable than the other approaches. To further compare the efficiency of the different approaches in finding a sufficient number of Pareto solutions, we counted the number of sample points (including initial sample points) required to find 10, 15, and 20 Pareto solutions. The results are shown in Table 3. The proposed approach always requires the lowest average number of samples. Obtaining a sufficient number of Pareto solutions is important for multi-objective optimization, because a larger set of Pareto solutions gives the designer more options to meet the design requirements.

Table 3

The average number of sample points required for different approaches to find 10, 15, and 20 Pareto solutions for the ZDT1 problem

Number of Pareto solutions   EIMe    EIMm    EIMh      BMOO      Proposed approach
10                           46.13   44.07   49.97     48.44     40.60
15                           72.07   76.30   81.10     78.81     46.80
20                           89.77   91.17   109.87    104.93    52.73

The comparison results for the quality of the Pareto solutions and the computational effort for all examples are summarized in Table 4. It can be seen that the EIMe, EIMm, EIMh, BMOO, and proposed approaches can obtain satisfactory optimal solutions for the ZDT2 problem with n = 3, as well as for the ZDT3 and FON problems. In terms of computational efficiency, the proposed approach has the lowest computational cost. In addition, for all three metrics, the proposed approach has the minimum standard deviations, which indicates that it has the best stability for these problems. For the three higher-dimensional examples (ZDT2 problems with n = 6, 8, and 10), searching for Pareto solutions is more difficult. The "/" symbol in Table 4 indicates that the optimization method did not reach the termination conditions before the maximum number of iterations (200) was reached; more iterations would be required to find at least 20 Pareto solutions with converged quality metrics. For the ZDT2 problems with n = 6 and 8, the proposed approach shows the best efficiency and stability. The EIMh and BMOO approaches fail to meet the termination condition for the ZDT2 problems with n = 6 and 8, and the EIMm approach fails for the ZDT2 problem with n = 8. For the ZDT2 problem with n = 10, all five approaches fail to meet the termination conditions. The reason is that the number of samples required to construct a Kriging model with a good approximation grows exponentially with the dimension of the problem [24]. As a result, the prediction accuracy of the Kriging model decreases with increasing dimension when the number of samples does not grow quickly enough.

Table 4

Quality and efficiency metrics of different approaches for ZDT2, ZDT3, and FON problems. Each entry gives the range over 30 runs followed by the mean (STD); "/" indicates that the approach did not meet the termination conditions within the maximum number of iterations.

ZDT2 (n = 3)
  MHD: EIMe [0.69, 0.70] 0.70 (0.00); EIMm [0.69, 0.70] 0.70 (0.00); EIMh [0.69, 0.70] 0.70 (0.00); BMOO [0.66, 0.73] 0.71 (0.02); Proposed [0.69, 0.69] 0.69 (0.00)
  MOS: EIMe [0.70, 1.45] 1.06 (0.10); EIMm [1.00, 1.33] 1.05 (0.06); EIMh [0.99, 1.53] 1.06 (0.12); BMOO [0.88, 1.93] 1.02 (0.18); Proposed [1.00, 1.00] 1.00 (0.00)
  FC: EIMe [51, 229] 69.6 (35.41); EIMm [51, 69] 56.03 (4.53); EIMh [63, 230] 198.43 (38.5); BMOO [65, 115] 79.33 (10.84); Proposed [50, 52] 50.03 (0.48)

ZDT3
  MHD: EIMe [0.43, 0.84] 0.75 (0.07); EIMm [0.71, 0.84] 0.78 (0.03); EIMh [0.69, 0.86] 0.77 (0.03); BMOO [0.52, 0.93] 0.77 (0.06); Proposed [0.69, 0.91] 0.76 (0.03)
  MOS: EIMe [1.11, 3.09] 1.81 (0.41); EIMm [1.46, 4.12] 1.82 (0.54); EIMh [1.51, 3.88] 2.10 (0.52); BMOO [1.14, 4.15] 2.08 (0.55); Proposed [1.39, 1.74] 1.54 (0.07)
  FC: EIMe [106, 229] 183.13 (35.31); EIMm [115, 229] 181.87 (29.12); EIMh [127, 229] 195.1 (31.71); BMOO [91, 215] 168.43 (28.69); Proposed [84, 179] 150.5 (28.13)

FON
  MHD: EIMe [0.46, 0.70] 0.66 (0.05); EIMm [0.51, 0.70] 0.67 (0.04); EIMh [0.53, 0.70] 0.67 (0.04); BMOO [0.57, 0.70] 0.67 (0.03); Proposed [0.66, 0.70] 0.69 (0.01)
  MOS: EIMe [0.62, 0.96] 0.92 (0.07); EIMm [0.76, 0.96] 0.93 (0.04); EIMh [0.76, 0.96] 0.93 (0.05); BMOO [0.76, 0.96] 0.94 (0.04); Proposed [0.93, 0.96] 0.96 (0.01)
  FC: EIMe [73, 179] 115.6 (34.99); EIMm [68, 230] 126.13 (53.52); EIMh [80, 230] 158.1 (54.56); BMOO [80, 230] 158.1 (54.56); Proposed [63, 103] 79 (9.56)

ZDT2 (n = 6)
  MHD: EIMe [0.69, 0.71] 0.70 (0.01); EIMm [0.65, 0.72] 0.70 (0.01); EIMh /; BMOO /; Proposed [0.68, 0.69] 0.69 (0.00)
  MOS: EIMe [1.00, 1.87] 1.18 (0.20); EIMm [0.97, 1.98] 1.14 (0.21); EIMh /; BMOO /; Proposed [0.99, 1.05] 1.01 (0.02)
  FC: EIMe [53, 93] 66.12 (9.50); EIMm [50, 73] 57.97 (4.79); EIMh /; BMOO /; Proposed [50, 56] 51.66 (1.49)

ZDT2 (n = 8)
  MHD: EIMe [0.67, 0.72] 0.71 (0.01); EIMm /; EIMh /; BMOO /; Proposed [0.68, 0.70] 0.69 (0.01)
  MOS: EIMe [1.00, 2.09] 1.29 (0.33); EIMm /; EIMh /; BMOO /; Proposed [0.99, 1.05] 1.01 (0.02)
  FC: EIMe [53, 118] 77.19 (18.59); EIMm /; EIMh /; BMOO /; Proposed [50, 60] 54.8 (2.23)

ZDT2 (n = 10)
  MHD, MOS, FC: / for all five approaches

4.2 An Engineering Example.

In this section, the proposed approach is applied to the design optimization of a torque arm. As shown in Fig. 10, the torque arm is fixed at the hole on its left end, and two forces, P1 = 8.0 kN and P2 = 4.0 kN, are applied at the center of the right end. The Young’s modulus and Poisson’s ratio of the material are 200 GPa and 0.3, respectively.

Fig. 10
Geometry and parameterization of the torque arm
There are six design variables for the torque arm, namely α, b1, D1, h, t1, and t2. The design optimization of the torque arm can be expressed as
\[
\begin{aligned}
\min\ & f_1 = V(\alpha, b_1, D_1, h, t_1, t_2) \\
\min\ & f_2 = \mathrm{max\_d}(\alpha, b_1, D_1, h, t_1, t_2) \\
\text{s.t.}\ & g_1 = \sigma_1 \le 90\,\text{MPa} \\
\text{where}\ & 3\,\text{deg} \le \alpha \le 4.5\,\text{deg};\quad 25\,\text{mm} \le b_1 \le 35\,\text{mm} \\
& 90\,\text{mm} \le D_1 \le 120\,\text{mm};\quad 20\,\text{mm} \le h \le 30\,\text{mm} \\
& 12\,\text{mm} \le t_1 \le 22\,\text{mm};\quad 8\,\text{mm} \le t_2 \le 12\,\text{mm}
\end{aligned}
\tag{15}
\]
where V is the total volume of the torque arm and max_d is the maximum displacement. In the simulation-based optimization process, the maximum displacement and the maximum stress of the torque arm are predicted by two Kriging metamodels. The penalty function [61] is used to handle the constraint in this example. For unknown or computationally expensive constraints, surrogates of the constraints can also be constructed to quickly identify infeasible solutions [23].
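The penalty idea above folds a constraint violation into the objective so that infeasible designs are driven out of the search. The quadratic form and the fixed weight in the sketch below are illustrative assumptions for the general mechanism, not the self-adaptive scheme of Ref. [61], and the one-dimensional stress model is hypothetical:

```python
def penalized(f, g, x, weight=1.0e3):
    """Penalized objective for a constraint g(x) <= 0 (minimization).

    The quadratic penalty and the constant weight are illustrative
    choices; Ref. [61] adapts the penalty rather than fixing it.
    """
    violation = max(0.0, g(x))            # zero when the constraint holds
    return f(x) + weight * violation ** 2

# Toy one-dimensional example with a hypothetical linear stress model
f = lambda x: x ** 2                      # objective, e.g., a volume proxy
g = lambda x: 50.0 * x - 90.0             # hypothetical stress minus its limit

feasible = penalized(f, g, 1.0)           # g(1.0) = -40: no penalty applied
infeasible = penalized(f, g, 2.0)         # g(2.0) = 10: heavy penalty added
```

Any design with a positive violation is pushed far above every feasible design in objective value, which is why a sufficiently large weight keeps the optimizer inside the feasible region.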

Each of the five approaches is run 10 times for this problem. The Pareto frontiers obtained by the different approaches, plotted in Fig. 11, are consistent with one another. The diversity of the proposed approach is better than that of the other four: the Pareto solutions obtained by the EIMe and EIMm approaches cluster in some regions and are not as diverse or as evenly distributed as those obtained by the EIMh, BMOO, and proposed approaches. A more uniformly distributed solution set is more useful to the designer. The design variable values of five Pareto solutions, labeled with numbers in Fig. 11, and the corresponding responses are listed in Table 5.

Fig. 11
The typical Pareto frontiers obtained by different approaches for the design of the torque arm
Table 5

The five Pareto solutions and the corresponding responses of the engineering case

| No. | Design variables (α, b1, D1, h, t1, t2) | Volume | Maximum displacement |
|---|---|---|---|
| 1 | (3.75, 27.25, 90.03, 23.32, 12.05, 8.03) | 444.25 | 1.59 |
| 2 | (4.18, 25.00, 90.00, 29.29, 12.00, 8.00) | 470.97 | 1.11 |
| 3 | (4.50, 31.78, 90.00, 30.00, 12.00, 8.00) | 523.64 | 0.83 |
| 4 | (4.50, 35.00, 90.00, 29.99, 22.00, 13.00) | 650.46 | 0.64 |
| 5 | (4.50, 35.00, 120.00, 30.00, 22.00, 13.00) | 799.36 | 0.634 |

The detailed performance comparisons are summarized in Table 6. The proposed approach has better efficiency and diversity than the other approaches. For the two quality metrics and the efficiency metric, the proposed approach also has the smallest standard deviations, indicating the best stability.

Table 6

Comparison results of different approaches for the design of the torque arm

Each cell gives the range over 10 runs followed by the mean (STD).

| Metric | EIMe | EIMm | EIMh | BMOO | Proposed approach |
|---|---|---|---|---|---|
| MHD | [48.17, 63.85], 55.22 (4.76) | [50.13, 64.25], 57.81 (4.5) | [34.96, 46.95], 47.82 (3.67) | [43.47, 55.77], 47.82 (3.67) | [44.39, 52.26], 47.64 (2.64) |
| MOS | [96.98, 294.83], 170.84 (58.7) | [99.83, 188.77], 155.81 (25.52) | [111.10, 235.32], 185.88 (39.08) | [110.92, 310.45], 270.44 (19.21) | [314.30, 356.84], 330.74 (15.48) |
| FC | [94, 167], 122.2 (20.34) | [97, 139], 114.5 (15.58) | [56, 71], 64.1 (4.72) | [62, 89], 68.13 (6.54) | [60, 69], 63.3 (2.41) |

5 Conclusion

In this paper, a new acquisition function is proposed for multi-objective Bayesian optimization, in which the improvements of the hyperarea and the overall spread are used to guide the sequential sampling toward the Pareto solutions. Two modified metrics of the hyperarea and overall spread of the Pareto frontier are proposed and used in the new acquisition function, which encourages searching for a large number of Pareto solutions and balances the convergence and diversity of the Pareto frontier. The LCB functions are used to incorporate the uncertainty of surrogate modeling and to find the exploration–exploitation balance while keeping the computational cost low.
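The LCB function referred to above can be sketched in one line for a minimization problem; the constant coefficient value used here is an arbitrary illustration, not the setting used in the paper:

```python
def lcb(mu, sigma, kappa=2.0):
    """Lower confidence bound of a surrogate prediction (minimization).

    mu, sigma: predicted mean and standard deviation from the Kriging
    model at a candidate point; kappa: constant exploitation-exploration
    coefficient (the value 2.0 is an illustrative choice). Smaller LCB
    values mark points that are promising (low mu), uncertain (high
    sigma), or both.
    """
    return mu - kappa * sigma

# A point with a worse predicted mean but higher uncertainty can still
# attain the lower (better) bound, which is how LCB encourages exploration:
assert lcb(1.0, 0.5) < lcb(0.5, 0.1)
```

Increasing kappa weights the uncertainty term more heavily and biases the search toward exploration; the conclusion below notes that adapting kappa during the run is a possible refinement.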

The proposed approach is demonstrated with four numerical examples and an engineering case. Its performance is compared with four other approaches: three expected improvement matrix (EIM) variants and BMOO. The results show that the proposed approach is computationally more efficient, requiring fewer function evaluations to obtain the same number of Pareto solutions. The new approach is also the most stable and robust, with the smallest variation across runs.

While computational efficiency motivates the use of the LCB functions, the main computational cost remains in constructing the Kriging models, whose scalability is a major issue as the dimension of the search space increases. The computational efficiency of the current scheme can be improved further. As the number of sample points increases, the time spent sorting the non-dominated solutions and calculating MHD and MOS grows; more efficient data structures and algorithms can be developed. In addition, a constant exploitation–exploration coefficient is used in our LCB functions; a dynamically adapted coefficient could strike a better balance between exploitation and exploration. In future work, we will also examine the efficiency and accuracy of the proposed approach for optimization with more than two objectives.

Acknowledgment

This research has been supported in part by the National Natural Science Foundation of China (Grant Nos. 51775203, 51805179, and 51721092) and the Fundamental Research Funds for the Central Universities at HUST (Grant No. 2016YXMS272). The support of China Scholarship Council is also appreciated.

References

1. Aute, V., Saleh, K., Abdelaziz, O., Azarm, S., and Radermacher, R., 2013, “Cross-Validation Based Single Response Adaptive Design of Experiments for Kriging Metamodeling of Deterministic Computer Simulations,” Struct. Multidiscipl. Optim., 48(3), pp. 581–605. 10.1007/s00158-013-0918-5
2. Kleijnen, J. P., 2009, “Kriging Metamodeling in Simulation: A Review,” Eur. J. Oper. Res., 192(3), pp. 707–716. 10.1016/j.ejor.2007.10.013
3. Xiao, M., Gao, L., Shao, X., Qiu, H., and Jiang, P., 2012, “A Generalised Collaborative Optimisation Method and Its Combination With Kriging Metamodels for Engineering Design,” J. Eng. Des., 23(5), pp. 379–399. 10.1080/09544828.2011.595706
4. Fang, H., and Horstemeyer, M. F., 2006, “Global Response Approximation With Radial Basis Functions,” Eng. Optim., 38(4), pp. 407–424. 10.1080/03052150500422294
5. Can, B., and Heavey, C., 2012, “A Comparison of Genetic Programming and Artificial Neural Networks in Metamodeling of Discrete-Event Simulation Models,” Comput. Oper. Res., 39(2), pp. 424–436. 10.1016/j.cor.2011.05.004
6. Kleijnen, J. P., 2008, “Response Surface Methodology for Constrained Simulation Optimization: An Overview,” Simul. Modell. Pract. Theory, 16(1), pp. 50–64. 10.1016/j.simpat.2007.10.001
7. Li, M., Li, G., and Azarm, S., 2008, “A Kriging Metamodel Assisted Multi-Objective Genetic Algorithm for Design Optimization,” ASME J. Mech. Des., 130(3), p. 031401. 10.1115/1.2829879
8. Zhao, L., Choi, K. K., Lee, I., and Gorsich, D., 2013, “Conservative Surrogate Model Using Weighted Kriging Variance for Sampling-Based RBDO,” ASME J. Mech. Des., 135(9), p. 091003. 10.1115/1.4024731
9. Hu, Z., and Du, X., 2015, “Mixed Efficient Global Optimization for Time-Dependent Reliability Analysis,” ASME J. Mech. Des., 137(5), p. 051401. 10.1115/1.4029520
10. Li, M., and Wang, Z., 2018, “Confidence-Driven Design Optimization Using Gaussian Process Metamodeling With Insufficient Data,” ASME J. Mech. Des., 140(12), p. 121405. 10.1115/1.4040985
11. Moustapha, M., Sudret, B., Bourinet, J.-M., and Guillaume, B., 2016, “Quantile-Based Optimization Under Uncertainties Using Adaptive Kriging Surrogate Models,” Struct. Multidiscipl. Optim., 54(6), pp. 1403–1421. 10.1007/s00158-016-1504-4
12. Zhang, S., Zhu, P., Chen, W., and Arendt, P., 2013, “Concurrent Treatment of Parametric Uncertainty and Metamodeling Uncertainty in Robust Design,” Struct. Multidiscipl. Optim., 47(1), pp. 63–76. 10.1007/s00158-012-0805-5
13. Schonlau, M., 1997, “Computer Experiments and Global Optimization,” Ph.D. thesis, University of Waterloo, Waterloo.
14. Jones, D. R., 2001, “A Taxonomy of Global Optimization Methods Based on Response Surfaces,” J. Global Optim., 21(4), pp. 345–383. 10.1023/A:1012771025575
15. Couckuyt, I., Deschrijver, D., and Dhaene, T., 2014, “Fast Calculation of Multiobjective Probability of Improvement and Expected Improvement Criteria for Pareto Optimization,” J. Global Optim., 60(3), pp. 575–594. 10.1007/s10898-013-0118-2
16. Snoek, J., Larochelle, H., and Adams, R. P., 2012, “Practical Bayesian Optimization of Machine Learning Algorithms,” Proceedings of Advances in Neural Information Processing Systems, Lake Tahoe, NV, Dec. 3–6, pp. 2951–2959.
17. Zheng, J., Li, Z., Gao, L., and Jiang, G., 2016, “A Parameterized Lower Confidence Bounding Scheme for Adaptive Metamodel-Based Design Optimization,” Eng. Comput., 33(7), pp. 2165–2184. 10.1108/EC-04-2015-0088
18. Liu, B., Zhang, Q., Fernández, F. V., and Gielen, G., 2012, “Self-Adaptive Lower Confidence Bound: A New General and Effective Prescreening Method for Gaussian Process Surrogate Model Assisted Evolutionary Algorithms,” Proceedings of 2012 IEEE Congress on Evolutionary Computation (CEC), Brisbane, Australia, June 10–15, IEEE, pp. 1–6.
19. Martinez-Cantin, R., de Freitas, N., Doucet, A., and Castellanos, J. A., 2008, “Active Policy Learning for Robot Planning and Exploration Under Uncertainty,” Proceedings of Robotics: Science and Systems, Zurich, Switzerland, June 25–28, pp. 321–328.
20. Lizotte, D. J., Wang, T., Bowling, M. H., and Schuurmans, D., 2007, “Automatic Gait Optimization With Gaussian Process Regression,” Proceedings of IJCAI, Hyderabad, India, Jan. 6–12, pp. 944–949.
21. Hutter, F., Hoos, H. H., and Leyton-Brown, K., 2011, “Sequential Model-Based Optimization for General Algorithm Configuration,” Proceedings of International Conference on Learning and Intelligent Optimization, Rome, Italy, Jan. 17–21, pp. 507–523.
22. Bergstra, J. S., Bardenet, R., Bengio, Y., and Kégl, B., 2011, “Algorithms for Hyper-Parameter Optimization,” Proceedings of Advances in Neural Information Processing Systems, Sierra Nevada, Spain, Dec. 16–17, pp. 2546–2554.
23. Tran, A., Sun, J., Furlan, J. M., Pagalthivarthi, K. V., Visintainer, R. J., and Wang, Y., 2019, “pBO-2GP-3B: A Batch Parallel Known/Unknown Constrained Bayesian Optimization With Feasibility Classification and Its Applications in Computational Fluid Dynamics,” Comput. Methods Appl. Mech. Eng., 347, pp. 827–852. 10.1016/j.cma.2018.12.033
24. Tran, A., Tran, M., and Wang, Y., 2019, “Constrained Mixed-Integer Gaussian Mixture Bayesian Optimization and Its Applications in Designing Fractal and Auxetic Metamaterials,” Struct. Multidiscipl. Optim., 59(6), pp. 2131–2154. 10.1007/s00158-018-2182-1
25. Shan, S., and Wang, G. G., 2004, “An Efficient Pareto Set Identification Approach for Multiobjective Optimization on Black-Box Functions,” ASME J. Mech. Des., 127(5), pp. 866–874. 10.1115/1.1904639
26. Shu, L., Jiang, P., Zhou, Q., Shao, X., Hu, J., and Meng, X., 2018, “An On-Line Variable Fidelity Metamodel Assisted Multi-Objective Genetic Algorithm for Engineering Design Optimization,” Appl. Soft Comput., 66, pp. 438–448. 10.1016/j.asoc.2018.02.033
27. Deb, K., Pratap, A., Agarwal, S., and Meyarivan, T., 2002, “A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II,” IEEE Trans. Evol. Comput., 6(2), pp. 182–197.
28. Ak, R., Li, Y., Vitelli, V., Zio, E., López Droguett, E., and Magno Couto Jacinto, C., 2013, “NSGA-II-Trained Neural Network Approach to the Estimation of Prediction Intervals of Scale Deposition Rate in Oil & Gas Equipment,” Exp. Syst. Appl., 40(4), pp. 1205–1212. 10.1016/j.eswa.2012.08.018
29. Shu, L., Jiang, P., Zhou, Q., and Xie, T., 2019, “An Online Variable-Fidelity Optimization Approach for Multi-Objective Design Optimization,” Struct. Multidiscipl. Optim., 60(3), pp. 1059–1077. 10.1007/s00158-019-02256-0
30. Sun, X., Gong, D., Jin, Y., and Chen, S., 2013, “A New Surrogate-Assisted Interactive Genetic Algorithm With Weighted Semisupervised Learning,” IEEE Trans. Cybern., 43(2), pp. 685–698.
31. Cheng, R., Jin, Y., Narukawa, K., and Sendhoff, B., 2015, “A Multiobjective Evolutionary Algorithm Using Gaussian Process-Based Inverse Modeling,” IEEE Trans. Evol. Comput., 19(6), pp. 838–856.
32. Li, M., 2011, “An Improved Kriging-Assisted Multi-Objective Genetic Algorithm,” ASME J. Mech. Des., 133(7), p. 071008.
33. Sun, C., Jin, Y., Cheng, R., Ding, J., and Zeng, J., 2017, “Surrogate-Assisted Cooperative Swarm Optimization of High-Dimensional Expensive Problems,” IEEE Trans. Evol. Comput., 21(4), pp. 644–660. 10.1109/TEVC.2017.2675628
34. Chen, G., Han, X., Liu, G., Jiang, C., and Zhao, Z., 2012, “An Efficient Multi-Objective Optimization Method for Black-Box Functions Using Sequential Approximate Technique,” Appl. Soft Comput., 12(1), pp. 14–27. 10.1016/j.asoc.2011.09.011
35. Regis, R. G., 2014, “Evolutionary Programming for High-Dimensional Constrained Expensive Black-Box Optimization Using Radial Basis Functions,” IEEE Trans. Evol. Comput., 18(3), pp. 326–347.
36. Datta, R., and Regis, R. G., 2016, “A Surrogate-Assisted Evolution Strategy for Constrained Multi-Objective Optimization,” Exp. Syst. Appl., 57, pp. 270–284. 10.1016/j.eswa.2016.03.044
37. An, H., Chen, S., and Huang, H., 2018, “Multi-Objective Optimization of a Composite Stiffened Panel for Hybrid Design of Stiffener Layout and Laminate Stacking Sequence,” Struct. Multidiscipl. Optim., 57(4), pp. 1411–1426. 10.1007/s00158-018-1918-2
38. Knowles, J., 2006, “ParEGO: A Hybrid Algorithm With On-Line Landscape Approximation for Expensive Multiobjective Optimization Problems,” IEEE Trans. Evol. Comput., 10(1), pp. 50–66. 10.1109/TEVC.2005.851274
39. Jeong, S., and Obayashi, S., 2005, “Efficient Global Optimization (EGO) for Multi-Objective Problem and Data Mining,” Proceedings of 2005 IEEE Congress on Evolutionary Computation, Edinburgh, Scotland, UK, Sept. 2–5, IEEE, pp. 2138–2145.
40. Ponweiser, W., Wagner, T., Biermann, D., and Vincze, M., 2008, “Multiobjective Optimization on a Limited Budget of Evaluations Using Model-Assisted S-Metric Selection,” Proceedings of International Conference on Parallel Problem Solving From Nature, Dortmund, Germany, Sept. 13–17, Springer, pp. 784–794.
41. Gaudrie, D., Riche, R. L., Picheny, V., Enaux, B., and Herbert, V., 2018, “Budgeted Multi-Objective Optimization With a Focus on the Central Part of the Pareto Front-Extended Version.”
42. Keane, A. J., 2006, “Statistical Improvement Criteria for Use in Multiobjective Design Optimization,” AIAA J., 44(4), pp. 879–891. 10.2514/1.16875
43. Feliot, P., Bect, J., and Vazquez, E., 2017, “A Bayesian Approach to Constrained Single- and Multi-Objective Optimization,” J. Global Optim., 67(1), pp. 97–133. 10.1007/s10898-016-0427-3
44. Forrester, A., Sobester, A., and Keane, A., 2008, Engineering Design via Surrogate Modelling: A Practical Guide, John Wiley & Sons, Hoboken, NJ.
45. Bautista, D. C. T., 2009, A Sequential Design for Approximating the Pareto Front Using the Expected Pareto Improvement Function, The Ohio State University, Columbus, OH.
46. Pandita, P., Bilionis, I., Panchal, J., Gautham, B., Joshi, A., and Zagade, P., 2018, “Stochastic Multiobjective Optimization on a Budget: Application to Multipass Wire Drawing With Quantified Uncertainties,” Int. J. Uncert. Quant., 8(3), pp. 233–249.
47. Calandra, R., Peters, J., and Deisenrothy, M., 2014, “Pareto Front Modeling for Sensitivity Analysis in Multi-Objective Bayesian Optimization,” Proceedings of NIPS Workshop on Bayesian Optimization, Montreal, QC, Canada, Dec. 8–13.
48. Zhan, D., Cheng, Y., and Liu, J., 2017, “Expected Improvement Matrix-Based Infill Criteria for Expensive Multiobjective Optimization,” IEEE Trans. Evol. Comput., 21(6), pp. 956–975. 10.1109/TEVC.2017.2697503
49. Wu, J., and Azarm, S., 2001, “Metrics for Quality Assessment of a Multiobjective Design Optimization Solution Set,” ASME J. Mech. Des., 123(1), pp. 18–25. 10.1115/1.1329875
50. Cheng, S., Zhou, J., and Li, M., 2015, “A New Hybrid Algorithm for Multi-Objective Robust Optimization With Interval Uncertainty,” ASME J. Mech. Des., 137(2), p. 021401. 10.1115/1.4029026
51. Jin, R., Chen, W., and Sudjianto, A., 2002, “On Sequential Sampling for Global Metamodeling in Engineering Design,” Proceedings of ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Montreal, Quebec, Canada, Sept. 29–Oct. 2, American Society of Mechanical Engineers, pp. 539–548.
52. Shu, L., Jiang, P., Wan, L., Zhou, Q., Shao, X., and Zhang, Y., 2017, “Metamodel-Based Design Optimization Employing a Novel Sequential Sampling Strategy,” Eng. Comput., 34(8), pp. 2547–2564. 10.1108/EC-01-2016-0034
53. Rasmussen, C. E., 2004, “Gaussian Processes in Machine Learning,” Advanced Lectures on Machine Learning: ML Summer Schools 2003, Revised Lectures, O. Bousquet, U. von Luxburg, and G. Rätsch, eds., Springer, Berlin, Heidelberg, pp. 63–71.
54. Park, J.-S., 1994, “Optimal Latin-Hypercube Designs for Computer Experiments,” J. Stat. Plan. Inference, 39(1), pp. 95–111. 10.1016/0378-3758(94)90115-5
55. Wang, G. G., 2003, “Adaptive Response Surface Method Using Inherited Latin Hypercube Design Points,” ASME J. Mech. Des., 125(2), pp. 210–220. 10.1115/1.1561044
56. Lophaven, S. N., Nielsen, H. B., and Søndergaard, J., 2002, “DACE: A Matlab Kriging Toolbox, Version 2.0.”
57. Deb, K., Agrawal, S., Pratap, A., and Meyarivan, T., 2000, “A Fast Elitist Non-Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-II,” Proceedings of International Conference on Parallel Problem Solving From Nature, Paris, France, Sept. 18–20, Springer, pp. 849–858.
58. Liu, Y., and Collette, M., 2014, “Improving Surrogate-Assisted Variable Fidelity Multi-Objective Optimization Using a Clustering Algorithm,” Appl. Soft Comput., 24, pp. 482–493. 10.1016/j.asoc.2014.07.022
59. Park, H.-S., and Dang, X.-P., 2010, “Structural Optimization Based on CAD–CAE Integration and Metamodeling Techniques,” Comput. Aided Des., 42(10), pp. 889–902. 10.1016/j.cad.2010.06.003
60. Kalyanmoy, D., 2001, Multi Objective Optimization Using Evolutionary Algorithms, John Wiley and Sons, Hoboken, NJ.
61. Coello, C. A. C., 2000, “Use of a Self-Adaptive Penalty Approach for Engineering Optimization Problems,” Comput. Ind., 41(2), pp. 113–127. 10.1016/S0166-3615(99)00046-9