One of the most common assumptions in the design process of any engineering system is that we operate with deterministic design parameters that we can control with arbitrary precision. However, this is rarely the case. When moving a design from the computer to reality, many random processes intervene (manufacturing tolerances and defects, transport damage, installation problems, etc.) that affect the design and result in a final system that differs from the one we originally designed.
Luckily, in most cases the effects of all these processes are quite small, so the final system still performs as we anticipated during the design phase. But sometimes things are not quite so simple. There are some high-performance components for which even very small perturbations of the initial design, such as those caused by manufacturing tolerances, can mean a product that behaves very differently from what we expected.
An example of such a case is high-pressure turbine blades. The performance of these components is very sensitive to the geometry of the trailing edge, to the point that shape perturbations caused by inevitable manufacturing tolerances and defects can have a significant impact on the overall efficiency of the turbine. For such problems, robust design optimisation comes to the rescue.
The first step in a robust design procedure is to represent the uncertainty that affects our design parameters. We can do this by assigning each of them a probability distribution: a design variable is no longer defined by a single nominal value, but by a range of values, each with an associated probability. One of the most widely used probability distributions in industrial processes is the normal distribution (also called the Gaussian distribution): a variable is then defined by a mean value (μ) and a variance (σ² – or its square root, the so-called standard deviation σ), as illustrated in Figure 2B. This means that we now need to think of the parameter V1 in terms of probability: there is a 68% probability that its value lies in the range μ ± σ, a 95% probability that it lies in the range μ ± 2σ, and so on. We can no longer say that V1 is “0.5”, as we would in a standard design problem.
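As a quick check of these confidence levels, the probability that a normally distributed variable falls within μ ± kσ can be computed with the error function. A minimal Python sketch:

```python
import math

def prob_within(k: float) -> float:
    """Two-sided probability that a normal variable lies within mu +/- k*sigma.

    For any normal distribution, P(|X - mu| <= k*sigma) = erf(k / sqrt(2)),
    independently of the particular mu and sigma.
    """
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"P(mu - {k}s <= X <= mu + {k}s) = {prob_within(k):.4f}")
    # k=1 -> 0.6827, k=2 -> 0.9545, k=3 -> 0.9973
```

The familiar 68% and 95% figures are rounded values of these probabilities.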
Robust Design Optimisation
Now that our design parameters have become distributions of values rather than single numbers, so have our design objectives. We therefore need to replace them with their “robust” counterparts. Our standard optimisation problem, written in its canonical form:
Min(F1, F2, …, Fn)
must now be expressed as a robust one:
Min(F’1 = μF1 + kσF1, F’2 = μF2 + kσF2, …, F’n = μFn + kσFn)

where k is an arbitrary coefficient representing the confidence level we want for our objectives (written k here to avoid confusion with the number of objectives n). As we have seen in Figure 2, if we choose k = 3, we are requesting a design with a probability of about 99.7% of remaining optimal even when small perturbations occur.
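In code, forming the robust counterpart of an objective is straightforward once we have objective values for a sample of perturbed designs. A minimal sketch (the sample values below are hypothetical, standing in for evaluations of one design under perturbation):

```python
import statistics

def robust_objective(samples, k=3.0):
    """Robust counterpart of an objective: mean + k * standard deviation.

    `samples` are objective values evaluated on perturbed copies of one
    design; `k` sets the confidence level (k = 3 covers ~99.7% of cases).
    """
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    return mu + k * sigma

# Hypothetical objective values from perturbed evaluations of one design
f_samples = [1.02, 0.98, 1.05, 1.01, 0.97, 1.03]
print(robust_objective(f_samples))  # this value is minimised, not the nominal F
```

Minimising μ + kσ favours designs that are both good on average and insensitive to perturbation, which is exactly the trade-off a robust optimisation encodes.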
Figure 3 illustrates the two different approaches. In a standard optimisation, we are only interested in finding the “best” objective. In a robust optimisation, since we no longer operate with nominal values but with probability distributions, we are interested in finding an optimum design that remains optimal even when the design variables are slightly perturbed (as happens in the real world during, for instance, the manufacturing process). In this sense, robust optimisation can be interpreted as a “probabilistic” sensitivity analysis. However, unlike in a standard sensitivity analysis, the perturbations used to calculate the sensitivity are defined by a probability distribution function and are therefore better suited to random processes (in other words, they require fewer evaluations to accurately estimate the sensitivity than a standard approach), with the additional advantage of also providing an accurate estimate of the mean and standard deviation of the objectives, as explained in the next section.
While the mean and standard deviation (i.e. the statistical properties) of the design parameters are assigned upfront (typically based on some manufacturing statistical data), the statistical properties of our objectives are unknown and must be calculated during the optimisation process in order to define the robust objectives seen above.
Evaluating the statistical properties of our objectives can be computationally demanding. To evaluate a single design point, we now need to allocate a sample of design points around it using an appropriate allocation algorithm (such as Monte Carlo or Latin Hypercube sampling), evaluate each point, and calculate the mean and standard deviation of the objectives over the sample. This is repeated for every design evaluated during the optimisation, which can lead to a very large total number of evaluations. The procedure is summarised in Figure 4.
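The sampling step described above can be sketched as follows. Here a hypothetical cheap function stands in for the expensive solver, and the allocation uses a simple one-dimensional Latin Hypercube (one draw from each of n equal-probability strata):

```python
import random
import statistics

def latin_hypercube_normal(n, mu, sigma, rng):
    """1-D Latin Hypercube sample of a normal variable: one draw from each
    of n equal-probability strata, mapped through the inverse CDF."""
    dist = statistics.NormalDist(mu, sigma)
    points = [dist.inv_cdf((i + rng.random()) / n) for i in range(n)]
    rng.shuffle(points)  # remove the ordering induced by the strata
    return points

def objective(v1):
    """Hypothetical cheap stand-in for an expensive evaluation (e.g. a CFD run)."""
    return (v1 - 0.5) ** 2

# Statistical properties of the design parameter, assigned upfront
rng = random.Random(42)
sample = latin_hypercube_normal(50, mu=0.5, sigma=0.02, rng=rng)

# Evaluate every point and estimate the objective's statistical properties
values = [objective(v) for v in sample]
mu_f, sigma_f = statistics.fmean(values), statistics.stdev(values)
print(f"objective mean = {mu_f:.6f}, std = {sigma_f:.6f}")
```

The stratification is why Latin Hypercube sampling typically needs fewer points than plain Monte Carlo for the same accuracy: every region of the distribution is guaranteed to contribute one sample.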
The number of samples required for an accurate estimation of the statistical properties can be quite large. Luckily, advanced techniques to efficiently estimate these properties from small samples are available, such as virtual sampling and polynomial chaos. These techniques reduce the total number of evaluations required and are particularly advantageous when the objectives are expensive to calculate, as, for instance, in turbine blade design, where the blade efficiency is calculated through a series of RANS-based CFD simulations. We are going to cover these methods in more detail in another post.
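The principle behind such techniques can be illustrated with a deliberately crude sketch: fit a cheap surrogate (here, a quadratic through just three expensive evaluations) and draw a large “virtual” Monte Carlo sample on the surrogate instead of the expensive model. This is only an illustration of the idea, not the actual virtual-sampling or polynomial-chaos algorithm, and the objective function is hypothetical:

```python
import random
import statistics

def expensive_objective(v):
    """Hypothetical stand-in for an expensive simulation."""
    return (v - 0.5) ** 2 + 1.0

def quadratic_through(points):
    """Lagrange interpolation: quadratic surrogate through 3 (x, y) points."""
    (x0, y0), (x1, y1), (x2, y2) = points
    def surrogate(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return surrogate

mu, sigma = 0.5, 0.02
# Only three expensive evaluations are spent on the real model...
xs = [mu - 2 * sigma, mu, mu + 2 * sigma]
surrogate = quadratic_through([(x, expensive_objective(x)) for x in xs])

# ...then a large "virtual" sample is drawn on the cheap surrogate instead
rng = random.Random(0)
virtual = [surrogate(rng.gauss(mu, sigma)) for _ in range(10_000)]
print(statistics.fmean(virtual), statistics.stdev(virtual))
```

The expensive model is queried a handful of times; the thousands of draws needed for stable statistics all hit the surrogate, which costs essentially nothing.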
Abandoning the hypothesis of a deterministic world is sometimes necessary to design high-performance products that need to operate at their maximum even where manufacturing tolerances and defects play a significant role. Robust design optimisation offers a powerful tool to tackle these problems, identifying not only optimal solutions but also a confidence interval that guarantees our design remains optimal in the presence of manufacturing tolerances and other random perturbations.
If you are interested in robust design, check out Nexus, which offers state-of-the-art algorithms for robust design optimisation and statistical analysis.