# Restrictions Explained: Save Computational Time

Restrictions: what are restriction constraints in Nexus, and where should you use them to speed up your optimizations?

### Background

In this post we will provide a gentle introduction to restriction constraints (as implemented in Nexus) and we will try to form general guidelines on when to use them to speed up optimization tasks within Nexus.
In a previous post, *Genetic Algorithm or Gradient Based*, we provided a general classification of optimization algorithms into two main families:

• Direct Search: the algorithm selects the next solutions by looking at the objective (and constraint) values of the pre-existing evaluations. Values are compared and a new set of evaluations is scheduled to form the next iteration.
• Gradient-Based Search: the algorithm selects the next solutions by looking at objective (and constraint) values and their derivatives. The set of evaluations required to move to the next iteration is chosen using information from both function values and derivatives, usually by estimating the search direction in the domain of interest that maximizes the objective (or constraint) improvement.

Within Nexus, Restriction Constraints are made available only to the first family of optimization procedures (i.e. direct search). The reason will become clear shortly.
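To make the distinction concrete, here is a minimal, self-contained sketch (this is not Nexus code; the toy objective, step size and learning rate are our own illustrative choices) contrasting one direct-search step, which only compares function values, with one gradient step, which also needs the derivative:

```python
import random

random.seed(0)  # reproducible run for this illustration

def f(x):
    # Toy objective: a shifted parabola with its minimum at x = 3
    return (x - 3.0) ** 2

def direct_search_step(best_x, step=0.5):
    # Direct search: sample candidates and compare objective VALUES only
    candidates = [best_x + random.uniform(-step, step) for _ in range(10)]
    return min(candidates + [best_x], key=f)

def gradient_step(x, lr=0.1):
    # Gradient-based: the DERIVATIVE picks the search direction
    df = 2.0 * (x - 3.0)   # analytical derivative of f
    return x - lr * df     # move along the descent direction

x = 0.0
for _ in range(100):
    x = direct_search_step(x)
print(round(x, 2))  # converges toward 3.0 without ever using a derivative
```

Note how the direct-search loop never touches `df`: it only needs to *compare* evaluated values, which is precisely why cheap early checks (restrictions) can short-circuit part of the work.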

### General Considerations

In practical optimization problems involving multiple objectives and constraints, there are scenarios where specific values of the design variables make the analyzed system infeasible or (even worse) singular.

#### A first Example – Geometrical Constraints
Consider the example below, where we wish to optimize the thickness of a tapered beam.

Let’s use:

• Ri: inner radius of the cross section;
• Rb: external radius of the cross section, at the beginning of the beam;
• Re: external radius of the cross section, at the end of the beam;

Let’s suppose each variable ranges in [5, 10]. Clearly we need Rb > Re > Ri. Note that if Ri > Re or (even worse) Ri > Rb, you will have a beam with a negative volume. If we now use a numerical solver to evaluate the performance of our beam, it’s very likely that the solver will crash or return inaccurate results when such geometries arise.
Ok, I know… you are arguing that I formulated the problem in the wrong way, and that a better choice of variables would have avoided all these problems. But please bear with my clumsy choice for now, and let me use as an excuse that in real-world industrial problems it is not always possible or easy to find alternative, effective problem formulations.
So, despite my initial efforts, I ended up in a situation where, for certain combinations of the design variables, the system (or, to put it better, our simulation model) becomes singular and no longer valid. In the worst-case scenario the system returns inaccurate results, possibly compromising the whole optimization search by steering it into unreliable areas. In the best case we will face errors as the iterative optimization progresses. In either case the optimization procedure loses time evaluating solutions we know upfront to be infeasible, and it will probably converge more slowly.
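The validity check described above boils down to a cheap predicate on the three radii, which a restriction constraint can evaluate before any solver is launched. A minimal sketch (the function name is hypothetical, not a Nexus API):

```python
def is_valid_geometry(ri, rb, re):
    """Restriction check for the tapered beam: the geometry is only
    physically meaningful when Rb > Re > Ri (positive wall thickness
    everywhere along the beam)."""
    return rb > re > ri

# Candidates inside the [5, 10] variable bounds can still be invalid:
print(is_valid_geometry(5.0, 10.0, 7.0))   # True:  Rb > Re > Ri
print(is_valid_geometry(9.0, 6.0, 5.5))    # False: Ri > Re, negative volume
```

Designs failing this check would be marked infeasible immediately, and the numerical solver that might crash on them is never invoked.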

#### A second Example – Multiphysics Problem
As a second example, suppose you are optimizing a system for both structural and fluid-dynamic performance. Let’s say it is a simple 2D profile. You use a semi-analytical approach to estimate the overall structural performance of the profile (such as the overall stiffness, which you want to constrain to remain greater than a given value, and the strain/stress levels). You use an Eulerian CFD simulation to predict drag and lift. You have global shape variables, such as the height of the profile, its chord and a set of control points describing its external shape, as well as specific structural variables, i.e. wall thicknesses.
Now, your design process in Nexus will allow you to run the structural and CFD simulations in parallel for each evaluation point. However, a semi-analytical structural analysis requires a fraction of a second to run, while each CFD simulation requires a few minutes. Hence, there is no point in performing an expensive CFD simulation if you can first run the structural simulation and discard a solution in a fraction of a second. Nevertheless, CFD analyses are still required for all the solutions that turn out to be structurally feasible.
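The hierarchy described above can be sketched as a two-stage evaluation: a cheap restriction check that gates the expensive analysis. This is an illustrative mock-up, not Nexus code; the model, threshold and field names are invented for the example:

```python
def structural_check(design):
    # Cheap semi-analytical stiffness estimate (hypothetical toy model)
    stiffness = design["thickness"] * 100.0
    return stiffness >= 50.0          # restriction: stiffness above threshold

def run_cfd(design):
    # Stand-in for a CFD run that would take minutes on a real profile
    return {"drag": 0.02, "lift": 1.1}

def evaluate(design):
    # Restriction first: the expensive solver never runs on filtered designs
    if not structural_check(design):
        return None                   # marked infeasible in a fraction of a second
    return run_cfd(design)

print(evaluate({"thickness": 0.2}))   # None: discarded by the cheap check
print(evaluate({"thickness": 0.8}))   # CFD results returned
```

Only structurally feasible candidates ever reach `run_cfd`, which is exactly the saving a restriction constraint buys.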

#### The idea of Restriction Constraints in Nexus
Restriction Constraints have been designed within Nexus exactly to solve these kinds of problems, where a hierarchy of design requirements can be defined upfront, either saving overall simulation time or preventing the design process from evaluating singular solutions.
Clearly such an approach is only possible in so-called direct optimization methods, since in gradient-based methods all the responses (and their derivatives) must be known at each iteration to select the most promising search direction and step length for the upcoming iteration.

### A practical example with Nexus

In order to show the potential benefits of Restriction Constraints in Optimization tasks we will consider the following explanatory example:

where:

• X ranges from -5.0 to 15.0,
• Y ranges from -5.0 to 15.0,
• G1(x,y), G2(x,y) and G3(x,y) are constrained to remain lower than 0.0 and
• F(x,y) is the objective function to be minimized within the domain of interest

To emphasize the way restriction constraints work, we will tackle this optimization problem using a single-objective Genetic Algorithm.
We will run the algorithm with three different settings:

#### No Restriction Constraints
This is our baseline. The Genetic Algorithm converged very close to the analytical optimum: x=4.07, y=1.2, F=5.27.

In total the following function evaluations have been performed:

• 346 evaluations of G1(x,y),
• 346 evaluations of G2(x,y),
• 346 evaluations of G3(x,y) and
• 346 evaluations of F(x,y).

The figure below shows the full point evaluations performed by this Optimization Run:

#### G1(x,y) as a Restriction Constraint
Same settings as in the baseline, but here we set G1 to be a restriction. In this case too, the Genetic Algorithm converged very close to the analytical optimum: x=4.07, y=1.2, F=5.27.
In total the following function evaluations have been performed:

• 346 evaluations of G1(x,y),
• 314 evaluations of G2(x,y),
• 314 evaluations of G3(x,y) and
• 314 evaluations of F(x,y).

The figure below shows the full point evaluations performed by this second Optimization Run:

#### G1(x,y) and G3(x,y) as Restrictions
Again the same settings as the baseline, but here we also set G3 to be a restriction, in addition to G1. In this case too, the Genetic Algorithm converged very close to the analytical optimum: x=4.07, y=1.2, F=5.27.
In total the following function evaluations have been performed:

• 346 evaluations of G1(x,y),
• 282 evaluations of G2(x,y),
• 346 evaluations of G3(x,y) and
• 282 evaluations of F(x,y).

The figure below shows the full point evaluations performed by this third Optimization Run:
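The evaluation counts reported for the three runs can be totalled to quantify the saving. A quick back-of-the-envelope script (the run labels are ours; the counts are exactly those listed above):

```python
# Per-function evaluation counts from the three Optimization Runs above
runs = {
    "baseline":            {"G1": 346, "G2": 346, "G3": 346, "F": 346},
    "G1 restriction":      {"G1": 346, "G2": 314, "G3": 314, "F": 314},
    "G1+G3 restrictions":  {"G1": 346, "G2": 282, "G3": 346, "F": 282},
}

baseline_total = sum(runs["baseline"].values())
for name, counts in runs.items():
    total = sum(counts.values())
    saving = 100.0 * (baseline_total - total) / baseline_total
    print(f"{name}: {total} total evaluations ({saving:.1f}% saved)")
```

With one restriction the total drops from 1384 to 1288 evaluations (about 7% saved), and with two restrictions to 1256 (about 9%), while all three runs converged to the same optimum. In a real problem, where the gated responses are expensive simulations rather than analytical functions, those skipped evaluations translate directly into saved wall-clock time.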