This post shows you how to create, validate and use your first Response Surface within Nexus.

### Foreword

In statistics, Response Surface Methodology (RSM) is used to explore the relationships between explanatory variables and one or more output values of your system. The method was introduced by G. E. P. Box and K. B. Wilson in 1951.

From the original idea of Box and Wilson, Response Surface Methodology evolved from Polynomial Approximation models to more general schemes such as Radial Basis Functions, Kriging estimators, Neural Networks, Support Vector Machines and many more.

Nevertheless, the basic concepts stay the same. In order to use Response Surface Methodology (RSM) you need:

- a set of **Sample Points** (i.e. measures of your system) – often referred to as a Design of Experiments – on the basis of which your approximation (i.e. your MetaModel) is built
- an approximation model capable of identifying relationships between the variables and outputs of your system. We refer to this approximation scheme as a **MetaModel**, being an abstract model built on top of a set of measures from the real system
- a validation strategy. This point is not strictly required, but it is often underestimated. The goal here is to validate (i.e. build a certain level of reliability in) the **MetaModel** before using it to derive guidelines and draw conclusions

### Contents:

This post will show you how to:

- on this example
- build your first Response Surface within Nexus
- validate your Response Surface
- visualize and use your response surface to predict your system behavior
- conclusive remarks

### On this example

Clearly, this is intended as an explanatory example of how to use RSM: from a practical standpoint, we can all agree that building a Response Surface when we know the exact behavior of our system upfront does not provide much added value.

In order to proceed with the exercise, we need to sample our system at a certain pre-defined number of sample points (or designed experiments, if you prefer). In this specific case, given the limited number of design variables, we will use a Full Factorial allocation as Design of Experiments, spanning a grid of 5×5 points over the domain of interest. This leads to a Design of Experiments with 25 points.

Additionally, we will create a second (and larger) Design of Experiments with 300 random points.

We will use the first Design of Experiments (of 25 points) to create and build our response surface, and the second one (of 300 points) to validate it, i.e. to verify the error levels returned by the Response Surface when used to evaluate new points within the domain of interest. You can download the DoE(s) from here
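As a sketch of how these two sampling schemes can be generated (Nexus builds these tables for you; the two-variable domain and its bounds here are assumptions chosen only for illustration):

```python
import numpy as np

# Hypothetical 2-variable domain of interest
lo, hi = -5.0, 5.0

# Full Factorial allocation: a 5x5 grid of levels -> 25 support points
levels = np.linspace(lo, hi, 5)
g1, g2 = np.meshgrid(levels, levels)
support_points = np.column_stack([g1.ravel(), g2.ravel()])
print(support_points.shape)  # (25, 2)

# Second, larger DoE: 300 uniformly random points for validation
rng = np.random.default_rng(42)
validation_points = rng.uniform(lo, hi, size=(300, 2))
print(validation_points.shape)  # (300, 2)
```

The grid guarantees even coverage of the domain for building the metamodel, while the random set probes locations the support grid never touched.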

### Build your first Response Surface within Nexus

Within Nexus, users have two different ways to create Response Surfaces:

- from within the **Flowchart**: this allows users to create Dynamic Response Surfaces, capable of adapting as new sample points are made available by running optimization and allocation tasks
- from within the **Response Surface Module**: this allows users to build static response surfaces using already available Design of Experiments. This is the simpler and easier approach if all you need is a Response Surface designed on the basis of an existing set of samples.

In this post, we will focus on creating a Response Surface from within the **Response Surface Module** only.

We start by importing the first DoE into the project as a table named **SupportPoints**. We also import the second DoE – which we will use for validation purposes only – and we name this second table **ValidationPoints**. Then we move to the Response Surface module of Nexus, and from there we launch the New Response Surface Wizard.

Let’s look at this in more detail:

**1. Selecting a DoE**

The New Response Surface Wizard allows you to select an existing project table as Design of Experiments, to assign the relevant RSM inputs and outputs, and to filter out specific points from the selected Design of Experiments table. Once the DoE has been selected, it is good practice to run a quick correlation analysis between the selected inputs (i.e. the explanatory variables of your system) and the selected output(s).

**2. Correlation Analysis on DoE**

This is particularly useful to identify those explanatory variables that meaningfully correlate (linearly) with the selected output. A high positive correlation value indicates an increasing trend of the output as the variable increases; similarly, a negative correlation value indicates a tendency of the output to decrease as the variable increases. Note that from a statistical standpoint, a low correlation value does not imply that a variable is less important than another, but simply that there is no clear linear trend (increasing/decreasing) between the explanatory variable and the output. Hence, do not blindly use low correlation values to exclude variables from your model!
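This warning can be illustrated with a minimal sketch using NumPy's `corrcoef` on a hypothetical toy system in which the second variable matters but correlates only weakly, because its effect on the output is quadratic rather than linear:

```python
import numpy as np

# Hypothetical toy system: the output depends linearly on x1 and
# quadratically on x2, so x2 matters but shows little *linear* correlation.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(25, 2))   # 25 sampled points, 2 inputs
y = 3.0 * X[:, 0] + X[:, 1] ** 2

# Pearson correlation of each input with the output
for i in range(X.shape[1]):
    r = np.corrcoef(X[:, i], y)[0, 1]
    print(f"x{i + 1} vs y: r = {r:+.3f}")
```

Here `x1` shows a strong linear correlation while `x2` shows almost none, yet dropping `x2` would clearly degrade the metamodel.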

**3. Selecting the Response Surface**

The third step is to select the Response Surface methodology used to approximate your DoE values. In this specific example, we use Radial Basis Functions.
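Outside Nexus, the same kind of RBF metamodel can be sketched with SciPy's `RBFInterpolator`; the toy response function and the thin-plate-spline kernel below are assumptions for illustration, not necessarily what Nexus uses internally:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical test function standing in for the real system response
def system(x):
    return np.sin(x[:, 0]) * np.cos(x[:, 1])

# 5x5 Full Factorial support points over an assumed [-3, 3]^2 domain
levels = np.linspace(-3.0, 3.0, 5)
g1, g2 = np.meshgrid(levels, levels)
X_train = np.column_stack([g1.ravel(), g2.ravel()])
y_train = system(X_train)

# Radial Basis Function metamodel
rbf = RBFInterpolator(X_train, y_train, kernel="thin_plate_spline")

# By construction the metamodel reproduces the support points exactly
print(np.allclose(rbf(X_train), y_train))  # True
```

Note that an interpolating RBF passes exactly through every support point, which is precisely why a separate validation set is needed to measure the error between them.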

**4. Validating the model**

Once the kind of Response Surface has been selected and the Response Surface has been built, it is good practice to validate the obtained RS against a set of data not used to build the Response Surface in the first place. In this specific case, we do that by evaluating the response surface for each point stored in the table **ValidationPoints** and comparing the result of our Response Surface with the expected value stored in the table.

### Validate your Response Surface

Once generated, the response surface model should always go through some level of validation to assure the reliability of the approximation and, in turn, the correctness of the conclusions drawn from it. A validation process usually consists of using the response surface to predict a certain number of points within the domain of interest that do not belong to the Design of Experiments initially used to build the response surface.

A comparison between the values computed by the Response Surface and the ones already available provides a first estimate of the goodness of the response surface. The larger the validation set, the more confident we can be in the reliability of the model. Depending on how critical the application is and how certain we need to be of the achieved level of approximation, the validation DoE may range in size from a small fraction of, to several times, the DoE actually used to define the response surface.

In this case, we use our set of 300 validation points. The approximation returns an MSE of 0.004, corresponding to a Regression Coefficient (R) of 0.9962.
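The validation step can be sketched as follows; the toy system, kernel, domain, and exact error figures here are assumptions that mirror the structure of the example rather than reproduce Nexus output:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical stand-in for the real system
def system(x):
    return np.sin(x[:, 0]) * np.cos(x[:, 1])

# Build the metamodel on a 5x5 grid of support points
levels = np.linspace(-3.0, 3.0, 5)
g1, g2 = np.meshgrid(levels, levels)
X_train = np.column_stack([g1.ravel(), g2.ravel()])
rbf = RBFInterpolator(X_train, system(X_train), kernel="thin_plate_spline")

# Validate on 300 random points that were NOT used to build the model
rng = np.random.default_rng(1)
X_val = rng.uniform(-3.0, 3.0, size=(300, 2))
y_true = system(X_val)
y_pred = rbf(X_val)

# Mean Squared Error and Pearson regression coefficient R
mse = np.mean((y_true - y_pred) ** 2)
r = np.corrcoef(y_true, y_pred)[0, 1]
print(f"MSE = {mse:.4f}, R = {r:.4f}")
```

A small MSE together with an R close to 1 indicates that the metamodel tracks the true system well between the support points.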


### Visualize and use your response surface to predict your system behavior
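Outside Nexus, a quick way to visualize a two-variable response surface is a contour plot of the metamodel evaluated over a dense grid; the toy system below is again an assumption standing in for the real one:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs anywhere
import matplotlib.pyplot as plt
from scipy.interpolate import RBFInterpolator

# Hypothetical stand-in for the real system
def system(x):
    return np.sin(x[:, 0]) * np.cos(x[:, 1])

# RBF metamodel built on the 5x5 support grid
levels = np.linspace(-3.0, 3.0, 5)
g1, g2 = np.meshgrid(levels, levels)
X_train = np.column_stack([g1.ravel(), g2.ravel()])
rbf = RBFInterpolator(X_train, system(X_train), kernel="thin_plate_spline")

# Evaluate the metamodel on a dense 100x100 grid and draw filled contours
fine = np.linspace(-3.0, 3.0, 100)
f1, f2 = np.meshgrid(fine, fine)
Z = rbf(np.column_stack([f1.ravel(), f2.ravel()])).reshape(f1.shape)

fig, ax = plt.subplots()
cs = ax.contourf(f1, f2, Z, levels=20)
ax.scatter(X_train[:, 0], X_train[:, 1], c="k", s=15, label="support points")
fig.colorbar(cs, ax=ax)
ax.set_xlabel("x1")
ax.set_ylabel("x2")
ax.legend()
fig.savefig("response_surface.png")
```

Once validated, the same cheap evaluations used for the plot can be used to predict the system's behavior at any new point in the domain without running the real (expensive) system.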


### Conclusive remarks

In this post, we have seen how to create a Response Surface and how to validate it before using it to derive general guidelines and relationships between the explanatory variables and the output of a system. We discussed the importance of using one DoE to contain the support points used to generate the approximation, plus a second set of *validation* points.

Finally, let’s see how important and critical the selection of the Response Surface formulation is, by comparing the RBF approach (discussed above) with a more classical second-order polynomial approximation. Both approximations have been built using exactly the same DoE, but the RBF model clearly mirrors our system better, being able to extract more information from the set of 25 sample points than the second-order polynomial model.
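The comparison can be reproduced in spirit with a small sketch: fit both an RBF metamodel and a second-order polynomial (via least squares) on the same 25 support points, then compare their validation errors. The toy system is an assumption, chosen only to have curvature a quadratic cannot capture:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical stand-in for the real system
def system(x):
    return np.sin(x[:, 0]) * np.cos(x[:, 1])

# Shared 5x5 DoE for both metamodels
levels = np.linspace(-3.0, 3.0, 5)
g1, g2 = np.meshgrid(levels, levels)
X_train = np.column_stack([g1.ravel(), g2.ravel()])
y_train = system(X_train)

# Metamodel 1: Radial Basis Functions
rbf = RBFInterpolator(X_train, y_train, kernel="thin_plate_spline")

# Metamodel 2: second-order polynomial, fitted by least squares:
# y ~ c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x1*x2 + c5*x2^2
def quad_features(x):
    return np.column_stack([
        np.ones(len(x)), x[:, 0], x[:, 1],
        x[:, 0] ** 2, x[:, 0] * x[:, 1], x[:, 1] ** 2,
    ])

coef, *_ = np.linalg.lstsq(quad_features(X_train), y_train, rcond=None)

# Compare both on the same random validation set
rng = np.random.default_rng(2)
X_val = rng.uniform(-3.0, 3.0, size=(300, 2))
y_true = system(X_val)
mse_rbf = np.mean((rbf(X_val) - y_true) ** 2)
mse_poly = np.mean((quad_features(X_val) @ coef - y_true) ** 2)
print(f"RBF MSE: {mse_rbf:.4f}  /  quadratic MSE: {mse_poly:.4f}")
```

On a system with this kind of curvature, the quadratic model's validation error is markedly worse than the RBF's, which is exactly the effect the comparison above is meant to demonstrate.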
