Optimized Latin Cube and Latin Square Design of Experiments

This post stems from a question Nexus users frequently ask our Support Team: what is the difference among the available Point Combiners of the Latin Cube and Latin Square Design of Experiments (DoE) algorithms available in Nexus and Grapheme?

Contents

Background

The Design of Experiments modules of Nexus and Grapheme have been specifically designed to make defining and creating a Design of Experiments easy. Creating a Design of Experiments requires two straightforward steps:

  1. creating a set of Design Variables (i.e. free parameters) defining the domain of interest of the Design of Experiments;
  2. choosing the Design of Experiments algorithm that best fits the user's needs. This is done, starting from the free parameters and the domain of interest previously defined, via an ad hoc, multi-page, user-friendly Wizard. As an example, the Nexus wizard is reported below:

    Selecting the Domain of Interest:
    The first page of the Wizard allows defining the domain of interest, i.e. which variables are to be used as the basis for the set of numerical experiments.
    Selecting the Allocation Procedure:
    The second page of the Wizard allows selecting and configuring the actual allocation procedure. Among the available algorithms, the Latin Cube and the Latin Square ones expose the Point Combiner option to end users.

    >> Back to Top

    Latin Cube and Latin Square Allocations

    Basic Concepts

    • Latin Hypercube: Latin Hypercube sampling can be considered an allocation compromise between random and stratified allocation techniques. The Latin Hypercube sampling technique allocates a given number of sample points in an M-dimensional space in which each dimension has its own distribution. The basic idea is to define a base of sample points for each dimension by considering equal-probability intervals. The obtained bases are then combined according to a specified criterion (in the simplest case, in a pseudo-random manner). Generation of points in each dimension is performed separately; the final Latin Hypercube is composed by combining each base of sample points with the others.
    • Latin Squares: Latin Squares were formally investigated by Euler in 1782. The name reflects the fact that Latin Squares were originally based on the use of the Latin alphabet. A Latin Square requires at least three parameters to be properly defined (a Latin Square with 2 parameters is indeed equivalent to a full-factorial allocation). In a three-dimensional space, a Latin Square can be seen as an N by N table of samples filled with N different symbols (levels) in such a way that each symbol occurs exactly once for each parameter, i.e. exactly once in each row and each column. It is worth noticing that many transformations (swaps of rows or columns, permutations of values) can be applied to a given Latin Square to form another Latin Square. Similarly to Latin Hypercubes, Latin Squares can be randomly generated or optimised to meet a required criterion.
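The per-dimension bases described above can be sketched in a few lines of code. The following is a minimal, illustrative Latin Hypercube generator (not the Nexus implementation, and the function name `latin_hypercube` is our own): each dimension gets one sample per equal-probability interval, and the bases are then combined pseudo-randomly.

```python
import numpy as np

def latin_hypercube(n_points, n_dims, rng=None):
    """Allocate n_points samples in n_dims dimensions on [0, 1).

    For each dimension, one sample is drawn from each of n_points
    equal-probability intervals (the per-dimension "base"); the bases
    are then combined pseudo-randomly by shuffling each column.
    """
    rng = np.random.default_rng(rng)
    # One sample per equal-probability interval, for every dimension.
    edges = np.arange(n_points) / n_points
    samples = edges[:, None] + rng.random((n_points, n_dims)) / n_points
    # Combine the bases pseudo-randomly: shuffle each column independently.
    for d in range(n_dims):
        rng.shuffle(samples[:, d])
    return samples

points = latin_hypercube(15, 2, rng=42)
print(points.shape)  # (15, 2)
```

By construction, each column still contains exactly one sample per interval after shuffling, which is the defining property of a Latin Hypercube.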


    >> Back to Top

    Point Combiners

    Once the parameter bases have been defined, Latin Cube and Latin Square allocations combine those bases so as to obtain the final set of experiments.

    Random Combiner

    The simplest and fastest way of combining available bases is the random one.

    However, a random recombination of parameters does not optimize the sample points and therefore offers no control over the quality of the final allocation. By randomly combining the bases, the resulting allocation may turn out to be unevenly distributed within the domain of interest, with points very close to each other in some areas and very far apart in others. These limitations become more important as the number of points to allocate decreases. Hence, the difference between an optimized combination (an iterative combination driven by a quality criterion) and a random combination of points becomes more meaningful when the number of parameters and the number of points are smaller.
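The lack of quality control can be seen numerically. In this illustrative sketch (not the Nexus implementation), the same two 15-sample bases are randomly combined many times, and the minimum pairwise distance of each resulting allocation is recorded: it varies widely from one random draw to the next.

```python
import numpy as np

rng = np.random.default_rng(0)
base = np.linspace(-1.0, 1.0, 15)  # one shared base per dimension

def min_distance(points):
    # Smallest distance between any two points of the allocation.
    diff = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    return d[np.triu_indices(len(points), 1)].min()

gaps = []
for _ in range(200):
    # Random combiner: independently permute each base.
    pts = np.column_stack([rng.permutation(base), rng.permutation(base)])
    gaps.append(min_distance(pts))
print(min(gaps), max(gaps))  # spread between worst and best random draw
```

The gap between the worst and the best random combination is exactly what an optimized combiner tries to close systematically.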

    >> Back to Top

    Optimized Combiner

    On top of the Random Point Combiner, Nexus implements three so-called optimized combiners: Distance, Energy and Entropy.

    These optimized combiners work through an iterative process: at each iteration, the combiner attempts to recombine the already defined bases in order to improve an inner fitness function. The fitness functions are:

    • Distance: the algorithm attempts to combine the bases in such a way that the minimal distance between any two of the resulting points is maximized. The rationale is that by maximizing the minimum distance between the points, the final set of points covers the largest possible portion of the Domain of Interest, also assuring, as a side effect, a more uniform distribution of the points;
    • Energy: the algorithm computes the potential energy of the system of points and attempts to minimize the overall potential energy of the system. The approach is similar to the maximum-distance one, but here a global goodness index is used and therefore the underlying optimization problem is smoother, generally leading to better final distributions;
    • Entropy: conceptually similar to the Energy one, but here the algorithm attempts to minimize the overall entropy of the system.
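The first two fitness functions can be sketched as follows. This is an illustrative formulation only: the exact expressions used by Nexus are not documented here, and the "potential energy" below is taken as the sum of inverse pairwise distances, a common choice for this kind of criterion.

```python
import numpy as np

def pairwise_distances(points):
    # Condensed vector of distances between all distinct point pairs.
    diff = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    return d[np.triu_indices(len(points), k=1)]

def distance_fitness(points):
    # Distance criterion: larger is better (maximize the minimum gap).
    return pairwise_distances(points).min()

def energy_fitness(points):
    # Energy criterion: smaller is better; the assumed "potential
    # energy" is the sum of inverse pairwise distances, so clustered
    # points (small distances) yield a high energy.
    return (1.0 / pairwise_distances(points)).sum()
```

Note the opposite senses of the two criteria: Distance is maximized while Energy is minimized, and Energy aggregates every pair of points into one global index rather than looking only at the worst pair.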


    Iterative combiners allow defining two main properties:

    • Max Iterations: this is the maximum number of iterations that the iterative combiner procedure can perform before stopping the recombination process;
    • Max No Improvements Iterations: this is an exit criterion that allows the recombination algorithm to return (i.e. converge) before the maximum number of iterations is reached. The basic idea is to stop the combiner when no improvement of the fitness value is obtained for the given number of iterations in a row.
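The interplay of the two properties can be sketched as the following iterative loop. This is a simplified illustration, not the Nexus algorithm: it assumes a maximize-style fitness (such as the minimum pairwise distance; for minimize-style criteria like Energy the comparison would simply be flipped) and uses a basic swap move to recombine the bases.

```python
import numpy as np

def min_distance(points):
    # Maximize-style fitness: smallest pairwise distance.
    diff = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    return d[np.triu_indices(len(points), 1)].min()

def optimized_combine(bases, fitness, max_iterations=2000,
                      max_no_improvement=200, rng=None):
    """Iteratively recombine per-dimension bases to improve `fitness`.

    At each iteration two entries of one randomly chosen base are
    swapped (a new combination of the same base values); the swap is
    kept only if the fitness improves. The loop stops at Max
    Iterations, or earlier after Max No Improvements Iterations in a
    row without progress.
    """
    rng = np.random.default_rng(rng)
    points = np.array(bases, dtype=float).T  # one column per parameter
    best = fitness(points)
    stall = 0
    for _ in range(max_iterations):
        d = rng.integers(points.shape[1])            # pick a dimension
        i, j = rng.choice(points.shape[0], 2, replace=False)
        points[[i, j], d] = points[[j, i], d]        # swap two samples
        value = fitness(points)
        if value > best:                             # improvement: keep it
            best, stall = value, 0
        else:
            points[[i, j], d] = points[[j, i], d]    # revert the swap
            stall += 1
            if stall >= max_no_improvement:
                break                                # early convergence exit
    return points, best

bases = [np.linspace(-1.0, 1.0, 15), np.linspace(-1.0, 1.0, 15)]
pts, fit = optimized_combine(bases, min_distance, rng=0)
```

Because the combiner only reorders each base, the marginal distribution of every parameter is preserved exactly; only the pairing of the values changes.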


    As a matter of fact, the global indexes (i.e. Energy and Entropy) require much higher computational times than the minimum-distance criterion. Any iterative (i.e. optimized) point combination is in turn far more expensive than a random allocation.
    Please keep in mind, however, that for a small DoE (3 variables, 30 points) even the worst-case scenario (i.e. Entropy with 2000 iterations and no early-stopping criterion on convergence) requires only about 4 seconds on a standard PC.

    >> Back to Top

    A First Example

    To better illustrate the points highlighted above, please consider the following example:

    • 2 parameters, X and Y both ranging between -1.0 and 1.0
    • 15 points to be allocated as output.

    Allocation results using a Latin Cube allocation with the Random, Distance and Energy combiners are reported below. The iterative combiners have all been run for 5000 iterations.

    Random Combiner
    Distance Combiner
    Energy Combiner

    >> Back to Top

    Conclusive Remarks

    This post illustrated the main concepts behind the so-called Point Combiners of the Nexus and Grapheme DoE modules. The post emphasized that the Energy and Entropy combiners should yield the best allocation of points within the Design of Experiments, at the cost, on the other hand, of much higher computational times.

    We also explained that using optimized combiners becomes less important as the number of allocated points (i.e. the size of each parameter base) increases. For larger DoEs (large meaning the number of points to be allocated relative to the number of design parameters), the use of optimized combiners becomes less necessary, as each base will, on its own, almost saturate its design space.

    >> Back to Top