
Design Optimization of a Boom for Drill Rigs

Ida Haglund Anna Håkansson

EXAM WORK 2011
Mechanical Engineering

Postadress: Box 1026, 551 11 Jönköping
Besöksadress: Gjuterigatan 5
Telefon: 036-10 10 00 (vx)

This exam work has been carried out at the School of Engineering in Jönköping in the subject area mechanical engineering. The work is a part of the Master of Science programme. The authors take full responsibility for opinions, conclusions and findings presented.

Examiner: Niclas Strömberg, JTH
Supervisors: Niclas Strömberg, JTH; Oskar Sjöholm, Atlas Copco
Scope: 30 credits
Date: 2011-06-08


Abstract

This thesis describes how a design optimization of a boom structure for drill rigs can be set up and performed so that the range is increased and the weight reduced. A program that calculates the coverage area from the boom angles and dimensions is coupled, through commercial optimization software, to a finite element analysis of a boom model that is auto-generated by a macro. The macro was recorded while the model was created, and its dimensions were then replaced by varying parameters. Workflows are created in the optimization software, where suitable designs of experiments and optimization algorithms are selected.


Summary

Atlas Copco Rock Drills AB develops and manufactures a machine called the Boomer. It is a series of large, high-capacity drill rigs for tunneling and mining, used to drill blast holes in the rock. One or several arms with drills at their ends are mounted on the Boomer. These arms are called booms, and this thesis has focused on the BUT 45 L, which is the newest and largest boom.

The two-dimensional area in front of the Boomer that can be reached by the boom drill is called the coverage area. This area depends on the dimensions and angles of the boom structure, but so do the boom weight and stiffness. One boom can be up to 12 meters long and weigh more than 5000 kg, and it is Atlas Copco's wish to make a lighter boom. At the same time, a maximized coverage area is preferable, since it results in larger drifts.

The purpose of the thesis was to develop the prerequisites for performing a design optimization of the boom structure which maximizes the coverage area and minimizes the weight. The stresses and deflection in the structure had to be kept within allowed limits.

A program was created in MATLAB to calculate the coverage area as a function of the boom dimensions and angles, using forward kinematics according to Denavit-Hartenberg. To obtain the weight and to make sure that the boom structure would not collapse due to high stresses, a finite element model was created in Abaqus CAE. Macros were recorded while creating the model, and the script that resulted from the macros was worked through in detail. Ten dimensions of the boom were replaced by parameters, and the script could then be used to auto-generate and analyze the complete model for any given parameter values.

The optimization software modeFRONTIER was used to connect the area calculation and the FEA-model and to perform the optimizations. Two single-objective workflows and one multi-objective workflow were created. The first workflow maximized the coverage area, and the resulting dimensions were used as constants in the second workflow, where the weight was minimized by reducing the cross-section of the boom body. In the multi-objective optimization, the coverage area was maximized and the weight minimized in the same workflow. Suitable designs of experiments and optimization algorithms for each optimization were selected in modeFRONTIER. By running the optimizations, it was identified that the coverage area could be increased while the weight was reduced, with allowed stresses and deflections.

Keywords

Forward kinematics

Finite element analysis

Denavit-Hartenberg

Metamodeling

Design optimization

Design of experiments

Multi-objective optimization

Preface

This thesis work has been carried out by two students at the School of Engineering at Jönköping University as a part of the Master of Science programme in Product Development and Materials Engineering. We would like to thank our supervisors for their help and support throughout this thesis work:

Niclas Strömberg – for his objectivity and discretion.
Oskar Sjöholm – for his enthusiasm, commitment and faith in us.

Furthermore, we would also like to thank Peter Öberg and Atlas Copco Rock Drills AB for giving us the opportunity to realize this project. Last but not least, we would like to thank Björn Ryttare for his patience and for devoting time despite the lack of it.

Ida Haglund

Anna Håkansson

Contents

1 Introduction
  1.1 Background
  1.2 Atlas Copco Rock Drills AB
  1.3 The Boomer and the BUT 45 L
  1.4 Problem description
  1.5 Problem formulation
  1.6 Purpose and aim
  1.7 Delimitations
  1.8 Outline

2 Theoretical background
  2.1 Forward kinematics
  2.2 FE-analysis
    2.2.1 FE-elements
  2.3 Optimization theory
    2.3.1 The optimization process
    2.3.2 Multi-objective optimization and Pareto optimality
  2.4 Metamodeling
    2.4.1 Response surface models
    2.4.2 Kriging
    2.4.3 Radial basis function models
    2.4.4 Neural networks
    2.4.5 Advantages and disadvantages of different metamodels
  2.5 Design of experiments
    2.5.1 Sobol design
    2.5.2 Latin hypercube designs
    2.5.3 Incremental space filler
    2.5.4 Taguchi orthogonal array designs
    2.5.5 Full factorial designs
    2.5.6 Fractional factorial designs
    2.5.7 Central composite designs
    2.5.8 Box-Behnken designs
    2.5.9 Latin square design
    2.5.10 Plackett-Burman designs
  2.6 Fast optimizers in modeFRONTIER™
    2.6.1 Working process
    2.6.2 The SIMPLEX algorithm
    2.6.3 The MOGA-II algorithm

3 Method and implementation
  3.1 Coverage area
  3.2 The existing FEA-model
  3.3 The new FEA-model
    3.3.1 Boom support
    3.3.2 Boom link
    3.3.3 Cylinder link
    3.3.4 Fork
    3.3.5 Boom beam 1
    3.3.6 Boom ears
    3.3.7 Tripod cylinders
    3.3.8 Cylinder ends
    3.3.9 Flange
    3.3.10 Boom beam 2
    3.3.11 Gear flange
    3.3.12 Gear case
    3.3.13 Knee
    3.3.14 Bracket
    3.3.15 Feeder fork
    3.3.16 Feeder
    3.3.17 Feeder beam, steel holder and feeder steel
  3.4 Mathematical formulation of the optimization
    3.4.1 Formulation of single-objective optimization problem 1
    3.4.2 Formulation of single-objective optimization problem 2
    3.4.3 Formulation of multi-objective optimization problem
  3.5 Optimization workflows in modeFRONTIER™
    3.5.1 Single-objective optimization workflow 1 in modeFRONTIER™
    3.5.2 Single-objective optimization workflow 2 in modeFRONTIER™
    3.5.3 Multi-objective optimization workflow in modeFRONTIER™

4 Findings and analysis
  4.1 Single-objective optimization 1 in modeFRONTIER™
  4.2 Single-objective optimization 2 in modeFRONTIER™
  4.3 Multi-objective optimization in modeFRONTIER™

5 Discussion and conclusions
  5.1 Discussion of method
    5.1.1 Using modeFRONTIER™
    5.1.2 Formulation of the multi-objective problem
    5.1.3 Used algorithms
  5.2 Discussion of findings
  5.3 Further improvements
    5.3.1 More realistic FEA-model

6 References
7 Search terms
8 Appendices


1 Introduction

The introduction section discusses the background of the project, the regarded product as well as the defined problem.

1.1 Background

Due to a constant wish for shortened product development cycles, fast ways to evaluate design concepts at an early phase in the design process are requested. Advanced finite element analysis software and computer support provide the possibility of performing so-called virtual prototyping. By adding optimization to the process, the work can be made even more efficient, as an optimal design of a product can be retrieved faster (Jurecka, 2007). Atlas Copco Rock Drills AB in Örebro has realized the benefits of optimization and expects to implement it in their design process.

1.2 Atlas Copco Rock Drills AB

Atlas Copco Rock Drills AB is a part of the Atlas Copco Group, founded in Sweden in 1873. The Atlas Copco Group is a world-leading company within development and manufacturing of equipment for heavy industry. The company is divided into several divisions which act within three different business areas: Compressor Technique, Construction and Mining Technique, and Industrial Technique (Atlas Copco, 2010).

1.3 The Boomer and the BUT 45 L

Within the area of Construction and Mining Technique, various equipment and machines for heavy construction work are developed – one is the Atlas Copco Boomer. The Boomer, which is a series of large, high-capacity drill rigs for mining and tunneling, is used for drilling blast holes in the rock, see Figure 1.1. The machine is placed at the end of the drift, where one or several booms mounted on the rig allow the operator to reach a large area of the gallery. These booms are named BUT followed by a model number. This thesis has focused on the BUT 45 L, which is the newest boom in the BUT product family. A drawing can be seen in Appendix A: Drawing of a BUT 45 L.

The BUT 45 L is a five-axis boom and the largest boom in the BUT series. It is stiffer than its predecessors and consequently offers more precise positioning and a larger service range (Atlas Copco, 2007). Since a 165 kg drill with a 650 Nm rotational motor is mounted at the end of the boom, stiffness is of great importance, as large forces arise in the up to twelve-meter-long boom structure while drilling (Atlas Copco, 2009).



Figure 1.1: A Boomer with three booms.

1.4 Problem description

The two-dimensional area which the boom reaches is called the coverage area and is an important factor when constructing and developing new booms. The coverage area is linked to the dimensions and deflection angles, which also affect the forces in the boom structure, the total weight and the required stiffness. In fully extended position the BUT 45 L boom is 11.4 meters long and weighs over 5000 kg.

Figure 1.2: A Boomer in action, drilling holes in the rock using the drills mounted on the booms.



If the boom structure becomes heavier, the extra weight must be compensated for in the Boomer in order to keep the machine stable and prevent it from tipping over. The maximum boom swing angle is set to a fixed value, since too large a swing angle results in a risk of tipping over. It is desirable to reduce the weight and the stresses arising in the boom in order to be able to change the configuration of the dimensions, resulting in a larger coverage area.

The current boom was not developed by optimization, and the possibility that only a local optimum was found is large. If a configuration exists where the coverage area of the boom is increased while the total weight is sustained or even reduced, the theory of a local optimum is plausible. To find out, an optimization routine can be used, in which a number of concepts are compared and evaluated by the computer.

In order to set up an optimization, the design of the boom has to be rigorously documented and examined. The objective function in an optimization depends on design variables and constraints. Without explicit answers to what is to be optimized, it will be impossible to fulfill the main purpose. More on how optimizations are carried out is described in chapter 2: Theoretical background. Depending on the answer to what is to be optimized, a suitable optimization algorithm has to be chosen. Based on the knowledge retrieved during the mapping and analyzing process, a suitable method will be selected.

The linking between the software has to be designed for implementation in the current research and development at Atlas Copco Rock Drills AB. Consequently, it has to be able to communicate with the software available at the department.

1.5 Problem formulation

The problem formulation has throughout the project been:

"How can an optimization of the boom structure, which maximizes the coverage area and minimizes the total weight while the stresses in the structure are kept within allowed limits, be set up and performed?"

To solve the problem, two sub-questions were identified:

"How can the coverage area be calculated so that the boom dimensions can be changed with an accurate outcome as a result?"

"How can an FE-analysis of the boom be carried out that functions realistically for all configurations and gives the stress results and volume as output?"

1.6 Purpose and aim

The main purpose of the project is to develop the prerequisites that enable an optimization which maximizes the coverage area and minimizes the weight of the BUT 45 L while the stresses in the structure are kept within allowed limits. Such an optimization would imply a shortened development time for upcoming booms.


The aim is to run an optimization of the boom and to compare the results with the current configuration.

1.7 Delimitations

The BUT 45 L is available in two different versions, one with the feeder mounted on the top and one with the feeder mounted on the side of the boom body. In this thesis only the top-mounted version is considered. The optimization will concern the boom structure from boom support to gear flange. This delimitation has been made in accordance with the engineers at Atlas Copco, since the dimensions of the tripod solution have proven to be the crucial factor when increasing the coverage area. The angles and dimensional variables can be seen in Appendix B: Angles and variables.

1.8 Outline

Each section of this report deals with the three main areas described in the problem formulation and how they are solved: the calculation of the coverage area, the FE-analysis of the boom, and the optimization of the boom where the three subjects are connected.

The theoretical background explains the method used to calculate the coverage area and gives a brief introduction to FE-analysis and element types. The theory behind optimization problems is also explained, as well as metamodeling, design of experiments and the fast optimizers in the optimization software modeFRONTIER™.

The method and implementation section describes how the coverage area calculation is carried out and how the current FEA-model and the new FEA-model differ. It also deals with the mathematical formulation of the optimization problem and how the problem can be set up in modeFRONTIER™.

Under findings and analysis, the results of the final optimizations are revealed and compared to the current boom configuration. The project is concluded with a discussion, and further opportunities are presented.



2 Theoretical background

In the theoretical background, the calculation routine used for the coverage area, forward kinematics, will be explained. It will also concern the theory of optimization, explaining metamodels, different designs of experiments and the fast optimizers available in modeFRONTIER™.

2.1 Forward Kinematics

In order to calculate the coverage area of a boom, the original developer of the calculation program turned to the area of robot control and modeling. A boom is similar to a robot arm in the way that they both consist of links and joints which form a kinematic chain. To determine the position of the end effector of a kinematic chain, given known values of the joint variables, forward kinematics is used.

Within the area of forward kinematics, a method called the Denavit-Hartenberg convention, or DH convention, is commonly used. In this methodology it is assumed that all joints have only a single degree of freedom. Joints with more than one degree of freedom can be modeled as a sequence of one-degree-of-freedom joints with zero spacing. By this assumption it is possible to describe the displacement of the joint in terms of one rotation (or one translation for prismatic joints). Every link in the kinematic chain is assigned a rigid coordinate frame. As a joint is actuated, the connected link will move in space. The position of a link is described in terms of the preceding link by a homogeneous transformation matrix, which is a function of the rotation of the joint (Spong et al., 2006).

Introducing a framework and assigning the coordinate frames according to the DH convention makes it possible to represent the motion of each link with four basic transformations. An arbitrary homogeneous transformation matrix generally consists of six parameters. The reduction in parameters can be made by the following assumptions (Spong et al., 2006):

1. The axis x_i is perpendicular to the axis z_{i-1}.
2. The axis x_i intersects the axis z_{i-1}.

The frame rigidly attached to the link does not have to lie within the physical link but can exist in free space. Figure 2.1 shows coordinate frames which fulfill the assumptions of the DH convention. Due to the DH convention of assigning coordinate frames, the transformation matrix depends on the following four parameters: two rotations, θ and α, and two translations, a and d. The matrix is:

$$A_i = \begin{bmatrix}
\cos\theta_i & -\sin\theta_i \cos\alpha_i & \sin\theta_i \sin\alpha_i & a_i \cos\theta_i \\
\sin\theta_i & \cos\theta_i \cos\alpha_i & -\cos\theta_i \sin\alpha_i & a_i \sin\theta_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{bmatrix}$$


The parameter a is usually named the link length, α the link twist, d the link offset and θ the joint angle. How these are measured can be seen in Figure 2.1. The first three parameters are constant for a known link, while the joint angle is variable (Spong et al., 2006).


Figure 2.1: Coordinate frames, o_{i-1} and o_i, which satisfy the assumptions made in the DH convention (Spong et al., 2006).

There is no unique way of assigning the coordinate frames; derivations may differ, but the result will end up the same. A summary of the procedure of assigning coordinate frames according to Denavit-Hartenberg is found in Appendix C: Forward kinematics.
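To make the chaining of DH transformations concrete, the following minimal Python sketch computes an end-effector position from given joint angles (the thesis program itself was written in MATLAB; the three-link parameters below are made up for illustration and are not BUT 45 L data):

```python
import numpy as np

def dh_matrix(theta, alpha, a, d):
    """Homogeneous transformation between consecutive DH frames."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def end_effector_position(joint_angles, link_params):
    """Multiply the link transforms in order and read off the translation."""
    T = np.eye(4)
    for theta, (alpha, a, d) in zip(joint_angles, link_params):
        T = T @ dh_matrix(theta, alpha, a, d)
    return T[:3, 3]

# Hypothetical three-link chain: (alpha, a, d) per link, angles in radians.
links = [(np.pi / 2, 0.5, 0.2), (0.0, 4.0, 0.0), (0.0, 3.5, 0.0)]
print(end_effector_position([0.1, 0.4, -0.2], links))
```

Sweeping the joint angles over their allowed ranges and collecting the resulting tip positions is the basic idea behind a coverage-area calculation of this kind.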

2.2 FE-analysis

When designing new products, the physical behavior of the geometry is important to analyze, since it can predict how the final product will respond when in use. By dividing the investigated geometry into smaller parts, so-called finite elements, that are connected by shared nodes, the behavior of each element as well as the total performance of the physical structure can be obtained. Every element represents a discrete part of the total geometry, and a collection of elements is called a mesh (Dassault Systèmes, 2009).

The stress and strain in each element can be determined by computing the displacements of the nodes. Both the magnitude and the distribution can be analyzed. The most critical regions can give the engineer a hint of where and how to redesign the object, with a more equally distributed stress as a result. When computers are used to simulate the finite elements, a fast and accurate result will be achieved on the condition that suitable selections are made by the engineer throughout the procedure (Dassault Systèmes, 2009).

2.2.1 FE-elements

There are eight commonly used element types for stress analysis available in Abaqus: continuum, beam, truss, shell, rigid, membrane, infinite and connector elements. The three element types used in the new FEA-model are continuum, beam and truss elements, whose main features are explained below (Dassault Systèmes, 2009).

Beam and truss elements are both one-dimensional line elements which act in two or three dimensions and have been assigned stiffness against deformations. By approximating a three-dimensional part as slender, the beam or truss can be treated as a one-dimensional element in three dimensions, given a constant cross-section. This assumption can only be made if the cross-section is small compared to the axial length. The type of deformations that can be supported is the main difference between beam and truss elements (Dassault Systèmes, 2009).

Truss elements

The only deformations supported in trusses are compression and tension, i.e. those originating from axial forces. No forces normal to the truss axis, and no moments, are supported. If the cross-section is not manually defined in Abaqus, an area will be assumed by the software. A material must be selected, but it is not allowed to define a material orientation in trusses. Two types of truss elements are available in Abaqus: 2-node straight elements, which have a constant stress and use linear interpolation for positions and displacements, and 3-node curved elements, which use quadratic interpolation and offer a varying strain along the structure (Dassault Systèmes, 2009).
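As a small illustration of the 2-node truss assumption – axial stiffness only, no bending or shear – the following sketch assembles the element stiffness matrix of a single bar in Python (a generic textbook construction, not part of the thesis FEA-model; the values for E, A and the node coordinates are made up):

```python
import numpy as np

def truss_stiffness_2d(x1, y1, x2, y2, E, A):
    """Stiffness matrix of a 2-node truss element in the global 2D frame.

    Only axial deformation is represented, matching the truss assumption
    that no bending moments or transverse forces are carried.
    """
    L = np.hypot(x2 - x1, y2 - y1)
    c, s = (x2 - x1) / L, (y2 - y1) / L
    k = E * A / L
    # Direction cosines project the axial stiffness onto the global DOFs.
    T = np.array([[c * c, c * s], [c * s, s * s]])
    return k * np.block([[T, -T], [-T, T]])

# Made-up example: steel bar (E = 210 GPa, A = 1e-3 m^2) from (0,0) to (2,1).
K = truss_stiffness_2d(0.0, 0.0, 2.0, 1.0, 210e9, 1e-3)
print(K / 1e6)  # stiffness in MN/m
```

Beam elements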

In beam elements, deformations can arise from axial forces, shear, torsion and bending, in contrast to truss elements, where only axial forces are supported. The beam section can generally not be deformed in its own plane, and the use of beam elements should therefore be considered carefully in each case. An exception to this assumption is, for example, when open-section beams are used and warping occurs (Dassault Systèmes, 2009).

Beams have few degrees of freedom and are geometrically simple, which implies that the computational time is reduced – one of the main advantages of beam elements. The bending moment must be defined as a function of the curvature, the torque as a function of the twist, and the axial force as a function of the strain. Two types of section definitions are available in Abaqus: a general beam section, where the cross-section properties are established once in the preprocessor, and a beam section integrated during the analysis, where the behavior is recomputed during the analysis (Dassault Systèmes, 2009).

The beam is created along a wire region, and the section profile must be defined as well as the section orientation. The cross-section selection affects the beam behavior and can either be chosen from the list in Abaqus or created manually. Cross-sections can be solid or thin-walled, and the latter closed or open (Dassault Systèmes, 2009).

Continuum elements

Continuum or solid elements are the standard volume elements in Abaqus and can be used when contact, plasticity and large deformations occur, but also in linear analyses. There are several solid element types available, and it is important to choose a suitable type for the particular case. Some characteristics to consider are whether the elements should be hexahedral/quadrilateral or tetrahedral/triangular, first or second order, reduced or full integration, and of normal, hybrid or incompatible-mode formulation. To get an accurate result, the elements should be as equal in size and as well-arranged as possible. The solid part must be assigned a section, which can consist either of one material throughout or of several layers of materials in a laminated composite design (Dassault Systèmes, 2009).

2.3 Optimization Theory

By applying optimization to a defined problem, a solution can be found that fulfills the given requirements and maximizes or minimizes the outcome.

2.3.1 The Optimization Process

For carrying out an optimization there exists a methodology for setting up and solving the assignment, see Figure 2.2. In this section the process and the activities of each step are explained.

First of all, the problem has to be identified in order to be simplified and formulated in a mathematical way. Reality is often very complex, and reasonable estimations and delimitations have to be made for the problem to be solvable. At the same time it is important that the problem is not so generalized that it fails to represent the real conditions in a correct way. It is a careful balance between solvability and accuracy, where decisions have to be taken according to the desired level of ambition. The identification process is a good source for acquiring knowledge and understanding of the system and the relations within it (Lundgren et al., 2003).

The outcome of the first step is a simplified problem; the way in which the simplified problem is described mathematically is called an optimization model. The problem has to be formulated in terms of an objective function depending on different variables, which are limited by a number of constraints (Lundgren et al., 2003).




Figure 2.2: A flow chart of the optimization process; the various activities and the outcome of each step.

The objective function, or target function, is a function of the design variables. The design variables are parameters which can be changed in order to alter the design. They are generally continuous but might be discrete, e.g. due to the available assortment of cylinder sizes. Design variables can be dimensions of beam cross-sections or lengths, or other attributes, like holes and fillets. Depending on the design variable type, structural optimization can be divided into three different disciplines: size, shape and topology optimization (Jurecka, 2007), see Figure 2.3.

Figure 2.3: Illustration of the various disciplines in structural optimization; size, shape and topology optimization.



Size optimization is the simplest optimization task and can be used to solve problems concerning the minimum cross-sectional area of truss elements for a fixed geometry. Shape optimization involves geometrical parameters and changes the shape within given limits. In topology optimization, material can be added or removed. A structural optimization problem can also concern combinations of the different disciplines (Jurecka, 2007).

The constraints, or design requirements, can also depend on state variables, derived from constraints on the state of the system. These state variables can be included in the objective function. The state constraints are often limited to be equal to or above/under a certain value, while the design variables can change within an interval (Strömberg, 2010). The constraints are formulated as equality constraints, h(x) = 0, or inequality constraints, g(x) ≤ 0, as for the state variables. The upper and lower limits of the design variables define the design space, while the constraints divide the design space into a feasible and an infeasible domain (Jurecka, 2007), see Figure 2.4.

Figure 2.4: The feasible region (grey area), restricted by the constraint functions g1(x) ≤ 0 and g2(x) ≤ 0, and by the bounds of the design variables.

The objective function is a formulation of what is to be optimized; the objective is a measurement of the performance of the different variable combinations. Often it is formulated as a minimization problem, but this does not cause any restrictions; maximization problems are simply re-formulated as minimization problems (Jurecka, 2007). Once the problem has been formulated mathematically and the optimization model is fully defined, it is possible to determine which optimization method to use and which algorithm to implement. As all the previous steps have been worked through, a suggested solution is generated. The solution has to be evaluated and validated, since the generalizations and simplifications of the problem will generate an approximate solution. The end of the optimization process is reached when the suggested solution has been processed into a satisfactory result (Lundgren et al., 2003). Optimization problems with two or more objective functions are dealt with within the area of multi-objective optimization; the subject is treated in the next section of this report.
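Summarizing the model described above in a generic standard form (a common formulation consistent with the description, not quoted verbatim from the thesis):

$$\begin{aligned}
\min_{x} \quad & f(x) \\
\text{s.t.} \quad & g_j(x) \le 0, \quad j = 1, \dots, m, \\
& h_k(x) = 0, \quad k = 1, \dots, p, \\
& x^{L} \le x \le x^{U},
\end{aligned}$$

where f is the objective function, g_j and h_k are the inequality and equality constraints, and the bounds x^L and x^U define the design space.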


2.3.2 Multi-objective optimization and Pareto optimality

For optimization problems with several objectives, the functions might be conflicting, e.g. trying to reduce mass while striving for lower stresses. No single optimal solution can be distinguished for these problems; all optimal solutions will be compromises between the different objectives. For problems where the objectives are interchangeable, all functions except one will be unnecessary. For single-objective problems and problems with several compatible objectives it is fairly easy to determine the optimal solution, in contrast to problems with multiple conflicting objectives (Messac & Mullur, 2007).

One way of dealing with several objective functions is to minimize one of them and treat the other objectives as constraints. A downside of turning a soft preference into a hard constraint is that the final solution might largely depend on the chosen value for the constraint, which may lead to an inadequate solution. Instead of re-formulating objectives into constraints, multi-objective problems and their solutions can be managed in a more efficient way – by the concept of Pareto optimality (Messac & Mullur, 2007).

For multi-objective problems two different questions are important to consider: how is the solution defined for this type of problem, and how is the problem solved? One answer to the first question is that, for Pareto optimality, a solution is considered optimal if the improvement of one objective would lead to a deterioration of another objective. This means a compromise between the different objectives, and compromises are a central subject within multi-objective optimization. An optimal solution will be the one resulting in an optimal compromise, or trade-off, between the design objectives. Identification of a representative Pareto optimal set is the aim of many of the current methodologies for managing multi-objective problems (Messac & Mullur, 2007).

Pareto optimal sets can at times easily be identified in a design space – as a so-called Pareto front, see Figure 2.4. The Pareto front consists of all Pareto optimal, i.e. non-dominated, points in the design space. Mathematically, no Pareto optimal solution is better than another – it is up to the engineer to determine the weights of the different objectives, and this is the answer to the second question. What the Pareto front provides is a good visualization of the characteristics of the different objectives and guidance in decision making (Messac & Mullur, 2007). A small sketch of how a Pareto front can be extracted from a set of evaluated designs is given after the figure.



Figure 2.4: Pareto front for a bi-objective problem.
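A minimal sketch of filtering the non-dominated points out of a set of evaluated designs, assuming both objectives are to be minimized (a generic illustration with made-up numbers, not the procedure used in modeFRONTIER™):

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows of `points` (all objectives minimized).

    A point is dominated if another point is at least as good in every
    objective and strictly better in at least one.
    """
    points = np.asarray(points)
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return points[keep]

# Made-up trade-off: objective 1 = weight [kg], objective 2 = -coverage area,
# so that maximizing coverage becomes a minimization.
designs = np.array([[5000, -80], [4800, -75], [5100, -78], [4900, -79]])
print(pareto_front(designs))  # [5100, -78] is dominated by [4900, -79]
```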

2.4 Metamodeling

In order to reduce the computational time of complex optimization problems, metamodels are used to reduce the number of design evaluations. A metamodel is an approximate mathematical expression for the correlation between the different input variables and the outcome of an optimization problem. Briefly described, a metamodel describes the response of the outcome as the variables are altered and provides predictions of the behavior at untested points. Reliable metamodels are generally created from an existing set of designs of experiments, DoE. What characterizes a proper set of test points, or training data, is that they provide as much information as possible in relation to a minimum effort (Jurecka, 2007). How designs of experiments are planned is treated more thoroughly later in this report.



Behavior models of an original design are more complex, providing more accurate results than a metamodel; on the downside, evaluating them might require large amounts of computational time. In comparison, metamodels are cheaper to evaluate and are an efficient way of reducing time (Jurecka, 2007). Nevertheless, it must be noted that metamodels are approximations; the optimal solution for the metamodel might not always be the optimal solution for the original problem. However, the solution of the metamodel can provide a solution close enough to the optimal design to upgrade the original design (Strömberg, 2010).

Figure 2.5: Response surface in MATLAB™.

Aside from reducing computational time there are a number of other benefits of metamodels. One is a better understanding of the problem and the design space, as the engineer investigates the system and its correlations. Metamodels can also increase the computational speed, as design experiments can be carried out and evaluated simultaneously. Multiple information sources can be included in a metamodel, which enables evaluation and analysis of multi-physical problems (Jurecka, 2007). The fast optimizers in modeFRONTIER™ use four different methods for metamodeling: polynomial singular value decomposition, Kriging, radial basis function models and neural networks.

2.4.1 Response surface models

A certain type of metamodel is the so-called response surface model (RSM); just as other types of metamodels, RSMs provide the user with an analytical "surface" of the predicted behavior. Another name for RSM is polynomial regression models, since polynomials are fitted to the training data by regression functions (Jurecka, 2007). The aim of RSM is to create an explicit functional relationship by fitting the free parameters β of a regression function η(v, β) to the response values. The relationship between the input variables v and the output value y is given by

$$y = \eta(v, \beta) + \varepsilon,$$

where ε is the inaccuracy of the model and β are the so-called regression variables. The regression coefficients can be estimated by a least-squares approach, i.e. by minimization of the residuals. Residuals are the differences between the observed values and the predicted values. From the propagation of the residual values it is possible to determine the quality of the response surface: if a plot of the residuals shows no distinct pattern, the regression model can be considered suitable (Jurecka, 2007).

Taylor expansion makes it possible for the regression function to be fitted to higher-order problems. A linear regression function takes the form

$$\eta(v, \beta) = \beta_0 + \sum_{i=1}^{n} \beta_i v_i,$$

and a Taylor expansion gives a function suitable for second-order behavior,

$$\eta(v, \beta) = \beta_0 + \sum_{i=1}^{n} \beta_i v_i + \sum_{i=1}^{n} \sum_{j \ge i}^{n} \beta_{ij} v_i v_j.$$

Usually, RSMs are sequentially fitted to different sub-regions of the problem in an iterative process, in order to create as good predictions as possible (Jurecka, 2007). Polynomial Singular Value Decomposition (SVD) in modeFRONTIER™ functions in the same way as the polynomial regression models, or RSM. Polynomial SVD is described as an effective but imprecise response surface method for producing metamodels. The sum of squared errors is minimized by producing the best-fitting polynomial based on the available training set, as previously stated for RSM. The speed of the training process makes SVD suitable for generating reliable guesses and detecting regions of interest rather than providing the strict behavior of the function (Rigoni, 2010). A least-squares fit of this kind is sketched below.
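As a minimal illustration of fitting a second-order regression function by least squares (generic code with made-up training data, not the modeFRONTIER™ implementation; NumPy's lstsq solves the system via SVD internally):

```python
import numpy as np

# Made-up training data: responses sampled from an unknown noisy process.
v = np.linspace(-1.0, 1.0, 9)
y = 2.0 + 0.5 * v - 1.5 * v**2 + 0.05 * np.random.default_rng(0).normal(size=v.size)

# Design matrix for a quadratic regression function eta(v, beta).
V = np.column_stack([np.ones_like(v), v, v**2])

# Least squares minimizes the sum of squared residuals.
beta, *_ = np.linalg.lstsq(V, y, rcond=None)

predict = lambda vq: beta[0] + beta[1] * vq + beta[2] * vq**2
print(beta)            # estimated regression coefficients
print(predict(0.3))    # prediction at an untested point
```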

2.4.2 Kriging

Kriging was originally a statistical method used within the area of geostatistics, but it is today a popular tool for metamodeling and RSM. Its popularity stems from its ability to interpolate response values exactly at a set of training points. The Kriging approximation is formulated as

$$y(v) = \eta(v, \beta) + Z(v).$$



The term η(v, β) has previously been identified as a polynomial with free parameters β, and ε as the approximation error; Z(v) is a Gaussian random process with zero mean, variance σ² and non-zero covariance (Jurecka, 2007). Kriging interpolation uses known design values to predict the values at untested points, by weighting the sum of the known data and minimizing the mean squared error. The covariances of the sample points in the search area are collected in a matrix. This matrix is then inverted to obtain the weights; as a result, large divergences turn into small weights (Lovison, 2007). Interpolation of the observed values at the sample points is, according to Jurecka, assured due to "local deviation": the predictions at the sample points become the same as the observed values and no residuals remain (Jurecka, 2007). In modeFRONTIER™ the predicted value is expressed as a linear combination of the training values – the linear Kriging estimator. Mathematically,

$$\hat{y}(x) = \sum_{i=1}^{n} \lambda_i \, y(x_i),$$

where the weights λᵢ are point-dependent (Rigoni, 2010).

The correlation between the different variables is described by a covariance function. The model function of the covariance is also known as the variogram, which expresses the spatial variation. The miscalculation of the predicted values is minimized by estimation of the spatial distribution. Through the covariance function, the smoothness of the model can be controlled; this also determines whether the algorithm is approximate or interpolative (Lovison, 2007). The covariance function controls the smoothness of the model and determines the correlation as a function of the distance h between points; for the Gaussian case it can be written as

$$C(h) = \sigma^2 e^{-(h/r)^2},$$

where σ is the asymptotical value of the variogram function γ(h) and r is the range. The default variogram type is Gaussian (Rigoni, 2010). Three different parameters of the variogram can be varied: range, sill and noise. The range decides whether the outcomes at two points should be correlated or not, depending on the distance between the tested points. The sill is the asymptotical value and determines the variability of the function. The noise depends on the standard error of the expected response. If the noise of the variogram is large, the result will be a smooth response, while exact interpolation is achieved by setting the noise to zero (Rigoni, 2010).
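The following sketch shows simple Kriging in one dimension with a Gaussian correlation model, illustrating both the point-dependent weights and the exact interpolation at sample points (a textbook construction with made-up data, not modeFRONTIER™'s implementation; the range parameter r is chosen arbitrarily):

```python
import numpy as np

def gaussian_cov(h, sigma2=1.0, r=0.4):
    """Gaussian covariance: correlation decays with squared distance."""
    return sigma2 * np.exp(-(h / r) ** 2)

def kriging_predict(x_train, y_train, x_query):
    """Simple Kriging: the prediction is a weighted sum of training values."""
    H = np.abs(x_train[:, None] - x_train[None, :])   # pairwise distances
    C = gaussian_cov(H)                               # covariance matrix
    c = gaussian_cov(np.abs(x_train - x_query))       # covariances to query
    lam = np.linalg.solve(C, c)                       # point-dependent weights
    return lam @ y_train

# Made-up training points; note the exact interpolation at a sample point.
x = np.array([0.0, 0.3, 0.7, 1.0])
y = np.sin(2 * np.pi * x)
print(kriging_predict(x, y, 0.3))   # reproduces y at the sample point
print(kriging_predict(x, y, 0.5))   # prediction at an untested point
```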


2.4.3 Radial basis function models

Radial basis function models are metamodels especially well-suited for approximating functions depending on many variables/parameters or on large amounts of data. Radial basis functions (RBF) also manage scattered data, which is data sampled independently of a grid (Buhmann, 2003). What scattered data and data from grids are is explained more closely in section 2.5 Design of experiments. This type of metamodel combines a polynomial part and a radial basis function for prediction of the response. Radial basis functions are, according to Buhmann, usually approximations by "finite linear combinations of translates of a radially symmetric basis function, say φ(‖·‖), where ‖·‖ is the Euclidean norm" (2003, p. 3). Radial symmetry in this case means that possible rotations have no effect on the value of the function, since the function value depends only on the Euclidean distance (Buhmann, 2003). Radial functions can take various forms; for instance, a cubic function is φ(r) = r³, and the Gaussian function is φ(r) = e^(−αr²), where r ≥ 0 and α > 0 is a constant. While interpolation is ensured by the radial function, the polynomial part of the model provides the global trend of the behavior. The set of regression parameters should not be larger than the training set, and the polynomial part should be chosen accordingly. Some radial functions, like the Gaussian, do not require a polynomial term (Jurecka, 2007).

In modeFRONTIER™ the RBF interpolant is formulated as

$$s(x) = \sum_{i=1}^{n} c_i \, \varphi\!\left(\frac{\|x - x_i\|}{\delta}\right),$$

where ‖·‖, as previously mentioned, is the Euclidean norm, φ is a radial function chosen by the user, n is the number of sampling points of the function f(x), and δ is a scaling parameter (Rigoni, 2007). The available radial functions in modeFRONTIER™ are summarized in Table 2.1. For the radial basis functions requiring a polynomial term, the degree m of the polynomial matters. The polynomial term has the form

$$p(x) = \sum_{j=1}^{q} b_j \, \pi_j(x),$$

where {π_j} is a basis of the linear space Π of all real-valued polynomials in d variables of degree at most m. The dimension of Π is denoted q and is defined as

$$q = \binom{m + d}{d}.$$

For three variables and a second-order polynomial, the polynomial design space would be 10-dimensional (Rigoni, 2007).



Gaussian Duchon’s Polyharmonic Spline

log

Hardy’s MultiQuadrics

1

Inverse MultiQuadrics

1

Wendland’s Compactly Supported C2

d odd d even

/

1 /2 /2

0

/

1 1 1

3 4 5

1 1 1

d d d

1 2, 3 4, 5

Table 2.1: Radial basis functions available in modeFRONTIER™ – for functions requiring an additional polynomial term the degree of the polynomial m for a certain number of d variables has been specified.
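To show how an interpolant of this form behaves, here is a small sketch using the Gaussian radial function (generic code under the stated form of the interpolant, not the modeFRONTIER™ implementation; the scattered sample data and the scaling parameter δ are made up):

```python
import numpy as np

def rbf_fit(X, y, delta=0.5):
    """Solve for the coefficients c_i of the RBF interpolant."""
    H = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    Phi = np.exp(-(H / delta) ** 2)                             # Gaussian radial function
    return np.linalg.solve(Phi, y)

def rbf_predict(X, c, x_query, delta=0.5):
    """Evaluate s(x) = sum_i c_i * phi(||x - x_i|| / delta)."""
    h = np.linalg.norm(X - x_query, axis=-1)
    return np.exp(-(h / delta) ** 2) @ c

# Made-up scattered 2D samples of an unknown response.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(20, 2))
y = X[:, 0] ** 2 + np.sin(X[:, 1])

c = rbf_fit(X, y)
print(rbf_predict(X, c, np.array([0.2, -0.3])))
```

No polynomial term is needed here, since the Gaussian radial function does not require one.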

2.4.4 Neural Networks

Neural networks are a metamodeling type which imitates the human brain's way of processing data. A human brain consists of over 100 billion neurons, each connected by some 10 000 synapses, forming a complex network capable of reasoning and storing data (Jurecka, 2007). The nodes in a neural network are mathematical models of neurons. In the network, information is passed from an input layer to an output layer via a hidden layer. All components in the network are uniquely numbered in order to be identified; nodes are numbered continuously and denoted by an index i. Connectors are denoted by a double index ij, where the first letter i stands for the receiving unit and the second letter j for the emitting unit. A single unit i only accepts one-dimensional data, which is transformed by an activation function ηᵢ into the nodal output oᵢ. The net input, which is the scalar input of node i, is symbolized by netᵢ (Jurecka, 2007). The net input is defined as

$$\mathrm{net}_i = \sum_{j \in A_i} w_{ij} \, o_j,$$

where Aᵢ is the set of nodes anterior to unit i, and the wᵢⱼ are characteristic properties defined as individual weights. The output value of node i is passed forward to all posterior nodes after the activation function has been applied to netᵢ (Jurecka, 2007). The data flow is illustrated in Figure 2.6.

13

Theoretical background o1 o2

wi1 wi2

oi oi

o3 o4

wi3

net

oi

wi4

Figure 2.6: Data processing in a neural network at a node i with four anterior nodes and three posterior nodes.

In modeFRONTIER™ the data transfer process is given by the net input u of the scalar inputs xᵢ and the weights wᵢ, but the free parameters of the model are also taken into account. The free parameters are represented by the bias b. The net input u is processed by the activation function f, giving the output y (Rigoni, 2010). Mathematically,

$$u = \sum_{i} w_i x_i + b, \qquad y = f(u).$$

As previously mentioned, neural networks imitate the brain and its ability to learn from examples. The network's weights and biases are altered to minimize the prediction errors given a training set and a target for the output (Rigoni, 2010). If the number of neurons in a single non-linear hidden layer is sufficiently high, it is possible to represent any arbitrary function with a linear output by a neural network. modeFRONTIER™ uses feedforward networks, where the output from a node is only passed on through the subsequent layers, see Figure 2.7. The opposite of such networks are so-called recurrent networks, where the output from one node can be passed as input to another node in the same layer (Jurecka, 2007).


Figure 2.7: Neural network with one hidden layer where variables are transferred through the different layers.



In the neural network in modeFRONTIER™, the hidden layer has a sigmoid (s-shaped) activation function and the output has a linear activation function. The training algorithm is carried out with a fixed number of iterations, and initially the network's weights are set to small random values. The number of neurons in the hidden layer is also set automatically by the software (Rigoni, 2010). A forward pass through such a network is sketched below.
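As an illustration of the forward pass of a network of this shape – one sigmoid hidden layer and a linear output – the following sketch uses made-up weights; it only evaluates the network and does not implement the training algorithm:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def forward(x, W_hidden, b_hidden, w_out, b_out):
    """One hidden layer with sigmoid activation, linear output node."""
    hidden = sigmoid(W_hidden @ x + b_hidden)   # net input + activation per node
    return w_out @ hidden + b_out               # linear output activation

rng = np.random.default_rng(2)
n_inputs, n_hidden = 5, 8
# Initial weights set to small random values, as in the training description.
W_h = 0.1 * rng.normal(size=(n_hidden, n_inputs))
b_h = np.zeros(n_hidden)
w_o = 0.1 * rng.normal(size=n_hidden)

print(forward(np.array([0.2, -0.1, 0.5, 0.0, 0.3]), W_h, b_h, w_o, 0.0))
```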

2.4.5 Advantages and disadvantages of different metamodels

Since several different methods for metamodeling are used in modeFRONTIER™, the imperfections of the different models are compensated for – a metamodel with weaknesses for one type of problem might prove advantageous for other types. By sequentially executing the different models and applying the one with the best result, the computational effort is spent where it is needed the most. In the following section, some of the advantages and disadvantages of the different metamodeling types are reviewed.

According to Jurecka, polynomial regression models have shown to be especially suitable for problems with fewer than 10 variables and problems governed by linear or quadratic effects. Since RSMs are smoothing, they are also suitable for problems where the original response suffers from large amounts of noise. They are also attractive for their cheap fitting process, and the computational effort for predictions is insignificant (Jurecka, 2007).

One of the most attractive characteristics of Kriging models is their flexibility; a large variety of correlation formulations is feasible. The combination of a Gaussian process and a polynomial function makes the model suitable for universal applications, since the Gaussian process compensates for possible deficiencies of the polynomial term. One disadvantage of Kriging is that if the sample points lie too close to each other, the correlation matrix will be ill-conditioned and the predictions of the response will be unreliable. Another disadvantage is the sensitivity to noise, due to its interpolative capability (Jurecka, 2007). In modeFRONTIER™ this can be avoided with the right covariance function, see the closer explanation in section 2.4.2 Kriging. Compared to RSM, Kriging approximations are much more demanding in computational time, since the fit to the sample points is optimized. As a result of this computational demand, Kriging is not suitable for problems concerning more than 50 input variables (Jurecka, 2007).

Just as Kriging, RBF models are very flexible and have a wide range of applications, owing to the wide diversity of radial basis functions available. The lowest order of the polynomial term has to be regarded in order to produce a unique solution. The limit of 50 variables is not an issue for RBF models, since they are cheaper in computational time compared to Kriging, which has to incorporate optimization of the fit to the sample points. Given a sufficient amount of proper training points, RBF models will produce accurate predictions even for highly non-linear problems (Jurecka, 2007).



The ability of neural networks to combine hidden layers and numbers of nodes in various ways makes them very adaptable to different types of problems. This versatility puts demands on the experience of the user, and it is necessary to make conscious decisions between the different alternatives when setting up a neural network (Jurecka, 2007). Common to all the metamodels above is that they require a proper set of sample points in order to create accurate predictions of the response. How the design of experiments should be planned and set up is treated in the next section of this report.

2.5 Design of Experiments

It has previously been stated in this report that the best training points are those providing as much information as possible in relation to a minimum effort. How these training points should be selected is the subject of the area of Design of Experiments, DoE, and its associated methodologies. In this section some different methods are described more closely. The selection of techniques has been made according to the methods available in the software used in this thesis work.

An experimental design is a set of experiments to be carried out, where each experiment is expressed by a number of input variables, or factors. The factors are generally the design variables and noise factors of the problem. Once it has been established which variables and noise parameters to take into consideration for the experiment, it is possible to select a suitable DoE technique. The proper choice of DoE depends on several factors. First, what is the aim of the experiment? The prerequisites are different depending on whether the aim is to create a training set for a metamodel or to obtain information about factor relations. Second, the understanding and knowledge of the problem to be analyzed: it is relevant to recognize whether the behavior of the response is linear or non-linear, and to identify regions of particular interest or non-interest. Third, and last, the number of experiments desired or required: the conflict between the amount of information obtained and the calculation time of the experiment must be assessed (Jurecka, 2007).

Generally, for design of experiments the design variables are scaled, or normalized, to the interval −1 ≤ xᵢ ≤ 1 (Strömberg, 2010). If a design domain with variables vᵢ ∈ [vᵢᵐⁱⁿ, vᵢᵐᵃˣ], i = 1, 2, is taken into consideration, the mapping of the variables into the x-plane can be defined as

$$x_i = \frac{2 v_i - (v_i^{\max} + v_i^{\min})}{v_i^{\max} - v_i^{\min}}.$$

The mapping process is illustrated in Figure 2.8.




Figure 2.8: Illustration of the normalization and mapping of designs.

2.5.1 Sobol Design

Consider a case where a number of points x should be generated and well-distributed in a design space D = [0, 1]^d with d dimensions. How well the points are scattered over the space is measured in terms of discrepancy. A definition of the discrepancy can be formulated in the following way (Bratley & Fox, 1988):

For a set of points x¹, x², …, x^N ∈ D and a subset H ⊂ D, a counting function C(H) is defined as the number of points x^j ∈ H. For every x = (x₁, x₂, …, x_d) ∈ D, let H_x be the rectangular d-dimensional region

$$H_x = [0, x_1) \times [0, x_2) \times \dots \times [0, x_d),$$

with the volume x₁x₂⋯x_d. The discrepancy of the points x¹, x², …, x^N then becomes

$$D(x^1, \dots, x^N) = \sup_{x \in D} \left| \frac{C(H_x)}{N} - x_1 x_2 \cdots x_d \right|.$$

Sobol and other so-called quasi- or pseudo-random sequences aim to minimize the discrepancy for better spacing and are used for optimization and simulation (Bratley & Fox, 1988); see Figure 2.9 and Figure 2.10 for a comparison of the distributions of a random sequence and a Sobol sequence.


Figure 2.9: Scatter plot of the distribution of a pseudo-random sequence in modeFRONTIER™.

Figure 2.10: Scatter plot of the distribution of a quasi-random Sobol sequence in modeFRONTIER™.

Quasi-random sequences are finite or infinite sequences of points with very small discrepancies. Pseudo-random points are in fact not random at all, but are generated by simple arithmetic algorithms. For defining a random sequence of points, the following principles have been used as criteria: first, the distribution of the sequence must satisfy some distribution properties, and second, the properties should be unchanged under given selection rules for subsequences. Nevertheless, pseudo-random sequences pass as random in several statistical tests (Niederreiter, 1978). In modeFRONTIER™ a Sobol sequence is generated by simply entering the number of desired designs, see Figure 2.11.

Figure 2.11: ModeFRONTIER™ interface for Sobol.
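Outside modeFRONTIER™, a low-discrepancy Sobol sequence can be generated in the same spirit with, for example, SciPy's quasi-Monte Carlo module; a minimal sketch, assuming SciPy 1.7 or later, with illustrative dimensions and bounds:

```python
from scipy.stats import qmc

# 2-dimensional Sobol sequence; power-of-two sample sizes preserve
# the balance properties of the sequence.
sampler = qmc.Sobol(d=2, scramble=False)
points = sampler.random_base2(m=6)   # 2**6 = 64 points in [0, 1)^2

# Map the unit-cube points to the normalized design interval [-1, 1]^2.
scaled = qmc.scale(points, l_bounds=[-1, -1], u_bounds=[1, 1])
```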

2.5.2 Latin Hypercube Designs

Latin hypercube designs (LHD) are a type of space-filling designs where the factor space is normalized and every factor range is divided into m strata, resulting in a factor space divided into m^n cells, n being the number of dimensions. Every stratum is addressed exactly once by randomly selecting a suitable subset of cells, see Figure 2.12. A sampling point is commonly placed in the middle of the cell, but can also be positioned within the cell by random sampling according to a uniform distribution.


The result is an LHD where the observations are equally distributed over the range of the individual factors. The segmentation of the factor space assures that no replicates are generated. However, the space-filling with respect to the total factor space may be poor (Jurecka, 2007), see Figure 2.13.

Figure 2.12: Latin Hypercube Design with 8 strata.

Figure 2.13: Latin Hypercube Design with 8 strata with poor space-filling properties.

ModeFRONTIER™ provides two types of LHD – Uniform Latin Hypercube, Figure 2.14, and Latin Hypercube - Monte Carlo, Figure 2.15.

Figure 2.14: Available settings for Uniform Latin Hypercube Designs in modeFRONTIER™.
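As a point of reference outside modeFRONTIER™, a Latin hypercube sample with one point per stratum can be drawn with SciPy's qmc module; a minimal sketch with illustrative dimensions and sample count:

```python
from scipy.stats import qmc

# 8 points in 3 dimensions: each factor range is divided into 8 strata
# and every stratum is hit exactly once per factor.
sampler = qmc.LatinHypercube(d=3)
points = sampler.random(n=8)   # array of shape (8, 3) in [0, 1)^3
```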

LHD – Monte Carlo combines LHD and Monte Carlo sampling (MC) for design of experiments. MC methods solve an integral based on a number of sampling points at which the function is evaluated. The sampling points are chosen with respect to the probability density of the noise variables (Jurecka, 2007). The solution of the integral is formulated as

∫ f(z) p(z) dz ≈ Σ_{i=1}^{N} w_i f(z_i)

where f is a function, w_i is a weight and p(z) is the probability density of the noise variables Z.


While Plain MC sampling determines the sampling points randomly in consistency with the probability density of the noise variables, Stratified MC sampling uses a partitioning of the entire noise space. This partitioning resembles the procedure of Latin Hypercube Design and its division of the factor space into strata. For Stratified MC, in contrast to Plain MC, the sampling is carried out in a systematic way, exploring subsets of the noise space that may be of importance but otherwise might have been left out; that is, regions with high influence on the investigated statistics but with small probability (Jurecka, 2007).

Figure 2.15: Available settings for Latin Hypercube –Monte Carlo in modeFRONTIER™.
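To make the plain MC estimator concrete, the sketch below computes a Monte Carlo estimate of such an integral for an assumed standard normal noise density and the illustrative choice f(z) = z²; both choices are examples, not taken from the thesis.

```python
import numpy as np

# Plain Monte Carlo estimate of E[f(Z)] = integral of f(z) p(z) dz,
# with Z ~ N(0, 1) as an illustrative noise density.
rng = np.random.default_rng(1)
z = rng.normal(0.0, 1.0, 10_000)   # samples drawn from the density p
estimate = np.mean(z**2)           # f(z) = z**2, equal weights w_i = 1/N
print(estimate)                    # close to the exact value 1.0
```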

2.5.3 Incremental Space Filler

One of the easiest ways to create an even distribution of sampling points is by placing them in an n-dimensional grid within the factor space, see Figure 2.16. One disadvantage with this type of sampling is that the designer cannot freely decide which arbitrary points to include, and the segmentation of the factor space determines the total number of sampling points. Another weakness is that "a projection of the design onto a subspace with reduced dimensionality would yield many replicated points" (Jurecka, 2007, p. 146), which is undesirable in cases where irrelevant factors have been identified (Jurecka, 2007).

Figure 2.16: Design sequence sampled from a grid.

Figure 2.17: Latin Hypercube Design with distance criterion.


Instead of sampling points from a grid, a distance-based criterion can be added for better space-filling properties (Jurecka, 2007), see Figure 2.17. For example, a minimum criterion can be imposed on the Euclidean distance between the sampling points v_k and v_l,

d(v_k, v_l) ≥ d_min

Designs which maximize the minimum distance are so-called maximin distance designs, and modeFRONTIER™ provides the Incremental Space Filler (ISF), which is such a design, see Figure 2.18. The points are added sequentially with a specified radius from an existing design point, filling the design space in a uniform way. The ISF algorithm is, among other things, used for extending the design database with points around the Pareto front when running fast optimizers in modeFRONTIER™ (Rigoni, 2010), see the description of fast optimizers in Section 2.6: Fast optimizers in modeFRONTIER™. Two variants of the algorithm are available in modeFRONTIER™, the Genetic Algorithm Optimization (GOA) and the Voronoi-Delaunay Tessellation (VDT). The GOA implements a genetic algorithm for internal optimization of the distance criterion. The method is approximative but works in a fast and robust way, in contrast to the VDT, which is a time-consuming method that will generate an exact solution for maximization of the minimum distance (Rigoni, 2010).

Figure 2.18: Illustration of settings for Incremental Space Filler in modeFRONTIER™.
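The greedy candidate-search sketch below illustrates the maximin idea behind incremental space filling; it approximates the distance criterion by random candidate sampling and is not the GOA or VDT implementation used in modeFRONTIER™. All names and the candidate count are illustrative.

```python
import numpy as np

def incremental_space_filler(existing, n_new, lo, hi, n_candidates=2000, seed=0):
    """Greedy maximin sketch: each new point is the random candidate
    that maximizes the minimum Euclidean distance to all points so far."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    points = [np.asarray(p, float) for p in existing]
    for _ in range(n_new):
        cand = rng.uniform(lo, hi, size=(n_candidates, lo.size))
        # distance from every candidate to its nearest existing point
        d = np.linalg.norm(cand[:, None, :] - np.array(points)[None, :, :],
                           axis=2).min(axis=1)
        points.append(cand[np.argmax(d)])  # keep the most isolated candidate
    return np.array(points)

# Extend an existing 2-D design with 5 well-spread points:
filled = incremental_space_filler([[0.0, 0.0], [1.0, 1.0]], 5,
                                  lo=[0.0, 0.0], hi=[1.0, 1.0])
```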

2.5.4 Taguchi Orthogonal Array Designs

A design sequence is considered orthogonal if the scalar product of any two of the column vectors being evaluated is zero, or in other words, if X^T X is a diagonal matrix. An important characteristic of orthogonal arrays is the resolution R, which describes to what extent the main and interaction effects can be estimated independently.


The method might seem limited, but the result is less computational effort. This is due to the fact that interaction factors of interest are introduced as independent factors (Jurecka, 2007). As a result, the number of sampling points will be reduced, see Table 2.2 and Table 2.3.

In modeFRONTIER™ it is possible to design experiments by a technique called Taguchi Orthogonal Arrays (TOA). TOA are orthogonal arrays which consider both noise and design factors. Design factors, or control factors, are parameters of the design that are effortless and cheap to control. Noise factors are factors that are hard or impossible to control but that might affect the performance of the product. Examples of noise factors are the environment the product will function in, called an external factor, and material defects or variation, also termed an internal factor (Dean et al., 1999).

Two types of designs exist for TOA, mixed and product arrays. Product arrays consist of one fractional factorial experiment for the design factors and one for the noise factors. All combinations of design factors are considered in coordination with every possible combination of noise factors. Robustness is said to be achieved if the product works consistently well regardless of unrestrained variation of the noise factor levels. Mixed arrays consider treatment combinations, which are level combinations of design and noise factors. The robust settings for the design factors are obtained by investigating the interaction between the noise and design factors (Dean et al., 1999).

NO. OF SAMPLING POINTS

Full Factorial        Orthogonal Array
2^3  →  8             4
2^15 →  32 768        16
3^13 →  1 594 323     27
4^5  →  1 024         16

Table 2.2: Comparison of the number of experiments generated by Full Factorial Design and Orthogonal Array Design.

Full Factorial (2^3)      Orthogonal Array (L4)
v1  v2  v3                v1  v2  v3
-1  -1  -1                -1  -1  -1
-1  -1   1                -1   1   1
-1   1  -1                 1  -1   1
-1   1   1                 1   1  -1
 1  -1  -1
 1  -1   1
 1   1  -1
 1   1   1

Table 2.3: Design sequence of the variables v1, v2 and v3 by Full Factorial Design compared to Orthogonal Array Design.

How the different settings for TOA designs in modeFRONTIER™ can be applied is seen in Figure 2.19.


Figure 2.19: Possible settings for Taguchi Orthogonal Arrays in the modeFRONTIER™ interface.
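The orthogonality property that X^T X is diagonal is easy to verify numerically; the sketch below checks it for the canonical L4 array (the standard textbook form, assumed here, which may differ in row order from the array modeFRONTIER™ generates).

```python
import numpy as np

# The canonical L4 orthogonal array for three two-level factors.
L4 = np.array([[-1, -1, -1],
               [-1,  1,  1],
               [ 1, -1,  1],
               [ 1,  1, -1]])

# X^T X is diagonal, so the column vectors are mutually orthogonal.
print(L4.T @ L4)   # prints 4 * identity(3)
```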

2.5.5 Full factorial designs

A factorial design is a design where a number of n design variables are varied over a predetermined number of m levels. The generated number of experiments will be m^n, distributed evenly over a region of interest (Strömberg, 2010). Mathematically this is expressed as

N = m^n

As an example, a case of 2 design variables varied over 3 levels would generate 9 test points and would be called a 9-factorial experiment. The formation of a 9-factorial design is illustrated in Figure 2.20 and Table 2.4.

Figure 2.20: A 9-factorial design – 2 variables varied over 3 levels.

Point   n1   n2
1       -1   -1
2       -1    0
3       -1    1
4        0   -1
5        0    0
6        0    1
7        1   -1
8        1    0
9        1    1

Table 2.4: Set up of a 9-factorial design with the two variables n1 and n2.


Linear effects are estimated by 2-level factorials, while quadratic behavior is estimated by 3-level factorials. As a rule of thumb, m = d + 1 levels have to be assessed to fit a polynomial of degree d. The drawback with full factorial experiments is the fact that the experimental effort increases rapidly when the number of levels or variables is increased (Jurecka, 2007). A 3-level factorial with 9 variables would result in 19 683 experiments; running a sequence with 3^9 experiments would quickly amount to a great deal of computational time. A way to decrease the number of experiments is to apply the method Fractional Factorial instead. In modeFRONTIER™ it is possible for the user to set the level of each variable. As a brief summary, the full factorial is suitable for problems with less than 8 variables and less than 4 levels, see Figure 2.21.

Figure 2.21: In modeFRONTIER™ the user decides the level of every single variable. It is possible to select higher levels for some variables and a lower for another variable.
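Generating a full factorial sequence is simply a Cartesian product of the level sets, as the following sketch shows for the 9-factorial example above; the level values are the normalized ones used throughout this section.

```python
from itertools import product

levels = (-1, 0, 1)   # 3 levels per variable
n = 2                 # 2 design variables
designs = list(product(levels, repeat=n))
print(len(designs))   # 3**2 = 9 points, as in Table 2.4
```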

2.5.6 Fractional Factorial Designs

While a full factorial generates m^n experiments, a reduced factorial produces a subset of the full factorial sequence. A fractional factorial sequence contains m^(n−p) experiments, where p > 0 is an integer defining the reduction of the sequence compared to the full factorial. The expression 1/m^p gives the fraction of the full design (Jurecka, 2007). Due to the fact that only a fraction of the behavior is observed, each main-effect and interaction contrast cannot be estimated independently. On the other hand, the computational effort for the experiment will be reduced (Dean et al., 1999).

Fractional Factorial Designs are motivated by a few different arguments. One is the fact that systems which depend on a large number of input variables are expected to be governed mainly by only a few of them and by low-order interactions. Another argument is the projection of the design onto other levels; if a number of p factors are excluded from the design sequence due to irrelevance, the corresponding dimensions of the design space are removed (Jurecka, 2007). The original fractional factorial design then becomes a full factorial design, see Figure 2.22.


Figure 2.22: Illustration of the projection of a 2^(3−1) fractional factorial design into three 2^2 full factorial designs.

Fractional factorial is in modeFRONTIER™ referred to as "reduced factorial" and is based on a 2-level factorial sequence, see Figure 2.23. Selecting the largest number in the "Number of Designs" window results in a full factorial. For a problem with 8 variables the largest number available would be 256.

Figure 2.23: Available settings for a Reduced Factorial Design in modeFRONTIER™

2.5.7 Central Composite Designs

For fitting second-order polynomials, Central Composite Design (CCD) is one of the most popular techniques. It combines points from a two-level full factorial (or fractional factorial) sequence with a center point and so-called star points. Star points are located on all coordinate axes at a distance α from the origin; on each axis two points will be positioned, one at +α and another at −α, see Figure 2.24. The number of star points will be 2p for a number of p factors (Dean et al., 1999).

Figure 2.24: Illustration of the distribution of the full factorial points, the center point and the star points at distance α.

The value of α is chosen depending on the properties required of the design. Often the expression α = n_F^(1/4) is used for establishing a suitable value for α, where n_F is the number of orthogonal factorial points (Dean et al., 1999). With this expression the resulting CCD will be rotatable, which is an important condition for the prediction discrepancy: the prediction discrepancy will be equal on a sphere around the center point if the design is rotatable (Jurecka, 2007).

The three types of points play different roles in providing information. The factorial points contribute largely to the estimation of the linear terms, and they are the only points which contribute to the computation of the interaction terms. The center point provides information about the curvature of the system and the estimation of the quadratic terms; it also provides the "pure error" of the design. If curvature is found in the system, the star points will estimate the quadratic terms in an efficient way (Myers & Montgomery, 2002).

Two types of regions can be taken into consideration for CCD, spherical and cuboidal regions. If the region of interest is spherical, a suitable value for α is given by

α = √n

where n is the number of factors. This way of choosing α will not result in an exactly rotatable design; instead the value will create a better solution from a prediction point of view. A spherical CCD will put all the star points and factorial points on a sphere with radius √n (Montgomery, 2005). For cuboidal regions of interest it is suitable to use a face-centered CCD, where α = 1. In this case the star points will be located at the centers of the faces of the factorial cube, see Figure 2.25. This type of design is not rotatable, but rotatability is of no greater importance when the region of interest is obviously cuboidal (Myers & Montgomery, 2002).

For sequential experimentation CCD is an efficient design which does not involve too large a number of design points while still providing a reasonable amount of data for testing lack of fit. It is also a flexible design in its use of star points: it can accommodate a spherical region with α = √n and five levels for each factor, and a cuboidal region with α = 1 and a three-level design. The CCD can also provide for situations where √n or 1 cannot be used; the value of α can then be adapted to the circumstances (Myers & Montgomery, 2002). For a central composite design for three variables, see Table 2.5.

Figure 2.25: Illustration of a face-centered CCD sequence for three factors with α = 1; the 8 corner points are full factorial points, the 6 points on the axes are star points and the middle point is the center point.

Point No.   v1   v2   v3
1           -1   -1   -1
2           -1   -1    1
3           -1    1   -1
4           -1    1    1
5            1   -1   -1
6            1   -1    1
7            1    1   -1
8            1    1    1
9           -α    0    0
10           α    0    0
11           0   -α    0
12           0    α    0
13           0    0   -α
14           0    0    α
15           0    0    0

Table 2.5: Central Composite Design sequence for three factors.
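A CCD sequence like Table 2.5 can be constructed directly from its three building blocks; the sketch below (plain Python/NumPy, not a modeFRONTIER™ routine) stacks the factorial corners, the 2n star points and the center point.

```python
import numpy as np
from itertools import product

def central_composite(n, alpha):
    """Factorial corners, 2n star points at +/-alpha, and one center point."""
    factorial = np.array(list(product((-1.0, 1.0), repeat=n)))
    star = np.vstack([s * alpha * np.eye(n)[i]
                      for i in range(n) for s in (-1.0, 1.0)])
    center = np.zeros((1, n))
    return np.vstack([factorial, star, center])

ccd = central_composite(3, alpha=1.0)   # face-centered: 8 + 6 + 1 = 15 points
```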

In modeFRONTIER™ it is possible to choose between two types of CCD, see Figure 2.26. The first type, Cubic Face Centered, places points at all vertices of the cube, as in Figure 2.25; the second type, Inscribed Composite Design, places points at the vertices of a scaled hypercube within the cube.


Figure 2.26: Parameter settings for Central Composite Designs in modeFRONTIER™.

2.5.8 Box-Behnken Designs

Box-Behnken Designs (BBD) are constructed by a special methodology which creates balanced incomplete block designs, see Table 2.6. The variables are treated in pairs; as an example with three variables: in the first block, variables 1 and 2 are paired together in a 2^2 factorial while variable 3 is fixed at the center. In the second block, variable 2 is fixed at the center and variables 1 and 3 are instead paired together. This procedure holds for a number of variables 2 < n < 6 (Myers & Montgomery, 2002).

            Treatment
Block       1    2    3
1           X    X
2           X         X
3                X    X

Table 2.6: Demonstration of the order in which the variables are paired together.

For problems with 6 or more variables the procedure is slightly different; for a closer description, see Myers & Montgomery (2002). The resulting design sequence of Box-Behnken with three variables is shown in Table 2.7 and Figure 2.27.


BBD is closely related to Central Composite Designs but has the advantage of avoiding extreme values of the factors by not taking the corners of the factor space into consideration. On the other hand, the same characteristic of avoiding extreme factor settings results in BBD not being suitable for prediction of the behavior in the corners of the factor space. In conclusion, this type of design should be used for cases where there is no interest in predictions at the corner points. Since BBD is a spherical design it is rotatable, or nearly rotatable, and all sampling points will have the same distance to the center point (Jurecka, 2007).

Figure 2.27: Illustration of Box-Behnken Design for three variables.

Point No.   v1   v2   v3
1           -1   -1    0
2           -1    1    0
3            1   -1    0
4            1    1    0
5           -1    0   -1
6           -1    0    1
7            1    0   -1
8            1    0    1
9            0   -1   -1
10           0   -1    1
11           0    1   -1
12           0    1    1
13           0    0    0

Table 2.7: Design sequence for three variables generated by Box-Behnken. The last row is the center point.

Box-Behnken Design is available in modeFRONTIER™ without any settings for the user to configure, see Figure 2.28.

Figure 2.28: Box-Behnken Designs in modeFRONTIER™.
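The pairing construction for 3 to 5 variables can be written down compactly; the sketch below (plain Python, illustrative only) reproduces the 13-point sequence of Table 2.7.

```python
import numpy as np
from itertools import combinations, product

def box_behnken(n):
    """Each pair of variables at the 2^2 corners, remaining variables
    fixed at the center, plus one center point (valid for 3 <= n <= 5)."""
    rows = []
    for i, j in combinations(range(n), 2):
        for a, b in product((-1.0, 1.0), repeat=2):
            row = np.zeros(n)
            row[i], row[j] = a, b
            rows.append(row)
    rows.append(np.zeros(n))   # center point
    return np.array(rows)

bbd = box_behnken(3)   # 3 blocks * 4 points + 1 center = 13 points
```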

2.5.9 Latin Square Design

Latin Square Designs (LSD) are n x n arrays with Latin letters arranged in such a way that each letter occurs only once in each column and once in each row, see Figure 2.29. If the letters in the first row and the first column of the Latin square are in alphabetical order, the square is known as a standard Latin square (Dean et al., 1999). LSD is used to eliminate two nuisance sources of variability; Latin squares make it possible to block in two directions in a systematic way. Small Latin squares have the disadvantage of a fairly small number of error degrees of freedom; the error degrees of freedom for a 3x3 Latin square equal 2. This disadvantage is compensated for by replication of the squares, thus increasing the error degrees of freedom. A common procedure for selecting a Latin square is to take a Latin square from a table providing standard Latin squares and randomly rearrange the order of the rows, columns and letters of the square (Montgomery, 2005).

A B C D        A B C D E
B D A C        B D A E C
C A D B        C E D B A
D C B A        D A E C B
               E C B A D

Figure 2.29: Examples of a 4x4 and a 5x5 standard Latin square.

In modeFRONTIER™ it is possible to create a 20x20 Latin Square, generating 400 design experiments, see Figure 2.29.

Figure 2.29: Latin Square Designs in modeFRONTIER™.

2.5.10 Plackett-Burman Designs

Plackett-Burman Designs (PBD) are fractional designs for a number of variables in N runs, where N is a multiple of four. When N is a power of 2, the designs are the same as the corresponding fractional factorial designs; it is for some special cases (N = 12, 20, 24, 28, 36), called nongeometric PBD, that this type of Design of Experiments can be of interest (Myers & Montgomery, 2002). In modeFRONTIER™ it is possible to determine the order of the variables, see Figure 2.30. The order of the variables is of significance due to the messy alias structure of PBD. Because of the limited area of interest, and the fact that Myers & Montgomery (2002) recommend that the method be used carefully, no closer explanation of this method is provided.

Figure 2.30: Settings for the Plackett-Burman Designs in modeFRONTIER™.

2.6 Fast optimizers in modeFRONTIER™

The fast optimizers in modeFRONTIER™ use metamodeling to speed up the optimization process; a detailed description of metamodeling is found in Section 2.4: Metamodeling. In the following sections the working process of the fast optimizers is described in detail, as well as a closer explanation of the algorithms implemented in modeFRONTIER™: SIMPLEX and MOGA-II.

2.6.1 Working process

The working process of the fast optimizers is an iterative procedure where metamodels are trained and evaluated. Figure 2.31 illustrates the general steps of the work process.


Figure 2.31: The iterative process of fast optimizers: metamodel training, virtual exploration, virtual optimization, validation and metamodel evaluation.

Initially the algorithm takes into consideration a number of design experiments: FSIMPLEX will use N+1 points (for a number of N variables) from the DoE as an initial population, while FMOGA-II will use the entire DoE table. The initial population m will be evaluated by means of the real solver and used for RSM training. If a set of previously tested points exists, the RSMs will be trained using them as a validation set instead of the initial population (Rigoni, 2010).

In the first iteration a default selection of RSMs, depending on the scale of the problem, is engaged in the virtual exploration and optimization. For the subsequent iterations the RSM is selected depending on the performance throughout the previous iteration. The training in this step only focuses on training the RSM for the exploration and optimization. The training set can be limited by the user in modeFRONTIER™ by specifying its maximum size, which reduces the computational time (Rigoni, 2010).

In the virtual exploration step the database is improved by using the Incremental Space Filler (ISF), see the description of ISF in section 2.5.3 Incremental Space Filler. ISF is used for sampling new points around the current Pareto front and to increase the robustness of the optimizer (Rigoni, 2010).

During the virtual optimization process, FMOGA-II or FSIMPLEX is run over the best available metamodels. As a population for the next iteration, the database is built up of 50 % random points and 50 % points from the current Pareto front. The Pareto front points contribute to faster convergence and the random points increase the robustness of the optimizer. A number of 2m designs are extracted from the virtual design database for the validation step. Any duplicates of previously evaluated points are removed from the database; when m designs remain, convergence is reached (Rigoni, 2010).

In the validation process, m designs are randomly extracted from the 2m designs from the virtual optimization step. The designs are then validated in terms of the real solver and constitute the validation set for evaluation of the different metamodels applied (Rigoni, 2010).


The performance of the metamodels is evaluated and compared to the predicted result from the training step. The divergence between the predicted value and the observed value becomes a measurement of the metamodel's performance. The best performing metamodel is used as the primary model for the next iteration and no other metamodels are trained. If the performance deteriorates, all metamodels will be available for training and evaluation once again (Rigoni, 2010).

Fast optimizers are slow in terms of net computational time since they train metamodels and run the ISF algorithm. Fast optimizers are primarily useful for optimizations involving an expensive CAE solver, where the time for training and ISF is negligible compared to the computational time of the solver (Rigoni, 2010).

2.6.2 The SIMPLEX algorithm

The original SIMPLEX is a robust algorithm for non-linear optimization problems. Because of the name it easily gets mixed up with the simplex method used for linear programming, but the two should not be associated with each other. For a number of N dimensions the simplex will be a polyhedron with N+1 corner points, see Figure 2.32.

Figure 2.32: A simplex, ABC, in two dimensions (Poles, 2003 A).

The simplex will expand and contract, searching for the minimum of the objective function. Through the iterative process the values at the corner points of the simplex are evaluated and replaced by lower function values. The operation procedure is described by the three steps reflection, expansion and contraction (Poles, 2003 A). In reflection, the worst vertex is projected, or reflected, through the opposite surface to obtain a new value. As a consequence the movement will always be in a favorable direction (Poles, 2003 A). Figure 2.33 illustrates the reflection movement.


Figure 2.33: Reflection movement of the simplex ABC (Poles, 2003 A).

Formulated mathematically the reflection becomes

D = H + α(H − A)

where A is the worst vertex, D is the new point, H is the centroid of all points except the reflected one and α > 0 is a reflection coefficient. When reflection produces a new point with a better value than the previous one, the simplex expands. In the expansion movement the simplex expands in the same direction as the previous movement. Point D is then rejected and the new point E is included instead, since it is expected that expansion in the same direction will produce an even better value (Poles, 2003 A), see Figure 2.34.

Figure 2.34: Expansion of the simplex ABC (Poles, 2003 A).

Expansion is mathematically expressed as

E = H + β(D − H)

where H is the centroid of all points (except point D), D is the reflected point from the previous step and β > 1 is the expansion coefficient. If the reflection movement was unable to produce a better point than all the other points of the simplex, except for the worst vertex, the simplex contracts. In the contraction movement the reflected point D will be rejected and a new point E will be included instead (Poles, 2003 A). How the contraction movement is carried out can be seen in Figure 2.35.


Figure 2.35: Contraction of the simplex ABC (Poles, 2003 A).

The contraction is mathematically expressed as

E = H + β(D − H)

where H is the centroid of all points (except point D), D is the reflected point from the previous step and β ∈ [0,1] is the contraction coefficient. The SIMPLEX method is based on the Nelder-Mead simplex algorithm but has been altered to obey possible constraints of the problem. This has been done by adding a penalty function to the objective function. The penalty function depends on the sum of all deviations of the vertices and penalizes any violation of the constraints. In order to avoid premature convergence of the algorithm, a penalty parameter adapting to the present extreme values of the function has been introduced. The SIMPLEX terminates when no improvement of the values is found, given a certain tolerance (Poles, 2003 A).
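For reference, the underlying unconstrained Nelder-Mead scheme is available in SciPy; the sketch below minimizes the Rosenbrock test function as an illustration, whereas the modeFRONTIER™ SIMPLEX additionally handles constraints through the penalty formulation described above.

```python
from scipy.optimize import minimize

# Rosenbrock test function: a standard non-linear benchmark with
# its minimum at (1, 1).
def f(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

res = minimize(f, x0=[-1.0, 2.0], method='Nelder-Mead')
print(res.x)   # approaches (1, 1) through reflection/expansion/contraction
```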

2.6.3 The MOGA-II algorithm

MOGA-II is a multi-objective optimization algorithm which uses multi-search elitism and directional crossover. Multi-search elitism contributes to the robustness of the algorithm. For single-objective problems elitism is easily defined and the elite is simply copied to the next generation; for multi-objective problems the situation is quite different. Since the problem includes more than one objective, there will also exist more than one elite solution. By introducing Pareto optimality, acceptable solutions can more easily be distinguished (Poles, 2003 B). Pareto optimality is explained more closely in section 2.3.2 Multi-objective optimization and Pareto optimality.

In combination with the Pareto optimal solutions, the algorithm uses dominance for producing a new generation. How is explained in the following example: Figure 2.36 illustrates a point A of a multi-objective problem with the two objectives f(x) and g(x). Point A defines two zones, where the green area represents a set of dominated points and the remaining area represents a set of non-dominated points.


If point A was previously generated and point B is a part of the current generation, a movement in the direction of B would be favorable since B, in the non-dominated set, dominates point A. Transitions like that are preserved since this kind of evolution is always desired. If evolution instead were to bring point A to point C or C', which are non-dominated over A, the transition would nonetheless be preserved, to favor scattering along the Pareto front. A transition of point A in the direction of point D, on the other hand, would be rejected since A dominates D (Poles, 2003 B). Elitism helps preserve the individuals that have the best dispersion and the ones that are closest to the Pareto front.

For reproduction the algorithm uses four different operators: two-point crossover, directional crossover, mutation and selection. Throughout the reproduction process the algorithm uses the different operators and applies them to the current individual (Rigoni, 2010). Directional crossover is the main reason for the algorithm's efficiency; by comparing the fitness values of individuals, a direction of improvement can be detected. The evaluation takes place by comparing the fitness of an individual with the fitness of two other individuals. The two individuals, belonging to the same generation, are selected by two different random walks. The new individual is then created from a transition in a randomly weighted direction (Poles, 2003 B).

C’ B A

D C

Figure 2.36: Point A is dominated by point B, non-dominated by point C and C’, while dominating point D. Transition in the direction of point B is preferable, while transition in direction of point C and C’ is acceptable.
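The dominance relation used above is simple to state in code; a minimal sketch for minimization of all objectives, with illustrative objective values:

```python
def dominates(a, b):
    """True if design a dominates design b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

# With objective vectors (f, g) as in Figure 2.36:
print(dominates((1.0, 2.0), (2.0, 3.0)))   # True: the first point dominates
print(dominates((1.0, 3.0), (2.0, 2.0)))   # False: mutually non-dominated
```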


3 Method and implementation

This section describes the coverage area as well as the current and the new FEA-model. The mathematical formulation of the optimization problem will be presented, together with how the optimization problems are set up in the optimization software modeFRONTIER™.

3.1 Coverage area

The coverage area is the two-dimensional surface, facing the boomer, which is enclosed by the maximum range of the boom, see Figure 3.1. It is the coverage area that determines the size of the drift and consequently the size of the vehicles able to work and travel in it. It is therefore of great importance for Atlas Copco to offer their clients a boomer with a large coverage area. CONFIDENTIAL By optimizing the coverage area, a solution can be found that provides the largest possible area while the strength of the structure is preserved.

Figure 3.1 Coverage area of a two boom hydraulic tunneling rig (Technical specification, Boomer E3 C)

To use the coverage area in the optimization, a MATLAB™ program was created that calculates both the total coverage area and the area above ground level with forward kinematics, see section 2.1. The program is based on an existing calculation procedure currently used by the company. That procedure was designed for the existing boom and does not behave correctly when the dimensions are changed. Three main changes have been made in the procedure, with a more realistic outcome as a result:

CONFIDENTIAL


As the objective function in the optimization, the coverage area above ground level is to be maximized, since a hole seldom is drilled below ground level. The thorough calculation procedure is explained in Appendix D: Calculation of coverage area, and a reference example of a double pendulum can be found in Appendix E: Setting up a Denavit-Hartenberg for a two-link kinematic chain.
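The MATLAB™ program itself is not reproduced here, but the Denavit-Hartenberg building block that forward kinematics relies on can be sketched as follows, a minimal sketch in Python/NumPy with illustrative link lengths rather than the confidential boom dimensions:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive links for standard
    Denavit-Hartenberg parameters (theta, d, a, alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# Two-link planar chain, as in the double-pendulum reference example:
T = dh_transform(np.pi / 4, 0.0, 1.0, 0.0) @ dh_transform(np.pi / 6, 0.0, 0.8, 0.0)
tip = T @ np.array([0.0, 0.0, 0.0, 1.0])   # end-point position in the base frame
```

Sweeping the joint angles over their allowed ranges and collecting the resulting tip positions traces out the reachable boundary from which a coverage area can be computed.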

3.2 The existing FEA-model

Atlas Copco is currently using a simplified model of the boom where all parts are replaced by beam and truss elements, see section 2.2. The masses of the parts are applied as concentrated mass forces on specified nodes, see Figure 3.2.

Figure 3.2 The existing beam model

The model is used to calculate the average forces in different sections and to compare the results if dimensional changes are made. CONFIDENTIAL The program is composed as follows:

- The dimensions and angles for fourteen load cases are added as parameters.
- Element types and real constants such as masses are defined. The material properties Young's modulus and Poisson's ratio are added, as well as densities.
- The nodes are generated at chosen dimensions. Since the cylinder links' position varies with the boom lift, the boom swing and the dimension as, a local coordinate system is created.
- All the elements are created by selecting a start node and an end node for each beam, an element type and a material.
- The mass elements are placed on nodes.
- Boundary conditions and loads are given and an analysis is performed for three load cases: self-weight, self-weight plus drilling force and merely the drilling force.
- The average forces and moments are saved, as well as jpeg pictures.

3.3 The new FEA-model

As a result of the insufficient FEA-model presently used, a new model was created in the commercial software Abaqus CAE. Solid parts were mixed with beams and trusses to shorten the computation time, all in all 19 different parts, see Figure 3.3.

Figure 3.3 The new FEA-model

A macro was recorded for each part while it was constructed, which resulted in Python code segments describing every action in the creation process. These scripts were compiled into one document and new macros were recorded and added. Individual materials and sections were created, and the materials were given material properties such as Young's modulus, Poisson's ratio and density. Since every part was given a density, the mass and center of gravity change automatically if the dimensions are modified. The same materials as in the current boom were used, but individual densities simplify a change of material for a specific part in the future. A section was assigned to each part and a suitable mesh was chosen individually.

The parts from the boom support to the first flange were assembled. Created planes, axes and points were used as references when the parts were put together. Since the tripod cylinders' direction and length depend on the current boom lift and boom swing, they were created after the cylinder links and cylinder ends had been aligned. After that, the rest of the boom was assembled and the whole model was constrained with allowed degrees of freedom. The boom support was fixed with a boundary condition, and loads such as gravity, additional masses and a concentrated force on the drill end were added.


A simulation of the model was made for ten different load cases, see Appendix F: Load Cases, and the obtained results were written to a report file. The load cases were selected together with the engineers at Atlas Copco and are positions where the largest stresses occur in the structure. The Python script was worked through in detail and the ten changeable dimensions were replaced by parameters. Each part was investigated thoroughly to make sure that relations between the parametric dimensions and affected dimensions were added, to enable all possible configurations. A more detailed description of the model and the relations of each part can be found in Appendix G: New FEA-model.
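A minimal sketch of the parameterization idea: a recorded Abaqus/CAE macro is a Python script, so hard-coded dimensions from the recording can be replaced by named parameters and the model regenerated for any candidate design. The part name, parameter names and values below are illustrative, not the thesis' actual script, and the code runs only inside Abaqus/CAE.

```python
# Runs only inside Abaqus/CAE; the abaqus modules are not standalone.
from abaqus import *
from abaqusConstants import *

# Illustrative parameters; the ten real design parameters of the boom
# are confidential and differ from these names and values.
params = {'beam_width': 300.0, 'beam_height': 360.0, 'beam_length': 3200.0}

model = mdb.models['Model-1']
profile = model.ConstrainedSketch(name='beam_profile', sheetSize=2000.0)
profile.rectangle(point1=(0.0, 0.0),
                  point2=(params['beam_width'], params['beam_height']))
part = model.Part(name='BoomBeam1', dimensionality=THREE_D,
                  type=DEFORMABLE_BODY)
part.BaseSolidExtrude(sketch=profile, depth=params['beam_length'])
```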

3.3.1 Boom support

The boom support is mounted on the boomer and contains three hinges; one where the boom link is fastened and two for the cylinder links, see Figure 3.4.

Figure 3.4. Left: beam model. Middle: The new model. Right: The actual boom support.

In the current beam model four nodes are created: one on the back of the plate, one in the boom swing hinge and one in each of the two cylinder link hinges. Three beams are used to connect them, and the total mass is divided into four and placed on the nodes. In the new model the boom support is replaced by a solid part with a shape that resembles the authentic boom support. Two parameters have been added and several relations have been created to ensure that the surrounding dimensions will work together.

3.3.2 Boom link

The boom link connects the boom support with the fork and contains both the boom swing hinge and boom lift hinge, see Figure 3.5.

Figure 3.5. Left: beam model. Middle: The new model. Right: The actual boom link.

Method and implementation

The current boom link is substituted by a beam between the boom swing hinge and the boom lift hinge. The mass is divided equally between the two nodes. The new boom link is a solid part with one parameter that controls the lateral distance between the two hinges.

3.3.3 Cylinder link

There are two cylinder links in the model, connecting each one of the cylinders to the boom support, see Figure 3.6.

Figure 3.6. Left: beam model. Middle: The new model. Right: The actual cylinder link.

As with the boom link, the cylinder link is replaced by a beam between two nodes in the current model. The nodes are placed in the middle of the hinges and the total mass is divided equally between them. In the new model a solid part is created, which enables the center of gravity to move and the total mass to change if the dimensions are modified.

3.3.4 Fork

The rear end of the boom beam is called the fork and has a boom lift hinge in the end that is not attached to the boom beam, see Figure 3.7.

Figure 3.7. Left: The new model. Right: The actual fork.

In the current beam model the fork is not a separate part but is included in boom beam 1.

3.3.5 Boom beam 1

Boom beam 1, see Figure 3.8, is connected to the fork in one end; the boom ears are attached at the distance a2, and a flange is attached in the other end.


Figure 3.8. Left: beam model. Middle: The new model. Right: The actual boom beam 1.

In the current model the boom beam is a combination of the fork, boom beam 1, the boom ears and the flange. Five mass nodes share the total mass equally. If the length, the cross section or the dimension a2 were changed in this model, the total mass would still be the same and the center of gravity would not change realistically. In the new model the boom beam is separated from the other parts and a beam element is used to save computation time. In that way, the masses of the fork, boom ears and flange act at the right distances.

3.3.6 Boom ears

The boom ears are attached on the boom beam and contain two brackets that hold the boom cylinders, see Figure 3.9.

Figure 3.9. Left: The new model. Right: The actual boom ears.

In the current model the boom ears are represented by one point in each of the two brackets, one point in the center of boom beam 1 at the distance a2 from the boom link and one point in the rear bearing position. In the new model, a solid part has been created that resembles the real boom ears.

3.3.7 Tripod cylinders

The tripod cylinders, or boom cylinders, are two hydraulic cylinders that control the boom lift and boom swing, see Figure 3.10. They are attached to the cylinder links in one end and to the boom ear brackets in the other.

Figure 3.10. Left: beam model. Middle: The new model. Right: The actual cylinders.


In the current model, truss elements are created and the mass is divided equally between the start node and the end node. The length of the cylinders depends on the boom lift and boom swing in the current position, and they are therefore created after the boom has been assembled up to the first flange. Truss elements are chosen since no bending forces can be carried by the cylinders.

3.3.8 Cylinder ends

To connect the cylinders and the boom ears, cylinder ends are added to the model. They are positioned in the boom ear brackets and connected normal to the cylinder trusses, see Figure 3.11.

Figure 3.11. The new model

3.3.9 Flange

At the end of boom beam 1 a flange is mounted to strengthen the beam. The flange is currently included in the total weight of boom beam 1, see Figure 3.12.

Figure 3.12. Left: The new model. Right: The actual flange.

In the new model, the flange is generated as a separate part, which ensures that the center of gravity acts in the right place.

3.3.10 Boom beam 2

Boom beam 2 is the telescopic beam that runs in boom beam 1 within the interval given by the boom extension, see Figure 3.13.

Method and implementation

Figure 3.13. Left: beam model. Middle: The new model. Right: The actual boom beam 2.

The current boom beam 2 is made of four beams between five points. The part gear flange is included, and to compensate, one point is placed on each side of the gear flange.

3.3.11 Gear flange

Gear flange is attached to the end of boom beam 2 and is the part that connects it to the first gear case, see Figure 3.14.

Figure 3.14. Left: The new model. Right: The actual gear flange.

The gear flange is currently a part of boom beam 2 but is represented by a separate part in the new model.

3.3.12 Gear case

There are two gear cases in the model, one that regulates the feed rotation and one that controls the feed swing, see Figure 3.15. The first is connected to the gear flange in one end and the knee in the other. The second gear case is placed between the knee and bracket.

Figure 3.15. Left: beam model. Middle: The new model. Right: The actual gear case.

In the current model, the gear case is a beam between two nodes, on which the mass is divided equally. The gear case is replaced by a solid part in the new model.

3.3.13 Knee

The knee is an angled part that is connected with one gear case on each side, see Figure 3.16.

Figure 3.16. Left: beam model. Middle: The new model. Right: The actual knee.

In the current model, two beams are connected through three nodes; in the new model the knee is a solid part.

3.3.14 Bracket

The bracket is attached to the second gear case and has a banana-shaped part, located slightly to the side of the middle, that holds the tilt cylinder. In the beam model the bracket consists of four nodes, as can be seen in Figure 3.17. The mass is applied in the center of the lower hinge, where the feeder fork is attached.

Figure 3.17. Left: beam model. Middle: The new model. Right: The actual bracket.

In the new model, the bracket is replaced by a solid part, which guarantees that the center of gravity is placed in the right location.

3.3.15 Feeder fork

The feeder fork is connected to the bracket in the feed tilt hinge and is the part upon which the feeder lies, see Figure 3.18.


Figure 3.18. Left: The new model. Right: The actual feeder fork.

In the current model, the feeder fork is a part of the feeder console structure, which in the new model is divided into the feeder fork and the feeder.

3.3.16 Feeder

The feeder is placed on the feeder fork and holds the feeder beam where the drill is mounted, see Figure 3.19.

Figure 3.19. Left: beam model. Middle: The new model. Right: The actual feeder.

The current feeder is a beam structure that runs between a number of nodes. The mass of the feeder and the feeder beam is divided in two and placed at the rear and front of the feeder. The new feeder is a solid part that bears the total mass of the feeder and the magazine.

3.3.17 Feeder beam, Steel holder and Feeder steel

The feeder beam lies on the feeder and contains two steel holders which hold the feeder steel, see Figure 3.20.

Figure 3.20. Left: beam model. Right: The new model.

The feeder beam, the steel holders and the feeder steel form a structure that does not belong to the optimization problem described by Atlas Copco. In this model, a simplification of the parts is included anyway, to obtain the correct masses and to enable the working force to act at the right point.


3.4 Mathematical formulation of the optimization

In accordance with the previously presented theories, the design objective, constraints and variables for the BUT 45 were formulated in a mathematical manner. Three different optimization problems have been formulated, two of them as single-objective problems and the third as a bi-objective problem. In this section the optimization problems are briefly described; for a full version of the formulation, see Appendix H: Mathematical formulation of the optimization problems.

3.4.1 Formulation of single-objective optimization problem 1

The first optimization problem is a single-objective maximization problem. The aim is to maximize the objective function, dependent on 9 variables, while fulfilling the constraints. The problem has mathematically been formulated as

min −f(v)
s.t. g(v) ≤ 0
     v_i^lo ≤ v_i ≤ v_i^up, i = 1, 2, …, 9

where v are the design variables of the problem and v_i^lo and v_i^up are the lower and upper limits of the interval v_i is allowed to vary in. The problem above is formulated as a minimization problem since that is the conventional manner within optimization methodology. In the first optimization workflow, the optimization problem is treated as the single-objective problem above.

3.4.2 Formulation of single-objective optimization problem 2

The second optimization problem is a single-objective minimization problem. The aim is to minimize the objective function, dependent on 2 variables, while fulfilling the 170 constraints of the problem. The problem has mathematically been formulated as

min f(v)
s.t. ξ_j(v) ≤ 0, j = 1, 2, …, 170
     v_i^lo ≤ v_i ≤ v_i^up, i = 1, 2

where ξ are state variables, v are the design variables of the problem and v_i^lo and v_i^up are the lower and upper limits of the interval v_i is allowed to vary in. In the second optimization workflow, the optimization problem is treated as the single-objective problem above.

3.4.3 Formulation of multi-objective optimization problem

The third optimization problem is a bi-objective problem which can be treated in a few different ways. The two objectives depend on 9 variables, with 170 constraints which must be fulfilled. One way is to re-formulate the problem with a weighted objective function and solve the single-objective problem

min α f1(v) + (1 − α) f2(v)
s.t. ξ_j(v) ≤ 0, j = 1, 2, …, 170
     v_i^lo ≤ v_i ≤ v_i^up, i = 1, 2, …, 9

where 0 ≤ α ≤ 1, ξ are state variables, v are the design variables of the problem and v_i^lo and v_i^up are the lower and upper limits of the interval v_i is allowed to vary in. The value of α is a subjective number depending on the "weight" of the objective; the engineer chooses a value of α in such a way that it represents the importance of the respective objective.

Another way is to convert one of the objective functions into a constraint and solve the problem as a single-objective one,

min f1(v)
s.t. f2(v) − c ≤ 0
     ξ_j(v) ≤ 0, j = 1, 2, …, 170
     v_i^lo ≤ v_i ≤ v_i^up, i = 1, 2, …, 9

where c is the chosen limit for the converted objective, ξ are state variables, v are the design variables of the problem and v_i^lo and v_i^up are the lower and upper limits of the interval v_i is allowed to vary in.

In the third formulation the objectives are retained and the problem is instead treated as a bi-objective problem, mathematically formulated as

min { f1(v), f2(v) }
s.t. ξ_j(v) ≤ 0, j = 1, 2, …, 170
     v_i^lo ≤ v_i ≤ v_i^up, i = 1, 2, …, 9

where ξ are state variables, v are the design variables of the problem and v_i^lo and v_i^up are the lower and upper limits of the interval v_i is allowed to vary in. The third optimization workflow in modeFRONTIER™ treats the optimization problem as the bi-objective problem above.

3.5 Optimization workflows in modeFRONTIER™

In this thesis the commercial software modeFRONTIER™ has been used for creating the optimization workflows; a detailed description of how to build a workflow which solves a simple multi-objective optimization problem is included in Appendix I: Solving a simple multi-objective optimization problem in modeFRONTIER™. In this section, the three different workflows built for the optimization of the BUT 45 L are described; see a more thorough description of the problems in Section 3.4: Mathematical formulation of the optimization and Appendix H: Mathematical formulation of the optimization problems.

3.5.1 Single-objective optimization workflow 1 in modeFRONTIER™

The first optimization workflow, see Figure 3.21, is built to solve a single-objective maximization problem. Workflows in modeFRONTIER™ can be approached in two ways: from top to bottom, the data flow, and from left to right, the process flow.


Figure 3.21: Optimization workflow of the first single-objective problem in modeFRONTIER™.

Starting from the top and moving down: 9 design variables (8 variables and 1 constant), ruled by 1 constraint, are sent to MATLAB™, which processes the data and passes it on as an output variable to the Objective-node. Approaching the workflow from left to right, see Figure 3.22, the Scheduler-nodes, that is the DOE-node and the chosen optimization algorithm, are the starting point of the optimization. Sobol Design is used for setting up the DoE and SIMPLEX is the algorithm used; see Section 2.5.1: Sobol Design for details on planning design experiments and Section 2.6.2: The SIMPLEX algorithm for the algorithm description.

Figure 3.22: Illustration of the process sequence in the workflow.

In the next process, MATLAB™ runs the necessary calculations for the output variable. The Logic End-node terminates the process flow. A detailed description of this workflow and the configurations made in the different nodes is found in Appendix J: Workflow; the "single-objective optimization problem 1".

3.5.2 Single-objective optimization workflow 2 in modeFRONTIER™

The second optimization workflow, see Figure 3.23, is built to solve a single-objective minimization problem. As constant input variables the optimal design of the "Single-objective optimization workflow 1" is used, and 2 variables are allowed to vary.


Figure 3.23: Optimization workflow of the second single-objective problem in modeFRONTIER™.

Starting from the top and moving down: 11 design variables (2 variables and 9 constants) are sent to an Input file-node. The Input file-node sends the data on to a DOS Batch-node and on to ABAQUS. ABAQUS processes the data into 172 output variables, where 2 output variables are passed on to an objective function and the remaining 170 output variables are used as input to Design constraint-nodes. Approaching the workflow from left to right, see Figure 3.24, the Scheduler-nodes, that is the DOE-node and the chosen optimization algorithm, are the starting point of the optimization. Sobol Design is used for setting up the DoE and FSIMPLEX is the algorithm used.

Figure 3.24: Illustration of the process sequence in the workflow.

A MATLAB™ process runs the calculations required for providing the next process step with information and for providing the objective with input variables. When MATLAB™ is finished, the DOS-node commands ABAQUS to read the Input file and perform an FE-analysis of the boom. Once ABAQUS has completed, the Logic End-node terminates the process flow. A detailed description of this workflow is found in Appendix K: Workflow; the "single-objective optimization problem 2".

3.5.3 Multi-objective optimization workflow in modeFRONTIER™

In the same way as the problem in Appendix I: Solving a multi-objective problem in modeFRONTIER™, an optimization workflow of the BUT45 L has been set up, see Figure 3.25.

Figure 3.25: Optimization workflow of the BUT45 L in modeFRONTIER™.

Starting from the top and moving down: 11 design variables are sent to MATLAB™ for processing. MATLAB™ passes on the information in two directions; it sends 1 output variable to an objective-node and 50 output variables to an Input file-node. The Input file-node sends the data on to a DOS Batch-node and on to ABAQUS, which also retrieves the same 11 design variables that MATLAB™ has been processing. ABAQUS processes the data into 172 output variables, where 2 output variables are passed on to an objective function and the remaining 170 output variables are used as input to Design constraint-nodes. Approaching the workflow from left to right, see Figure 3.26, the Scheduler-nodes, that is the DOE-node and the chosen optimization algorithm, are the starting point of the optimization. Sobol Design is used for setting up the DoE and FMOGA-II is the algorithm used, see Section 2.6.3: The MOGA-II algorithm for details of the optimization algorithm.

Figure 3.26: Illustration of the process sequence in the workflow.

A MATLAB™ process runs the calculations required for providing the next process step with information and for providing Objective1 with input variables.


When MATLAB™ is finished, the DOS-node commands ABAQUS to read the Input file and perform an FE-analysis of the boom. Once ABAQUS has completed, the Logic End-node terminates the process flow. A detailed description of this workflow is found in Appendix L: Workflow; the "multi-objective optimization problem".


4 Findings and analysis

The results of the three optimizations will be described in this chapter.

4.1 Single-objective optimization 1 in modeFRONTIER™

The aim of this optimization was to find a maximized coverage area. The running time for the optimization process was 3 min 28 sec, during which 357 design experiments were validated, see Figure 4.1. An increase of the coverage area could be identified. CONFIDENTIAL

Figure 4.1: History of the Objective function for SIMPLEX.

CONFIDENTIAL


4.2 Single-objective optimization 2 in modeFRONTIER™

The objective was to minimize the weight of the two boom beams. The running time for the optimization was 9 hours, 55 min, 18 sec, during which 21 design experiments were validated, see Figure 4.7. The optimization gave results where the weight was kept or even reduced compared to the current boom. CONFIDENTIAL

Figure 4.7: History of the Objective function for SIMPLEX.

CONFIDENTIAL


4.3 Multi-objective optimization in modeFRONTIER™

The multi-objective optimization in modeFRONTIER™ gave, with the chosen settings, an increased coverage area while the volume was decreased. All stresses were kept within the given limitations. The optimization ran for around 135 hours and executed 277 design validations, where 10 load cases were evaluated for each design, see Figure 4.9. The design validations were divided over 6 iterations, and between the iterations metamodel training was performed. Since FMOGA-II here runs a large multi-objective optimization, no convergence occurred and no obvious Pareto front could be seen. The designs can be seen in Figure 4.9 as the volume relative to the coverage area. CONFIDENTIAL

Figure 4.9 Volume relative to the coverage area, where the yellow symbols represent infeasible designs. The marked design is the chosen solution suggestion.

CONFIDENTIAL


5 Discussion and conclusions

The methods and results will be discussed and further improvements will be proposed in this chapter.

5.1 Discussion of method

During the project, many essential decisions were taken that had a great impact on the results.

5.1.1 Using modeFRONTIER™

One of the most crucial decisions taken throughout this project has been to use modeFRONTIER™ for building the optimization workflows. An alternative way of solving the problem might have been to use MATLAB to run different commands for calling the software needed for the FE-analysis and for running response surface and algorithm scripts. This solution would have put demands on the programming skills of the students, and further delimitations would have had to be made in order to fit the time plan. The workflow would also have been limited to the selected design of experiments method, algorithm and metamodeling technique.

The flexibility of modeFRONTIER™ allows the user to implement and evaluate different optimizers without any major effort and knowledge. This is an advantage as well as a disadvantage: it is of great importance that the engineer is well informed about the consequences of different settings and algorithms. The nodes in the program must not be treated as black boxes where the engineer has no or very little knowledge of the procedures taking place.

Another reason for implementing modeFRONTIER™ is the compatibility with the software used at Atlas Copco Rock Drills AB in Örebro. It is possible to include MATLAB as well as the various CAD and FEA software in the optimization workflow. Part geometry can, instead of being generated in the FEA software, be imported from a CAD software into the FEA software.

5.1.2 Formulation of the multi-objective problem

In Section 3.4.3: Formulation of the multi-objective optimization problem, three different formulations of the multi-objective problem were presented. The first formulation treats the multi-objective problem by implementing a weighted formulation; this formulation makes it possible for the engineer to determine the importance of the different characteristics. On the other hand, it puts considerable demands on the knowledge of the system and of the problem. Also, favoring one objective over another could result in missing out on possible solutions that the engineer is unaware of.



The second formulation treats one of the objectives as a constraint. It makes the problem easier to solve in terms of finding the optimal solution, but the solution could be inadequate. As previously mentioned, the downside of turning a soft preference into a hard constraint is that the final solution might largely depend on the chosen value of the constraint. The Engineer has to be careful in the decision concerning the limits of the constraint. The third formulation treats the problem as a true multi-objective problem, which is harder to solve in terms of finding the solution. By introducing the concepts of Pareto optimality and Pareto fronts in Section 2.3.2: Multi-objective optimization and Pareto optimality, it was established that every design point on the Pareto front is mathematically an optimal design. This generates a large number of solutions, but it also gives the Engineer a good overview of possible solutions. In contrast to the first formulation, the favoring of the objectives is done after the problem has been solved. It is possible for the Engineer to evaluate and validate the solutions before possibly rejecting a design solution. In this thesis, the third formulation of the problem has been implemented, since it provides all possible solutions without decisions having to be made that may affect the outcome in a negative way. In this way, the decision-making on the optimal design is passed forward to the specialists at Atlas Copco Rock Drills AB.
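For reference, the three formulations discussed above can be summarized in generic notation (a sketch; here $f_1$ denotes the negated coverage area, so that maximization becomes minimization, $f_2$ denotes the weight, and $w_1$, $w_2$ and $\varepsilon$ are illustrative symbols, not the exact notation of Section 3.4.3):

$$\text{(1)}\quad \min_{\mathbf{x}}\; w_1 f_1(\mathbf{x}) + w_2 f_2(\mathbf{x}), \qquad w_1, w_2 \ge 0$$

$$\text{(2)}\quad \min_{\mathbf{x}}\; f_1(\mathbf{x}) \quad \text{subject to} \quad f_2(\mathbf{x}) \le \varepsilon$$

$$\text{(3)}\quad \min_{\mathbf{x}}\; \bigl(f_1(\mathbf{x}),\, f_2(\mathbf{x})\bigr) \quad \text{in the Pareto sense}$$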

5.1.3 Used algorithms

For solving the single-objective optimization problems 1 and 2, see Section 3.5.1: Single-objective optimization workflow 1 in modeFRONTIER™ and Section 3.5.2: Single-objective optimization workflow 2 in modeFRONTIER™, the SIMPLEX and the FSIMPLEX algorithms have been used. As described in Section 2.6.2: The SIMPLEX algorithm, SIMPLEX is an optimization algorithm that gives an accurate result even for non-linear problems. When searching for a solution, the contraction and reflection movements assure that the SIMPLEX always moves in a positive direction, towards a better solution.

The fast version of SIMPLEX, the FSIMPLEX, which utilizes response surfaces to speed up the optimization, could also have been used for solving the first single-objective problem. However, since the calculation of the coverage area only takes approximately 0.5 seconds, it is considered unnecessary to use response surfaces to predict the behavior; evaluating the different RSM models would take substantially more time than the actual validation of the design. For the second single-objective problem, the use of RSM is motivated by the computational time for the FE-analysis: one design validation takes approximately 30 minutes to complete. The validation of one design is "too expensive to waste on nonessential designs".

For the multi-objective problem, the FMOGA-II algorithm has been used, see Section 2.6.3: The MOGA-II algorithm. The optimizer, just as the FSIMPLEX, implements metamodels to predict the behavior of the problem and speed up the optimization process. For the multi-objective problem, as for the second single-objective one, the use of metamodels is highly motivated by the demanding computational time for the FE-analysis. Also, FMOGA-II is the only available optimizer in modeFRONTIER™ that both implements metamodels and manages multi-objective problems.
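To make the reflection and contraction movements mentioned above concrete, a minimal sketch of one step of a Nelder-Mead-type SIMPLEX is given below (a simplified textbook variant without expansion and shrink steps, written in Python; this is not Esteco's implementation):

```python
import numpy as np

def simplex_step(simplex, f):
    """One reflection/contraction step of a Nelder-Mead-type SIMPLEX
    (minimal sketch for a minimization problem)."""
    # Sort vertices from best (lowest objective value) to worst.
    simplex = sorted(simplex, key=f)
    worst = simplex[-1]
    # Centroid of all vertices except the worst one.
    centroid = np.mean(simplex[:-1], axis=0)
    # Reflect the worst vertex through the centroid.
    reflected = centroid + (centroid - worst)
    if f(reflected) < f(simplex[-2]):
        simplex[-1] = reflected                            # accept reflection
    else:
        simplex[-1] = centroid + 0.5 * (worst - centroid)  # contract instead
    return simplex

# Example: minimizing f(x, y) = x^2 + y^2 from a unit simplex.
f = lambda p: p[0]**2 + p[1]**2
simplex = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
for _ in range(20):
    simplex = simplex_step(simplex, f)
```

Each step either moves the worst vertex towards a better region (reflection) or pulls it in towards the remaining vertices (contraction), which is why the simplex always progresses towards a better solution.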

5.2 Discussion of findings

CONFIDENTIAL

5.3 Further improvements

Due to the limited time plan, the content of the thesis has been delimited. An understanding of the boom has been developed during the project, and further improvements are presented below.

5.3.1 More realistic FEA-model

The FEA-model in this thesis has been created based on the understanding of the boom that was obtained during the project. An engineer at Atlas Copco with more experience might find ways to improve the FEA-model further, with a more accurate analysis as a result.

By extending the parameterization of the model, the total weight of the boom can be affected in a more realistic way. A change in the dimensions of one part would in real life often lead to other parts being re-dimensioned to correspond with the altered part. In the FEA-model used in this project, only the parts with dimensions that are used as design variables in the optimization are changed, resulting in the weight reduction being smaller than it would have been if the model had been fully parameterized.

By extending the model, taking more relations and conditions into consideration, a solution suggestion closer to a final result might be achieved. The computational effort and the time for building the model will increase, but in the end time can be saved, since the design suggestion might not have to be re-designed to the same degree as a solution generated by a more approximate model. Also, setting up a design optimization workflow requires a one-time effort from the Engineer; solving the workflow is taken care of by computers and does not require constant supervision. CONFIDENTIAL

References

6 References

Arora, J. (Ed.), (2007). Optimization of Structural and Mechanical Systems. Chapter: Messac, A. & Mullur, A. A., Multi-objective optimization: concepts and methods (pp. 121–147). Singapore: World Scientific, ISBN 981-256-962-6.

Atlas Copco, (2007). Rocket Boomer E-series rigs geared up for efficient tunneling. http://www.atlascopco.com/us/News/ProductNews/070423_Rocket_Boomer_E-series_rigs_geared_up_for_efficient_tunnelling.asp (Acc. 8 June 2011)

Atlas Copco, (2009). Technical Specification COP 3038. http://pol.atlascopco.com/SGSite/SGAdminImages/PrintedMatters/5723.pdf (Acc. 26 May 2011)

Bratley, P. & Fox, B., (1988). Algorithm 659: Implementing Sobol's quasirandom sequence generator. ACM Transactions on Mathematical Software, volume 14, 88–100.

Buhmann, M., (2003). Radial Basis Functions: Theory and Implementations. New York: Cambridge University Press, ISBN 0-521-63338-9.

Dassault Systèmes, (2009). Abaqus 6.9-EF Documentation. Providence, RI: Dassault Systèmes.

Dean, A. & Voss, D., (1999). Design and Analysis of Experiments. New York: Springer, ISBN 0-387-98561-1.

Jurecka, F., (2007). Robust Design Optimization Based on Metamodeling Techniques. Aachen: Shaker, ISBN 978-3-8322-6390-4.

Lovison, A., (2007). Kriging (Esteco Technical Report 2007-003). Trieste: Esteco.

Lundgren, J., Rönnqvist, M. & Värbrand, P., (2008). Optimeringslära. Lund: Studentlitteratur, ISBN 978-91-44-05314-1.

MathWorks, (2011). MATLAB Documentation. http://www.mathworks.com/help/?s_cid=global_nav (Acc. 8 June 2011)

Myers, R. & Montgomery, D., (2002). Response Surface Methodology: Process and Product Optimization Using Designed Experiments (2nd ed.). New York: Wiley, ISBN 0-471-41255-4.

Montgomery, D., (2005). Design and Analysis of Experiments (6th ed.). Hoboken, NJ: Wiley, ISBN 978-0-471-48735-7.

Niederreiter, H., (1978). Quasi-Monte Carlo methods and pseudo-random numbers. Bulletin of the American Mathematical Society, volume 84, 957–1041.

Poles, S., (2003a). The SIMPLEX Method (Esteco Technical Report 2003-005). Trieste: Esteco.

Poles, S., (2003b). MOGA-II: An Improved Multi-Objective Genetic Algorithm (Esteco Technical Report 2003-006). Trieste: Esteco.



Rigoni, E., (2007). Radial Basis Functions Response Surfaces (Esteco Technical Report 2007-001). Trieste: Esteco.

Rigoni, E., (2010). Fast Optimizers: General Description (Esteco Technical Report 2010-004). Trieste: Esteco.

Spong, M., Hutchinson, S. & Vidyasagar, M., (2006). Robot Modeling and Control. Hoboken, NJ: Wiley, ISBN 978-0-471-64990-8.

Strömberg, N., (2010). Nonlinear FEA and Design Optimization for Mechanical Engineers. Jönköping: Jönköping University, School of Engineering.



7 Search terms

Beam elements
Continuum elements
Coverage area
Denavit-Hartenberg convention
FE-analysis
FE-elements
Forward kinematics
Metamodels
Multi-objective optimization
Objective function
Optimization
Pareto front
Response surface models (RSM)
The optimization process
Truss elements



8 Appendices Appendix A – Drawing of BUT 45 L Appendix B – Angles and variables Appendix C – Forward Kinematics Appendix D – Calculation of coverage area Appendix E – Setting up a Denavit-Hartenberg for a two-link kinematic chain Appendix F – Load cases Appendix G – New FEA-model relations Appendix H – Mathematical formulation of the optimization problems Appendix I – Solving a simple multi-objective problem in modeFRONTIER™ Appendix J – Workflow; the “single-objective optimization problem 1” Appendix K – Workflow; the “single-objective optimization problem 2” Appendix L – Workflow; the “multi-objective optimization problem”

Appendix A – Drawing of BUT 45 L

Drawing of BUT 45 L

Appendix B – Angles and variables

Angles and variables

CONFIDENTIAL

Appendix C – Forward Kinematics

Forward Kinematics

An algorithm for assigning the coordinate frames according to the Denavit-Hartenberg convention. The following text is an extract from pp. 110–111 of Spong et al., (2006), Robot Modeling and Control, and describes a procedure for deriving the forward kinematics based on the DH convention.

Step 1: Locate and label the joint axes z_0, ..., z_{n-1}.

Step 2: Establish the base frame. Set the origin anywhere on the z_0-axis. The x_0 and y_0 axes are chosen conveniently to form a right-handed frame.

For i = 1, ..., n-1, perform Steps 3 to 5.

Step 3: Locate the origin o_i where the common normal to z_i and z_{i-1} intersects z_i. If z_i intersects z_{i-1}, locate o_i at this intersection. If z_i and z_{i-1} are parallel, locate o_i in any convenient position along z_i.

Step 4: Establish x_i along the common normal between z_{i-1} and z_i through o_i, or in the direction normal to the z_{i-1}–z_i plane if z_{i-1} and z_i intersect.

Step 5: Establish y_i to complete a right-handed frame.

Step 6: Establish the end-effector frame o_n x_n y_n z_n. Assuming the n-th joint is revolute, set z_n = a parallel to z_{n-1}. Establish the origin o_n conveniently along z_n, preferably at the center of the gripper or at the tip of any tool that the manipulator may be carrying. Set y_n = s in the direction of the gripper closure and set x_n = n as s × a. If the tool is not a simple gripper, set x_n and y_n conveniently to form a right-handed frame.

Step 7: Create a table of link parameters a_i, d_i, α_i, θ_i:

a_i = distance along x_i from the intersection of the x_i and z_{i-1} axes to o_i.
d_i = distance along z_{i-1} from o_{i-1} to the intersection of the x_i and z_{i-1} axes. If joint i is prismatic, d_i is variable.
α_i = the angle from z_{i-1} to z_i measured about x_i.
θ_i = the angle between x_{i-1} and x_i measured about z_{i-1}. If joint i is revolute, θ_i is variable.

Step 8: Form the homogeneous transformation matrices A_i by substituting the above parameters into Equation (3.10)*.

*Note: Equation (3.10) refers to the A-matrix presented in Chapter 2.1: Forward Kinematics, p. 5.
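For the reader's convenience, in the standard DH convention each such homogeneous transformation has the well-known form (reproduced here from the general convention, not from the confidential parts of the thesis):

$$A_i = \mathrm{Rot}_{z,\theta_i}\,\mathrm{Trans}_{z,d_i}\,\mathrm{Trans}_{x,a_i}\,\mathrm{Rot}_{x,\alpha_i} = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}$$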

Appendix C – Forward Kinematics

Step 9: Form $T_n^0 = A_1 A_2 \cdots A_n$. This then gives the position and orientation of the tool frame expressed in base coordinates.

Appendix D – Calculation of coverage area

Calculation of Coverage Area

CONFIDENTIAL

Appendix E – Setting up Denavit-Hartenberg for a two-link kinematic chain

Setting up Denavit-Hartenberg for a two-link kinematic chain

Consider a two-link kinematic chain, as seen in Figure E.1: the base frame, with the origin o_0 of the chain, is established as shown. The length of link 1 is a_1 and the length of link 2 is a_2.

Figure E.1: A two-link kinematic chain with coordinate frames assigned according to DH-convention.

The origin o_0 is located where "the z_0-axis intersects with the paper". All z-axes are normal to the page and all origins are located in the same way, at the intersection with the paper. The direction of the axis x_0 is set in an arbitrary manner, while the subsequent frames o_1 x_1 y_1 z_1 and o_2 x_2 y_2 z_2 are fixed by the Denavit-Hartenberg convention. The A-matrices of the links now become

$$A_1 = \begin{bmatrix} \cos\theta_1 & -\sin\theta_1 & 0 & a_1\cos\theta_1 \\ \sin\theta_1 & \cos\theta_1 & 0 & a_1\sin\theta_1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

and

$$A_2 = \begin{bmatrix} \cos\theta_2 & -\sin\theta_2 & 0 & a_2\cos\theta_2 \\ \sin\theta_2 & \cos\theta_2 & 0 & a_2\sin\theta_2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Assembling the T-matrices of the links gives

$$T_1^0 = A_1$$

and

$$T_2^0 = A_1 A_2 = \begin{bmatrix} \cos(\theta_1+\theta_2) & -\sin(\theta_1+\theta_2) & 0 & a_1\cos\theta_1 + a_2\cos(\theta_1+\theta_2) \\ \sin(\theta_1+\theta_2) & \cos(\theta_1+\theta_2) & 0 & a_1\sin\theta_1 + a_2\sin(\theta_1+\theta_2) \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

The coordinates of the end of link 2, with respect to the base frame, are now given by positions [1, 4] and [2, 4] in the $T_2^0$-matrix:

$$x = a_1\cos\theta_1 + a_2\cos(\theta_1+\theta_2), \quad y = a_1\sin\theta_1 + a_2\sin(\theta_1+\theta_2), \quad z = 0$$
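These expressions can be verified numerically. The sketch below (with hypothetical link lengths and joint angles, chosen only for illustration) multiplies the A-matrices and compares the result with the closed-form coordinates:

```python
import numpy as np

def dh_planar(theta, a):
    """Homogeneous A-matrix for a planar revolute link (alpha = 0, d = 0)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, a * c],
                     [s,  c, 0, a * s],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

# Hypothetical link lengths and joint angles, for illustration only.
a1, a2 = 1.0, 0.5
t1, t2 = np.deg2rad(30), np.deg2rad(45)

T = dh_planar(t1, a1) @ dh_planar(t2, a2)   # T_2^0 = A_1 A_2
x, y = T[0, 3], T[1, 3]

# The closed-form expressions derived above give the same end point.
assert np.isclose(x, a1 * np.cos(t1) + a2 * np.cos(t1 + t2))
assert np.isclose(y, a1 * np.sin(t1) + a2 * np.sin(t1 + t2))
```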

Appendix F – Load Cases

Load Cases

CONFIDENTIAL

Appendix G – New FEA-model relations

New FEA-model relations

CONFIDENTIAL

Appendix H – Mathematical formulation of the optimization problems

Mathematical formulation of the optimization problems

CONFIDENTIAL

Appendix I – Solving a simple multi-objective problem in modeFRONTIER™

Solving a simple multi-objective problem in modeFRONTIER™

In this appendix, a simple optimization problem is presented as an example of how optimization problems can be solved using modeFRONTIER™. A structure of two beams with rectangular cross-sections is loaded with a force F, see Figure I.1.

Figure I.1: A beam structure bearing a load F.

The aim is to reduce weight by minimizing the volume V and to reduce the stress S under a given load F. The problem has two conflicting objectives,

$$\min V \quad \text{and} \quad \min S,$$

which depend on the four variables

$$0.02 \le a_1 \le 0.10 \;[\mathrm{m}], \quad 0.02 \le b_1 \le 0.10 \;[\mathrm{m}], \quad 0.02 \le a_2 \le 0.10 \;[\mathrm{m}], \quad 0.02 \le b_2 \le 0.10 \;[\mathrm{m}],$$

where a_1 is the width and b_1 is the height of beam 1, and a_2 is the width and b_2 is the height of beam 2. The load F is equal to 100 kN.


Every variable is included in the flow as an Input variable-node, and the value/interval for the input variable is entered under Properties, where each variable is set as either a constant or a variable, see Figure I.2.

Figure I.2: Settings for the Input variable-node; the variable can be set as constant or variable.

The variables are linked to a Python script which automatically builds the beam structure in Abaqus CAE and performs the FE-calculations. The script, which is enabled in the flow by the Input file-node, has been parameterized for the four variables, see Figure I.3 and Figure I.4. The variables are tagged in the script in order for modeFRONTIER™ to distinguish which values to change.

Figure I.3: Possible settings for the Input file-node. Via the button Edit Input File it is possible to select the Python script, make adjustments to it and enter the variables into the flow, see Figure I.4.


Figure I.4: The Input file-node editor.
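As a complement to the figures, a minimal sketch of what such a parameterized script can look like is given below. The file name, model name and variable values are hypothetical, and the actual script behind Figure I.4 is not reproduced here; the essential point is that modeFRONTIER™ only rewrites the tagged parameter lines at the top for each design.

```python
# build_beams.py -- sketch of a parameterized Abaqus/CAE script (hypothetical).
# It would be run in batch mode with:  abaqus cae noGUI=build_beams.py

# Tagged design variables; modeFRONTIER rewrites these values per design.
a1 = 0.05   # width of beam 1  [m]
b1 = 0.05   # height of beam 1 [m]
a2 = 0.05   # width of beam 2  [m]
b2 = 0.05   # height of beam 2 [m]

from abaqus import mdb
from abaqusConstants import *

model = mdb.models['Model-1']
# ... the geometry, rectangular sections (a1 x b1 and a2 x b2), the 100 kN
# load, mesh and job submission would follow here, all written in terms of
# the four variables so that every design is built automatically ...
```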

The Input file-node is connected to a DOS Batch Script-node which commands the startup of Abaqus CAE and prompts Abaqus to run the Python script, see Figure I.5.

Figure I.5: Properties of the DOS-node, commands are written under Edit DOS Batch Script.


The DOS-node gets input from the Scheduler-nodes, where the first node is a DOE-node which sets up a design matrix for the various variables, see Figure I.6. It is possible for the user to import user-defined design experiments or to select among a large variety of DOE methods. Every DOE method has different settings. In this example, Sobol has been used to generate five design experiments.

Figure I.6: Properties of the DOE-node.
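To illustrate the kind of design matrix the DOE-node produces, a small sketch using SciPy's Sobol generator is shown below (an assumption for illustration only; modeFRONTIER™ uses its own Sobol implementation):

```python
from scipy.stats import qmc

# Quasi-random Sobol designs for the four beam variables in [0.02, 0.10] m.
sampler = qmc.Sobol(d=4, scramble=False)
unit_points = sampler.random(5)   # SciPy warns that 5 is not a power of two
designs = qmc.scale(unit_points, l_bounds=[0.02] * 4, u_bounds=[0.10] * 4)
print(designs)  # one row per design experiment: a1, b1, a2, b2
```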


The second node is an algorithm-node, which retrieves the design experiments from the DOE-node and passes them forward to the DOS-node, see Figure I.7. The user has several different optimization algorithms to select among; different algorithms are suitable for different types of problems. Just as for the DOE methods, every algorithm has different settings. In this example, FSIMPLEX is the algorithm used.

Figure I.7: Properties of the Scheduler-node.

The DOS-node is connected to an ABAQUS-node, which retrieves the files required for analyzing and interpreting the result of the FE-analysis from the DOS-node. The ABAQUS-node gets input files from the Transfer file-node, see Figure I.8.



Figure I.8: Properties of the ABAQUS-node.

The Transfer file-node, just as the name implies, transfers the necessary files to the ABAQUS-node. The location of the different transfer files is specified manually by the user, and the files can be copied or moved to a destination, see Figure I.9.

Figure I.9: Properties of the Transfer file-node.

A Support file-node is connected to the ABAQUS-node; when files are created outside the system, they have to be retrieved by specifying the location of the file, see Figure I.10. The user adds a file from outside the system that will be included in the optimization, in this case the report file generated by ABAQUS.


Figure I.10: Properties of the Support file-node.

The support file is processed by the ABAQUS-node and passed on as output to the Output file-node, see Figure I.11.

Figure I.11: Properties of the Output file-node; the output file is a report file from ABAQUS which is retrieved outside the system for every FEA simulation.

The user has to open the Output-file, look up the sought output data, see Figure I.12, and tag it in order for modeFRONTIER™ to locate the data during the optimization.


Figure I.12: The user tags the data to be retrieved from the Output file. The data is then passed on to the Output variable-node; in this case, the total volume of the two beams is tagged for retrieval.

The data is processed by the ABAQUS-node and turned into output for the Output variable-node. The node retrieves the desired output directly from the ABAQUS-node or the Output file-node, see Figure I.13.

Figure I.13: Properties of the Output variable-node.

In this example, the volume of the beams is retrieved from the ABAQUS report file, while the stress is retrieved from the ODB-file in ABAQUS and has to be located and tagged within the ABAQUS-node, see Figure I.14.


Figure I.14: Introspection of the ODB-file at the ABAQUS-node.
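To show what such an ODB introspection corresponds to in scripting terms, a minimal sketch of reading the peak von Mises stress from an ODB with the Abaqus Python API is given below (the file, step and unit are hypothetical; this is not how modeFRONTIER™ performs the introspection internally):

```python
# read_stress.py -- sketch; run with:  abaqus python read_stress.py
from odbAccess import openOdb

odb = openOdb('beams.odb', readOnly=True)
frame = odb.steps['Step-1'].frames[-1]      # last frame of the load step
stress = frame.fieldOutputs['S']            # stress tensor field output
max_mises = max(v.mises for v in stress.values)
print('Peak von Mises stress: %g Pa' % max_mises)
odb.close()
```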

All Output variable data is passed forward as input to the Objective-node, in which the objective function is written as a mathematical expression. The user also specifies whether the objective should be maximized or minimized, see Figure I.15. This workflow has one objective which minimizes the volume and one which minimizes the stress.


Figure I.15: Properties of the Objective-node.

The fourth and last node connected to the ABAQUS-node is the Logic End-node, which terminates the process flow, see Figure I.16.

Figure I.16: Properties of the Logic End-node.

The final workflow can be seen in Figure I.17. The workflow can be read in two ways: from top to bottom, the data flow, and from left to right, the process flow.


Figure I.17: The final optimization workflow.

Concluding the workflow: the optimization algorithm used to solve the problem was FMOGA-II, implementing Sobol as the DOE method. The algorithm was allowed to run 25 iterations before exiting, resulting in a total of 175 designs tested. The result of the optimization is the Pareto front seen in Figure I.18.

Figure I.18: Pareto front of the bi-objective problem.


Depending on the weighting of the two objectives, optimal designs are picked from the Pareto front. If the most important objective is to reduce stress, the designs in the lower right corner are the best performing; for these designs the weight of Objective 1 is insignificant and the volume has no importance. If instead minimizing volume is the most important objective, the designs should be retrieved from the upper left corner. These designs will have low volume but high stresses; here the weight of Objective 2 is insignificant. Designs picked from other positions on the Pareto front are compromises between the objectives, but are still optimal depending on how the different properties of the design are valued by the Engineer. In Figure I.19, three different designs from different positions on the Pareto front are presented; the designs are marked with green in Figure I.18. Design A is picked from the lower right corner, Design B from the "middle" of the Pareto front and Design C from the upper left corner.

Design A: large volume, low stress.
Design B: medium volume, medium stress.
Design C: low volume, high stress.

Figure I.19: Three Pareto optimal solutions provided by the workflow in modeFRONTIER™.
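The notion of Pareto optimality used when picking these designs can be made concrete with a small sketch that filters out the non-dominated designs from a set of (volume, stress) pairs. The numbers are hypothetical and the routine is a plain textbook dominance check, not modeFRONTIER™'s implementation:

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated (Pareto optimal) points for a
    minimize-minimize problem, e.g. columns (volume, stress).
    A point is dominated if another point is <= in both objectives
    and strictly < in at least one."""
    points = np.asarray(points)
    keep = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points <= p, axis=1) &
                           np.any(points < p, axis=1))
        if not dominated:
            keep.append(i)
    return points[keep]

# Hypothetical (volume, stress) pairs from validated designs:
designs = [(0.9, 120.0), (0.5, 200.0), (0.7, 150.0), (0.8, 160.0)]
print(pareto_front(designs))   # (0.8, 160.0) is dominated by (0.7, 150.0)
```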

Appendix J –Workflow; the “single-objective optimization problem 1”

Workflow; the “single-objective optimization problem 1”

CONFIDENTIAL

Appendix K – Workflow; the “single-objective optimization problem 2”

Workflow; the “single-objective optimization problem 2”

CONFIDENTIAL

Appendix L – Workflow; the “multi-objective optimization problem”

Workflow; the “multi-objective optimization problem”

CONFIDENTIAL
