
Development of Interactive Structural Optimization Module in C++

Permanent Link: http://ufdc.ufl.edu/UFE0045638/00001

Material Information

Title: Development of Interactive Structural Optimization Module in C++
Physical Description: 1 online resource (59 p.)
Language: english
Creator: Chung, Yongmin
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2013

Subjects

Subjects / Keywords: sqp
Mechanical and Aerospace Engineering -- Dissertations, Academic -- UF
Genre: Mechanical Engineering thesis, M.S.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: Sequential Quadratic Programming (SQP) is a well-known algorithm for solving optimization problems. In simple mathematical problems, the algorithm can find the optimum easily. In real structural problems, however, the SQP algorithm alone, without a proper line search method, cannot find the optimum easily because of the nonlinearity of the objective and constraints. In this research, modifications of SQP, combined with a proper line search method, are applied to find the optimum more reliably. The gradient vector and Hessian matrix are evaluated in the normalized design variable domain. Even with a proper line search method, the automated algorithm can sometimes be inefficient because of the complexity of the problem. In addition, the designer's experience may help to find an optimum design that is difficult to formulate in the optimization problem. This research adds an interactive optimization session, a What-if study, and a Trade-off analysis, so that the designer can join while the SQP algorithm is running, change the algorithm, and affect the result. In this way, the designer can obtain the desired result without violating constraints, or prevent the algorithm from running inefficiently. After the optimum point is calculated, it is not easy to use the continuous variables as they are. In the real world, discrete variables are widely used, so this research presents how to obtain a discrete optimum efficiently based on the continuous optimum. In addition, based on the final result, this research shows how to interpolate the design variables between the optimum design and a specific design.
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by Yongmin Chung.
Thesis: Thesis (M.S.)--University of Florida, 2013.
Local: Adviser: Kim, Nam Ho.

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2013
System ID: UFE0045638:00001

Full Text

DEVELOPMENT OF INTERACTIVE STRUCTURAL OPTIMIZATION MODULE IN C++

By

YONGMIN CHUNG

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2013

© 2013 Yongmin Chung

To my parents

ACKNOWLEDGEMENTS

I would like to express the deepest appreciation to my committee chair, Professor Nam Ho Kim, who has patiently guided me through this work. He has taken all the pains to go through this work and make the necessary corrections as and when needed. I express my thanks to the member of my supervisory committee, Professor Raphael T. Haftka, for extending his support and willingness to review my Master's research and provide constructive comments to help me complete this paper. I would also like to thank Jinsang Chung, Susheel Kumar Gupta, and Shu Shang in the Multidisciplinary Optimization lab in the Mechanical and Aerospace Engineering department for their comments and help. Most of all, I would like to give my heartfelt thanks to my family for supporting me. Without their prayers, I could not have finished my work abroad. I also extend my thanks to all well-wishers.

TABLE OF CONTENTS

ACKNOWLEDGEMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER
1 INTRODUCTION
  1.1 Motivation
  1.2 Objective
2 SEQUENTIAL QUADRATIC PROGRAMMING
  2.1 Gradient Based Search Method
    2.1.1 General Concepts
    2.1.2 General Iterative Algorithm
  2.2 Steepest Descent Method
    2.2.1 Descent Direction
    2.2.2 Steepest Descent Algorithm
  2.3 Constraints Normalization
  2.4 Design Variable Normalization
  2.5 Convergence Criteria
  2.6 Gradient Vector
  2.7 BFGS Hessian Update Method
  2.8 Sequential Quadratic Programming
    2.8.1 Quadratic Programming Subproblem (QP Subproblem)
    2.8.2 Descent Function
    2.8.3 Inexact Line Search Method
    2.8.4 Program Structure
  2.9 Numerical Result
3 INTERACTIVE DESIGN OPTIMIZATION
  3.1 Algorithms in Interactive Session
    3.1.1 Trade-Off Analysis
    3.1.2 Steepest Descent Method
    3.1.3 User Defined Direction
  3.2 What-If Study
4 POST PROCESS
  4.1 Discrete Optimum Design
  4.2 Design Interpolation
  4.3 Numerical Example
5 SUMMARY AND CONCLUSION

APPENDIX
A PROGRAM INPUT FILE
B ALGORITHMS

LIST OF REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

2-1 Design data for the 10 bar truss
2-2 Comparison of optimum solutions for the 10 bar truss in example 1
2-3 Comparison of the iteration number in example 1
2-4 Design data for the 10 bar truss in example 2
2-5 Comparison of optimum solutions for the 10 bar truss in example 2
2-6 Design data for the 25 bar truss in example 3
2-7 Comparison of optimum solutions for the 25 bar truss
4-1 Design data for the 10 bar truss
4-2 Result of optimum solution for the 10 bar truss
4-3 Discrete candidate table
4-4 Sum of the gradient of active constraints
4-5 Result of discrete optimum design

LIST OF FIGURES

2-1 Design variable in normalized domain
2-2 Flow chart of the program
2-3 Ten bar truss
2-4 Ten bar truss
2-5 Twenty-five bar truss
3-1 Design variable in normalized domain
4-1 Discrete optimization
4-2 Design interpolation
4-3 Ten bar truss
4-4 Input variables for 10 bar truss

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

DEVELOPMENT OF INTERACTIVE STRUCTURAL OPTIMIZATION MODULE IN C++

By

Yongmin Chung

May 2013

Chair: Nam Ho Kim
Major: Mechanical Engineering

Sequential Quadratic Programming (SQP) is a well-known algorithm for solving optimization problems. In simple mathematical problems, the algorithm can find the optimum easily. In real structural problems, however, the SQP algorithm alone, without a proper line search method, cannot find the optimum easily because of the nonlinearity of the objective and constraints. In this research, modifications of SQP, combined with a proper line search method, are applied to find the optimum more reliably. The gradient vector and Hessian matrix are evaluated in the normalized design variable domain. Even with a proper line search method, the automated algorithm can sometimes be inefficient because of the complexity of the problem. In addition, the designer's experience may help to find an optimum design that is difficult to formulate in the optimization problem. This research adds an interactive optimization session, a What-if study, and a Trade-off analysis, so that the designer can join while the SQP algorithm is running, change the algorithm, and affect the result. In this way, the designer can obtain the desired result without violating constraints, or prevent the algorithm from running inefficiently.

After the optimum point is calculated, it is not easy to use the continuous variables as they are. In the real world, discrete variables are widely used, so this research presents how to obtain a discrete optimum efficiently based on the continuous optimum. In addition, based on the final result, this research shows how to interpolate the design variables between the optimum design and a specific design.

CHAPTER 1
INTRODUCTION

1.1 Motivation

For structural optimization problems, Sequential Quadratic Programming (SQP) is one of the most robust algorithms. However, it still converges slowly for some practical problems, and the algorithm finds only a local optimum point. In addition, because the procedure is automated, there is no chance for a designer to intervene in the process until the result comes out, which may converge to a design different from what the designer expects. Although Sequential Quadratic Programming is a good algorithm as it is, it needs to be modified to be faster and more robust for practical problems. In addition, in order to give the designer a chance to control the process, it needs an interactive procedure and incorporation of other algorithms. Also, after the final optimum design is calculated, some post-processing is needed so that the optimum design can be adjusted to consider practical situations, such as the discreteness of design variables and compensating for small violations of constraints. Using continuous variables directly in the real world can be difficult, so methods are needed to calculate discrete variables based on the continuous ones. However, simulating all possible discrete grids around the continuous optimum design may require a huge number of additional simulations; for example, with 10 design variables, there are 2^10 = 1024 possible discrete candidates, which easily exceeds the available resources in practical applications. Moreover, the optimum design is not always adopted in the real world. For many reasons, some structures are built not as the optimum design, but based on the optimum design. A design interpolation method is needed to deal with this.

1.2 Objective

The first objective of this research is to make Sequential Quadratic Programming (SQP) more robust and stable, so that the improved algorithm can solve structural problems faster and find the optimum better. Using a proper line search method helps to find a proper step size. To modify the algorithm, two constants are added to the original algorithm in this research: one controls the gradient of the cost function and the other controls the violation of the constraint functions. The constant for the gradient of the cost function helps the algorithm reduce the number of iterations compared to the original SQP algorithm, and the constant for the constraint violation controls how much the final result may violate the constraints. For the step size, many methods were studied, and the one that shows good results in many practical problems was applied. These are explained in Chapter 2.

The second objective of this research is to explore the capability of interactive design optimization. By providing a chance for the designer to intervene during optimization, the designer can control and manage the design update in each iteration. This capability is critical for utilizing the designer's experience during optimization. An algorithm can be a good numerical method for automation, but a well-trained designer can still be better than any automated algorithm, and in many cases a designer can suggest a better design than the computer algorithm. With the interactive optimization module, a designer can not only use Sequential Quadratic Programming but also suggest other design updates for a better design, as explained in Chapter 3. Throughout this research, it is shown how other variations of Quadratic Programming can be used and how a designer can suggest another design during the automated process.

Lastly, after the optimum design is calculated, the post-processing of the optimum design is shown. In the real world, it is not easy to use continuous variables as they are; this research discusses a good way to choose discrete variables based on the continuous variables. In addition, there are cases where, even though the design suggested by the program is good, the designer should change the design for some other reason, for example an aesthetic or other specific purpose. In this case, the two designs, one from the program and one from the designer, need to be interpolated. This is discussed in Chapter 4.

CHAPTER 2
SEQUENTIAL QUADRATIC PROGRAMMING

2.1 Gradient Based Search Method

2.1.1 General Concepts

Gradient-based methods use the gradients of the cost and constraint functions to search for the optimum solution. To use these methods, all of the functions should be continuous and smooth, which means the functions should be differentiable everywhere in the design space. The cost function should be at least twice continuously differentiable everywhere in the feasible region so that the Hessian matrix exists; this will be explained later. Also, the design variables are assumed to be continuous in their space. Since these methods use only local information at a point during their search, they may converge only to a local minimum point of the cost function. However, with newly developed methods, for example Newton's method, the modified Newton's method, and quasi-Newton methods with direct Hessian updating (the DFP method proposed by Davidon [1] and modified by Fletcher and Powell [2], and the BFGS method by Gill [3] and Nocedal and Wright [4]), gradient-based methods have become better at searching for global minimum points of the cost function. In this research, a quasi-Newton method with the modified BFGS update by Powell [5] is improved to solve practical problems more robustly and stably. Gradient-based search methods are iterative: the same calculations are repeated in every iteration. With the calculated information, the initial design point is improved until the optimality conditions, also called the convergence criteria, are satisfied.

2.1.2 General Iterative Algorithm

Many gradient-based methods are described by the following prescription [6]:

    x^(k+1) = x^(k) + Δx^(k)                                (2-1)

where
  k : the iteration number
  x^(k) : current point
  Δx^(k) : change in the current point

To calculate Δx^(k), the cost function and constraints are used; their gradients (derivatives) play a key role. In most cases, Δx^(k) is decomposed into two parts:

    Δx^(k) = α_k d^(k)                                      (2-2)

where
  d^(k) : desirable search direction in the design space
  α_k : positive scalar called the step size in the search direction

When α_k d^(k) is added to the current design x^(k), a new point x^(k+1) is calculated in the design space. There are various ways to calculate the desirable direction d^(k) and a proper step size α_k; this research shows a good combination of direction and step size calculations for practical structural optimization. The iterative process searches for the minimum in the design space. The following procedure shows a general algorithm for iterative problems:

1. Estimate a starting design x^(0). Set the iteration counter k = 0.
2. Compute a search direction d^(k) in the design space. This calculation generally requires the cost function value and its gradient for unconstrained problems and, in addition, the constraint functions and their gradients for constrained problems.
3. Check for convergence of the algorithm. If it has converged, terminate the iterative process. Otherwise, continue.
4. Calculate a positive step size α_k.
5. Calculate the new design as x^(k+1) = x^(k) + Δx^(k). Set k = k + 1 and go to step 2.
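The steps above map directly to code. Below is a minimal C++ sketch of this generic loop; the callback names (direction, stepSize, converged) are placeholders standing in for the concrete pieces developed later in this chapter, not part of the thesis program itself.

#include <functional>
#include <vector>

using Vec = std::vector<double>;

// Generic gradient-based iteration x(k+1) = x(k) + alpha_k * d(k) (Eqs. 2-1, 2-2).
Vec iterate(Vec x,
            const std::function<Vec(const Vec&)>& direction,
            const std::function<double(const Vec&, const Vec&)>& stepSize,
            const std::function<bool(const Vec&)>& converged,
            int maxIter = 100) {
    for (int k = 0; k < maxIter; ++k) {
        Vec d = direction(x);                       // step 2: search direction d(k)
        if (converged(d)) break;                    // step 3: convergence check
        double alpha = stepSize(x, d);              // step 4: step size alpha_k
        for (std::size_t i = 0; i < x.size(); ++i)  // step 5: x(k+1) = x(k) + alpha*d(k)
            x[i] += alpha * d[i];
    }
    return x;
}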

2.2 Steepest Descent Method

2.2.1 Descent Direction

The objective of the iterative optimization process is to reach a minimum point having the minimum cost value. So, at each iteration, the cost value should be smaller than at the previous iteration. This statement can be expressed mathematically as

    f(x^(k+1)) < f(x^(k))                                   (2-3)

    ∇f(x^(k)) · d^(k) < 0                                   (2-4)

Substituting Equation 2-2 into Equation 2-3 and linearizing the cost function gives Equation 2-4. A desirable direction d^(k) should satisfy Equation 2-4. Vectors that satisfy this equation are called directions of descent because they reduce the cost value. A step of the iterative method based on those directions is called a descent step, and a method based on the idea of a descent step is called a descent method. With these ideas, the method can search for the optimum point that minimizes the cost value.

The Steepest Descent Method is the simplest, the oldest, and probably the best known numerical method for unconstrained optimization. The idea, introduced by Cauchy [7], is to find the desirable direction d^(k) that decreases the cost function most rapidly at the current iteration. It is also called the gradient method, because the properties of the gradient of the cost function are used. The gradient of a scalar function f(x_1, x_2, ..., x_n) is defined as the column vector

    ∇f = [∂f/∂x_1, ∂f/∂x_2, ..., ∂f/∂x_n]^T                 (2-5)

One of the most important properties of the gradient is that the gradient at a point x indicates the direction of maximum increase in the cost function. Thus, moving opposite to the gradient decreases the cost function most rapidly. In this way, the negative gradient vector represents the steepest descent direction for the cost function and is written as

    d^(k) = -∇f(x^(k))                                      (2-6)

This direction always satisfies the descent condition:

    ∇f(x^(k)) · d^(k) = -||∇f(x^(k))||^2 < 0                (2-7)

2.2.2 Steepest Descent Algorithm

1. Estimate a reasonable starting design x^(0). Set the iteration counter k = 0 and select a convergence tolerance ε.
2. Compute the gradient of f(x) at the current point x^(k).
3. Calculate the magnitude ||∇f(x^(k))||. Check for convergence: if ||∇f(x^(k))|| < ε, terminate the iterative process. Otherwise, continue.
4. Set the direction d^(k) = -∇f(x^(k)).
5. Calculate a positive step size α_k that minimizes f(x^(k) + α d^(k)).
6. Calculate the new design as x^(k+1) = x^(k) + α_k d^(k). Set k = k + 1 and go to step 2.

2.3 Constraints Normalization

In mathematical problems, the constraint functions and cost function are usually already normalized, or their orders of magnitude are similar, which makes calculation easy. However, structural optimization problems have not only stress constraints but also displacement constraints. So, it is desirable to normalize all of the constraint functions and the cost function to a similar magnitude. For the constraint functions, using their limit value, or target value, is proper for normalization. The cost function, however, has no limit or target value, so using its value at the first iteration is desirable. For example,

Cost function:

    f̄(x) = f(x) / f_0                                       (2-8)

where
  f_0 = initial cost function value
  f̄ = normalized cost function

Constraints:

    ḡ_i(x) = g_i(x) / t_i - 1 ≤ 0                           (2-9)

where
  t_i = target value of each constraint
  ḡ_i = normalized constraint functions

With this normalization, the orders of magnitude are the same for all constraints, so the same convergence criteria can be used for all constraints.
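As an illustration, here is a minimal C++ sketch of Equations 2-8 and 2-9, assuming the g/t - 1 form of the normalized constraints; the function names are hypothetical.

#include <vector>

// Eq. 2-8: cost normalized by its value at the first iteration.
double normalizeCost(double f, double f0) {
    return f / f0;
}

// Eq. 2-9: constraints normalized by their limit values t[i];
// a design is feasible when every entry of the result is <= 0.
std::vector<double> normalizeConstraints(const std::vector<double>& g,
                                         const std::vector<double>& t) {
    std::vector<double> gbar(g.size());
    for (std::size_t i = 0; i < g.size(); ++i)
        gbar[i] = g[i] / t[i] - 1.0;
    return gbar;
}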

2.4 Design Variable Normalization

Along with the normalization of the constraint functions and cost function, it is also desirable to normalize the design variables. In this research, all design variables are normalized between 0 and 1 as

    x̄_i = (x_i - x_il) / (x_iu - x_il)                      (2-10)

where
  Original design variables: x_il ≤ x_i ≤ x_iu
  New design variables: 0 ≤ x̄_i ≤ 1
  x_il : lower bound of x_i
  x_iu : upper bound of x_i

2.5 Convergence Criteria

To use an iterative method, there should be some condition to stop the iteration. The stopping conditions, also called convergence criteria, indicate that the current design point has reached a minimum point having the minimum cost value. In this research, there are two convergence criteria. One is the magnitude of the desirable direction, ||d^(k)|| ≤ ε. This means the next point x^(k+1) is very close to x^(k), so the current point is almost at the minimum. The program could keep calculating and move closer to the minimum point, but there would not be much difference, and running the program further is inefficient. The other is the step size α_k: when it becomes very small, the current design point is near the minimum point, so further calculation is meaningless. These two criteria are used in this research.
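A small C++ sketch of the mapping in Equation 2-10 and the two stopping tests of Section 2.5; the tolerance eps is an assumed input.

#include <cmath>
#include <vector>

// Eq. 2-10: map a physical variable in [xl, xu] onto [0, 1], and back.
double toNormalized(double x, double xl, double xu) {
    return (x - xl) / (xu - xl);
}

double toPhysical(double xbar, double xl, double xu) {
    return xl + xbar * (xu - xl);
}

// Section 2.5: converged when the direction magnitude ||d|| or the
// step size alpha becomes smaller than the tolerance.
bool converged(const std::vector<double>& d, double alpha, double eps) {
    double norm2 = 0.0;
    for (double di : d) norm2 += di * di;
    return std::sqrt(norm2) <= eps || alpha <= eps;
}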

2.6 Gradient Vector

The partial derivative of a function f(x) with respect to x_1 at a given point x* is ∂f(x*)/∂x_1, with respect to x_2 it is ∂f(x*)/∂x_2, and so on. The column vector of the partial derivatives of f(x) with respect to each x_i is called the gradient vector. Geometrically, the gradient vector is normal to the tangent plane at the point x*, and it points in the direction of maximum increase of the function. It can be calculated with the finite difference method [8] in numerical analysis as

    ∂f/∂x_i ≈ [f(x + Δx_i e_i) - f(x)] / Δx_i               (2-11)

In this research, the design variables are normalized between 0 and 1. Because of that, the gradient in the physical domain cannot be used as it is; it has to be converted to the gradient in the normalized domain, as below.

Cost function:

    ∂f̄/∂x̄_i = (∂f/∂x_i) (x_iu - x_il) / f_0                 (2-12)

Constraint functions:

    ∂ḡ_j/∂x̄_i = (∂g_j/∂x_i) (x_iu - x_il) / t_j             (2-13)

where x_il and x_iu are the lower and upper bounds of x_i.
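A hedged C++ sketch of Equations 2-11 through 2-13: a forward-difference gradient scaled into the normalized domain. The scaling by (xu - xl) is the chain-rule factor from Equation 2-10; the perturbation h and the helper name are assumptions for illustration.

#include <functional>
#include <vector>

std::vector<double> gradientNormalized(
        const std::function<double(const std::vector<double>&)>& f,
        std::vector<double> x,             // physical design point
        const std::vector<double>& xl,
        const std::vector<double>& xu,
        double scale,                      // 1/f0 for the cost, 1/t_j for constraint j
        double h = 1e-6) {
    std::vector<double> grad(x.size());
    const double f0 = f(x);
    for (std::size_t i = 0; i < x.size(); ++i) {
        const double xi = x[i];
        x[i] = xi + h;                                        // perturb one coordinate
        grad[i] = (f(x) - f0) / h * (xu[i] - xl[i]) * scale;  // d f_bar / d x_bar_i
        x[i] = xi;                                            // restore it
    }
    return grad;
}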

2.7 BFGS Hessian Update Method

The Hessian matrix is the matrix of second partial derivatives of a function, simply called the Hessian:

    H = [ ∂²f / ∂x_i ∂x_j ],  i, j = 1, ..., n              (2-14)

However, calculating second-order derivatives costs too much when there are many design variables, so in numerical analysis the Hessian is approximated. The basic idea is to update the current approximation of the Hessian matrix using the changes in the design and in the gradient vectors between two successive iterations. This works on the assumption that the curvature between two iterations is maintained. There are several ways to calculate an approximate Hessian H; in this research, the modified BFGS method suggested by Powell [5] is used, which works well in constrained optimization problems. During the process, several scalars and vectors are computed before updating the Hessian matrix.

Design change vector (α_k is the step size):

    s^(k) = α_k d^(k)                                       (2-15)

Vector:

    z^(k) = H^(k) s^(k)                                     (2-16)

Difference in the gradients of the Lagrange function at two points:

    y^(k) = ∇L(x^(k+1)) - ∇L(x^(k))                         (2-17)

Scalar:

    θ = 1  if  s^(k)·y^(k) ≥ 0.2 s^(k)·z^(k)
    θ = 0.8 s^(k)·z^(k) / (s^(k)·z^(k) - s^(k)·y^(k))  otherwise    (2-18)

Vector:

    w^(k) = θ y^(k) + (1 - θ) z^(k)                         (2-19)

Final formula for updating the Hessian:

    H^(k+1) = H^(k) + w^(k) w^(k)T / (s^(k)·w^(k)) - z^(k) z^(k)T / (s^(k)·z^(k))    (2-20)

Like the gradient vector, the Hessian matrix also has to be converted to fit the normalized domain used in this research. The same quantities are formed from the normalized design change and normalized gradients: the design change vector (2-21), the vector z̄ (2-22), the difference in the gradients of the Lagrange function (2-23), where equality constraints are not used, the scalar θ (2-24), the vector w̄ (2-25), and the final update formula (2-26). A tilde (~) over a variable means it is a normalized value.
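A C++ sketch of the update in Equations 2-15 to 2-20, assuming dense vectors and matrices; this illustrates Powell's damped BFGS formula and is not the thesis module itself.

#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

static double dot(const Vec& a, const Vec& b) {
    double r = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) r += a[i] * b[i];
    return r;
}

// s = alpha*d is the design change (Eq. 2-15), y the change in the
// Lagrange-function gradient (Eq. 2-17); H is updated in place.
void bfgsUpdate(Mat& H, const Vec& s, const Vec& y) {
    const std::size_t n = s.size();
    Vec z(n, 0.0);                              // Eq. 2-16: z = H s
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) z[i] += H[i][j] * s[j];

    const double sy = dot(s, y), sz = dot(s, z);
    const double theta = (sy >= 0.2 * sz) ? 1.0
                       : 0.8 * sz / (sz - sy);  // Eq. 2-18: damping keeps H positive definite

    Vec w(n);                                   // Eq. 2-19: w = theta*y + (1-theta)*z
    for (std::size_t i = 0; i < n; ++i) w[i] = theta * y[i] + (1.0 - theta) * z[i];

    const double sw = dot(s, w);
    for (std::size_t i = 0; i < n; ++i)         // Eq. 2-20: rank-two update
        for (std::size_t j = 0; j < n; ++j)
            H[i][j] += w[i] * w[j] / sw - z[i] * z[j] / sz;
}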

2.8 Sequential Quadratic Programming

Among the gradient-based methods, Sequential Quadratic Programming is one of the most robust for finding the optimum point and has been used successfully for many years. There are two basic steps in implementing Sequential Quadratic Programming. The first step calculates the desirable direction in the design space using the values and gradients of the cost and constraint functions. The second step calculates a step size along the search direction to minimize a descent function. Step 1 is solved by the Quadratic Programming subproblem. In addition, other variations of Quadratic Programming can be solved in the process; these are shown in Chapter 3.

2.8.1 Quadratic Programming Subproblem (QP Subproblem)

Quadratic Programming can be expressed as

    minimize    c · d + 0.5 d^T H d                         (2-27)
    subject to  N^T d = e                                   (2-28)
                A^T d ≤ b                                   (2-29)

where N and A contain the gradients of the equality constraints h and the inequality constraints g, and
  e : equality constraint values
  b : inequality constraint values

In most practical problems, the cost and constraint functions are nonlinear. In the QP subproblem, these functions are linearized using the gradient vector and the Hessian matrix at the current design. For the cost function, a second-order term is added so that it becomes a quadratic function, which is why it is called a quadratic programming subproblem. The factor of 1/2 in the second term of Equation 2-27 is introduced to eliminate the factor of 2 during differentiation. There are two important properties of this subproblem. One is that the QP subproblem is strictly convex, and therefore its minimum (if one exists) is global and unique. The other is that the cost function represents an equation of a hypersphere with its center at -c (a circle in two dimensions, a sphere in three dimensions). The solution of the QP subproblem gives the search direction d. In addition, it gives the values of the Lagrange multipliers for the constraints; these multipliers are needed to decide the penalty values in the descent function. Here, the normalized gradient vector and normalized Hessian matrix are used to compose the QP subproblem. In addition, for a more robust and stable algorithm for practical structural problems, the original QP subproblem is modified as

    minimize    k c̄ · d + 0.5 d^T H̄ d                       (2-30)
    subject to  Ā^T d ≤ b̄                                   (2-31)

where
  k : scaling constant
  c̄ : gradient of the cost function in the normalized domain
  Ā : gradients of the constraints in the normalized domain
  b̄ : normalized inequality function values
  H̄ : Hessian matrix in the normalized domain

For numerical calculation, equality constraints are removed, because it is hard to match a value exactly. Instead of an equality constraint, an inequality constraint with a small interval ε can be used:

    -ε ≤ h(x) ≤ ε                                           (2-32)

In the modified QP subproblem, k is a constant for scaling the gradient of the cost function. If the initial cost function value is large, the gradient of the cost function can become small because of the normalization, and the convergence rate can be slow. To compensate for this, the scale factor k is used. Here, k = 100 is used when the initial cost function value is greater than 1000; the two values, 100 and 1000, are empirical. Optimization algorithms often allow a small violation of the constraints, such that the optimum design slightly violates them. With a second constant, following [12], the constraint violation can be controlled and the magnitude of the violation in the final result can be changed. When the initial design violates the constraints, it is necessary to find a feasible design. Together with the Quadratic Programming subproblem, the Constraint Correction Algorithm [6], one of the variations of Quadratic Programming, is used for better convergence. This algorithm searches for the shortest path to the feasible design space when the current design point is outside the feasible region. It is expressed as

    minimize    0.5 ||d||^2                                 (2-33)
    subject to  Ā^T d ≤ b̄                                   (2-34)

The original algorithm used unity for the step size, but in this research a line search is added. Step sizes of 2.0, 1.0, and 0.5 are checked for this algorithm, because of the linearized constraint functions. The algorithm searches for the shortest path in the linearized domain, but there is a difference between the linearized feasible domain and the real feasible domain. So, by checking the three step sizes, the one with the smallest violation in the real domain is used. Adding this algorithm to the QP subproblem makes the method more robust and stable.

2.8.2 Descent Function

In an iterative optimization method, the cost function is monitored at every design iteration to see whether the cost value is decreasing. With that, a descent towards the minimum point is maintained. A function used to monitor progress towards the minimum is called the descent function or the merit function. The idea is to compute a desirable search direction d and a proper step size along it to reduce the descent function. In unconstrained optimization problems, the cost function itself is used as the descent function. For constrained problems, however, the descent function is usually constructed by adding a penalty for constraint violations to the current value of the cost function. One of the properties of a descent function is that its value at the optimum point must be the same as that of the cost function. Here, the descent function proposed by Han [9] is used with modification:

    Φ(x) = f(x) + Σ_i μ_i g_i⁺(x),   g_i⁺ = max(0, g_i)     (2-35)

where
  Φ_k = Φ(x^(k))
  f : cost function value
  μ_i : penalty parameters for the inequality constraints
  u_i : Lagrange multipliers for the inequality constraints

Originally, the descent function is used with equality constraints as well, but in this research there are no equality constraints, so only the inequality constraint part is used.
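A minimal C++ sketch of Equation 2-35; mu would hold the penalty parameters derived from the QP-subproblem multipliers.

#include <algorithm>
#include <vector>

// Modified Han descent function: cost plus penalties on constraint violations.
double descentFunction(double f,                        // normalized cost value
                       const std::vector<double>& g,    // normalized constraint values
                       const std::vector<double>& mu) { // penalty parameters
    double phi = f;
    for (std::size_t i = 0; i < g.size(); ++i)
        phi += mu[i] * std::max(0.0, g[i]);             // only violations are penalized
    return phi;
}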

2.8.3 Inexact Line Search Method

A step size determination is needed to solve a gradient-based optimization problem. In most practical implementations of the algorithm, an inexact line search is used to determine the step size. Although there are many other methods, like golden section search or polynomial interpolation, the inexact line search method is preferred in most constrained optimization methods. In the inexact line search, a sequence of trial step sizes t_j is defined by

    t_j = (1/2)^j ;  j = 0, 1, 2, 3, 4, ...                 (2-36)

The step size starts with unity, which is the magnitude of the direction resulting from the QP subproblem. If a certain descent condition is not satisfied, the step size is bisected until the condition is satisfied. Another initial step size can also be used. As shown in Figure 2-1, the normalized design domain is between 0 and 1, so the maximum step size α_max from the current design point to the boundary of the design domain can be calculated. Then the inexact line search is used with the maximum step size:

    t_j = α_max (1/2)^j ;  j = 0, 1, 2, 3, 4, ...           (2-37)

where α_max is the maximum step size in the normalized domain. The condition used for accepting a step size is called the descent condition [6]:

    Φ_{k+1,j} ≤ Φ_k - t_j β_k                               (2-38)

where
  t_j : trial step size
  β_k = γ ||d^(k)||²  (γ : specified constant between 0 and 1)
  d : direction
  Φ_k : the descent function mentioned above
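A C++ sketch of the bisecting search of Equations 2-36 to 2-38; gamma and the iteration cap are assumed values.

#include <functional>

// phiAt evaluates the descent function at x + t*d for a trial step t;
// phi0 is the descent function at the current point, dNorm2 = ||d||^2.
double inexactLineSearch(const std::function<double(double)>& phiAt,
                         double phi0,
                         double dNorm2,
                         double tMax = 1.0,   // unity, or alpha_max from the boundary
                         double gamma = 0.5,
                         int maxBisect = 20) {
    const double beta = gamma * dNorm2;
    double t = tMax;
    for (int j = 0; j < maxBisect; ++j, t *= 0.5)  // Eq. 2-37: t_j = tMax * (1/2)^j
        if (phiAt(t) <= phi0 - t * beta)           // Eq. 2-38: descent condition
            return t;
    return t;  // fall back to the smallest trial step
}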

Figure 2-1. Design variable in normalized domain.

2.8.4 Program Structure

The program consists of three parts: the main program, the finite element analysis part, and the optimization part. The program is developed in a modular fashion: it currently has only the SQP algorithm module for the optimization part, and other modules can be added later. To run the program, the designer needs two input files: one for the finite element analysis and the other for the optimization. Two sample input files are attached in the Appendix.

The main program reads the input files and saves the data during the process. The finite element analysis part solves the structural problem to calculate the stresses and displacements in each element. The optimization part runs based on the information from the input data and the finite element analysis.

Figure 2-2. Flow chart of the program.

2.9 Numerical Result

The algorithm described above has been applied to three test problems, all of them truss problems with nonlinear constraints. For each problem, the algorithm of this research is compared with other methods. The first example is one of the famous optimization problems, with ten design variables: a ten bar truss with a stress constraint in each member, which is a nonlinear constraint.

Example 1 (10 bar truss with stress constraints):

Table 2-1. Design data for the 10 bar truss
  Material properties : modulus of elasticity E = 10^4 ksi
  Stress constraints  : 25 ksi for all members
  Initial volume      : 20982 lb
  Unit                : inch, lbf

Figure 2-3. Ten bar truss.

Table 2-2. Comparison of optimum solutions for the 10 bar truss in example 1
          DV1     DV2   DV3    DV4     DV5  DV6  DV7     DV8     DV9     DV10  Cost
  Ref.    7.9378  0.1   8.062  3.9379  0.1  0.1  5.7447  5.5690  5.5690  0.1   15932
  Optist  7.936   0.12  8.073  3.963   0.1  0.1  5.754   5.57    5.569   0.1   15947
  Result  7.984   0.1   8.021  4.0768  0.1  0.1  5.6870  5.6370  5.7540  0.1   16082

Table 2-3. Comparison of the iteration number in example 1
  Default SQP      : 26 iterations
  Optistruct       : 11 iterations
  Modified program : 6 iterations

Before modifying the Sequential Quadratic Programming algorithm, it took 26 iterations to reach the optimum design point. After adding the constant k for the cost gradient, the number of iterations was reduced from 26 to 6. A commercial program, Optistruct, was also used to solve the same problem and converged after 11 iterations. Similar to the first example, the second example is also a ten bar truss problem [11]. In this case, however, a displacement constraint is added, which is also a nonlinear constraint.

Example 2 (10 bar truss with stress and displacement constraints):

Table 2-4. Design data for the 10 bar truss in example 2
  Material properties      : modulus of elasticity E = 10^4 ksi
  Displacement constraints : 2 in, in the x and y directions at nodes 1 and 2
  Stress constraints       : 25 ksi for all members
  Initial volume           : 20982 lb
  Unit                     : inch, lbf

Figure 2-4. Ten bar truss.

Table 2-5. Comparison of optimum solutions for the 10 bar truss in example 2
            Volume   DV1    DV2    DV3     DV4     DV5    DV6    DV7     DV8     DV9     DV10
  Schmit    51073    30.57  0.369  23.97   14.73   0.1    0.364  8.547   21.11   20.77   0.320
  Gellatly  51120    31.35  0.1    20.03   15.60   0.14   0.24   8.35    22.21   22.06   0.1
  Venkayya  50849    30.42  0.128  23.41   14.91   0.101  0.101  5.696   21.08   21.08   0.186
  Rizzi     50766.6  30.73  0.1    23.93   14.73   0.1    0.1    8.542   20.95   21.84   0.1
  Result    51105.7  31.67  0.1    25.982  11.323  0.1    0.1    7.6601  22.930  20.696  0.1

The last example is a twenty-five bar truss problem [11] with eight design variables. In this problem, the twenty-five truss elements are grouped, and stress and displacement constraints are applied to those groups.

Example 3 (25 bar truss with stress and displacement constraints):

Table 2-6. Design data for the 25 bar truss in example 3
  Material properties      : modulus of elasticity E = 10^4 ksi
  Displacement constraints : 3.5 in the x, y, and z directions at nodes 1 to 6
  Stress constraints       : 40 ksi for all members
  Loading data             :
    Node   Px (kips)   Py (kips)   Pz (kips)
    1      1.0         10          10
    2      0           10          10
    3      0.5         0           0
    4      0.6         0           0
  Initial volume           : 9922 lb
  Unit                     : inch, lbf

Figure 2-5. Twenty-five bar truss.

Table 2-7. Comparison of optimum solutions for the 25 bar truss
          Volume   DV1  DV2  DV3  DV4  DV5   DV6   DV7    DV8
  Rajeev  5460.1   0.1  1.8  2.3  0.2  0.1   0.8   1.8    3.0
  Cai     4874.1   0.1  0.1  3.4  0.1  2.0   1.0   0.7    3.4
  Duan    5629.3   0.1  1.8  2.6  0.1  0.1   0.8   2.1    2.6
  Result  4993.14  0.1  0.1  4.1  0.1  1.31  0.98  0.682  3.29

The results shown above demonstrate that Sequential Quadratic Programming (SQP) is a powerful algorithm for nonlinear optimization problems. However, the SQP algorithm runs automatically from the initial design point to the final optimum design point, and there is no chance for the designer to intervene during the process. Sometimes a designer with background knowledge of optimization theory and mechanics wants to change the direction or the updated design point for a better design, because SQP is powerful but not perfect. If the algorithm allows a designer to change the input parameters, or the direction and step size, the designer can reach a better design point than the one from the SQP algorithm alone. This is discussed in the next chapter.

CHAPTER 3
INTERACTIVE DESIGN OPTIMIZATION

In this chapter, by adding an interactive optimization capability, Sequential Quadratic Programming (SQP) gives the designer more chances to change the design point as he/she wants, and becomes more flexible for optimization problems. Although the SQP process is a very powerful algorithm for optimization, there are some numerical errors, for example round-off errors or the discrepancy between the real problem definition and the linearized cost and constraint functions. To overcome these problems, a designer can intervene during the SQP process to correct them. In addition, by choosing other algorithms, a designer can get a better result for his/her own objective. Sometimes the designer's experience can also play an important role in finding the optimum design, because not all design problems can be formulated as an optimization problem. In such cases, it is beneficial to allow the designer to work with the optimization algorithm. There are several algorithms that can be added to the SQP algorithm. In this research, some variations of the SQP algorithm and the Steepest Descent Method are added for the designer. By using variations of the SQP algorithm, the method keeps its consistency as a gradient-based method, and it does not require more information or calculation than the original algorithm. By using the Hessian matrix of the cost function and the gradients of the constraint functions differently, other variations of the SQP algorithm can be obtained. In addition, through a "User Defined Direction" category, a designer can input his/her own direction for updating the design variables. This is useful for incorporating the designer's intuition during the optimization process.

3.1 Algorithms in Interactive Session

3.1.1 Trade-Off Analysis

The purpose of optimization is to reduce the cost function value without violating the constraints. Sometimes, however, the designer might want to reduce the value under certain conditions. Trade-off analysis is the module that calculates various design update directions and step sizes to satisfy the designer, based on the current design point. At the current design point, the gradients of the cost and constraint functions and the Hessian matrix of the cost function have already been calculated. With this information, a designer can choose one of the following algorithms. All of these algorithms are described in [6]. In this research, all of them are modified to fit the normalized design space, and equality constraints are not used. The original algorithms are attached in the Appendix. Here, a tilde (~) on a variable means the variable is a normalized value.

1) Quadratic Programming (default)

This is the basic SQP algorithm described in Chapter 2:

    minimize    c̄ · d + 0.5 d^T H̄ d                         (3-1)
    subject to  Ā^T d ≤ b̄                                   (3-2)

For the step size, the inexact line search is used.

2) Constraint Correction Algorithm

This algorithm is useful when the constraint violation is relatively large at the current design point. Solving the SQP subproblem to reach the optimum point from the infeasible region is one way, but first moving into the feasible region of the design variables can be more effective. This algorithm suggests to the designer the shortest path to the linearized feasible region from the current point, which lies outside the feasible region. To find the shortest path, this algorithm does not put any restriction on the change in the cost function. That means the cost function value could increase dramatically, because this algorithm does not consider the cost function at all; it only finds the shortest path into the feasible region. The algorithm is defined as

    minimize    0.5 ||d||^2                                 (3-3)
    subject to  Ā^T d ≤ b̄                                   (3-4)

Instead of using the unit step size, in this research the step sizes 0.5, 1.0, and 2.0 are used. The method checks the three step sizes and chooses the one with the smallest constraint violation. When the constraints are linear, a unit step size works well, but for nonlinear problems there is an inconsistency between the real constraint functions and the linearized constraint functions; checking the three step sizes overcomes this.

3) Cost Reduction Algorithm

This algorithm is defined without the approximate Hessian matrix H. Without the Hessian update, the problem is defined as

    minimize    c̄ · d                                       (3-5)
    subject to  Ā^T d ≤ b̄,  d^T d ≤ 1                       (3-6)

Although the Hessian update could be used, in this research the cost reduction problem without the Hessian matrix is discussed. After the direction has been determined, the step size can be calculated from a specified reduction in the cost function. For example, if a designer wants to reduce the cost function value by 5%, he/she inputs a fraction

    γ = 0.05                                                (3-7)

and the step size α is then chosen so that the linearized cost reduction c̄ · (α d) equals -γ |f̄|. This relies on the linearization of the cost and constraint functions.

4) Correction at Constant Cost Algorithm

In some cases, the current design point is outside the feasible region, but the constraint violation is not that large. In this case, a designer may want to move the current design point into the feasible region without increasing the cost function value too much. This algorithm can be used in that situation. The basic definition is almost the same as the Constraint Correction Algorithm, but one more constraint is added to it. With this condition, a direction is determined that either reduces or keeps the cost function value unchanged while finding the shortest path to the feasible region.

Strictly, the cost function value does not have to stay constant; there can be a small increase because of the linearization of the cost function. It is defined as

    minimize    0.5 ||d||^2                                 (3-8)
    subject to  c̄ · d ≤ 0,  Ā^T d ≤ b̄                       (3-9)

5) Constraint Correction at Specified Cost

In some cases, the cost function value must be increased to move into the feasible region. A designer can allow some percentage of the cost function value to be increased by adding a cost constraint to Equation 3-4, where γ is the allowable fraction of the cost function value:

    minimize    0.5 ||d||^2                                 (3-10)
    subject to  c̄ · d ≤ γ |f̄|,  Ā^T d ≤ b̄                   (3-11)

6) Constraint Correction with Minimum Increase in Cost

It is also possible to correct the constraints while minimizing the increase in cost. That is defined as

    minimize    c̄ · d                                       (3-12)
    subject to  Ā^T d ≤ b̄                                   (3-13)

This is similar to Linear Programming. A line search method can be used for a proper step size.

3.1.2 Steepest Descent Method

As mentioned before, the steepest descent direction is used as the direction. Here, a line search algorithm is needed. The inexact line search is used, but not in the same way as before. Previously, the Lagrange multipliers were used as the penalty parameters of the descent function, and those parameters come from the Quadratic Programming (QP) subproblem. This algorithm does not use the QP subproblem, so there are no Lagrange multipliers and another penalty parameter is needed. In this research, the designer inputs the penalty parameter, so the designer can see how the result depends on it. The descent function in this method is

    Φ(x) = f(x) + R V(x)                                    (3-14)

where
  Φ_k = Φ(x^(k)) = f_k + R V_k
  f(x) : cost function value
  R : penalty parameter
  V(x) ≥ 0 : maximum constraint violation

In the QP subproblem, the initial step size is unity, because the magnitude of the direction is the difference between the current design point and the updated design point. Here, however, only the steepest descent direction is checked, and the search starts from the boundary of the design space, so the initial step size is calculated in the normalized design variable space (Figure 3-1) as follows.

Figure 3-1. Design variable in normalized domain.

For each variable i,

    if d_i > 0 :  α_i = (1 - x̄_i) / d_i                     (3-15)
    if d_i < 0 :  α_i = (0 - x̄_i) / d_i                     (3-16)

From these equations, the maximum step size, the smallest α_i over all variables, is obtained and used as the initial step size. After that, the step size is bisected according to the descent condition until the condition is satisfied.

3.1.3 User Defined Direction

Sometimes a designer may think that the direction the program suggests is not correct, or may simply want to change the direction. In that case, the designer can choose this category and input the direction as he/she wants.
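A C++ sketch of Equations 3-15 and 3-16 together with the descent function of Equation 3-14; it applies equally to the steepest descent direction and a user-defined direction. The function names are illustrative.

#include <algorithm>
#include <limits>
#include <vector>

// Largest step along direction d that keeps the normalized variables in [0, 1].
double maxStepToBounds(const std::vector<double>& xbar,  // normalized design
                       const std::vector<double>& d) {
    double aMax = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < d.size(); ++i) {
        if (d[i] > 0.0)      aMax = std::min(aMax, (1.0 - xbar[i]) / d[i]);  // Eq. 3-15
        else if (d[i] < 0.0) aMax = std::min(aMax, (0.0 - xbar[i]) / d[i]);  // Eq. 3-16
    }
    return aMax;
}

// Eq. 3-14 with a designer-supplied penalty parameter R;
// V is the maximum constraint violation, max(0, max_i g_i).
double descentFunctionRV(double f, double V, double R) {
    return f + R * V;
}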

3.2 What-If Study

The What-if module suggests a new design update direction and step size based on the algorithms mentioned above, without additional analysis. A designer can take the direction and step size from one of the algorithms or input them directly. After comparing all the directions and step sizes, the designer can choose the best one.

CHAPTER 4
POST PROCESS

4.1 Discrete Optimum Design

The optimization algorithms in the previous chapters use continuous variables. In real life, however, it is hard to use continuous variables, for cost reasons. For example, if the optimum cross-sectional area of a truss member is 7.32332 in^2, it is hard to produce this type of member, and the size is rarely used again, so it costs more to produce. Instead, discrete variables are used in real life, for example 7 in^2 or 8 in^2, so the member can easily be produced again and reused for other problems. However, choosing a discrete value based on a continuous value is not a simple problem. If the designer chooses the discrete value below the continuous value, it may violate the constraints of the problem and lead to a bad result; if the designer chooses the discrete value above the continuous value, it may cost more. On the other hand, it would be costly to check all possible discrete design candidates. For example, with 10 design variables there are 2^10 = 1024 discrete designs around one continuous optimum design, so it would be extremely costly to check them all to find the best discrete design. To choose the correct discrete values, a simple method using the active constraints is suggested in this research. A designer can input discrete values for some variables in the input file, and the algorithm chooses among those values. At the current design point, the gradients of the cost and constraint functions have already been calculated, and the algorithm uses this information. As mentioned before, the gradient of a function is the direction of maximum increase of the function value. So, by avoiding the directions that increase the constraint violation, a safer design point can be calculated.

Figure 4-1. Discrete optimization.

The algorithm of this method is:

    for each design variable i:
        G_total(i) = sum over the active constraints j of ∂g_j/∂x_i
    for each design variable i with discrete candidates:
        if G_total(i) > 0:       x_i = lower discrete candidate
        else if G_total(i) < 0:  x_i = upper discrete candidate
        else:                    # G_total(i) == 0
            if ∂f/∂x_i > 0:      x_i = upper discrete candidate
            else:                x_i = lower discrete candidate

First, the active constraints, those near their limits at the optimum point, are identified; the other constraints, which are not affected by moving the optimum point, are ignored. Then the sum of the gradients of the active constraints is calculated. Along that direction the constraints become more violated, so, depending on that direction, each discrete value is chosen on the opposite side. The result yields the desirable discrete values.
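A C++ sketch of this selection rule; the input arrays mirror Tables 4-3 and 4-4, and the function name is hypothetical.

#include <vector>

std::vector<double> pickDiscrete(const std::vector<double>& xContinuous,
                                 const std::vector<bool>& hasDiscrete,
                                 const std::vector<double>& lower,        // lower candidates
                                 const std::vector<double>& upper,        // upper candidates
                                 const std::vector<double>& gradActiveSum,// sum of active-constraint gradients
                                 const std::vector<double>& gradCost) {
    std::vector<double> x = xContinuous;
    for (std::size_t i = 0; i < x.size(); ++i) {
        if (!hasDiscrete[i]) continue;                     // continuous variable: keep as is
        if (gradActiveSum[i] > 0.0)      x[i] = lower[i];  // moving up violates more
        else if (gradActiveSum[i] < 0.0) x[i] = upper[i];  // moving down violates more
        else x[i] = (gradCost[i] > 0.0) ? upper[i] : lower[i];
    }
    return x;
}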

4.2 Design Interpolation

An optimization algorithm is used to reduce the cost function value, which can be the volume or weight of a structure. However, sometimes a designer wants another design, which is not the optimum design, for aesthetic or other special reasons. If all the designers in the world pursued the optimum design, every structure in the world would look the same, but they do not. For aesthetic reasons, or because of culture, a designer sometimes wants another design. That does not mean the designer ignores the optimum design; the design should still be based on it. This method suggests to the designer an interpolated design between the optimum design and the design the designer wants. During the interpolation, the important issue is not reducing the cost function value, but avoiding constraint violation: no matter how beautiful a design is, it is no use if it is not safe. So, this method cares only about constraint violation. The algorithm uses the gradients at the optimum point; no further gradient calculation is needed. Based on the information at the optimum point, it approximates the linearized constraint functions. First, it calculates the new direction from the optimum point to the design point given by the designer, which is called the user design. Then the new design point is defined as

    x_new = x_program + t (x_user - x_program)              (4-1)

Figure 4-2. Design interpolation.

where
  x_program : the optimum design from the program
  x_user : the user design
  x_new : the interpolated design

Then the linearized constraint functions are defined as

    ḡ(x_new) ≈ ḡ(x_program) + ∇ḡ · (x_new - x_program)      (4-2)

where ∇ḡ is the constraint gradient at the optimum point. From here, the method iteratively checks with a small step whether the new design point violates the constraints. If it violates the constraints, the method restarts from the last design point with a smaller step size until the convergence condition is satisfied.

Iterative process:

Step 1. Choose a convergence tolerance ε. Calculate d from x_program and x_user. Set t = 0.1 and interval = 0.1.
Step 2. Calculate x_new and ḡ(x_new). Check whether ḡ(x_new) ≤ 0. If so, set t = t + interval and repeat this step. Otherwise, go to step 3.
Step 3. If interval is less than ε, stop the process, set t = t_current, and return. Otherwise, set t_lower = t - interval, t_upper = t, and interval = interval * 0.1. Go to step 2.
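A C++ sketch of the iterative process above, recast as a bracketing search for the largest feasible interpolation factor t; the feasibility callback, which would evaluate the linearized constraints of Equation 4-2, is an assumption.

#include <functional>

// feasible(t) reports whether x_new = x_program + t*(x_user - x_program)
// satisfies g_bar <= 0 under the linearization of Eq. 4-2.
double interpolationFactor(const std::function<bool(double)>& feasible,
                           double eps = 1e-4) {
    double t = 0.0, interval = 0.1;
    while (interval >= eps) {                    // Step 3: refine the bracket
        while (t + interval <= 1.0 && feasible(t + interval))
            t += interval;                       // Step 2: advance while feasible
        interval *= 0.1;                         // violation found: shrink the step
    }
    return t;                                    // largest feasible fraction found
}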

After some iterations, the designer gets a new interpolated design between the optimum point and the design point he/she wants.

4.3 Numerical Example

The example is the ten bar truss problem used in Chapter 2.

Table 4-1. Design data for the 10 bar truss
  Material properties : modulus of elasticity E = 10^4 ksi
  Stress constraints  : 25 ksi for all members
  Initial volume      : 20982 lb
  Unit                : inch, lbf

Figure 4-3. Ten bar truss.


Table 4-2. Result of the optimum solution for the 10-bar truss
            DV1     DV2   DV3     DV4      DV5   DV6   DV7      DV8      DV9      DV10   Cost
    Result  7.984   0.1   8.021   4.0768   0.1   0.1   5.6870   5.6370   5.7540   0.1    16082

The result of the optimization is a set of continuous variables; the discrete optimum is calculated from this continuous optimum. First, discrete candidates are supplied for design variables 1, 2, 3, 7, 8, 9, and 10, as shown below.

Figure 4-4. Input variables for the 10-bar truss

The entire input file is attached in Appendix A. The first column of Figure 4-4 is the index of the design variable, the second column is the number of discrete candidates, the third column is the initial value, and the fourth and fifth columns are the lower and upper bounds of the design variable. If a design variable has discrete candidates, they are listed on the line following that design variable. Based on the continuous variables, the program compares the discrete candidates and builds the discrete candidate table.
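Since Figure 4-4 is not reproduced in this text-only version, the following hypothetical excerpt (values illustrative only) follows the column layout just described:

    1   4   5.0   0.1   30.0
        7.5   7.8   8.1   8.4
    4   0   5.0   0.1   30.0

Here design variable 1 has four discrete candidates listed on the following line, while design variable 4 is continuous and has none.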


Table 4-3. Discrete candidate table
    Design variable    Lower bound    Upper bound
    1  (discrete)      7.8            8.1
    2  (discrete)      0              0.1
    3  (discrete)      8.0            8.5
    4  (continuous)    4.0768
    5  (continuous)    0.1
    6  (continuous)    0.1
    7  (discrete)      5.5            6.5
    8  (discrete)      5.5            6.5
    9  (discrete)      5.5            6.5
    10 (discrete)      0.0            0.1

When the continuous result coincides with the lower or upper bound of a discrete candidate pair, that boundary value is used directly. After building the table, the program identifies the active constraints and calculates the sum of their gradients. In this problem there are three active constraints, and the sum of their gradients is

Table 4-4. Sum of the gradients of the active constraints
    Design variable    Sum of the gradients of active constraints
    1  (discrete)      -12.7730
    2  (discrete)      -2.5810
    3  (discrete)      -12.1798
    4  (continuous)    -0.1042
    5  (continuous)    -0.9827
    6  (continuous)    -2.5810
    7  (discrete)      -16.7877
    8  (discrete)      -0.9109
    9  (discrete)      -0.1480
    10 (discrete)      -7.3267

Table 4-4 gives the direction along which the constraints become more violated. The discrete values in Table 4-3 are therefore chosen on the opposite side, following the rule of Section 4.1. The resulting discrete design is


Table 4-5. Result of discrete optimum design
    Design variable    Discrete optimum value
    1  (discrete)      8.1
    2  (discrete)      0.1
    3  (discrete)      8.5
    4  (continuous)    4.0768
    5  (continuous)    0.1
    6  (continuous)    0.1
    7  (discrete)      6.5
    8  (discrete)      6.5
    9  (discrete)      6.5
    10 (discrete)      0.1


CHAPTER 5
SUMMARY AND CONCLUSION

A modified Sequential Quadratic Programming method with an inexact line search is proposed as a robust and stable algorithm. It reduces not only the number of iterations but also the number of analyses. Through numerical examples, it is shown that the proposed algorithm works well on practical structural models. Still, it needs to be tested on more examples, including other composite materials, and since the finite element analysis in this program can currently solve only truss problems, more element types need to be added to handle a wider range of problems.

In the interactive optimization process, several variations of the quadratic programming subproblem and related algorithms are presented. With these, a designer can intervene in and control the procedure, which helps the designer understand in which direction the design point will be updated and which direction is better. By offering several strategies, the process also improves the designer's ability to design other structures.

With the post process, a continuous optimum design can be converted to a discrete optimum design for real-world use. Based on the information at the continuous optimum, the discrete optimum can be calculated in a way that reduces the computational cost of further calculation. In addition, design interpolation suggests to the designer a reasonable design point interpolated between the optimum design and a user-suggested design. By avoiding constraint violation, designers can shape the design as they want, for aesthetic structures or structures with other specific requirements, while staying grounded in the information at the optimum design.

There are still many ways to reduce the cost of the procedure, for example, applying an approximation model to reduce the cost of the line search. This was not applied in this research and should be addressed in future work. Moreover, as the title indicates, this work is organized as a module, so other modules can be used in place of Sequential Quadratic Programming. Deterministic methods are widely used, but uncertainty is one of the most important topics in current research, and accounting for it could reduce the cost of building real structures more effectively than a deterministic method. For that, a reliability-based design optimization module is necessary. When a more reliable uncertainty method is applied to this work, a designer will obtain more information for building a structure based on this program.


APPENDIX A
PROGRAM INPUT FILE

A.1 Sample Input File of Optimization for the 10-Bar Truss


A.2 Sample Input File of FEA for the 10-Bar Truss


APPENDIX B
ALGORITHMS

1) Quadratic Programming (default)
2) Constraint Correction Algorithm
3) Cost Reduction Algorithm (with a step size bound)
4) Constraint Correction at Constant Cost Algorithm
5) Constraint Correction at Specified Cost
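The subproblem statements themselves were lost in the conversion of this document. As a hedged reconstruction, the standard forms of these five strategies from the interactive design optimization literature (e.g., Arora, Introduction to Optimum Design) are sketched below; they are textbook forms, not a transcription of the lost originals. Here d is the search direction, c = grad f is the cost gradient, and N^T d = e and A^T d <= b are the linearized equality and inequality constraints:

\begin{align*}
\text{1) QP (default):} \quad & \min_{d}\ c^{T}d + \tfrac{1}{2}\,d^{T}d
    & \text{s.t.}\quad & N^{T}d = e,\ A^{T}d \le b \\
\text{2) Constraint correction:} \quad & \min_{d}\ \tfrac{1}{2}\,d^{T}d
    & \text{s.t.}\quad & N^{T}d = e,\ A^{T}d \le b \\
\text{3) Cost reduction:} \quad & \min_{d}\ c^{T}d
    & \text{s.t.}\quad & N^{T}d = e,\ A^{T}d \le b,\ \lVert d\rVert \le \Delta \\
\text{4) Correction at constant cost:} \quad & \min_{d}\ \tfrac{1}{2}\,d^{T}d
    & \text{s.t.}\quad & N^{T}d = e,\ A^{T}d \le b,\ c^{T}d = 0 \\
\text{5) Correction at specified cost:} \quad & \min_{d}\ \tfrac{1}{2}\,d^{T}d
    & \text{s.t.}\quad & N^{T}d = e,\ A^{T}d \le b,\ c^{T}d \le \Delta f
\end{align*}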



BIOGRAPHICAL SKETCH

Yongmin Chung was born in Daegu, South Korea, in 1982. He received a Bachelor of Science in mechanical engineering and computer science engineering from Hanyang University in Korea in February 2011. He received his M.S. degree from the University of Florida in the spring of 2013.