## Citation

- Permanent Link: https://ufdc.ufl.edu/UFE0014446/00001
## Material Information

- Title: OPTCON: An Algorithm for Solving Unconstrained Control Problems
- Creator: Li, Shuo (Author, Primary)
- Copyright Date: 2008
## Subjects

- Subjects / Keywords: Conjugate gradient method, Control theory, Cost functions, Flow charts, Information search, Mathematical independent variables, Mathematical variables, Optimal control, Orbits, Rocket propulsion (jstor)
## Record Information

- Source Institution: University of Florida
- Holding Location: University of Florida
- Rights Management: Copyright Shuo Li. Permission granted to the University of Florida to digitize, archive, and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
- Embargo Date: 8/31/2007
- Resource Identifier: 476206011 (OCLC)
OPTCON: AN ALGORITHM FOR SOLVING UNCONSTRAINED CONTROL PROBLEMS

By SHUO LI

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA
2006

Copyright 2006 by Shuo Li

To my wonderful parents, Zhanwu Li and Peirong Zheng.

ACKNOWLEDGMENTS

I express my sincere gratitude to Dr. William W. Hager for his trust, encouragement, guidance, and support, without which this work could not have been completed. I would also like to thank Dr. Shari Moskow and Dr. Jayadeep Gopalakrishnan for agreeing to serve on my committee. I thank the Department of Mathematics for its financial support.

Special thanks go to my parents, who have been working very hard in their careers to enable me to study overseas; they have always given their loving support to my studies. I would like to thank my lovely husband, who provided endless love and support while I was writing this thesis, and my friends Hongchao, Sukanya, and Beyza for their encouragement and support.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT
CHAPTER

1 INTRODUCTION

2 OPTIMAL CONTROL PROBLEMS
  2.1 Discrete-time Systems and Runge-Kutta Discretization
  2.2 Numerical Solution Methods
  2.3 Introduction to the CG_DESCENT Method
  2.4 Applying CG_DESCENT in the Optimal Control Problem

3 IMPLEMENTATION OF OPTCON
  3.1 Introduction to OPTCON
  3.2 Comparison of Performance

4 SURVEY AND ANALYSIS
  4.1 Penalty Factor
  4.2 The Gradient Tolerance Factor
  4.3 Line Search Parameters
  4.4 Applying Different Runge-Kutta Schemes

APPENDIX

A HOW TO USE OPTCON
  System Requirements
  Parameter File and Default Values
  Running
OPTCON

B TESTING DATA FOR GRADIENT TOLERANCE FACTOR

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

3-1. Comparison of performance for Problem 1
3-2. Comparison of performance for Problem 2
4-1. Discrete state error in $L^\infty$ and CPU time with different penalty factors
4-2. Performance of OPTCON using different values of the gradient tolerance factor
4-3. Performance of OPTCON using a combination of line search conditions
4-4. Discrete state error in $L^\infty$ for Problem 1 and schemes 1-6
4-5. Discrete state error in $L^\infty$ and CPU time for Problem 2 and schemes 1-6
A-1. Options for explicit Runge-Kutta schemes
A-2. Parameters' default values in optcon_c.parm
B-1. Approximate Wolfe line search, c = 100000
B-2. Approximate Wolfe line search, c = 10000
B-3. Approximate Wolfe line search, c = 1000
B-4. Approximate Wolfe line search, c = 100
B-5. Approximate Wolfe line search, c = 10
B-6.
Approximate Wolfe line search, c = 1
B-7. Approximate Wolfe line search, c = 0.1
B-8. Approximate Wolfe line search, c = 0.01
B-9. Approximate Wolfe line search, c = 0.001
B-10. Combination line search, c = 10000
B-11. Combination line search, c = 1000
B-12. Combination line search, c = 100
B-13. Combination line search, c = 10
B-14. Combination line search, c = 1
B-15. Combination line search, c = 0.1
B-16. Combination line search, c = 0.01
B-17. Combination line search, c = 0.001

LIST OF FIGURES

2-1. Flow chart for solving optimal control problems
3-1. Main components of OPTCON
4-1. Relationship of gradient tolerance factor c and CPU time
4-2. Comparison of CPU times when applying a combination of Wolfe/approximate Wolfe
A-1.
Driver1.c
A-2. Segment of Driver1.c if the user provides a Runge-Kutta scheme

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

OPTCON: AN ALGORITHM FOR SOLVING UNCONSTRAINED CONTROL PROBLEMS

By Shuo Li

August 2006

Chair: William W. Hager
Major Department: Mathematics

William Hager and Hongchao Zhang developed a new optimization algorithm, CG_DESCENT. Our study shows how CG_DESCENT can be used to solve unconstrained optimal control problems; the resulting algorithm is called OPTCON. This numerical work is meaningful, since optimal control applications appear in many fields, such as aerospace, electronic circuits, heat conduction, and energy optimization. These problems are usually complicated, with a large number of variables, parameters, and initial values; without a good numerical method, we can barely solve them by hand. Thus we need a fast, stable, and highly accurate method with low memory requirements. We tested OPTCON and compared it with another conjugate gradient method.

CHAPTER 1
INTRODUCTION

Based on a study by Hager and Zhang [1], a new conjugate gradient method, CG_DESCENT, was produced. This method can attain a higher convergence speed than the ordinary conjugate gradient method and has a relatively low memory requirement during the computation. My study used the new conjugate gradient method to obtain a new method, OPTCON, for solving nonlinear optimal control problems.

We first define several terms that are used throughout this thesis. Consider an unconstrained optimal control problem in Equation 1-1.
minimize $J(x(t_f))$ (1-1)

subject to

$\dot{x}(t) = f(x(t), u(t), t), \quad t \in [t_0, t_f]$, (1-2)

$x(t_0) = x_0$, (1-3)

where $x(t) \in \mathbb{R}^{nx}$, $\dot{x}(t)$ means $\frac{d}{dt}x$, $u(t) \in \mathbb{R}^{nc}$, $f : \mathbb{R}^{nx} \times \mathbb{R}^{nc} \times \mathbb{R} \to \mathbb{R}^{nx}$, and $J : \mathbb{R}^{nx} \to \mathbb{R}$. The function $J$ evaluates the system performance or system cost. Equation 1-2 is called the system dynamics, a group of differential equations for the state variable $x(t) \in \mathbb{R}^{nx}$. The variable $u(t)$ is the control variable, which is used to optimize the system cost. The system has an initial time $t_0$ and a final time $t_f$. Our numerical work focuses only on problems for which the initial condition (Equation 1-3) is given. Note that some problems also have a final condition, $x(t_f) = x_f$.

CHAPTER 2
OPTIMAL CONTROL PROBLEMS

2.1 Discrete-time Systems and Runge-Kutta Discretization

Hager [2] analyzed a Runge-Kutta discretization and its convergence for unconstrained control problems. To discretize the unconstrained optimal control problem (Equations 1-1 to 1-3), we use a uniform mesh on the time interval, with step length $h = (t_f - t_0)/N$, $N \in \mathbb{N}$. Applying Butcher's [3] s-stage Runge-Kutta integration scheme with coefficients $a_{ij}$ and $b_i$, $1 \le i, j \le s$, to the system dynamics (Equation 1-2), we get Equation 2-1:

$x'_k = \sum_{i=1}^{s} b_i\, f(y_{ki}, u_{ki})$, (2-1)

where $x'_k = \dfrac{x_{k+1} - x_k}{h}$ and

$y_{ki} = x_k + h \sum_{j=1}^{s} a_{ij}\, f(y_{kj}, u_{kj}), \quad 1 \le i \le s, \quad 0 \le k \le N-1$. (2-2)

Therefore, the discrete control problem is Equation 2-3:

minimize $J(x_N)$ (2-3)

subject to

$x'_k = \sum_{i=1}^{s} b_i\, f(y_{ki}, u_{ki})$, where $x_0$ is given, (2-4)

$y_{ki} = x_k + h \sum_{j=1}^{s} a_{ij}\, f(y_{kj}, u_{kj}), \quad 1 \le i \le s, \quad 0 \le k \le N-1$.

Since $x'_k = (x_{k+1} - x_k)/h$, the state $x_N$ in Equation 2-3 is obtained by solving Equation 2-5 (the state equation):

$x_{k+1} = x_k + h \sum_{i=1}^{s} b_i\, f(y_{ki}, u_{ki})$, where $y_{ki} = x_k + h \sum_{j=1}^{s} a_{ij}\, f(y_{kj}, u_{kj})$, (2-5)

$1 \le i \le s$, $0 \le k \le N-1$, and $x_0$ is given.

Now, we explain how to compute the gradient of the cost function $J(x_N)$ with respect to the discrete control. To start, we introduce the associated costate equation (Equation 2-6).
$\psi_k = \psi_{k+1} + h \sum_{i=1}^{s} b_i\, \nabla_x f(y_{ki}, u_{ki})^{\mathsf{T}} \chi_{ki}, \qquad \psi_N = \nabla J(x_N)$, (2-6)

where

$\chi_{ki} = \psi_{k+1} + \sum_{j=1}^{s} \frac{a_{ji}}{b_i}\, d_{kj}$, (2-7)

$d_{kj} = h\, b_j\, \nabla_x f(y_{kj}, u_{kj})^{\mathsf{T}} \chi_{kj}$. (2-8)

As shown in Hager [2], the gradient of the discrete cost with respect to the control component $u_{kj}$ is

$\nabla_{u_{kj}} J = h\, b_j\, \nabla_u f(y_{kj}, u_{kj})^{\mathsf{T}} \chi_{kj}$. (2-9)

We used Equations 2-1 through 2-9 in OPTCON to evaluate the cost function and its gradient. For our numerical work, we assume the Runge-Kutta scheme is explicit. The conditions for explicit 2nd-, 3rd-, and 4th-order Runge-Kutta discretizations can be found in Hager [2].

2.2 Numerical Solution Methods

An analytic or exact solution of an optimal control problem is hard to obtain because of the complexity of the cost function $J(u)$ and the system dynamics $f(x(t), u(t), t)$. In most practical problems, a numerical optimization method must be used. Figure 2-1 shows a flow chart for solving the optimal control problem.

Many methods can be applied in the optimization process. The gradient (or steepest descent) method is one of the oldest and most obvious, but experience has shown that it can be extremely slow. Conjugate gradient methods (such as the feasible direction method and the gradient projection method) can be applied to our project. We compare the gradient projection method with Hager and Zhang's CG_DESCENT method in Chapter 3.

2.3 Introduction to the CG_DESCENT Method

The CG_DESCENT method [1,4] is a conjugate gradient method for solving an unconstrained optimization problem $\min\{f(x) : x \in \mathbb{R}^n\}$, where $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable. The iterate $x_k$ satisfies the recurrence $x_{k+1} = x_k + \alpha_k d_k$, where the step size $\alpha_k$ is positive and $d_k$, the search direction, is generated by Equation 2-10:

$d_{k+1} = -g_{k+1} + \bar{\beta}_k d_k, \qquad d_0 = -g_0$, (2-10)

where $g_k = \nabla f(x_k)$. The CG_DESCENT method uses a special choice (Equation 2-11) for the parameter $\bar{\beta}_k$:

$\bar{\beta}_k = \max\{\beta_k^N, \eta_k\}$, (2-11)

where

$\eta_k = \frac{-1}{\|d_k\| \min\{\eta, \|g_k\|\}}, \qquad \beta_k^N = \frac{1}{d_k^{\mathsf{T}} y_k} \left( y_k - 2 d_k \frac{\|y_k\|^2}{d_k^{\mathsf{T}} y_k} \right)^{\mathsf{T}} g_{k+1}$,

with $y_k = g_{k+1} - g_k$. Here $\eta$ is a positive constant, and $\alpha_k$ is updated by a line search procedure.
It uses secant and bisection steps for a faster convergence rate. The procedure stops as soon as the Wolfe conditions (Equation 2-12) are satisfied:

$\phi(\alpha_k) - \phi(0) \le \delta\, \alpha_k\, \phi'(0) \quad \text{and} \quad \phi'(\alpha_k) \ge \sigma\, \phi'(0)$, (2-12)

where $\phi(\alpha) = f(x_k + \alpha d_k)$. Note that $\delta$ and $\sigma$ are positive constants satisfying $0 < \delta \le \sigma < 1$.

The disadvantage of using the Wolfe conditions is that when the iterate $x_k$ is close to a local minimum, the term $\phi(\alpha_k) - \phi(0)$ becomes relatively inaccurate. Hence, in Hager and Zhang [1], Equation 2-12 is replaced by the approximate Wolfe conditions (Equation 2-13):

$(2\delta - 1)\, \phi'(0) \ge \phi'(\alpha_k) \ge \sigma\, \phi'(0)$, (2-13)

where $\sigma < 1$ and $0 < \delta < 1/2$. This condition is used only when the function value reaches some neighborhood of a local minimum.

By default, the method applies the approximate Wolfe conditions. The user can compute with the standard Wolfe conditions by setting the AWolfe parameter to FALSE in the CG_DESCENT parameter file.

2.4 Applying CG_DESCENT in the Optimal Control Problem

First we initialize the control variable $\{u(t_k)\}_{k=0}^{n}$: $u_0 = \{u^0(t_k)\}_{k=0}^{n}$. Then, we update the control by $u_{k+1} = u_k + \alpha_k d_k$, where $\alpha_k$ is the search step size and $d_k$ is the search direction generated by Equation 2-10. The step size $\alpha_k$ is computed in the same fashion as in Section 2.3. The updating of the control terminates when Inequality 2-14 holds:

$\|\nabla_u J(x_N)\| \le \text{gradient tolerance}$. (2-14)

This method has been implemented in C. The program is called OPTCON.C.

Figure 2-1. Flow chart for solving optimal control problems

CHAPTER 3
IMPLEMENTATION OF OPTCON

3.1 Introduction to OPTCON

OPTCON is a C program for solving the optimal control problem (Equation 1-1). It can be downloaded at http://www.math.ufl.edu/~lishuo/optcon.html. In OPTCON, we provide the routine that evaluates the gradient $\nabla \tilde{J}(u)$ and the routine that evaluates the objective cost $\tilde{J}(u)$. The optimization is performed using Hager and Zhang's CG_DESCENT [1]. The main components of OPTCON are shown in Figure 3-1.
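At the core of the cost evaluation sits the explicit Runge-Kutta sweep of Equation 2-5. The following C sketch shows one such step under assumed conventions (one control value per stage, stage times ignored); `rk_step`, `dynamics_fn`, and `linear_dyn` are illustrative names of my own, not the code shipped in the OPTCON package.

```c
#include <string.h>

/* Signature for the user dynamics f(x, u, t); writes the value into f_out. */
typedef void (*dynamics_fn)(double *f_out, const double *x, const double *u,
                            double t, int nx);

/* One explicit s-stage Runge-Kutta step of Equation 2-5:
 *   y_i     = x_k + h * sum_{j<i} a[i][j] * f(y_j, u_j)
 *   x_{k+1} = x_k + h * sum_i    b[i]    * f(y_i, u_i)
 * a is stored by rows (a[i*s + j]); u holds one control per stage (nc = 1). */
static void rk_step(double *x_next, const double *x, const double *u,
                    double t, double h, int nx, int s,
                    const double *a, const double *b, dynamics_fn f) {
    double fy[s][nx], y[nx];
    for (int i = 0; i < s; i++) {
        memcpy(y, x, nx * sizeof(double));   /* each stage starts from x_k */
        for (int j = 0; j < i; j++)          /* explicit scheme: j < i only */
            for (int m = 0; m < nx; m++)
                y[m] += h * a[i * s + j] * fy[j][m];
        f(fy[i], y, u + i, t, nx);
    }
    memcpy(x_next, x, nx * sizeof(double));
    for (int i = 0; i < s; i++)
        for (int m = 0; m < nx; m++)
            x_next[m] += h * b[i] * fy[i][m];
}

/* Demo dynamics for testing the step: xdot = x (control ignored). */
static void linear_dyn(double *f_out, const double *x, const double *u,
                       double t, int nx) {
    (void)u; (void)t;
    for (int m = 0; m < nx; m++) f_out[m] = x[m];
}
```

With the two-stage scheme 1 of Table A-1 and the demo dynamics $\dot{x} = x$, one step of size $h$ reproduces Heun's method, $x_{k+1} = x_k (1 + h + h^2/2)$.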
To use this program, the user needs to provide a driver code with routines that evaluate the following:

- $\phi(x(t_f))$
- $\frac{d\phi}{dx}(x(t_f))$
- $f(x, u, t) \in \mathbb{R}^{nx}$
- $\nabla_x f(x, u, t) \in \mathbb{R}^{nx \times nx}$
- $\nabla_u f(x, u, t) \in \mathbb{R}^{nx \times nc}$

How to use the OPTCON package, together with sample problems, can be found in Appendix A.

3.2 Comparison of Performance

To evaluate the performance of OPTCON, we compare it with another code, GP.C, which uses the gradient projection method in Hestenes [5] with an Armijo rule for the step size, and then applies the conjugate gradient method with the Polak-Ribiere update of the search direction. The source code can be downloaded at http://www.math.ufl.edu/~lishuo/optcon.html.

We compared performance using Problem 1 and Problem 2 (Appendix A). In the two codes, we applied the same Runge-Kutta scheme, time mesh intervals, initial guess, and gradient tolerance. Since Problem 2 is solved by a penalty approach, we also applied the same penalty factor. The data in Tables 3-1 and 3-2 show that OPTCON with the new CG_DESCENT method runs much faster than the previous conjugate gradient method.

Figure 3-1. Main components of OPTCON

Table 3-1. Comparison of performance for Problem 1

Optimization method | Cost evaluations | Gradient evaluations | CG iterations | CPU time (sec)
CG_DESCENT | 32 | 34 | 20 | 0.11
Gradient projection | 170 | 170 | 47 | 0.24

Table 3-2. Comparison of performance for Problem 2

Optimization method | Cost evaluations | Gradient evaluations | CG iterations | CPU time (sec)
CG_DESCENT | 2616 | 3379 | 1367 | 18.00
Gradient projection | 25815 | 23947 | 2736 | 139.00

CHAPTER 4
SURVEY AND ANALYSIS

The method's performance varies with different choices of parameters and schemes. In this chapter, we discuss performance relative to the parameters of CG_DESCENT and the Runge-Kutta schemes. The sample problems used in this chapter are given in Appendix A.

4.1 Penalty Factor

Let us use Problem 2 (Appendix A). We solve this problem by using a penalty approach.
Since the problem provides the final-time boundary conditions $x_2(t_f) = 0$ and $x_3(t_f) = \sqrt{\mu / x_1(t_f)}$, we try to drive the terms $x_2(t_N)$ and $x_3(t_N) - \sqrt{\mu / x_1(t_N)}$ to zero. Hence, the target cost function becomes Equation 4-1:

$\tilde{J}(x_1, x_2, x_3, \lambda_k, P) = -x_1(t_N) + \lambda_k^{\mathsf{T}} \begin{pmatrix} x_2(t_N) \\ x_3(t_N) - \sqrt{\mu / x_1(t_N)} \end{pmatrix} + P\, x_2(t_N)^2 + P \left( x_3(t_N) - \sqrt{\mu / x_1(t_N)} \right)^2$, (4-1)

where $P$ is the penalty factor. Equation 4-2 updates the multiplier vector $\lambda_k$:

$\lambda_{k+1} = \lambda_k + 2P \begin{pmatrix} x_2(t_N) \\ x_3(t_N) - \sqrt{\mu / x_1(t_N)} \end{pmatrix}$. (4-2)

We recursively apply OPTCON and update the multiplier vector $\lambda_k$ in each iteration. Now, let us try different values of the penalty factor. Table 4-1 shows the infinity norm of the state error and the total CPU time for different choices of penalty factor. Note: we apply time mesh size $N = 500$, Runge-Kutta scheme 2 (Table A-1), and grad_tol $= 10^{-5}$ for all iterations. A greater penalty factor yields a faster convergence rate; however, the higher convergence rate is paid for with longer CPU time. Note that when the penalty factor $P$ is less than 1000, the method does not converge.

4.2 The Gradient Tolerance Factor

At each update of Equation 4-2, we solve the discrete problem (Equation 4-1) to some level of accuracy using CG_DESCENT. The convergence criterion in CG_DESCENT is based on the norm of the gradient: the code terminates when the norm is less than an input parameter, grad_tol.

Let $c$ be the gradient tolerance factor. We first apply grad_tol $= 10^{-5}$. Then, we compute $\|x(t_N) - x(t_f)\|$ and set grad_tol $= c\, \|x(t_N) - x(t_f)\|$ for the subproblem. The data in Tables B-1 to B-9 yield the performance shown in Table 4-2, and the total CPU times listed there give Figure 4-1. Comparing the performance for different values of $c$, we can conclude the following:

- A very large $c$ does not work: when $c = 100,000$, the computation does not converge and the CPU time is unbounded.
- Choosing a smaller $c$ can take longer CPU time.
- The largest $c$ for which the computation converges provides the best CPU time.

4.3 Line Search Parameters

In the previous computations, we always used the approximate Wolfe line search [1]. Now, we use a combination line search, where the ordinary Wolfe conditions are used until the change in the value of Equation 4-1 is sufficiently small relative to its average; then we switch to the approximate Wolfe conditions. To do this, we set the AWolfe parameter in CG_DESCENT_C.PARM to FALSE.

The data in Tables B-10 to B-17 yield the performance shown in Table 4-3. Comparing the performance of OPTCON using the combination line search conditions with the previous results, we obtain Figure 4-2. It is clear that the performance improves when we apply the combination line search: the best CPU time is 15 seconds, one fourth faster than before.

4.4 Applying Different Runge-Kutta Schemes

OPTCON provides 6 optional explicit Runge-Kutta schemes (Table A-1), and performance varies with the scheme. For Problem 1 (Appendix A), the discrete state error in $L^\infty$ for the different Runge-Kutta schemes and time meshes is shown in Table 4-4. Scheme 6, which has fourth-order accuracy, provides the best discretization error for this problem. To obtain an error of order $10^{-8}$, we need to take N = 2000, 160, 1000, 2000, 2000, 40 for schemes 1-6, respectively.

Table 4-5 shows the performance of Runge-Kutta schemes 1-6 applied to Problem 2 (Appendix A) with $P = 10000$ and the corresponding N. It shows that scheme 6 provides the best CPU time. Among the third-order schemes (schemes 2-5), scheme 2 gives the best performance.

Figure 4-1. Relationship of gradient tolerance factor c and CPU time

Figure 4-2. Comparison of CPU times when applying a combination of Wolfe/approximate Wolfe

Table 4-1.
Discrete state error in $L^\infty$ and CPU time with different penalty factors

Iteration | P = 1000 | P = 5000 | P = 10000 | P = 50000 | P = 500000
1 | 5.87e-003 | 1.43e-003 | 7.38e-004 | 1.52e-004 | 1.53e-005
2 | 1.52e-003 | 1.35e-004 | 3.84e-005 | 1.71e-006 | 1.76e-008
3 | 6.30e-004 | 1.66e-005 | 2.51e-006 | 2.37e-008 | 2.21e-011
4 | 2.59e-004 | 2.04e-006 | 1.64e-007 | 3.39e-010 | 3.61e-011
5 | 1.06e-004 | 2.49e-007 | 1.07e-008 | 1.78e-011 | (CPU time: 36 sec)
6 | 4.37e-005 | 3.05e-008 | 6.56e-010 | 3.49e-011 |
7 | 1.79e-005 | 3.66e-009 | 9.49e-011 | (CPU time: 33 sec) |
8 | 7.37e-006 | 4.97e-010 | 2.33e-011 | |
9 | 3.03e-006 | 1.13e-010 | 2.33e-011 | |
10 | 1.24e-006 | 2.16e-011 | (CPU time: 27 sec) | |
11 | 5.11e-007 | 1.78e-011 | | |
12 | 2.10e-007 | 1.78e-011 | | |
13 | 8.62e-008 | (CPU time: 20 sec) | | |
14 | 3.53e-008 | | | |
15 | 1.46e-008 | | | |
16 | 5.95e-009 | | | |
17 | 2.41e-009 | | | |
18 | 1.34e-009 | | | |
19 | 4.10e-010 | | | |
20 | 3.71e-010 | | | |
21 | 1.65e-010 | | | |
22 | 1.70e-010 | (CPU time: 14 sec) | | |

Table 4-2. Performance of OPTCON using different values of the gradient tolerance factor

Factor c | CPU time | $\|x(t_N) - x(t_f)\|$ | CG iterations
100,000 | Unbounded | |
10,000 | 20.00 sec | 4.47e-011 | 1693
1,000 | 22.00 sec | 6.76e-012 | 1917
100 | 24.00 sec | 2.36e-011 | 2119
10 | 29.00 sec | 2.53e-011 | 2578
1 | 31.00 sec | 2.63e-011 | 2548
0.1 | 35.00 sec | 9.18e-012 | 2842
0.01 | 36.00 sec | 3.17e-011 | 3053
0.001 | 36.00 sec | 3.17e-011 | 3053

Table 4-3. Performance of OPTCON using a combination of line search conditions

Factor c | CPU time | $\|x(t_N) - x(t_f)\|$ | CG iterations
100,000 | Unbounded | |
10,000 | 15.00 sec | 5.17e-011 | 1279
1,000 | 18.00 sec | 2.33e-011 | 1453
100 | 21.00 sec | 2.11e-011 | 1758
10 | 23.00 sec | 2.27e-011 | 1843
1 | 26.00 sec | 4.08e-011 | 2224
0.1 | 29.00 sec | 2.76e-011 | 2401
0.01 | 30.00 sec | 4.25e-011 | 2406
0.001 | 30.00 sec | 4.25e-011 | 2406

Table 4-4.
Discrete state error in $L^\infty$ for Problem 1 and schemes 1-6

Time mesh | Scheme 1 | Scheme 2 | Scheme 3 | Scheme 4 | Scheme 5 | Scheme 6
N=2000 | 7.0e-008 | 8.0e-012 | 1.5e-008 | 3.5e-008 | 6.0e-008 | 1.4e-013
N=1000 | 2.8e-007 | 6.4e-011 | 6.1e-008 | 1.4e-007 | 2.4e-007 | 1.6e-013
N=600 | 7.8e-007 | 3.0e-010 | 1.7e-007 | 3.9e-007 | 6.7e-007 | 5.4e-013
N=320 | 2.7e-006 | 2.0e-009 | 5.9e-007 | 1.4e-006 | 2.4e-006 | 5.9e-012
N=160 | 1.1e-005 | 1.6e-008 | 2.4e-006 | 5.5e-006 | 9.5e-006 | 9.6e-011
N=80 | 4.4e-005 | 1.3e-007 | 9.7e-006 | 2.2e-005 | 3.8e-005 | 1.5e-009
N=40 | 1.8e-004 | 1.1e-006 | 3.9e-005 | 8.9e-005 | 1.6e-004 | 2.4e-008

Table 4-5. Discrete state error in $L^\infty$ and CPU time for Problem 2 and schemes 1-6

 | Scheme 1, N=2000 | Scheme 2, N=160 | Scheme 3, N=1000 | Scheme 4, N=2000 | Scheme 5, N=2000 | Scheme 6, N=40
$\|x(t_N) - x(t_f)\|$ | 4.2e-011 | 7.4e-011 | 9.3e-012 | 2.6e-011 | 2.6e-011 | 7.6e-012
CPU time | 27.00 sec | 6.00 sec | 69.00 sec | 43.00 sec | 65.00 sec | 1.00 sec

APPENDIX A
HOW TO USE OPTCON

System Requirements

OPTCON uses the GNU gcc compiler, which is available on most UNIX systems. The memory needed to run the program depends on the complexity of the problem; when OPTCON starts, it first checks the availability of memory on your computer.

Parameter File and Default Values

The OPTCON package has two parameter files. One, cg_descent_c.parm, is used by CG_DESCENT; the meaning of its parameters and their default values can be found in [6]. The other parameter file, optcon_c.parm, is used by the OPTCON subroutine. Its parameters are as follows:

- PrintLevel (print the result for each iteration)
- PrintFinal (print the final result)
- scheme (choice of explicit Runge-Kutta scheme)

Table A-1 gives the scheme options, and Table A-2 gives the default values of the OPTCON parameters.

Running OPTCON

To run OPTCON, the user needs to create a driver program, which should be placed in the same directory as the OPTCON package. I will demonstrate how to use OPTCON with the following example.
Problem 1:

minimize $\frac{1}{2} \int_0^1 u(t)^2 + 2\, x(t)^2 \, dt$ (A-1)

subject to $\dot{x}(t) = .5\, x(t) + u(t), \quad x(0) = 1$. (A-2)

Problem 1 [2] has the analytic optimal solution

$x^*(t) = \frac{2 e^{3t} + e^3}{e^{3t/2} (2 + e^3)}, \qquad u^*(t) = \frac{2 (e^{3t} - e^3)}{e^{3t/2} (2 + e^3)}$.

Now, let $x(t)$ be denoted by $x_1(t)$ and introduce $\dot{x}_2(t) = 2\, x_1(t)^2 + u(t)^2$. By the fundamental theorem of calculus, the original cost function (Equation A-1) equals Equation A-3:

minimize $\frac{1}{2} \left( x_2(1) - x_2(0) \right)$. (A-3)

Setting $x_2(0) = 0$, we transform Problem 1 into Equations A-4 through A-6:

minimize $\frac{1}{2} x_2(1)$ (A-4)

subject to $\dot{x}_1(t) = .5\, x_1(t) + u(t), \quad x_1(0) = 1$, (A-5)

$\dot{x}_2(t) = 2\, x_1(t)^2 + u(t)^2, \quad x_2(0) = 0$. (A-6)

Now, we create the driver program, driver1.c, shown in Figure A-1 for the transformed system (Equations A-4 to A-6); we use the default values of the OPTCON parameters. Lines 9 to 12 of Figure A-1 define the number of time mesh intervals n, the number of state variables nx, the number of controls nc, and the number of stages ns in the Runge-Kutta scheme. Note that the size of the state array is n*ns*nx + nx (see line 29 of Figure A-1), in which the last nx elements store the final state.

Let $x_{ij}^k$ be the discrete state, where $k$ is the time level ($0 \le k \le n-1$), $i$ is the component of the state ($1 \le i \le nx$), and $j$ is the stage in the Runge-Kutta scheme ($1 \le j \le ns$). We store $x_{ij}^k$ in the state array as follows: for each fixed $j$ and $k$, we first increment $i$ from 1 to nx; next we increment $j$ from 1 to ns; and finally we increment $k$ from 0 to $n-1$. The discrete control $u_{ij}^k$ ($1 \le i \le nc$) is stored in the control array (defined in line 30 of Figure A-1) in the same fashion.

The user-provided routines (lines 49 to 73 in Figure A-1) include:

- double my_phi(double *x_f): the routine that evaluates the cost function $\phi(x(t_f))$. The input, double *x_f, is a pointer to the first element of the array x_f, which contains the final states.
The output is a double-precision value of the cost $\phi(x(t_f))$.

- void my_dphi(double *dphi, double *x_f): the routine that evaluates $\frac{d\phi}{dx}(x(t_f))$. The input is double *x_f. The output is the array dphi, which contains the value of $\frac{d\phi}{dx}(x(t_f)) \in \mathbb{R}^{nx}$.

- void my_f(double *f, double *x, double *u, double time): the routine that evaluates the system dynamics $f(x, u, t) \in \mathbb{R}^{nx}$. The inputs are double *x (an array containing the states x), double *u (an array containing the controls u), and double time (the time t). The output is double *f, an array containing the value of $f(x, u, t)$.

- void my_fx(double *fx, double *x, double *u, double time): the routine that evaluates $\nabla_x f(x, u, t) \in \mathbb{R}^{nx \times nx}$. double *x, double *u, and double time are inputs. The output is double *fx, an array containing the value of $\nabla_x f(x, u, t)$. Note that we store the matrix by rows (see lines 63-66 of Figure A-1).

- void my_fu(double *fu, double *x, double *u, double time): the routine that evaluates $\nabla_u f(x, u, t) \in \mathbb{R}^{nx \times nc}$. double *x, double *u, and double time are inputs. The output is double *fu, an array containing the value of $\nabla_u f(x, u, t)$. Note that we store the matrix by rows (see lines 70 and 71 of Figure A-1).

We applied a provided Runge-Kutta scheme in this example. To supply your own Runge-Kutta scheme, set the scheme parameter to 0 and initialize the a and b arrays to contain the coefficients of the scheme (as shown in Figure A-2, lines 20 and 21). Note that the coefficients of the Runge-Kutta matrix A are stored by rows.

After calling OPTCON (line 42 in Figure A-1, or line 43 in Figure A-2), we obtain the final state (the last nx elements in the state array) and the discrete optimal control for Problem 1.

Now, let us consider another example, which has both initial conditions and final conditions.
Problem 2: orbit transfer problem (see Bryson and Ho [7], pages 66-68)

For a constant-thrust rocket with thrust T, operating from time 0 to time $t_f$, we want to find an optimal thrust-angle history $\phi(t)$ that transfers the rocket from an initial given orbit to the largest possible circular orbit. Now, let us transform this into a mathematical model. First, the variables and parameters are: $x_1$, the radius of the orbit about an attracting center; $x_2$, the radial velocity; $x_3$, the tangential velocity; $m_0$, the mass of the rocket; $\dot{m}$, the fuel consumption rate; $\phi(t)$, the history of the thrust angle; and $\mu$, the gravitational constant of the attracting center. Then, the model is formulated by Equations A-7 to A-15.

maximize $x_1(t_f)$ (A-7)

System dynamics:

$\dot{x}_1 = x_2$, (A-8)

$\dot{x}_2 = \frac{x_3^2}{x_1} - \frac{\mu}{x_1^2} + \frac{T \sin \phi}{m_0 - \dot{m}\, t}$, (A-9)

$\dot{x}_3 = -\frac{x_2 x_3}{x_1} + \frac{T \cos \phi}{m_0 - \dot{m}\, t}$. (A-10)

Initial conditions:

$x_1(0) = a$, (A-11)

$x_2(0) = 0$, (A-12)

$x_3(0) = \sqrt{\mu / a}$. (A-13)

Final conditions:

$x_2(t_f) = 0$, (A-14)

$x_3(t_f) = \sqrt{\mu / x_1(t_f)}$. (A-15)

Here $m_0 = 10,000$ kg, $\dot{m} = 12.9$ kg/day, $a = 149.6 \times 10^9$ m, $T = 8.336$ N, $t_f = 193$ days, and $\mu = 1.32733 \times 10^{20}$ m$^3$/s$^2$.

Since the final conditions are given, we solve this problem using a penalty approach. At step $k$, the cost function is given by Equation A-16:

$\tilde{J}(x_1(t_N), x_2(t_N), x_3(t_N)) = -x_1(t_N) + \lambda_1 x_2(t_N) + \lambda_2 \left( x_3(t_N) - \sqrt{\mu / x_1(t_N)} \right) + P\, x_2(t_N)^2 + P \left( x_3(t_N) - \sqrt{\mu / x_1(t_N)} \right)^2$, (A-16)

with Lagrange multipliers $\lambda_1$ and $\lambda_2$ and a constant penalty factor P. At step $k+1$, $\lambda_1$ and $\lambda_2$ are updated by Equations A-17 and A-18:

$\lambda_{1, k+1} = \lambda_{1, k} + 2 P\, x_2(t_N)$, (A-17)

$\lambda_{2, k+1} = \lambda_{2, k} + 2 P \left( x_3(t_N) - \sqrt{\mu / x_1(t_N)} \right)$. (A-18)

Note that $\lambda_{1,0}$ and $\lambda_{2,0}$ are given. With the penalty approach, we solve the problem by recursively calling OPTCON, with each iteration updating the values of $\lambda_1$ and $\lambda_2$. The program stops when the value $\left\| \left( x_2(t_N),\; x_3(t_N) - \sqrt{\mu / x_1(t_N)} \right) \right\|$ is no longer decreasing. Please refer to the driver program, driver2.c, which can be found in the OPTCON package.
Figure A-1. Driver1.c

Figure A-2. Segment of Driver1.c if the user provides a Runge-Kutta scheme

Table A-1. Options for explicit Runge-Kutta schemes

Scheme 1: $A = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$, $b = (1/2,\ 1/2)$

Scheme 2: $A = \begin{pmatrix} 0 & 0 & 0 \\ 1/2 & 0 & 0 \\ -1 & 2 & 0 \end{pmatrix}$, $b = (1/6,\ 2/3,\ 1/6)$

Scheme 3: $A = \begin{pmatrix} 0 & 0 & 0 \\ 1/2 & 0 & 0 \\ 0 & 3/4 & 0 \end{pmatrix}$, $b = (2/9,\ 1/3,\ 4/9)$

Scheme 4: $A = \begin{pmatrix} 0 & 0 & 0 \\ 1/2 & 0 & 0 \\ 1/2 & 1/2 & 0 \end{pmatrix}$, $b = (1/3,\ 1/3,\ 1/3)$

Scheme 5: $A = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 1/4 & 1/4 & 0 \end{pmatrix}$, $b = (1/6,\ 1/6,\ 2/3)$

Scheme 6: $A = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 1/2 & 0 & 0 & 0 \\ 0 & 1/2 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}$, $b = (1/6,\ 1/3,\ 1/3,\ 1/6)$

Table A-2. Parameters' default values in optcon_c.parm

Parameter's name | Default value
PrintLevel | 0
PrintFinal | 1
scheme | 2

APPENDIX B
TESTING DATA FOR GRADIENT TOLERANCE FACTOR

We test different values of the gradient tolerance factor c for Problem 2 in Appendix A, applying time mesh size $N = 500$, scheme 2 (Table A-1), penalty factor $P = 10000$, and initial grad_tol $= 10^{-5}$. Using only the approximate Wolfe line search conditions in CG_DESCENT, we obtain Tables B-1 to B-9; using a combination of Wolfe and approximate Wolfe line search conditions, we obtain Tables B-10 to B-17.

Table B-1. Approximate Wolfe line search, c = 100000

Iteration # | $\|x(t_N) - x(t_f)\|$ | CPU time | CG iterations
1 | 4.54e-004 | 11.00 sec | 1128
2 | 7.44e-005 | 0.00 sec | 2
3 | 3.80e-005 | 0.00 sec | 6
4 | 5.48e-006 | 1.00 sec | 12
5 | 9.31e-006 | 0.00 sec | 39
Not convergent

Table B-2. Approximate Wolfe line search, c = 10000

Iteration # | $\|x(t_N) - x(t_f)\|$ | CPU time | CG iterations
1 | 4.54e-004 | 12.00 sec | 1128
2 | 4.24e-005 | 0.00 sec | 22
3 | 1.02e-005 | 0.00 sec | 28
4 | 5.21e-007 | 0.00 sec | 26
5 | 6.42e-008 | 1.00 sec | 66
6 | 3.20e-009 | 1.00 sec | 35
7 | 4.98e-010 | 1.00 sec | 136
8 | 1.06e-010 | 1.00 sec | 60
9 | 7.25e-011 | 1.00 sec | 49
10 | 2.58e-011 | 2.00 sec | 120
11 | 1.50e-011 | 0.00 sec | 4
12 | 3.91e-012 | 1.00 sec | 18
13 | 4.47e-011 | 0.00 sec | 1
Total: 20.00 sec

Table B-3.
Approximate Wolfe line search, c = 1000

Iteration #   Error ||x(t_N) - x(t_f)||   CPU time    CG iterations
1             4.54e-004                   12.00 sec   1128
2             2.89e-005                    0.00 sec     80
3             2.58e-006                    2.00 sec     91
4             2.42e-007                    0.00 sec     82
5             1.88e-008                    2.00 sec    114
6             1.00e-009                    1.00 sec    109
7             8.39e-011                    1.00 sec     97
8             4.60e-011                    3.00 sec    184
9             1.27e-011                    1.00 sec      9
10            5.80e-012                    0.00 sec      7
11            6.76e-012                    0.00 sec     16
Total: 22.00 sec

Table B-4. Approximate Wolfe line search, c = 100

Iteration #   Error ||x(t_N) - x(t_f)||   CPU time    CG iterations
1             4.54e-004                   11.00 sec   1128
2             2.95e-005                    2.00 sec    133
3             2.04e-006                    2.00 sec    138
4             1.35e-007                    1.00 sec    132
5             8.95e-009                    3.00 sec    288
6             5.95e-010                    3.00 sec    231
7             7.19e-011                    1.00 sec     33
8             1.10e-011                    1.00 sec     21
9             2.36e-011                    0.00 sec     15
Total: 24.00 sec

Table B-5. Approximate Wolfe line search, c = 10

Iteration #   Error ||x(t_N) - x(t_f)||   CPU time    CG iterations
1             4.54e-004                   12.00 sec   1128
2             2.96e-005                    2.00 sec    172
3             1.92e-006                    2.00 sec    208
4             1.26e-007                    3.00 sec    294
5             8.19e-009                    5.00 sec    410
6             5.51e-010                    2.00 sec    171
7             8.01e-011                    0.00 sec     27
8             2.03e-011                    1.00 sec     51
9             7.67e-012                    1.00 sec     87
10            2.53e-011                    1.00 sec     30
Total: 29.00 sec

Table B-6. Approximate Wolfe line search, c = 1

Iteration #   Error ||x(t_N) - x(t_f)||   CPU time    CG iterations
1             4.54e-004                   12.00 sec   1128
2             2.96e-005                    3.00 sec    246
3             1.93e-006                    4.00 sec    342
4             1.26e-007                    5.00 sec    355
5             8.25e-009                    2.00 sec    175
6             5.22e-010                    2.00 sec    114
7             3.65e-011                    1.00 sec     88
8             1.09e-011                    1.00 sec     97
9             4.12e-012                    1.00 sec      1
10            2.63e-011                    0.00 sec      2
Total: 31.00 sec

Table B-7. Approximate Wolfe line search, c = 0.1

Iteration #   Error ||x(t_N) - x(t_f)||   CPU time    CG iterations
1             4.54e-004                   12.00 sec   1128
2             2.96e-005                    4.00 sec    350
3             1.93e-006                    5.00 sec    377
4             1.26e-007                    4.00 sec    363
5             8.17e-009                    4.00 sec    258
6             5.52e-010                    2.00 sec    150
7             9.22e-011                    0.00 sec     52
8             4.66e-011                    1.00 sec     37
9             3.65e-011                    2.00 sec     76
10            1.54e-011                    0.00 sec      2
11            1.32e-011                    0.00 sec     10
12            6.47e-012                    1.00 sec     19
13            9.18e-012                    0.00 sec     20
Total: 35.00 sec

Table B-8.
Approximate Wolfe line search, c = 0.01

Iteration #   Error ||x(t_N) - x(t_f)||   CPU time    CG iterations
1             4.54e-004                   11.00 sec   1128
2             2.96e-005                    6.00 sec    441
3             1.93e-006                    4.00 sec    404
4             1.26e-007                    4.00 sec    327
5             8.21e-009                    5.00 sec    378
6             5.16e-010                    3.00 sec    200
7             7.05e-011                    0.00 sec     48
8             2.31e-011                    2.00 sec    104
9             3.17e-011                    1.00 sec     23
Total: 36.00 sec

Table B-9. Approximate Wolfe line search, c = 0.001

Iteration #   Error ||x(t_N) - x(t_f)||   CPU time    CG iterations
1             4.54e-004                   11.00 sec   1128
2             2.96e-005                    5.00 sec    441
3             1.93e-006                    5.00 sec    404
4             1.26e-007                    4.00 sec    327
5             8.21e-009                    5.00 sec    378
6             5.16e-010                    3.00 sec    200
7             7.05e-011                    0.00 sec     48
8             2.31e-011                    2.00 sec    104
9             3.17e-011                    1.00 sec     23
Total: 36.00 sec

Table B-10. Combination line search, c = 10000

Iteration #   Error ||x(t_N) - x(t_f)||   CPU time    CG iterations
1             4.54e-004                    8.00 sec    807
2             4.25e-005                    0.00 sec     22
3             1.00e-005                    1.00 sec     31
4             8.34e-007                    0.00 sec     26
5             5.21e-008                    1.00 sec     48
6             1.58e-008                    1.00 sec     59
7             1.99e-010                    0.00 sec     51
8             6.55e-011                    2.00 sec    138
9             4.86e-011                    1.00 sec     74
10            4.79e-011                    0.00 sec      3
11            5.17e-011                    1.00 sec     20
Total: 15.00 sec

Table B-11. Combination line search, c = 1000

Iteration #   Error ||x(t_N) - x(t_f)||   CPU time    CG iterations
1             4.54e-004                    8.00 sec    807
2             2.82e-005                    1.00 sec     71
3             3.36e-006                    1.00 sec     79
4             1.15e-007                    1.00 sec     60
5             9.48e-009                    2.00 sec    132
6             8.46e-010                    1.00 sec     95
7             4.25e-011                    2.00 sec    114
8             2.35e-011                    0.00 sec     34
9             1.70e-011                    1.00 sec     15
10            2.33e-011                    1.00 sec     46
Total: 18.00 sec

Table B-12. Combination line search, c = 100

Iteration #   Error ||x(t_N) - x(t_f)||   CPU time    CG iterations
1             4.54e-004                    8.00 sec    807
2             2.97e-005                    1.00 sec    138
3             1.83e-006                    2.00 sec    140
4             1.26e-007                    1.00 sec    129
5             7.74e-009                    4.00 sec    314
6             4.82e-010                    1.00 sec     82
7             6.55e-011                    1.00 sec     50
8             9.31e-012                    1.00 sec     83
9             2.11e-011                    1.00 sec     15
Total: 21.00 sec

Table B-13.
Combination line search, c = 10

Iteration #   Error ||x(t_N) - x(t_f)||   CPU time    CG iterations
1             4.54e-004                    8.00 sec    807
2             2.96e-005                    2.00 sec    181
3             1.92e-006                    2.00 sec    184
4             1.26e-007                    3.00 sec    234
5             8.23e-009                    3.00 sec    250
6             5.04e-010                    2.00 sec    139
7             6.21e-011                    1.00 sec     28
8             6.92e-012                    0.00 sec     17
9             2.27e-011                    1.00 sec      3
Total: 23.00 sec

Table B-14. Combination line search, c = 1

Iteration #   Error ||x(t_N) - x(t_f)||   CPU time    CG iterations
1             4.54e-004                    8.00 sec    807
2             2.96e-005                    2.00 sec    233
3             1.93e-006                    4.00 sec    306
4             1.26e-007                    4.00 sec    373
5             8.19e-009                    3.00 sec    240
6             5.62e-010                    3.00 sec    180
7             3.74e-011                    1.00 sec     80
8             4.08e-011                    1.00 sec      5
Total: 26.00 sec

Table B-15. Combination line search, c = 0.1

Iteration #   Error ||x(t_N) - x(t_f)||   CPU time    CG iterations
1             4.54e-004                    8.00 sec    807
2             2.96e-005                    4.00 sec    358
3             1.93e-006                    5.00 sec    368
4             1.26e-007                    3.00 sec    275
5             8.23e-009                    3.00 sec    237
6             5.32e-010                    3.00 sec    165
7             4.86e-011                    0.00 sec     44
8             1.68e-011                    2.00 sec     97
9             1.52e-011                    0.00 sec      5
10            2.76e-011                    1.00 sec     45
Total: 29.00 sec

Table B-16. Combination line search, c = 0.01

Iteration #   Error ||x(t_N) - x(t_f)||   CPU time    CG iterations
1             4.54e-004                    8.00 sec    807
2             2.96e-005                    6.00 sec    432
3             1.93e-006                    5.00 sec    354
4             1.26e-007                    3.00 sec    260
5             8.20e-009                    4.00 sec    320
6             5.40e-010                    2.00 sec    132
7             3.97e-011                    1.00 sec     55
8             2.51e-011                    1.00 sec     20
9             4.25e-011                    0.00 sec     26
Total: 30.00 sec

Table B-17. Combination line search, c = 0.001

Iteration #   Error ||x(t_N) - x(t_f)||   CPU time    CG iterations
1             4.54e-004                    8.00 sec    807
2             2.96e-005                    6.00 sec    432
3             1.93e-006                    4.00 sec    354
4             1.26e-007                    4.00 sec    260
5             8.20e-009                    4.00 sec    320
6             5.40e-010                    2.00 sec    132
7             3.97e-011                    1.00 sec     55
8             2.51e-011                    1.00 sec     20
9             4.25e-011                    0.00 sec     26
Total: 30.00 sec

LIST OF REFERENCES

1. W. W. Hager and H. Zhang, "A new conjugate gradient method with guaranteed descent and an efficient line search", SIAM Journal on Optimization, 16 (2005), pp. 170-192.

2. W. W.
Hager, "Runge-Kutta methods in optimal control and the transformed adjoint system", Numerische Mathematik, 87 (2000), pp. 247-282.

3. J. C. Butcher, The Numerical Analysis of Ordinary Differential Equations, John Wiley, New York, 1987.

4. W. W. Hager and H. Zhang, Source code C version 1.2, November 14, 2005, University of Florida, Gainesville, FL, URL: http://www.math.ufl.edu/~hager/papers/CG/cg_descent-C-1.2.tar.gz, last accessed May 14, 2006.

5. M. R. Hestenes, Conjugate Direction Methods in Optimization, Springer-Verlag, New York, 1980.

6. W. W. Hager and H. Zhang, CG_DESCENT version 1.4, user's guide, November 14, 2005, University of Florida, Gainesville, FL, URL: http://www.math.ufl.edu/~hager/papers/CG/cg_manual-1.4.ps, last accessed May 14, 2006.

7. A. E. Bryson, Jr. and Y.-C. Ho, Applied Optimal Control, Blaisdell, Waltham, MA, 1969.

BIOGRAPHICAL SKETCH

Shuo Li was born in Beijing, China, and completed her bachelor's degree in applied mathematics at Beijing Polytechnic University in July 2000. She obtained her M.S. in applied mathematics, specializing in numerical optimization under the supervision of Professor William W. Hager, from the University of Florida in August 2006.