
Citation 
 Permanent Link:
 http://ufdc.ufl.edu/AA00064580/00001
Material Information
 Title:
 Computation of optimal controls for discontinuous systems
 Creator:
 Schulz, Edward Ross, 1929
 Publisher:
 publisher not identified
 Publication Date:
 1970
 Language:
 English
 Physical Description:
 viii, 110 leaves. : illus. ; 28 cm.
Subjects
 Subjects / Keywords:
 Electrical Engineering thesis Ph. D ( lcsh )
Dissertations, Academic -- Electrical Engineering -- UF ( lcsh )
 Genre:
 bibliography ( marcgt )
nonfiction ( marcgt )
Notes
 Thesis:
 Thesis -- University of Florida.
 Bibliography:
 Bibliography: leaves 108-109.
 General Note:
 Manuscript copy.
 General Note:
 Vita.
Record Information
 Source Institution:
 University of Florida Libraries
 Holding Location:
 University of Florida Libraries
 Rights Management:
 The University of Florida George A. Smathers Libraries respect the intellectual property rights of others and do not claim any copyright interest in this item. This item may be protected by copyright but is made available here under a claim of fair use (17 U.S.C. §107) for nonprofit research and educational purposes. Users of this work have responsibility for determining copyright status prior to reusing, publishing or reproducing this item for purposes other than what is allowed by fair use or other copyright exemptions. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder. The Smathers Libraries would like to learn more about this item and invite individuals or organizations to contact the RDS coordinator (ufdissertations@uflib.ufl.edu) with any additional information they can provide.
 Resource Identifier:
 021868475 ( ALEPH )
18261540 ( OCLC )

Full Text 
COMPUTATION OF OPTIMAL CONTROLS
FOR DISCONTINUOUS SYSTEMS
By
EDWARD R. SCHULZ
A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL OF
THE UNIVERSITY OF FLORIDA
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE
DEGREE OF DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA 1970
ACKNOWLEDGMENTS
The author would like to express his gratitude to the members of his supervisory committee for their supervision and guidance. He specifically wishes to thank Dr. T. E. Bullock for stimulating discussions and penetrating questions which were helpful in guiding the direction of this effort. Without the patience and understanding of the author's family this work could not have been accomplished. Thanks are also due to Mr. E. C. Fehner and others of the General Electric Company for their kind encouragement and support.
ii
TABLE OF CONTENTS
Page
ACKNOWLEDGMENTS .................................... ii
LIST OF TABLES ...................................... v
LIST OF FIGURES ..................................... vi
ABSTRACT ............................................ vii
Chapter
I. INTRODUCTION ....... ....... .................. 1
II. DEVELOPMENT OF THE ALGORITHM ............... 6
Mathematical Description of
the Problem ............................ 6
Necessary Conditions ..................... 8
Expansion of the Necessary
Conditions .. ........... ............... ... 11
Jump Conditions .......................... 15
The Corner Times ............ ............ 18
Solution of the Boundary Value
Problem ............................... 19
Implementation of the Algorithm .......... 26
Zermello's Problem ....................... 27
III. APPLICATION TO SYSTEMS SUBJECT TO CONTROL
VARIABLE INEQUALITY CONSTRAINTS .......... 34
Bang-Bang Problems ....................... 37
A Bang-Bang Example ...................... 39
Estimating the Number of
Corners for CVIC Problems ............ 45
IV. APPLICATION TO SYSTEMS SUBJECT TO STATE
VARIABLE INEQUALITY CONSTRAINTS .......... 48
Transforming Interior Point Constraints
into Terminal Constraints ............. 52
The Final Subarc ............................ 54
The Constrained Subarc ................... 57
The First Subarc ......................... 59
A First Order SVIC Example ............... 61
Alternate Necessary Conditions
for SVIC Problems ..................... 68
The Analytical Solution of an
SVIC Problem Using the Alternate
Necessary Conditions ..................... 77
Other Simplifications ........... ........ 81
V. APPLICATION TO PROBLEMS HAVING
SINGULAR SUBARCS ......................... 85
Comments on Symmetry ..................... 88
Two Examples of Problems Having
Singular Subarcs .............. 89
Example 1 ................................ 90
Example 2 ................................ 92
VI. CONCLUDING REMARKS .......................... 99
Conclusions .................... 99
Recommendations for Future Work .......... 100
APPENDIX ............................................ 104
REFERENCES ................................ ..... 108
BIOGRAPHICAL SKETCH ................................. 110
LIST OF TABLES
Table Page
1. Progress of the Algorithm in Solving
Zermello's Problem ...................... 31
2. Progress of the Algorithm in Solving
the Bang-Bang Example ................... 41
3. Progress of the Algorithm in Solving
the First Order SVIC Problem ............ 63
4. Comparison of First Subarc Necessary
Conditions .............................. 72
5. Progress of the Algorithm in Solving
the Second Singular Example -- Run 1 ....... 94
6. Progress of the Algorithm in Solving
the Second Singular Example -- Run 2 ....... 97
7. Progress of the Algorithm in Solving
the Second Singular Example -- Run 3 ....... 98
LIST OF FIGURES
Figure Page
1. An illustration of the jump in \(\delta x(t)\)
at a corner ............................. 16
2. Zermello's Problem with an island ........... 29
3. Phase plane plots of the trajectories
generated in solving the bang-bang
example .................................. 43
4. Terminal radius for feedback control
of the bang-bang problem ................. 44
5. The assumed sequence of subarcs for
SVIC problems ........................... ... 50
6. Phase plane plots of the trajectories
generated in solving the first
order SVIC example ....................... 65
7. Final control history for the second
singular example .................................... 95
Abstract of Dissertation Presented to the Graduate Council of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy
COMPUTATION OF OPTIMAL CONTROLS
FOR DISCONTINUOUS SYSTEMS
By
Edward Ross Schulz
December, 1970
Chairman: Dr. T. E. Bullock Major Department: Electrical Engineering
Optimization problems of interest to control engineers often involve physical systems described by ordinary differential equations subject to discontinuities. Furthermore, problems having inequality constraints and singular subarcs may be treated as members of this class, so methods of solving such problems are of great importance in engineering. Analytical solutions are possible only in the simplest cases, so problems of practical interest must generally be solved by numerical techniques.
The work described herein is concerned with the derivation of an iterative algorithm for the numerical solution of these problems. The Newton-Raphson method is used to obtain an algorithm that will generate a sequence of corner times and controls which converges to values satisfying the necessary conditions of the calculus of variations. The method features quadratic final convergence and yields feedback gains which may be used to find controls and corner times for trajectories near the optimum. Applications of the algorithm to control variable inequality constrained problems, state variable inequality constrained problems, and problems having singular subarcs are developed and examples are given.
CHAPTER I
INTRODUCTION
Numerical methods for the solution of optimum control problems have received much attention in the technical literature and many algorithms for their solution have been proposed. The emphasis has generally been on problems involving systems described by ordinary differential equations not subject to discontinuities. However, problems with systems having discontinuous state equations are also important, particularly since inequality constrained and singular problems may be treated as members of this class. In the present study, a Newton-Raphson algorithm for the numerical solution of discontinuous problems is developed and its application to inequality constrained and singular problems is demonstrated.
One of the earlier approaches for systems described by ordinary differential equations not subject to discontinuities is due to Bryson and Denham [1]. The method, a steepest ascent or gradient technique, is relatively simple to apply and is relatively insensitive to a poor initial guess of the control history, but suffers from slow convergence as the final solution is approached. The difficulty with slow final convergence was later overcome by investigators
such as Merriam [2] and McReynolds and Bryson [3]. Merriam's algorithm minimizes a second order expansion of the objective function subject to linearized constraints, the goal being to generate a sequence of trajectories with an improved objective function on each iteration. McReynolds and Bryson used linear expansions to obtain an algorithm which generates a sequence of trajectories converging to values satisfying the necessary conditions of the calculus of variations. These methods are analogous to the Newton-Raphson approach to ordinary parameter optimization problems and yield rapid final convergence, if convergence is obtained. However, for successful convergence the methods often require better initial guesses of the control history than steepest ascent techniques. Bullock [4] and Bullock and Franklin [5] extended these results and developed a method featuring rapid final convergence, sophisticated step size control which improves convergence with a poor initial guess, and protection against conjugate points in the auxiliary problem.
The preceding investigations are all concerned with systems not subject to discontinuities nor to inequality constraints. Necessary conditions for problems with a state variable inequality constraint (SVIC) and a control variable inequality constraint (CVIC) are given in [6] and [7], and a new set of SVIC conditions will soon be published [17]. Denham and Bryson describe a steepest ascent algorithm for
the numerical solution of SVIC and CVIC problems in [8]. The algorithm, being a steepest ascent method, suffers from the slow final convergence mentioned earlier. In later work, SVIC problems on the one hand, and CVIC problems on the other, have usually been considered apart from one another and different approaches used for each.
A very ingenious technique for numerically solving SVIC problems has been suggested by Jacobson and Lele [9]. A transformation is employed which makes it impossible for the system to violate the inequality constraint, but which transforms the subarc along the constraint into a singular subarc. The problem can then be solved using any existing algorithm which is not sensitive to singular subarcs. Unfortunately the transformation increases the dimensionality of the problem and hence the number of differential equations which must be integrated.
Discontinuous systems have been considered by
Jacobson in [10] and by Dyer and McReynolds in [12]. The approach used in both cases is to derive algorithms which find trajectories satisfying the dynamic programming conditions of optimality. Jacobson allows large changes in the control to occur from iteration to iteration and obtains step size control by optimizing over a limited region of the latter part of the time interval, \([t_0, t_f]\). Dyer and McReynolds are silent on the question of step size control
but comment on the need for a good initial guess. The examples given by these authors are of the bang-bang type. In a later paper, [11], Dyer and McReynolds present a modified version of their approach using piecewise constant feedback gains to reduce the computational burden.
The approach taken in the present case is similar
to that of [3]. The Newton-Raphson method is used to develop an algorithm which seeks to find a trajectory satisfying the necessary conditions of the calculus of variations. The major difference here is that the discontinuities make it necessary to introduce jumps into the variations of the state and adjoint variables at the corner times. The estimates of the corner times themselves are adjusted in such a way as to force the Hamiltonian to be continuous at the corners.
The following chapter is devoted to the derivation of the algorithm. A detailed mathematical description of the problem is given and the necessary conditions, as applicable to this problem, are reviewed. The equations on which the algorithm is based are then derived. This leads to the usual linear two point boundary value problem which may be solved by the backward sweep method. Because of the jumps which must be introduced into the variations of the state and adjoint variables, it becomes necessary to jump the backward sweep matrices at the corners. A detailed summary of the steps involved in implementing the algorithm is then
given and an example which illustrates a possible difficulty is discussed.
In Chapter III the application of the algorithm to CVIC and bang-bang problems is described. Certain modifications to the approach are recommended to avoid possible difficulty with controllability in the case of bang-bang problems. This yields a simpler form of the algorithm. A numerical example is then given.
Chapter IV covers applications to SVIC problems.
Additional states are introduced in order to transform the problem into one in the class for which the algorithm was designed. Possible simplifications in the computations are discussed, an example is given, and the relation between the necessary conditions to which the algorithm converges and those of [6] is found.
Chapter V is concerned with the solution of singular problems. Success depends upon finding an expression for the desired control along the singular subarc. This can often be done by using the fact that the partial derivative of the Hamiltonian with respect to the control vanishes identically on singular subarcs.
Concluding remarks and recommendations for future work are given in Chapter VI.
CHAPTER II
DEVELOPMENT OF THE ALGORITHM
Mathematical Description of the Problem
The problems of interest here are those involving physical systems whose state vector, x, satisfies a differential equation of the form
\[
\dot{x} = f_i(x,u,t), \qquad x(t_0) = x_0 \tag{1}
\]

on the interval \(t_{i-1} < t < t_i\) with \(i = 1, 2, \ldots, m\) and \(t_m \triangleq t_f\). The vector functions \(f_i\) and the values \(t_0\) and \(x_0\) are given, and a dot above a quantity indicates differentiation with respect to time, t. The control, u, is in general a vector function of \(t \in [t_0, t_f]\) and may or may not be continuous at the corner times, \(t_i\). In the usual physical situation the state is continuous, so take \(x(t_i-) = x(t_i+)\) for all i. The object is to find m values of \(t_i\) and a control, u(t) for all \(t \in [t_0, t_f]\), such that

\[
J = [\phi(x,t)]_{t_f} + \sum_{i=1}^{m} \int_{t_{i-1}}^{t_i} L_i(x,u,t)\,dt \tag{2}
\]
is minimized subject to (1) and such that the terminal constraint
\[
[\psi(x,t)]_{t_f} = 0 \tag{3}
\]

is satisfied. Here \(\phi\), \(\psi\), and the \(L_i\) are given functions of their arguments, \(\psi\) generally being a vector and the remaining functions scalars. The subscript \(t_f\) denotes evaluation at \(t = t_f\).
An iterative numerical technique for solving this problem will be developed by using the Newton-Raphson method to derive an algorithm which will find a trajectory satisfying the necessary conditions of the calculus of variations [3]. The same technique may be obtained by using the approach of [4] and [5], in which a second order expansion of the cost is minimized to obtain the algorithm. In either case it should be recognized that without step size control the resulting method will find stationary trajectories rather than strictly minimizing ones. The reason in the case of [4] and [5] is that the auxiliary problem is solved using the necessary conditions of the calculus of variations, which merely yields stationary solutions if step size control is not used. This point will be illustrated with an example in Chapter V.
It is assumed that the number of subarcs (the value of m), and the nature of the subarcs is known a priori. This is not felt to be a severe limitation since second order algorithms of this type are notorious for
their tendency to diverge if not carefully employed. A good initial guess and careful application of step size control is usually required for success, so a great deal of information about the problem is apt to be required at the outset in any case. In the solution of CVIC and SVIC problems it may be possible to arrange the computations in such a way that the algorithm will determine the sequence of subarcs as the iterations are carried out. The possibility is discussed in Chapters III and IV.
Necessary Conditions
Necessary conditions for this problem are readily obtained from the calculus of variations, so only a brief outline of their derivation will be given here [13]. Use Lagrange multipliers \(\lambda\) and \(\nu\) to write the augmented form of (2) as

\[
J' = \Phi + \sum_{i=1}^{m} \int_{t_{i-1}+}^{t_i-} [H(x,u,\lambda,t) - \lambda^T \dot{x}]\,dt \tag{4}
\]

where

\[
\Phi \triangleq [\phi(x,t) + \nu^T \psi(x,t)]_{t_f} \tag{5}
\]
\[
H(x,u,\lambda,t) \triangleq L(x,u,t) + \lambda^T f(x,u,t) \tag{6}
\]

The subscripts i have been dropped in the above and will not appear in subsequent expressions unless they are
necessary for clarity. Taking the first variation of (4), integrating the result by parts, and using \(\delta x = dx - \dot{x}\,dt\) at each time \(t_i\) gives

\[
dJ' = [\Phi_x\,dx + \Phi_t\,dt]_{t_f} + \sum_{i=1}^{m} \Big[H\,dt - \lambda^T dx\Big]_{t_{i-1}+}^{t_i-} + \sum_{i=1}^{m} \int_{t_{i-1}+}^{t_i-} [(H_x + \dot{\lambda}^T)\delta x + H_u\,\delta u]\,dt \tag{7}
\]

where subscripts t, x, and u indicate partial differentiation with respect to t, x, and u, respectively. Now use \(dx(t_0) = 0\) and \(dt_0 = 0\) to rearrange the sum in (7) and write

\[
dJ' = [(\Phi_x - \lambda^T)dx + (\Phi_t + H)dt]_{t_f} - \sum_{i=1}^{m-1} D_i(H\,dt - \lambda^T dx) + \sum_{i=1}^{m} \int_{t_{i-1}+}^{t_i-} [(H_x + \dot{\lambda}^T)\delta x + H_u\,\delta u]\,dt \tag{8}
\]

where the operator \(D_i\), operating on some function of time, say g(t), is defined by

\[
D_i(g) = g(t_i+) - g(t_i-) \tag{9}
\]
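As a minimal numerical sketch (hypothetical scalar function, not from the text), the jump operator (9) can be evaluated by sampling just before and just after the corner:

```python
def g(t):
    # Hypothetical piecewise function with a corner at t_i = 1.0.
    return 2.0 * t if t < 1.0 else 2.0 * t + 3.0

def jump(g, t_i, eps=1e-9):
    """Approximate D_i(g) = g(t_i+) - g(t_i-) with one-sided samples."""
    return g(t_i + eps) - g(t_i - eps)

D_g = jump(g, 1.0)   # approximately 3.0, the size of the discontinuity
```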
Then from (8) the necessary conditions (in addition to (1) and (3)) are

\[
\dot{\lambda} = -H_x^T \tag{10}
\]
\[
\lambda(t_f) = \Phi_x^T \tag{11}
\]
\[
D_i(\lambda) = 0 \tag{12}
\]
\[
D_i(H) = 0 \tag{13}
\]
\[
\Omega \triangleq [H + \Phi_t]_{t_f} = [L + \dot{\Phi}]_{t_f} = 0 \tag{14}
\]
\[
H_u = 0 \tag{15}
\]

The second form of (14) follows from the definitions of H and \(\Phi\). Equations (10), (11), (14), and (15) are the conditions encountered in problems without discontinuities. The additional conditions, (12) and (13), simply require \(\lambda\) and H to be continuous at the corners.
An algorithm to find a trajectory which satisfies these conditions can be developed in at least two ways. One approach is to solve (15) for the control and use the result to eliminate u from the remaining conditions. The resulting reduced set of conditions can then be met by an iterative method, as is done by Schley and Lee [14]. However, in that case step size control may be more difficult to achieve, so the approach of McReynolds and Bryson in [3] will be followed. The step size control and conjugate point protection schemes of [4] will then apply. The backward sweep method will be used to solve the associated linear two point boundary value problem and the results will
be put in a form that yields feedback gains for use on neighboring trajectories.
Expansion of the Necessary Conditions
Proceed as in [3] and assume the following steps are taken:
1. Guess u(t), \(\nu\), and m values of \(t_i\).
2. Use these to integrate (1) forward from \(t_0\) to \(t_f\) to obtain a nominal trajectory.
3. Integrate (10) backwards from \(t_f\) to \(t_0\) using (11), (12), and necessary values from the nominal trajectory.
Then (1) and (10) through (12) will be satisfied, but in general (13) through (15) will not. Therefore consider first order expansions of the necessary conditions about the nominal trajectory with the object of finding changes, \(\delta u\) and \(dt_i\), which will yield a new trajectory for which (13) through (15) are more nearly satisfied.
Expanding \(H_u\) gives

\[
\delta H_u^T = H_{ux}\,\delta x + f_u^T\,\delta\lambda + H_{uu}\,\delta u \tag{16}
\]

In an attempt to drive \(H_u\) to zero and satisfy (15) on the next iteration, take \(\delta H_u = -H_u\) in (16) and solve the result for \(\delta u\) to obtain

\[
\delta u = -H_{uu}^{-1}(H_{ux}\,\delta x + f_u^T\,\delta\lambda + H_u^T) \tag{17}
\]
To ensure that \(\delta u\) is in the direction of decreasing cost, and to obtain control of the magnitude of \(\delta u\), it is recommended that the step size control scheme of [4] be used. This is accomplished by adding a positive definite symmetric matrix, say N, to \(H_{uu}\) in (17). The matrix N should be chosen to ensure that \((H_{uu} + N)\) remains positive definite and that \(\delta u\) remains small enough for the expansions to be sufficiently accurate for all iterations. As convergence is approached and \(H_u\) is driven to zero, the value of N should be decreased until it reaches zero for the last few iterations. See [4] for further details. For convenience the matrix N will not be written in any of the equations which follow. For purposes of implementation one simply adds N to \(H_{uu}\) wherever \(H_{uu}\) appears in the following equations.

If this step size control scheme is implemented, the term \(H_{uu}^{-1} H_u^T\) in (17) will be forced to remain small, either because N is large for the earlier iterations, or else because \(H_u\) becomes small as convergence is approached. As a result the product \(H_{uu}^{-1} H_u^T\) will be treated as a first order term throughout the remainder of this development. This is desirable since it results in great simplifications in the expressions which follow.
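A minimal sketch of the control correction (17) with the step size matrix N added to \(H_{uu}\) (one-dimensional, hypothetical values; the partials are assumed to be supplied as arrays evaluated at a single time instant):

```python
import numpy as np

def control_correction(Hux, fu, Huu, Hu, dx, dlam, N):
    """du = -(Huu + N)^{-1} (Hux dx + fu^T dlam + Hu^T), i.e. (17)
    with the positive definite step-size matrix N added to Huu."""
    return -np.linalg.solve(Huu + N, Hux @ dx + fu.T @ dlam + Hu)

# One-dimensional illustration with no state or adjoint variation.
Huu = np.array([[2.0]]); Hu = np.array([4.0])
Hux = np.zeros((1, 1)); fu = np.zeros((1, 1))
dx = np.zeros(1); dlam = np.zeros(1)

full = control_correction(Hux, fu, Huu, Hu, dx, dlam, N=np.zeros((1, 1)))
damped = control_correction(Hux, fu, Huu, Hu, dx, dlam, N=2.0 * np.eye(1))
# The full Newton step is -2; with N = 2 it is halved to -1.
```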
Putting (17) into first order expansions of (1) and (10) gives

\[
\delta\dot{x} = A\,\delta x + B\,\delta\lambda + c, \qquad \delta x(t_0) = 0 \tag{18}
\]
\[
\delta\dot{\lambda} = D\,\delta x - A^T\,\delta\lambda + e \tag{19}
\]

where

\[
A = f_x - f_u H_{uu}^{-1} H_{ux} \tag{20}
\]
\[
B = -f_u H_{uu}^{-1} f_u^T \tag{21}
\]
\[
c = -f_u H_{uu}^{-1} H_u^T \tag{22}
\]
\[
D = -(H_{xx} - H_{xu} H_{uu}^{-1} H_{ux}) \tag{23}
\]
\[
e = H_{xu} H_{uu}^{-1} H_u^T \tag{24}
\]
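The coefficient matrices (20) through (24) can be assembled directly from the partials of f and H. A sketch under the sign conventions used above (the array arguments are assumed to be the partials evaluated at one instant along the nominal):

```python
import numpy as np

def expansion_matrices(fx, fu, Hxx, Hxu, Huu, Hu):
    """Return A, B, c, D, e of (20)-(24) for the linear equations (18)-(19)."""
    Huu_inv = np.linalg.inv(Huu)
    Hux = Hxu.T
    A = fx - fu @ Huu_inv @ Hux            # (20)
    B = -fu @ Huu_inv @ fu.T               # (21)
    c = -fu @ Huu_inv @ Hu                 # (22)
    D = -(Hxx - Hxu @ Huu_inv @ Hux)       # (23)
    e = Hxu @ Huu_inv @ Hu                 # (24)
    return A, B, c, D, e

# Scalar check: fx = 0, fu = 1, Hxx = 1, Hxu = 0, Huu = 1 gives
# A = 0, B = -1, c = -Hu, D = -1, e = 0.
A, B, c, D, e = expansion_matrices(
    np.zeros((1, 1)), np.ones((1, 1)),
    np.ones((1, 1)), np.zeros((1, 1)), np.ones((1, 1)), np.array([0.5]))
```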
Next, expanding (11), (14), and (3) to first order gives

\[
d\lambda(t_f) = [\Phi_{xx}\,dx + \psi_x^T\,d\nu + \Phi_{xt}\,dt]_{t_f} \tag{25}
\]
\[
d\Omega = [\Omega_x\,dx + \dot{\psi}^T\,d\nu + \Omega_t\,dt]_{t_f} \tag{26}
\]
\[
d\psi = [\psi_x\,dx + \psi_t\,dt]_{t_f} \tag{27}
\]

With certain modifications to follow, (25) will provide terminal conditions for (19), (26) will yield an expression for \(dt_f\), and (27) will aid in eliminating \(d\nu\).
Note that on any given iteration \(\psi\) and \(\Omega\) may not be zero, so it will be desirable to assign values to \(d\psi\) and \(d\Omega\) which will force \(\psi\) and \(\Omega\) to be more nearly equal to zero on the next iteration. Hence \(d\Omega\) and \(d\psi\) in (26) and (27) should be regarded as known.
To obtain an expression which is useful in transforming (25) through (27) into more desirable forms, use (14) and (5) to write

\[
\Omega = [L + \dot{\Phi}]_{t_f} = [\Phi_x f + \Phi_t + L]_{t_f} \tag{28}
\]

whose partial derivative with respect to x is

\[
\Omega_x = [f^T \Phi_{xx} + \Phi_x f_x + \Phi_{tx} + L_x]_{t_f} \tag{29}
\]

Now using (10) and (11) gives

\[
\Omega_x^T = [\Phi_{xx}\,\dot{x} + \Phi_{xt} - \dot{\lambda}]_{t_f} \tag{30}
\]

Next note that the final total variations in x and \(\lambda\) are given by

\[
dx(t_f) = \delta x(t_f) + \dot{x}(t_f)\,dt_f \tag{31}
\]
\[
d\lambda(t_f) = \delta\lambda(t_f) + \dot{\lambda}(t_f)\,dt_f \tag{32}
\]

Putting (31) into (26) and solving for \(dt_f\) gives

\[
dt_f = [(d\Omega - \Omega_x\,\delta x - \dot{\psi}^T d\nu)/\dot{\Omega}]_{t_f} \tag{33}
\]

where \(\dot{\Omega} = \Omega_x \dot{x} + \Omega_t\). Now use (30) through (33) to eliminate \(dt_f\), \(dx(t_f)\), and \(d\lambda(t_f)\) from (25) and (27) to obtain

\[
\delta\lambda(t_f) = [(\Phi_{xx} - \Omega_x^T \Omega_x/\dot{\Omega})\,\delta x + (\psi_x - \dot{\psi}\,\Omega_x/\dot{\Omega})^T d\nu + \Omega_x^T\,d\Omega/\dot{\Omega}]_{t_f} \tag{34}
\]
\[
d\psi = [(\psi_x - \dot{\psi}\,\Omega_x/\dot{\Omega})\,\delta x - \dot{\psi}\dot{\psi}^T d\nu/\dot{\Omega} + \dot{\psi}\,d\Omega/\dot{\Omega}]_{t_f} \tag{35}
\]

These are the desired forms for \(\delta\lambda(t_f)\) and \(d\psi\) for use in the backward sweep solution of the two point boundary value problem given by (18) and (19).
Jump Conditions
Though (18) and (19) describe the behavior of \(\delta x\) and \(\delta\lambda\) along any subarc, correct results cannot be obtained from these unless jumps are introduced into their solutions at the corner times. The need for these is illustrated by Figure 1, which shows the behavior of one element of x(t) for the nominal trajectory and for a neighboring trajectory near a corner. Both the nominal and neighboring trajectory, as well as their difference, are continuous. However (18) is an expansion of (1) and (1) is discontinuous at \(t_i\). Hence (18) may have one form to the left of \(t_i\) and another to the right. Then if its solution is to be correct beyond \(t_i\), it is necessary that the jump shown in the figure as \(D_i(\delta x)\) be introduced into \(\delta x(t)\) at \(t_i\).
Figure 1. An illustration of the jump in \(\delta x(t)\) at a corner.
To determine \(D_i(\delta x)\), note that at point 1 of the figure

\[
x(\text{point 1}) = x_i + \delta x_i^- \tag{36}
\]

with \(x_i = x(t_i)\) and \(\delta x_i^- = \delta x(t_i-)\). Equation (36) may be extrapolated to point 2 with first order accuracy as

\[
x(\text{point 2}) = x_i + \delta x_i^- + \dot{x}_i^-\,dt_i = x_i + \delta x_i^- + f_i^-\,dt_i \tag{37}
\]

For point 3

\[
x(\text{point 3}) = x_i + \delta x_i^+
\]

which extrapolates to point 2 as

\[
x(\text{point 2}) = x_i + \delta x_i^+ + f_i^+\,dt_i \tag{38}
\]

Equating (37) to (38) to find the required jump in \(\delta x\) gives

\[
D_i(\delta x) = -D_i(f)\,dt_i \tag{39}
\]

Proceeding in the same way for \(\delta\lambda\) gives

\[
D_i(\delta\lambda) = D_i(H_x^T)\,dt_i \tag{40}
\]
Note that as the algorithm converges, the \(dt_i\) must go to zero, so the jumps will also.
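For a scalar system the bookkeeping in (39) is easy to check numerically (hypothetical values for f on either side of the corner):

```python
# f just before and just after the corner, and the change in corner time.
f_minus, f_plus, dt_i = 1.0, -2.0, 0.1

D_f = f_plus - f_minus    # D_i(f) by the definition (9): here -3.0
D_dx = -D_f * dt_i        # jump in dx required by (39): here +0.3
# As dt_i goes to zero on later iterations, the jump D_i(dx) vanishes.
```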
The Corner Times
Now condition (13), which requires the Hamiltonian to be continuous at the corners, will be used to derive expressions for computing improved values of the corner times. Changes in the \(t_i\) will be made so as to force H to be more nearly continuous on the next iteration. On the neighboring trajectory the Hamiltonian will satisfy

\[
\text{Next } H_i^+ = [H + H_x\,\delta x + H_u\,\delta u + f^T\delta\lambda + \dot{H}\,dt_i]_i^+ \tag{41}
\]
\[
\text{Next } H_i^- = [H + H_x\,\delta x + H_u\,\delta u + f^T\delta\lambda + \dot{H}\,dt_i]_i^- \tag{42}
\]

Then to first order the jump in H on the neighboring trajectory will be

\[
\text{Next } D_i(H) = D_i[H + H_x\,\delta x + H_u\,\delta u + f^T\delta\lambda + \dot{H}\,dt_i] \tag{43}
\]

Using (17) to eliminate \(\delta u\) gives

\[
\text{Next } D_i(H) = D_i[H_x\,\delta x + f^T\delta\lambda + \dot{H}\,dt_i + (H - H_u H_{uu}^{-1} H_u^T)] \tag{44}
\]

where the term \(H_{uu}^{-1} H_u^T\) has been treated as being of first order in making the substitution.

If a and b are functions of time, it may easily be verified that

\[
D_i(ab) = a^+ D_i(b) + D_i(a)\,b^- \tag{45}
\]

Using (39), (40), and (45) in setting (44) equal to zero gives the desired expression for the change in \(t_i\) as

\[
[H_x^+ D_i(f) - (f^+)^T D_i(H_x^T) - D_i(\dot{H})]\,dt_i = D_i(H_x)\,\delta x_i^- + D_i(f^T)\,\delta\lambda_i^- + D_i(H - H_u H_{uu}^{-1} H_u^T) \tag{46}
\]
Step size control for the corner times can be obtained by adding a positive constant to the denominator of (46). The constant should be large enough to ensure that the resulting denominator will be positive. This is analogous to the step size control scheme used for the control in [4] and [5]. The method will be illustrated with an example in Chapter V.
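The corner time step size control can be sketched the same way as for the control variable (hypothetical numbers; here `a_i` stands for the coefficient of \(dt_i\) on the left side of (46)):

```python
def corner_time_step(numerator, a_i, shift=0.0):
    """dt_i from (46) with a positive constant added to the denominator,
    the constant being chosen so that (a_i + shift) stays positive."""
    return numerator / (a_i + shift)

dt_full = corner_time_step(0.6, 1.5)               # undamped Newton step: 0.4
dt_damped = corner_time_step(0.6, 1.5, shift=1.5)  # damped step: 0.2
```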
Solution of the Boundary Value Problem
The solution of (18) and (19) subject to the
various terminal conditions can be obtained by backward sweep using the usual Ricatti transformation
6X = P6x + Rdv + q (47)
d* = RT6x + Mdv + h (48)
where P, R, q, M, and h are time varying matrices of appropriate dimensions. In some approaches dtf is also swept backwards, as for example in [3], but this requires the evaluation of more matrices and will not be done here.
Putting (47) and (18) into (19) to eliminate \(\delta\dot{\lambda}\) and \(\delta\dot{x}\) leads to

\[
(\dot{P} + PA + A^T P + PBP - D)\,\delta x + [\dot{R} + (A^T + PB)R]\,d\nu + \dot{q} + (A^T + PB)q + Pc - e = 0 \tag{49}
\]

Equating the time derivative of (48) to zero and using (47) and (18) to eliminate \(\delta\lambda\) and \(\delta\dot{x}\) from the result yields

\[
[\dot{R} + (A^T + PB)R]^T \delta x + (\dot{M} + R^T B R)\,d\nu + \dot{h} + R^T(c + Bq) = 0 \tag{50}
\]

Then (49) and (50) are satisfied for all \(\delta x\) and \(d\nu\) if

\[
\dot{P} = -PA - A^T P - PBP + D, \qquad P(t_f) = [\Phi_{xx} - \Omega_x^T \Omega_x/\dot{\Omega}]_{t_f} \tag{51}
\]
\[
\dot{R} = -(A^T + PB)R, \qquad R(t_f) = [(\psi_x - \dot{\psi}\,\Omega_x/\dot{\Omega})^T]_{t_f} \tag{52}
\]
\[
\dot{q} = -(A^T + PB)q - Pc + e, \qquad q(t_f) = [\Omega_x^T\,d\Omega/\dot{\Omega}]_{t_f} \tag{53}
\]
\[
\dot{M} = -R^T B R, \qquad M(t_f) = [-\dot{\psi}\dot{\psi}^T/\dot{\Omega}]_{t_f} \tag{54}
\]
\[
\dot{h} = -R^T(c + Bq), \qquad h(t_f) = [\dot{\psi}\,d\Omega/\dot{\Omega}]_{t_f} \tag{55}
\]

where the terminal conditions for (51) through (55) have been selected such that (47) and (48) agree with (34) and (35) at \(t = t_f\). By retracing the previous steps it may be verified that for problems with fixed terminal time the conditions become \(P(t_f) = \Phi_{xx}\), \(R(t_f) = \psi_x^T\), \(q(t_f) = 0\), \(M(t_f) = 0\), and \(h(t_f) = 0\).
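For a fixed-terminal-time problem without a terminal constraint, only P and q survive, and the sweep (51) can be checked on an assumed scalar LQ test problem (not from the text): \(\dot{x} = u\), \(L = (x^2 + u^2)/2\), no terminal cost. Then A = 0, B = -1, D = -1, so (51) reduces to \(\dot{P} = P^2 - 1\) with \(P(t_f) = 0\), whose exact backward solution is \(P(t) = \tanh(t_f - t)\):

```python
tf, n = 1.0, 20000
dt = tf / n
P = 0.0                        # terminal condition P(tf) = phi_xx = 0
for _ in range(n):             # Euler integration backwards from tf to 0
    P -= dt * (P * P - 1.0)    # P(t - dt) = P(t) - dt * Pdot
# P now approximates tanh(tf - 0) = tanh(1), about 0.7616, at t = 0.
```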
Solving (48) for \(d\nu\) gives

\[
d\nu = -M^{-1}(R^T \delta x + h - d\psi) \tag{56}
\]

so (47) can be written as

\[
\delta\lambda = (P - RM^{-1}R^T)\,\delta x + RM^{-1}(d\psi - h) + q \tag{57}
\]

Putting (57) into (17) then gives \(\delta u\) in feedback form as

\[
\delta u = -H_{uu}^{-1}\{[H_{ux} + f_u^T(P - RM^{-1}R^T)]\,\delta x + f_u^T[RM^{-1}(d\psi - h) + q] + H_u^T\} \tag{58}
\]

while using (56) in (33) yields

\[
dt_f = [((\dot{\psi}^T M^{-1} R^T - \Omega_x)\,\delta x + d\Omega - \dot{\psi}^T M^{-1}(d\psi - h))/\dot{\Omega}]_{t_f} \tag{59}
\]

Note from the terminal condition for (54) that \(M(t_f)\) has a rank of at most one and will be singular if the dimension of \(\psi\) exceeds one, if \(\dot{\psi} = 0\), or if the terminal time is fixed. If this is the case, equations (56) through (59) are useless at \(t = t_f\). The simplest solution is to apply (56) at \(t = t_0\) to find \(d\nu\). Then (47) and (17) may be combined to give

\[
\delta u = -H_{uu}^{-1}[(H_{ux} + f_u^T P)\,\delta x + f_u^T(R\,d\nu + q) + H_u^T] \tag{60}
\]
which may be used to obtain \(\delta u\) when \(d\nu\) is known. Then (33) can be used for \(dt_f\). For the computation of optimal trajectories the results obtained from (58) and (59) should be the same as those obtained from (60) and (33) in those cases where \(M(t_f)\) is nonsingular. However, if the results are to be used for neighboring optimal control, then (58) and (59) are the desirable equations since these make use of an up to date estimate of \(d\nu\). Of course if \(M(t_f)\) is singular, then (58) and (59) cannot be used for neighboring optimal control, except in those cases where the terminal time is fixed and high feedback gains can be tolerated near the terminal time. The problem can be avoided by using penalty functions to meet the terminal constraint, since matrix M will then not be needed. The point will be discussed again in the following chapter.
The remaining expressions to be found are those giving the \(dt_i\) in feedback form and jump conditions for the matrices in (47) and (48). To obtain these, use (40) to eliminate \(\delta\lambda_i^-\) from (46) and write

\[
[H_x^+ D_i(f) - (f^-)^T D_i(H_x^T) - D_i(\dot{H})]\,dt_i = D_i(H_x)\,\delta x_i^- + D_i(f^T)\,\delta\lambda_i^+ + D_i(H - H_u H_{uu}^{-1} H_u^T) \tag{61}
\]

From (47) and (39)

\[
\delta\lambda_i^+ = P^+\delta x_i^- + (R\,d\nu + q)_i^+ - P^+ D_i(f)\,dt_i \tag{62}
\]

Putting (62) into (61) then gives an expression for \(dt_i\) which can be written as

\[
dt_i = [g_i^T \delta x_i^- + D_i(f^T)(R\,d\nu + q)_i^+ + D_i(H - H_u H_{uu}^{-1} H_u^T)]/a_i \tag{63}
\]

where

\[
a_i = H_x^+ D_i(f) - (f^-)^T D_i(H_x^T) - D_i(\dot{H}) + D_i(f^T)\,P^+ D_i(f) \tag{64}
\]

and

\[
g_i = D_i(H_x^T) + P^+ D_i(f) \tag{65}
\]

But from (45) and (47)

\[
D_i(\delta\lambda) = P^+ D_i(\delta x) + D_i(P)\,\delta x_i^- + D_i(R)\,d\nu + D_i(q) \tag{66}
\]

which can be rewritten as

\[
g_i\,dt_i = D_i(P)\,\delta x_i^- + D_i(R)\,d\nu + D_i(q) \tag{67}
\]

by using (39) and (40). Substituting (63) into (67) then gives

\[
g_i[g_i^T \delta x_i^- + D_i(f^T)(R\,d\nu + q)_i^+ + D_i(H - H_u H_{uu}^{-1} H_u^T)] = a_i[D_i(P)\,\delta x_i^- + D_i(R)\,d\nu + D_i(q)] \tag{68}
\]
Then for (68) to be satisfied for all \(\delta x_i^-\) and \(d\nu\), the matrices P, R, and q must jump at the corner times by amounts
\[
D_i(P) = g_i g_i^T/a_i \tag{69}
\]
\[
D_i(R) = g_i\,D_i(f^T)\,R_i^+/a_i \tag{70}
\]
\[
D_i(q) = g_i[D_i(f^T)\,q_i^+ + D_i(H - H_u H_{uu}^{-1} H_u^T)]/a_i \tag{71}
\]

To obtain jump conditions for M and h, use (39) and (45) to equate \(D_i(d\psi)\) as given by (48) to zero and write the result as

\[
D_i(R^T)\,\delta x_i^- + D_i(M)\,d\nu + D_i(h) = (R_i^+)^T D_i(f)\,dt_i \tag{72}
\]

Again using (63) for \(dt_i\) gives

\[
a_i[D_i(R^T)\,\delta x_i^- + D_i(M)\,d\nu + D_i(h)] = (R_i^+)^T D_i(f)[g_i^T \delta x_i^- + D_i(f^T)(R\,d\nu + q)_i^+ + D_i(H - H_u H_{uu}^{-1} H_u^T)] \tag{73}
\]

Then for (73) to be satisfied for all \(\delta x_i^-\) and \(d\nu\), the jumps in M and h must be

\[
D_i(M) = (R_i^+)^T D_i(f)\,D_i(f^T)\,R_i^+/a_i \tag{74}
\]
\[
D_i(h) = (R_i^+)^T D_i(f)[D_i(f^T)\,q_i^+ + D_i(H - H_u H_{uu}^{-1} H_u^T)]/a_i \tag{75}
\]

Notice that the jump in R implied by (73) is the same as that given by (70). From (51), (54), (69), and (74) it may be seen that matrices P and M are symmetric.
To derive an expression for \(dt_i\) in feedback form, put (57) into (61) to get

\[
[H_x^+ D_i(f) - (f^-)^T D_i(H_x^T) - D_i(\dot{H}) + D_i(f^T)(P - RM^{-1}R^T)_i^+ D_i(f)]\,dt_i = [D_i(H_x) + D_i(f^T)(P - RM^{-1}R^T)_i^+]\,\delta x_i^- + D_i(f^T)[RM^{-1}(d\psi - h) + q]_i^+ + D_i(H - H_u H_{uu}^{-1} H_u^T) \tag{76}
\]

In most cases \(M(t_i)\) will be nonsingular, but if this is not the case a useful expression for \(dt_i\) can be obtained by combining (47) and (61) to get

\[
[H_x^+ D_i(f) - (f^-)^T D_i(H_x^T) - D_i(\dot{H}) + D_i(f^T)\,P_i^+ D_i(f)]\,dt_i = [D_i(H_x) + D_i(f^T)\,P_i^+]\,\delta x_i^- + D_i(f^T)(R\,d\nu + q)_i^+ + D_i(H - H_u H_{uu}^{-1} H_u^T) \tag{77}
\]

The required \(d\nu\) for use in (77) may be obtained from (56) using conditions at \(t = t_0\). Equation (76) is the preferred relation for use in neighboring optimal control since it, like (58) and (59), makes use of an up-to-date estimate of \(d\nu\).
This completes the derivation of the equations
needed for the algorithm. Implementation of the algorithm for the solution of a general problem is discussed in the next section.
Implementation of the Algorithm
The algorithm may be implemented as described in
the following steps:
1. Guess u(t), \(\nu\), and m values of \(t_i\).

2. Integrate (1) forward from \(t_0\) to \(t_f\) to obtain the nominal states. In doing so compute the cost, J, and evaluate \(\psi\) and \(\Omega\).

3. Integrate (10) backwards from \(t_f\) to \(t_0\) using (11), (12), and the nominal states to obtain \(\lambda(t)\).

4. Integrate (51) through (55) backwards from \(t_f\) to \(t_0\) to obtain the matrices P, R, q, M, and h. At each corner time, \(t_i\), jump the matrices in accordance with (69), (70), (71), (74), and (75). Since P and M are symmetric, all their elements need not be evaluated.

5. Choose values of \(d\psi\) and \(d\Omega\) which will cause \((\psi + d\psi)\) and \((\Omega + d\Omega)\) to be more nearly zero. Using values at \(t = t_0\), compute \(d\nu\) from (56) and replace \(\nu\) with \((\nu + d\nu)\).

6. Obtain the new cost and new nominal states by integrating (1) and (2) forward from \(t_0\) to \(t_f\). For the new control use \((u + \delta u)\), where \(\delta u\) is given by (60). At each corner time compute \(dt_i\) from (77) and jump the states by the amount \(D_i(\delta x)\) as given by (39). Then integrate to the next corner.

7. At \(t = t_f\) compute \(dt_f\) from (33) and replace all \(t_i\) by \((t_i + dt_i)\). Evaluate \(\psi\) and \(\Omega\). See the comments below for the case where \(dt_f\) is positive.

8. Decide whether to continue iterating by comparing the cost with that from the last iteration and examining \(\psi\) and \(\Omega\) to see that they are sufficiently near zero. If the change in cost is small enough and if \(\psi\) and \(\Omega\) are satisfactory, stop. To continue, go to step 3.
A problem will arise if dt_f computed in step 7 is positive, since there will be no nominal states beyond the old t_f for use in steps 3 and 4. The needed states for times beyond the old t_f may be computed by integrating (1) and (10) together forward from the old t_f to the new, after using H_u = 0 to eliminate u. The starting value of λ used should be the last λ at the old t_f plus δλ computed from (47) for conditions at the old t_f.

The analytical work and programming effort involved in applying this algorithm to a given problem is likely to be quite tedious, so it may be advisable to consider the use of penalty functions for meeting the terminal constraint, ψ = 0. Matrices M, R, and h will then not be needed, and considerable simplification will result.
Zermelo's Problem
A simple example of a discontinuous problem which illustrates a possible difficulty is the following variation of Zermelo's Problem. In Zermelo's Problem the object is to find the optimum steering direction to bring a boat from some initial point to a desired final point in minimum time. The boat moves with constant speed relative to the water, and there is usually a current, which will be ignored in this case. The x and y coordinates of the boat will therefore be assumed to satisfy

ẋ = cos(u)        x(0) = 2
ẏ = sin(u)        y(0) = 0

The object is to bring the boat to the terminal point

x(t_f) = -1        y(t_f) = 0

in minimum time, t_f. In order to introduce a discontinuity assume that the inequality constraint

x^2 + y^2 ≥ 1

must be satisfied. The constraint represents an island, a circle of unit radius centered at the origin, around which the boat must travel to reach its destination. The solution to the problem may readily be seen to be the path shown in Figure 2. The first subarc consists of the straight line from the initial point, (2,0), tangent to the circle at time t_1. The second subarc follows the island and reaches the terminal point, (-1,0), at time t_f.
This problem is an SVIC problem which can be converted to one having a discontinuity by the following artifice. Use x to replace time as the independent variable, let x_1 be the value of x at the corner, and replace the equations of motion by

dy/dx = u                        x_1 < x < 2

dy/dx = -x/(1 - x^2)^{1/2}       -1 < x < x_1
Figure 2. Zermelo's Problem with an island.
where u is a new control. This choice of the second subarc differential equation has the solution

y = y(x_1) - (1 - x_1^2)^{1/2} + (1 - x^2)^{1/2}

which may be written as

x^2 + [y - y(x_1) + (1 - x_1^2)^{1/2}]^2 = 1

This is a circle of unit radius centered on the y axis a distance

y(-1) = y(x_1) - (1 - x_1^2)^{1/2}

above the origin. Consequently, when the terminal constraint y(-1) = 0 is met, the second subarc will be the desired portion of the circle. Minimum travel time may be achieved by minimizing the distance traveled, so take

J = ∫ from -1 to 2 of [1 + (dy/dx)^2]^{1/2} dx
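The converged row of Table 1 can be checked against the geometry alone: the first subarc is the tangent line from (2,0) to the unit circle, which touches where the radius is perpendicular to the line, and the second subarc is the arc from the tangent point over the top of the island to (-1,0). A short numerical check (an illustrative sketch, not part of the original computation):

```python
import math

d = 2.0                                # starting point (d, 0)
x1 = 1.0 / d                           # tangent point abscissa: the radius
                                       # to the tangent point is perpendicular
                                       # to the tangent line
y1 = math.sqrt(1.0 - x1 ** 2)          # tangent point ordinate
first_arc = math.sqrt(d ** 2 - 1.0)    # length of the straight tangent segment
second_arc = math.pi - math.acos(x1)   # arc from the tangent point to (-1, 0)
cost = first_arc + second_arc

print(x1, y1, cost)   # 0.5  0.866025...  3.826445...
```

The values agree with iteration 8 of Table 1: x_1 = 0.5, y(x_1) = 0.866025, and cost 3.82645.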
Application of the algorithm to this problem gave the sequence of values shown in Table 1. The initial guesses used were u = 0 for the control, x_1 = 0.7 for the value of x at the corner, and ν = 0 for the terminal value of the Lagrange multiplier. It happens that if the initial guess for the first subarc control is a constant, then the algorithm yields a sequence of constant u's for
TABLE 1
Progress of the Algorithm in Solving Zermelo's Problem

Iteration      u          x_1        ν          y(x_1)     y(-1)        Cost
    0       0          0.7        0          0          0.714143     4.44159
    1       0.510187   0.751928   0.510187   0.636750   0.022495     3.82290
    2       0.557062   0.645859   0.487589   0.754340   9.2x10^-3    3.82302
    3       0.628268   0.534122   0.534122   0.920964   7.5x10^-2    3.86544
    4       0.596101   0.534868   0.512458   0.873366   2.8x10^-2    3.84084
    5       0.577285   0.511788   0.500106   0.859123   1.1x10^-5    3.82645
    6       0.577700   0.500227   0.500227   0.866418   5.2x10^-4    3.82671
    7       0.577423   0.500179   0.500047   0.866031   1.1x10^-4    3.83650
    8       0.577350   0.500000   0.500000   0.866025   1.7x10       3.82645
the first subarc. Hence the control values shown in the table apply throughout the first subarc. As may be seen from the table, convergence was obtained in eight iterations. No step size control was used in this run.
This problem has a behavior that may be rather common and which may lead to difficulty if precautions are not taken. The denominator of the expression which gives the change in the corner location (dt_i in general, or dx_1 in the present case) has as a factor the term (ν - x_1). Note from the table that ν = x_1 at convergence, so the denominator (as well as the numerator) of one of the equations goes to zero as the algorithm converges.

Difficulty from this source was avoided by testing the denominator and setting it equal to a small value (10^-7) if it was found to be zero. It is believed that such a simple precaution will be adequate in most cases since the difficulty does not occur until convergence has essentially been obtained. It is also felt that this situation occurs rather frequently in discontinuous problems. For example, Dyer and McReynolds in [11] devote almost an entire column of a transactions paper to a discussion of a similar problem with their algorithm. The effect seems to arise because of smoothness of the Hamiltonian along the optimal trajectory at the corner times. As a result of this smoothness the
expansions for the jump in H, which were used in deriving expression (46) for the dt_i, break down at convergence. This point will be discussed again in connection with an example in Chapter IV.
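The safeguard described above—testing a denominator that vanishes only at convergence and substituting a small value for it—can be expressed as a one-line guard. This is a sketch of the precaution, not the dissertation's code:

```python
def guarded_divide(num, den, eps=1.0e-7):
    """Divide num by den, replacing a (near-)zero denominator with eps.

    Mirrors the precaution in the text: the denominator of the corner-time
    update goes to zero only as the iteration converges, so substituting a
    small value (here 10^-7) leaves earlier iterations untouched.
    """
    if abs(den) < eps:
        den = eps if den >= 0.0 else -eps
    return num / den
```

A call such as `guarded_divide(dnum, a)` can then replace every raw division by the corner-time denominator.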
CHAPTER III
APPLICATION TO SYSTEMS SUBJECT TO CONTROL
VARIABLE INEQUALITY CONSTRAINTS
Systems which are not otherwise subject to discontinuities may become so if there is a requirement that a control variable inequality constraint (CVIC) of the form

C(x,u,t) ≤ 0                                                       (78)

be satisfied. In general C and u may be vectors, but for simplicity both will be assumed to be scalars herein. The vector case is not fundamentally more complicated, although the "bookkeeping" problems might become formidable.
These problems are commonly approached by noting that on subarcs which lie along the constraint, (78) becomes an identity,

C(x,u,t) = 0                                                       (79)

which may be used to eliminate the control. After this elimination the state equations become discontinuous at the entry and exit corners of constrained subarcs, and all equations become independent of u along such subarcs. The application of the algorithm to CVIC problems formulated
in this way is rather straightforward but the following comments are felt to be in order.
Along constrained subarcs all partials with respect to u vanish and the equations become much simpler. In particular (21) and (22) give B = 0 and c = 0, so (54) and (55) become

Ṁ = 0        M(t_f) = ψ̇ψ̇^T/Ω̇                                      (80)

ḣ = 0        h(t_f) = ψ̇ dΩ/Ω̇                                       (81)
Hence it is not necessary to integrate M and h along constrained subarcs. However, it has previously been mentioned that M(t_f) may be singular. If this is the case, and if the last subarc is along the constraint, then M will be singular throughout the last subarc. In the case of bang-bang problems no control is involved (either because of the design of the system, or because the control has been eliminated using the maximum principle) and (80) and (81) hold for all subarcs. Then on the last subarc M will have a rank of at most one. Because of jump condition (74) the rank of M may increase by one at each corner moving back from t_f to t_0 until M attains full rank on some earlier subarc. This reflects the physical fact that the system is not controllable when there are insufficient corner times, or switch points, to satisfy the terminal constraint, ψ = 0.
This behavior of M is unfortunate if results from the algorithm are to be used for neighboring optimal control, particularly in the case of bang-bang problems. The algorithm may still be applied to determine the optimal trajectory by using (56) to determine dν at some time, say t', when M is nonsingular. (In the previous chapter t' = t_0 was suggested.) Then (60), (33), and (77) can be used to determine δu and the dt_i at times greater than t'. The same thing can be done for neighboring optimal control, except that any disturbance to the system which occurs subsequent to t' cannot influence an estimate of dν which is made at t'. Then, since ν is the Lagrange multiplier associated with the terminal constraint, the control will be open loop after t', at least as far as meeting the terminal constraint is concerned. As previously noted, equations (58), (59), and (76), which require the inverse of M, make use of a current estimate of dν and hence are preferable to (60), (33), and (77) for controller design. In bang-bang problems with terminal constraints this unfortunate situation is likely to occur quite frequently. The simplest solution is to drop the philosophy of exactly meeting terminal constraints adopted in Chapter II and use penalty functions instead. Therefore a revised version of the algorithm for bang-bang problems, using penalty functions to meet terminal constraints, is given in the following section.
Bang-Bang Problems

No control, u, is involved, so the only choice variables available to the designer are the corner times. In the absence of a control (18) and (19) reduce to

δẋ = f_x δx                                                        (82)

δλ̇ = -H_xx δx - f_x^T δλ                                           (83)
If penalty functions are used to meet terminal constraints, then ψ and ν do not appear, Φ = φ, and (34) becomes

W = [(Φ_xx - λ̇Ω_x/Ω̇)δx + λ̇ dΩ/Ω̇] at t_f                           (84)

Then (47) takes the form

δλ = Pδx + q                                                       (85)

and P and q are the only matrices needed. These may readily be shown to be given by

Ṗ = -Pf_x - f_x^T P - H_xx        P(t_f) = Φ_xx - [λ̇Ω_x/Ω̇]        (86)

q̇ = -f_x^T q                      q(t_f) = [λ̇ dΩ/Ω̇]               (87)
with jump conditions

D_i(P) = [D_i(H_x) + D_i(f^T)P^+]^T [D_i(H_x) + D_i(f^T)P^+]
         / [(H_x)^+ D_i(f) - (f^-)^T D_i(H_x^T) - D_i(Ḣ) + D_i(f^T)P^+ D_i(f)]     (88)

D_i(q) = [D_i(H_x) + D_i(f^T)P^+]^T [D_i(f^T)q^+ + D_i(Ḣ)]
         / [(H_x)^+ D_i(f) - (f^-)^T D_i(H_x^T) - D_i(Ḣ) + D_i(f^T)P^+ D_i(f)]     (89)

Equations (76) and (33) for dt_i and dt_f reduce to

[(H_x)^+ D_i(f) - (f^-)^T D_i(H_x^T) - D_i(Ḣ)]dt_i

    = [D_i(H_x) + D_i(f^T)P^+]δx_i + D_i(f^T)q^+ + D_i(Ḣ)          (90)

dt_f = [(dΩ - Ω_x δx)/Ω̇] at t_f                                    (91)
The steps to be followed in applying this form of the algorithm to bang-bang problems become:

1. Use penalty functions to eliminate any terminal constraints which may be present.

2. Guess m values of t_i and the control polarities for each subarc.

3. Integrate (1) and (2) forward from t_0 to t_f to obtain the nominal states and cost. Evaluate Ω.

4. Integrate (10) backwards from t_f to t_0 using (11), (12), and the nominal states to obtain λ(t).

5. Integrate (86) and (87) backwards from t_f to t_0, using jump conditions (88) and (89) at each corner, to obtain P(t) and q(t).

6. Obtain the new cost and new nominal states by integrating (1) and (2) forward from t_0 to t_f. At each corner compute dt_i from (90), jump the states in accordance with (39), and continue to the next corner.

7. At t_f compute dt_f from (91), choosing dΩ such that the transversality condition, Ω = 0, is more nearly met. If dt_f is positive, continue the integration to the new t_f = t_f + dt_f.

8. Replace all t_i with t_i + dt_i. Decide whether to continue iterating by comparing the new cost with that from the last iteration and noting whether Ω is sufficiently near zero. If the change in cost is small enough and if Ω has a satisfactory value, stop. If not, continue by returning to step 4.
A Bang-Bang Example

As an illustration of the application of the algorithm to a bang-bang problem, consider the following simple example. Let the state equations be

ẋ_1 = x_2        x_1(0) = 5/4
ẋ_2 = u          x_2(0) = 0

subject to |u| ≤ 1. Find u(t) to minimize

J = ∫ from 0 to t_f of |u| dt

subject to the terminal constraint

x_1(3) = x_2(3) = 0

This is a simple double integrator plant which is to be driven to the origin at the terminal time, t_f = 3, with minimum fuel expenditure, ∫|u| dt, subject to a control variable inequality constraint. The optimal control is of a bang-bang nature and may readily be shown to be

u = -1        0 < t < t_1        t_1 = 0.5
u = 0         t_1 < t < t_2      t_2 = 2.5
u = +1        t_2 < t < t_f      t_f = 3.0
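The stated switch times can be verified by integrating the double integrator exactly over the three constant-u subarcs. The following sketch (the function name is illustrative, not from the original program) confirms that t_1 = 0.5 and t_2 = 2.5 drive the state from (5/4, 0) to the origin with a fuel cost of 1:

```python
def simulate(t1, t2, tf=3.0, x1=1.25, x2=0.0):
    """Integrate x1' = x2, x2' = u for u = -1, 0, +1 on the three subarcs.

    Uses the exact closed-form solution of each constant-u piece and
    returns the terminal state and the fuel integral of |u| dt.
    """
    def advance(x1, x2, u, dt):
        return x1 + x2 * dt + 0.5 * u * dt ** 2, x2 + u * dt

    x1, x2 = advance(x1, x2, -1.0, t1)        # u = -1 (decelerate)
    x1, x2 = advance(x1, x2, 0.0, t2 - t1)    # u = 0 (coast)
    x1, x2 = advance(x1, x2, +1.0, tf - t2)   # u = +1 (brake to rest)
    fuel = t1 + (tf - t2)
    return x1, x2, fuel

print(simulate(0.5, 2.5))   # (0.0, 0.0, 1.0)
```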
To apply the algorithm let K be the penalty function gain and add

(K/2)[x_1^2 + x_2^2] at t_f

to the cost to obtain

(K/2)[x_1^2 + x_2^2] at t_f + ∫ from 0 to t_f of |u| dt

Now as K becomes large the terminal values of x_1 and x_2 should approach zero and approximately satisfy the terminal constraint. It can be shown that the solution to this modification of the problem is

t_1 = 0.5 - 0.875/K        t_2 = 2.5 + 2.375/K

at least to first order in K^{-1}.
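These first-order corrections can be checked numerically: with K = 10^5, the corner times t_1 = 0.5 - 0.875/K and t_2 = 2.5 + 2.375/K should give a penalized cost below that of the unadjusted times (0.5, 2.5) and below that of small perturbations. A sketch (illustrative only, using exact piecewise integration of the plant):

```python
K = 1.0e5

def cost(t1, t2, tf=3.0):
    # exact piecewise integration of the double integrator, u = -1, 0, +1
    x1, x2 = 1.25, 0.0
    for u, dt in ((-1.0, t1), (0.0, t2 - t1), (1.0, tf - t2)):
        x1, x2 = x1 + x2 * dt + 0.5 * u * dt ** 2, x2 + u * dt
    # fuel plus the quadratic terminal penalty
    return t1 + (tf - t2) + 0.5 * K * (x1 ** 2 + x2 ** 2)

t1s, t2s = 0.5 - 0.875 / K, 2.5 + 2.375 / K
print(cost(t1s, t2s))   # ~0.99998, cf. 0.999984 in Table 2
```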
Application of the algorithm to the modified problem yielded the sequence of values shown in Table 2. A penalty function gain of K = 10^5 was used and the initial guesses were t_1 = 0 and t_2 = 3. Convergence was obtained in five iterations with t_1 and t_2 going to values that agree with
TABLE 2
Progress of the Algorithm in Solving the Bang-Bang Example

Iteration      t_1         t_2        Cost
    0       0           3          78125
    1       0.347042    2.84576    5806.88
    2       0.452092    2.61776    482.656
    3       0.492628    2.5163     10.3454
    4       0.499802    2.50044    1.00604
    5       0.499991    2.50002    0.999984
their correct values as given above. The cost shown includes the penalty term. No step size control was used in this run. Phase plane plots for the trajectories generated on the first three iterations are shown in Figure 3.

Equation (90), the feedback control equation for t_1 and t_2, gave the following relations at convergence:

dt_1 = 0.500136 δx_1(t_1) + 1.00052 δx_2(t_1)

dt_2 = 0.399938 δx_1(t_2) + 1.00003 δx_2(t_2)

These may be used for feedback control on neighboring trajectories by forming switching functions defined by

S_1(t) = t - 0.5 - 0.5 δx_1(t) - δx_2(t)          t < 0.5
       = t - 0.5 - 0.5 δx_1(t_1) - δx_2(t_1)      t > 0.5

S_2(t) = t - 2.5 - 0.4 δx_1(t) - δx_2(t)          t < 2.5
       = t - 2.5 - 0.4 δx_1(t_2) - δx_2(t_2)      t > 2.5
where the gains have been rounded off to one place accuracy. The first switch should occur when S_1 becomes positive, and the second when S_2 does. The effectiveness of this feedback control in overcoming displacements in the initial state is illustrated in Figure 4. The curves give the terminal radius,

[x_1^2(t_f) + x_2^2(t_f)]^{1/2}
Figure 3. Phase plane plots of the trajectories generated in solving the bang-bang example.
 44
(a) For 6x2 (0)=0
0.12
0.10 Open Loop
 0.08
AWith Feedback H0.06.
S0.04 E'0.02
0.0o
.5 .4 .3 .2 .1 0 .1 .2 .3 .4 .5
6x1 (0)
(b) For 6x1 (0)=0 Opn Loop
Open Loop
0.12 0.10 rj 0.08 "3 ith
1Feedback
rt 0.06 0.04 0.02 0.00
.5 .4 .3 .2 .1 0 .1 .2 .3 .4 .5
6x2 (0)
Figure 4. Terminal radius for feedback control of the bangbang problem.
as a function of displacement in x_1(0) alone and in x_2(0) alone. The terminal radius which would result from open loop control (i.e., switching as a function of time alone) is also shown for comparison. Feedback control of the switch times may be seen to give about an order of magnitude improvement over a rather wide range of initial displacements.
Estimating the Number of Corners for CVIC Problems

It is difficult to give general rules for estimating the number of corners and the sequence of subarcs comprising an optimal trajectory, but a few suggestions can be given. In the case of bang-bang problems the controllability of (18) and (19), that is, the rank of matrix M(t_0), may be used to obtain a probable lower bound for m, say m_p, which in many cases will be the correct value of m. It has been pointed out that M(t_f) will often be singular and that in the bang-bang case the rank of M may increase by one at each corner moving back from t_f to t_0. Therefore take m_p to be the number of corners required for matrix M to reach full rank, assuming its rank increases by one at each corner between t_f and t_0. In the example just given ψ is of dimension (2 x 1), so from (54) M is of dimension (2 x 2). Then since M(t_f) = 0, it follows that m_p = 2 is the probable number of corners required if M(t_0) is to have full rank. This is the correct value of m for the example, and such an estimate may be correct in many cases. However, m may exceed m_p since the rank of M may not increase at each corner. This can be detected by computing M(t_0) and examining its rank, but at the cost of more computations relative to the penalty function approach. On the other hand, m may be less than m_p depending on the initial state. (A little reflection on the phase plane plot for the example will reveal that there are points in the phase plane from which the system can be driven to the origin in the available time using less than two switch points.) It is unlikely that this latter situation will be encountered very often, but the possibility must be kept in mind. In any event m_p is an intelligent initial guess with which to begin numerical experiments to determine the correct value of m.
For CVIC problems which are not bang-bang in nature it is probably possible to arrange the calculations in such a way that the algorithm can determine the sequence of subarcs as the computations are carried out. One approach is to assume a control which does not violate the constraint and begin iterating. The value of the constraint function, C(x,u,t), can then be calculated at each time point on each iteration and its sign examined to detect the occurrence of entry corners. Beyond each entry corner the control should be taken as that implied by C(x,u,t) = 0. To locate the corresponding exit corner, the value of the control which minimizes H, say u*, should be calculated and the sign of C(x,u*,t) examined. When C(x,u*,t) goes negative the exit corner has been reached. In this way the algorithm might be made to generate the proper sequence of subarcs. In the case of bang-bang problems it may be possible to accomplish the same thing by using the maximum principle to determine the sign of the control during the calculations. These measures were not investigated during the course of this work but probably deserve attention in the future.
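The detection logic described above can be sketched abstractly: compare the unconstrained minimizing control u* against the constraint at each time point, entering the boundary when C(x,u*,t) ≥ 0 and leaving when it goes negative. A toy illustration (all functions are hypothetical stand-ins, not from the dissertation) with u*(t) = 2 sin t and the CVIC u ≤ 1, i.e. C = u - 1:

```python
import math

def u_star(t):
    # hypothetical unconstrained minimizer of H at time t
    return 2.0 * math.sin(t)

def detect_subarcs(ts):
    """Record entry/exit corners by watching the sign of C(x, u*, t)."""
    corners, on_boundary = [], False
    for t in ts:
        violates = u_star(t) - 1.0 > 0.0    # sign of C = u* - 1
        if violates and not on_boundary:
            corners.append(("entry", t))    # switch to the control from C = 0
            on_boundary = True
        elif not violates and on_boundary:
            corners.append(("exit", t))     # resume the unconstrained control
            on_boundary = False
    return corners

ts = [k * 0.001 for k in range(0, 3142)]    # grid on [0, ~pi]
corners = detect_subarcs(ts)
# entry near t = pi/6 ~ 0.524, exit near t = 5*pi/6 ~ 2.618
```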
CHAPTER IV
APPLICATION TO SYSTEMS SUBJECT TO STATE
VARIABLE INEQUALITY CONSTRAINTS
A state variable inequality constraint (SVIC) is a constraint of the form

S(x,t) ≤ 0                                                         (92)

and differs from a CVIC in that the constraint function is not a function of the control. As in the case of a CVIC, it is assumed that the constraint function and the control are both scalars. The comments made in the previous chapter about the vector case apply here also. As in the CVIC case, an SVIC introduces discontinuities into a problem for which the system would otherwise be described by continuous state equations.

Since the constraint function is independent of the control, it often happens that some of its time derivatives are also independent of u. If the superscript j applied to S(x,t) is taken to denote the j'th time derivative of S, the situation will be as follows:

dS/dt = S^1(x,t)                                                   (93)

d^2S/dt^2 = S^2(x,t)                                               (94)

. . .

d^pS/dt^p = S^p(x,u,t)                                             (95)

In these circumstances the p'th time derivative of S is the lowest order derivative to be an explicit function of the control, and the constraint is said to be of order p.
Necessary conditions for these problems are given in [6] and [7]. Denham and Bryson describe a steepest ascent algorithm for numerical solution of SVIC problems in [8], and Jacobson and Lele use an interesting transformation technique in [9].
For present purposes it will be assumed that the initial state, x_0, satisfies S(x_0,t_0) < 0, and that the optimal trajectory consists of three subarcs as shown in Figure 5. On the first subarc the state starts at x_0 at time t_0 and meets the constraint boundary, S = 0, at time t_1. On the second subarc the state follows the constraint from time t_1 to time t_2, at which point it leaves the boundary and reaches the terminal manifold, ψ = 0, at time t_f.
These problems are usually approached by observing that S is identically zero along the constrained second subarc and hence all time derivatives of S vanish along
Figure 5. The assumed sequence of subarcs for SVIC problems.
this subarc. Therefore (95) is set equal to zero and the result is used to eliminate the control on the second subarc. To ensure that S remains equal to zero on this subarc, it is then necessary to impose the additional constraints that S and its first (p - 1) time derivatives vanish at t_1:

S(t_1) = 0

S^1(t_1) = 0

. . .

S^{p-1}(t_1) = 0                                                   (96)

The system is then described by discontinuous state equations, but there are also the interior point constraints given by (96) to consider. The algorithm, as derived herein, cannot be directly applied to problems with such constraints. Therefore the following section is devoted to the description of methods of transforming the interior point constraints into terminal constraints, so that the algorithm may be applied to SVIC problems.
At this point a word of caution may be in order. As will be demonstrated in a later section, this application of the algorithm to an SVIC problem will yield the solution implied by the necessary conditions of [6]. It has recently been shown that these conditions may give an incorrect result if they are applied to a problem for which the constraint is not active [17]. Therefore, before proceeding in the present manner, one should examine the unconstrained optimal trajectory to be certain that it does indeed violate the constraint and that the problem is a true SVIC problem. Otherwise erroneous results may be obtained.
Transforming Interior Point Constraints into Terminal Constraints

The constraints given by (96) may be transformed into terminal constraints by adding p additional states, say y_0 through y_{p-1}, defined as follows: On the first subarc let the y's be solutions of

ẏ_0 = y_1                  y_0(t_0) = S(x_0,t_0)
ẏ_1 = y_2                  y_1(t_0) = S^1(x_0,t_0)
. . .
ẏ_{p-2} = y_{p-1}          y_{p-2}(t_0) = S^{p-2}(x_0,t_0)
ẏ_{p-1} = S^p(x,u,t)       y_{p-1}(t_0) = S^{p-1}(x_0,t_0)         (97)

with

ẏ_j = 0                                                            (98)

for all j on the remaining subarcs. The y's are to be continuous at t_1 and t_2, so these states will be given by

y_j(t) = S^j(t)          t_0 ≤ t ≤ t_1
       = S^j(t_1)        t_1 ≤ t ≤ t_f                             (99)

for all j, with the understanding that S^0 = S. Then since y_j(t_f) = S^j(t_1), the point constraints may be satisfied by imposing the terminal constraints

y_j(t_f) = 0                                                       (100)

on the y's. In this manner the interior point constraints given by (96) may be transformed into terminal constraints and an SVIC problem reduced to one of the type for which the algorithm was designed. Hence to apply the algorithm to an SVIC problem, add states given by (97) and (98) with terminal conditions (100), set (95) equal to zero and use the result to eliminate u on the second subarc, and use the algorithm to solve the resulting problem.
Some simplifications are possible. Integration of the added state equations is obviously unnecessary since the y's are given by (99) and therefore can be computed from the x states. Also, integration of the Lagrange multipliers associated with the y states is not necessary, and parts of the matrices P, R, q, M, and h need not be integrated on the second and final subarcs. These statements are verified in the following three sections, which are devoted to detailed examinations of the situation on each of the three subarcs.
The Final Subarc

Let the Lagrange multipliers associated with the y state equations be ζ_0 through ζ_{p-1}, and those associated with the y terminal conditions be η_0 through η_{p-1}. For convenience define the vectors

y = [y_0, y_1, ..., y_{p-1}]^T

ζ = [ζ_0, ζ_1, ..., ζ_{p-1}]^T

η = [η_0, η_1, ..., η_{p-1}]^T                                     (101)

Then on the final subarc the x and y state equations may be written in the partitioned form

[ẋ]   [f(x,u,t)]
[ẏ] = [0]                                                          (102)

and (5), (6), and (14) become

Φ = [φ(x,t) + ν^T ψ(x,t) + η^T y] at t_f

H = L(x,u,t) + λ^T f(x,u,t)

Ω = [L + Φ̇] at t_f = 0                                             (103)

Consequently, from (10) and (11),

λ̇ = -H_x^T        λ(t_f) = Φ_x^T                                   (104)

ζ̇ = 0             ζ(t_f) = η                                       (105)

Hence on the last subarc λ is unaffected by the presence of the SVIC, and ζ = η, a constant, throughout the subarc.
The behavior of the matrices P through h given by (51) through (55) may be investigated by writing these in partitioned form to separate the elements associated with the x states from those associated with the y states. For example, P may be partitioned as

    [P_xx   P_xy]
P = [P_yx   P_yy]                                                  (106)

where P_xx through P_yy are matrices containing the elements of P associated with x alone, with y alone, and those in common with x and y. If (51) is written in partitioned form, it will be found that throughout the last subarc matrix P has the form

    [P_xx   0]
P = [0      0]                                                     (107)

where P_xx is the matrix P that would apply in the absence of the SVIC. That is, P_xx can be computed from (51) by ignoring the SVIC.
In a similar manner it may be shown that on the final subarc the remaining matrices have the forms

    [R_xx   0]
R = [0      I]                                                     (108)

q = [q_x^T   0^T]^T                                                (109)

    [M_xx   0]
M = [0      0]                                                     (110)

h = [h_x^T   0^T]^T                                                (111)

where the symbols 0 and I represent null and identity matrices of appropriate dimensions. The matrices R_xx, q_x, M_xx, and h_x have the values that R, q, M, and h would have in the absence of the SVIC. Note that matrix M is singular throughout the last subarc. This merely reflects the fact that the y states are uncontrollable after time t_1, and it introduces no additional difficulty. One may trace through the steps involved in deriving (58) and (59) using partitioned matrices and verify that these may still be used to compute δu and dt_f if only the x components of the matrices are used. The x states are still controllable, although M_xx(t_f) will often be singular, as will M(t_f) in other cases.

The conclusion is that on the last subarc the algorithm may be applied by simply ignoring the existence of the SVIC.
The Constrained Subarc

On this subarc (95) is set equal to zero and the result used to eliminate the control, so there is no δu to be determined. As in the case of the final subarc, an examination of the partitioned forms of (1), (10), and (12) leads to the conclusion that λ̇ = -H_x^T and ζ = η on this subarc, where of course H includes the effect of eliminating the control. Furthermore, since there is no u after the elimination, B and c defined in (21) and (22) vanish, so (54) and (55) become

Ṁ = 0

ḣ = 0                                                              (112)

and there is no need to integrate these matrices on this subarc.

Partitioning the jump conditions, (69) through (71) and (74) and (75), and using (107) through (111) indicates that at t_2 the jumps in the matrices have the forms

         [D_2(P_xx)   0]
D_2(P) = [0           0]                                           (113)

         [D_2(R_xx)   0]
D_2(R) = [0           0]                                           (114)

D_2(q) = [D_2(q_x)^T   0^T]^T                                      (115)

         [D_2(M_xx)   0]
D_2(M) = [0           0]                                           (116)

D_2(h) = [D_2(h_x)^T   0^T]^T                                      (117)

where D_2(P_xx), etc., are the jumps the respective matrices would undergo if there were no SVIC. These jumps come about only because of the discontinuity introduced by the use of (95) to eliminate the control on this subarc, and their values are not influenced by the SVIC in any other way.

If (51) through (55) are partitioned and use is made of (107) through (111) and (113) through (117), it will be found that the matrices P through h have the forms given in (107) through (111) throughout this subarc also. Hence the constrained subarc may be treated by using (95) to eliminate u, imposing the jumps on the matrices, but otherwise ignoring the constraint. Thus the SVIC does not increase the dimensionality of the problem on this or on the last subarc. It merely introduces discontinuities at times t_1 and t_2.
The First Subarc

On the first subarc the state equations may be partitioned in the form

[ẋ]   [f(x,u,t)]
[ẏ] = [Wy + S^p(x,u,t)v]                                           (118)

where the matrix W and the vector v are defined as

    [0 1 0 0 ... 0]
    [0 0 1 0 ... 0]
W = [0 0 0 1 ... 0]                                                (119)
    [. . . .     .]
    [0 0 0 0 ... 1]
    [0 0 0 0 ... 0]

v = [0 0 0 0 ... 0 1]^T                                            (120)

the dimensions being (p x p) for W and (p x 1) for v. The Hamiltonian may be expressed as

H = L(x,u,t) + λ^T f(x,u,t) + ζ^T [Wy + S^p(x,u,t)v]               (121)

Then partitioning (10) accordingly gives

λ̇ = -H_x^T                                                         (122)

ζ̇ = -W^T ζ                                                         (123)

Since ζ = η on the previous subarc, and since the Lagrange multipliers are continuous, it follows from (123) that on the first subarc

ζ = K(t,t_1)η                                                      (124)

where K(t,t_1) is the transition matrix of (123) and is given by

           [1                0               0           0   ...]
           [(t_1-t)          1               0           0   ...]
K(t,t_1) = [(t_1-t)^2/2      (t_1-t)         1           0   ...]  (125)
           [(t_1-t)^3/6      (t_1-t)^2/2     (t_1-t)     1   ...]
           [...                                              ...]
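Because W is nilpotent, the exponential series for the transition matrix of (123) terminates after p terms, so the closed form (125) can be verified directly. A sketch (p = 4 is arbitrary; function names are illustrative):

```python
from math import factorial

def shift_matrix(p):
    # W of (119): ones on the superdiagonal
    return [[1.0 if j == i + 1 else 0.0 for j in range(p)] for i in range(p)]

def transition(p, s):
    # K(t, t_1) of (125) with s = t_1 - t: entry (i, j) = s^(i-j)/(i-j)!
    return [[s ** (i - j) / factorial(i - j) if i >= j else 0.0
             for j in range(p)] for i in range(p)]

def expm_series(a, p):
    # exp(A) for a nilpotent p x p matrix A: sum of A^k/k! for k < p
    n = len(a)
    ident = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    out, power = [row[:] for row in ident], [row[:] for row in ident]
    for k in range(1, p):
        power = [[sum(power[i][m] * a[m][j] for m in range(n))
                  for j in range(n)] for i in range(n)]
        out = [[out[i][j] + power[i][j] / factorial(k) for j in range(n)]
               for i in range(n)]
    return out

p, s = 4, 0.7
w = shift_matrix(p)
a = [[s * w[j][i] for j in range(p)] for i in range(p)]   # (t_1 - t) * W^T
err = max(abs(expm_series(a, p)[i][j] - transition(p, s)[i][j])
          for i in range(p) for j in range(p))
print(err)
```

The error is at floating-point level, confirming that (125) is the matrix exponential solving (123) backwards from ζ(t_1) = η.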
Then the Lagrange multipliers associated with the y states can be computed from (124) rather than by integration of (123). The computation of y from (99) and ζ from (124) appears to be all that can be done to avoid an increase in dimensionality along the first subarc, at least for problems with free terminal time. While (99) and (124) allow direct computation of y and ζ without the need of integration, they are of little use for direct computation of δy, δζ, and the elements of matrices P through h, principally because (124) involves t_1, which is not fixed. Therefore it appears to be necessary to integrate all elements of the matrices along the first subarc if t_f is free. In a later section it will be shown that if the terminal time is fixed, a slight change in the approach will allow additional simplifications.
A First Order SVIC Example

In Chapter II it was mentioned that care must be exercised in using the algorithm, since for some problems certain denominators may go to zero at convergence. The following example gives further insight into the behavior of the algorithm for such problems.

Consider a system described by

ẋ_1 = x_2        x_1(0) = -7
ẋ_2 = u          x_2(0) = 0

for which it is desired to find the scalar control u which minimizes

J = (1/2) ∫ from 0 to 9 of u^2 dt

subject to the terminal constraints

x_1(9) = x_2(9) = 0

and to the inequality constraint

x_2(t) ≤ 1

Since S = x_2(t) - 1, the problem is of first order (p = 1), as S^1 = u. It may easily be shown that the solution consists of the three subarcs given below:
u = 2(3 - t)/9
x_1 = -7 + 3(t/3)^2 - (t/3)^3        0 ≤ t ≤ t_1 = 3
x_2 = (2 - t/3)(t/3)

u = 0
x_1 = -8 + t                         t_1 ≤ t ≤ t_2 = 6
x_2 = 1

u = 2(6 - t)/9
x_1 = -8 + t + (2 - t/3)^3           t_2 ≤ t ≤ t_f = 9
x_2 = 1 - (t/3 - 2)^2

The optimum cost is J = 4/9.
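The stated three-subarc solution can be verified numerically: the pieces must join continuously at t_1 = 3 and t_2 = 6, meet the terminal and inequality constraints, and integrate to J = 4/9. An illustrative check (since u^2 is quadratic on each subarc, Simpson's rule is exact here):

```python
def u(t):
    # control on the three subarcs
    if t < 3.0:
        return 2.0 * (3.0 - t) / 9.0
    if t < 6.0:
        return 0.0
    return 2.0 * (6.0 - t) / 9.0

def x(t):
    # states (x1, x2) on the three subarcs
    if t < 3.0:
        return (-7.0 + 3.0 * (t / 3.0) ** 2 - (t / 3.0) ** 3,
                (2.0 - t / 3.0) * (t / 3.0))
    if t < 6.0:
        return (-8.0 + t, 1.0)
    return (-8.0 + t + (2.0 - t / 3.0) ** 3, 1.0 - (t / 3.0 - 2.0) ** 2)

def simpson(f, a, b, n=6):
    # composite Simpson's rule; exact for the piecewise-quadratic u^2
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h)
                          for k in range(1, n))
    return s * h / 3.0

J = 0.5 * sum(simpson(lambda t: u(t) ** 2, a, b)
              for a, b in ((0.0, 3.0), (3.0, 6.0), (6.0, 9.0)))
print(J)   # 0.4444..., i.e. J = 4/9
```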
Application of the algorithm to this problem in the manner just described resulted in the sequence of values shown in Table 3. This table lists the terminal values of the three Lagrange multipliers, the corner times, and the cost as a function of the iteration number. The optimal control for the first and last subarcs is given by
TABLE 3
Progress of the Algorithm in Solving the First Order SVIC Problem

Iteration    ν_1         ν_2         η_0         t_1        t_2        Cost
    0     0.000000    0.000000    0.000000    4.00000    5.00000    0.000000
    1     0.187500    0.625000    0.437500    4.00000    5.00000    0.437500
    2     0.212402    0.664062    0.595215    3.28125    5.65625    0.458649
    3     0.234021    0.685162    0.737163    2.99068    6.00590    0.457924
    4     0.220261    0.665405    0.652363    3.04991    5.94744    0.445034
    5     0.221485    0.665929    0.661504    3.02276    5.97781    0.444449
    6     0.222219    0.666670    0.666640    3.00798    5.99224    0.444446
    7     0.222222    0.666667    0.666662    3.00399    5.99614    0.444445
    8     0.222223    0.666668    0.666671    3.00199    5.99808    0.444444
    9     0.222222    0.666667    0.666668    3.00099    5.99904    0.444444
   10     0.222222    0.666667    0.666667    3.00050    5.99952    0.444444
   11     0.222222    0.666667    0.666667    3.00025    5.99976    0.444444
   12     0.222222    0.666667    0.666667    3.00012    5.99988    0.444444
   13     0.222222    0.666667    0.666667    3.00006    5.99994    0.444444
   14     0.222222    0.666667    0.666667    3.00003    5.99997    0.444444
   15     0.222222    0.666667    0.666667    3.00002    5.99999    0.444444
   16     0.222222    0.666667    0.666667    3.00001    5.99999    0.444444
   17     0.222222    0.666667    0.666667    3.00000    6.00000    0.444444
u = 9ν₁ − ν₂ − η₀ − ν₁t        0 ≤ t ≤ t₁
u = 9ν₁ − ν₂ − ν₁t             t₂ ≤ t ≤ tf
so it may be seen that the algorithm converged to the correct values. The initial guesses used to start the run were u = 0 for the control and the values given in the table for iteration zero. The algorithm was not allowed to adjust the corner times on the first iteration since it cannot do so with the initial guess used: the values of the Lagrange multipliers are zero for this guess, and these appear in the denominators of the matrix jump conditions and of the expressions for dt₁ and dt₂. Difficulty was therefore avoided by not jumping the matrices and not changing the corner times on iteration one. No step size control was used in this run. Phase plane plots of the trajectories generated on the first two iterations are given in Figure 6. Since the system is linear, the terminal conditions were satisfied to within plotting accuracy on all iterations. Consequently the second subarc lay near the constraint boundary, x₂ = 1, on all iterations.
Notice from the table that convergence was very
slow for this problem, particularly in contrast with that for the bangbang example of Chapter III. Ten iterations were required for convergence of the terminal values of
Figure 6. Phase plane plots of the trajectories generated in solving the first order SVIC example.
the Lagrange multipliers, while seventeen were required for convergence of the corner times. Notice also that in the final few iterations the corner times were moved half way from their present values to their correct final values. Their final convergence was thus at the rate of one bit of accuracy per iteration. This unusual behavior may be verified as follows. It was remarked earlier that this algorithm may also be derived by minimizing a second order expansion of the cost, as for example in [5]. Therefore consider doing so for conditions existing after iteration ten, at which point correct terminal values of the Lagrange multipliers had been found. Under these conditions the terminal constraints are very nearly satisfied and the first subarc contribution to the augmented cost is approximately

J₁ = (1/2) ∫₀^t₁ u² dt
   = (1/2) ∫₀^t₁ [2(3 − t)/9]² dt
   = 2[1 + (t₁/3 − 1)³]/9

whose second order expansion in dt₁ is

dJ₁ = 2(t₁/3 − 1)[(t₁/3 − 1) dt₁ + (dt₁)²/3]/9
Setting the partial derivative of dJ₁ with respect to dt₁ equal to zero and solving for dt₁ gives

dt₁ = −(t₁ − 3)/2

which verifies the behavior of the values of t₁ shown in the table for the last few iterations. Since the problem is symmetric with respect to time it follows that t₂ should behave in a like manner.
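This halving behavior is easy to reproduce. A small sketch (not from the text; the starting value is taken from iteration 2 of Table 3) simply iterates the update dt₁ = −(t₁ − 3)/2 just derived:

```python
# Near convergence the corner-time update derived above is
# dt1 = -(t1 - 3)/2, so each iteration moves t1 half way to its
# optimal value of 3: one bit of accuracy per iteration.
t1 = 3.28125  # value after iteration 2 in Table 3
for _ in range(10):
    t1 += -(t1 - 3.0) / 2.0
print(abs(t1 - 3.0) < 1e-3)  # initial error reduced by 2**10
```

Note that after ten halvings the error is still of order 10⁻⁴, in contrast to the quadratic convergence the algorithm normally exhibits.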
The dt₁ coefficients of a second order expansion for dJ₁ vanish at convergence, and the third partial of J₁ with respect to t₁ is the lowest derivative which is not zero for the optimal trajectory. Therefore the optimal cost is such a smooth function of t₁ that a second order expansion is inadequate. From the point of view used in deriving the algorithm, the Hamiltonian is such a smooth function of the corner times that a first order expansion of its jump, as used in deriving (46), is inadequate for some problems. This suggests the possibility of employing higher order expansions, but quite apart from the analytical difficulty involved there is still the question of how far one should go. If higher order terms had been used in Chapter II the results for this example might have been quite different, but this of course does not mean that the dilemma would not arise again in other problems having
smoother Hamiltonians. Therefore it seems simplest to recognize that the difficulty may occur and guard against division by zero as suggested in connection with the Zermello example. The price may be somewhat slower convergence, but in general faster convergence than that obtainable from gradient algorithms should result.
Alternate Necessary Conditions for SVIC Problems
The purpose of this section is to investigate the necessary conditions to which the algorithm converges when applied to an SVIC problem in the manner previously described. This set of conditions will herein be called the alternate necessary conditions. The relation between
this set and previous sets, for example those of [6], is of interest. It is shown that the first subarc Lagrange multipliers and Hamiltonian differ from those of [6], but that the two sets both imply the same solution.
Consider the first subarc necessary conditions first and note that (124) may be rewritten as

ζ_j(t) = Σ_{k=0}^{j} η_k (t₁ − t)^(j−k)/(j − k)!,   0 ≤ j ≤ p − 1    (126)

Then the necessary condition H_u = 0 may be written from (121) and (126) as

L_u + λᵀf_u + [Σ_{k=0}^{p−1} η_k (t₁ − t)^(p−k−1)/(p − k − 1)!] S^(p)_u = 0    (127)

This is the relationship from which the first subarc control is found. Now putting (121) into (122) and using (126) and (127) to rewrite the result gives

λ̇ = −L_xᵀ − f_xᵀλ + (L_u + λᵀf_u)(S^(p)_x)ᵀ/S^(p)_u    (128)

Then since y(t) = 0 for t ≥ t₁, the Hamiltonian at time t₁⁻ is

H(t₁⁻) = [L + λᵀf + η_{p−1} S^(p)]_{t₁⁻}    (129)
Note that the y states and their multipliers do not enter into the problem on subsequent subarcs.
The first and second subarc conditions are coupled by

λ(t₁⁻) = λ(t₁⁺)
H(t₁⁻) = H(t₁⁺)    (130)
On the second subarc the Hamiltonian becomes

H = L(x,u,t) + λᵀf(x,u,t)    (131)

and u must satisfy

S^(p)(x,u,t) = 0    (132)

so the control is a function of the state. Then since λ̇ = −H_xᵀ, (131) and (132) imply that

λ̇ = −L_xᵀ − f_xᵀλ + (L_u + λᵀf_u)(S^(p)_x)ᵀ/S^(p)_u    (133)
The second and final subarcs are coupled by

λ(t₂⁻) = λ(t₂⁺)
H(t₂⁻) = H(t₂⁺)    (134)

On the final subarc

H = L + λᵀf    (135)

so the control is implied by

L_u + λᵀf_u = 0    (136)

and λ̇ = −H_xᵀ yields

λ̇ = −L_xᵀ − f_xᵀλ    (137)

Terminal conditions (11) and (14) give

λ(tf) = Φ_xᵀ|_{tf}    (138)

[L + λᵀf + Φ_t]_{tf} = 0    (139)

where the definition of Φ given in (103) may for present purposes be replaced by

Φ = [φ(x,t) + νᵀψ(x,t)]_{tf}    (140)
The alternate set of necessary conditions to which the algorithm converges is then given by the x state equations and their terminal conditions, interior point constraints
(96), and equations (127) through (140). Note that the added states and their Lagrange multipliers have been eliminated and only the η's remain. These p constants are needed in order to satisfy the p interior point constraints.
Comparing these conditions with those of reference [6], allowing for the fact that [6] uses a Mayer formulation instead of the Bolza form used here, leads to the conclusion that the second and third subarc conditions are identical with those of [6]. The conditions differ only on the first subarc. Table 4 shows a comparison of the pertinent parts of both sets of conditions as they apply to the first subarc. The first subarc Hamiltonian and Lagrange multipliers of [6] have been denoted H̄ and λ̄ respectively. All other terms except the μ's of [6] are in the present notation. Note that the Hamiltonian and Lagrange multipliers of [6] jump at the entry corner while those to which the algorithm converges do not. This difference comes about because of the way in which the interior point constraints were handled here. The other differences are that the differential equations satisfied
TABLE 4
Comparison of First Subarc Necessary Conditions

Reference [6]:

dλ̄/dt = −L_xᵀ − f_xᵀλ̄

L_u + λ̄ᵀf_u = 0

H̄ = L + λ̄ᵀf

H̄(t₁⁻) = H(t₁⁺) − [μ₀S_t + μ₁S^(1)_t + ⋯ + μ_{p−1}S^(p−1)_t]_{t₁}

λ̄(t₁⁻) = λ(t₁⁺) + [μ₀S_x + μ₁S^(1)_x + ⋯ + μ_{p−1}S^(p−1)_x]ᵀ_{t₁}

Present approach:

λ̇ = −L_xᵀ − f_xᵀλ + (L_u + λᵀf_u)(S^(p)_x)ᵀ/S^(p)_u

L_u + λᵀf_u + [Σ_{k=0}^{p−1} η_k (t₁ − t)^(p−k−1)/(p − k − 1)!] S^(p)_u = 0

H = L + λᵀf + Σ_{j=0}^{p−1} Σ_{k=0}^{j} η_k [(t₁ − t)^(j−k)/(j − k)!] S^(j+1)

H(t₁⁻) = H(t₁⁺)

λ(t₁⁻) = λ(t₁⁺)
by the Lagrange multipliers are not the same and the present control equation and Hamiltonian contain the terms involving polynomials in time as shown in the table.
The relationship between the two sets of conditions may be seen by recalling that the Lagrange multipliers relate the change in cost to changes in state along the trajectory. Because of the derivation used in [6] the multiplier λ̄ relates the total change in cost to changes in x. Here however the multiplier λ relates part of the change in cost to changes in x while ζ relates the remaining change in cost to changes in y. But since y_j = S^(j), variations in the y's satisfy

δy_j = S^(j)_x δx    (141)

so the total change in cost is related to changes in x by

λ̄ = λ + Σ_{j=0}^{p−1} ζ_j (S^(j)_x)ᵀ    (142)

This is the relationship between the two sets of first subarc multipliers. Using (126) and (130) to evaluate (142) at time t₁⁻ gives

λ̄(t₁⁻) = λ(t₁⁺) + Σ_{j=0}^{p−1} [η_j (S^(j)_x)ᵀ]_{t₁}    (143)
Comparing (143) with λ̄(t₁⁻) as given in Table 4 leads to the conclusion that η_j = μ_j. Hence the terminal values of ζ are the Lagrange multipliers associated with the interior point constraint in [6].
To show that λ̄ as expressed by (142) satisfies the differential equation specified in [6], differentiate (142) using (123) and write

dλ̄/dt = λ̇ − Σ_{j=1}^{p−1} ζ_{j−1}(S^(j)_x)ᵀ + Σ_{j=0}^{p−1} ζ_j d[(S^(j)_x)ᵀ]/dt    (144)
But

d[(S^(j)_x)ᵀ]/dt = S^(j)_xx f + (S^(j)_xt)ᵀ,   0 ≤ j ≤ p − 1    (145)

so (144) becomes

dλ̄/dt = λ̇ − Σ_{j=1}^{p−1} ζ_{j−1}(S^(j)_x)ᵀ + Σ_{j=0}^{p−1} ζ_j [S^(j)_xx f + (S^(j)_xt)ᵀ]    (146)
Now use (126) and (127) to rewrite (128) as

λ̇ = −L_xᵀ − f_xᵀλ − ζ_{p−1}(S^(p)_x)ᵀ    (147)

Using (142) to eliminate λ from the right hand side above yields

λ̇ = −L_xᵀ − f_xᵀλ̄ − ζ_{p−1}(S^(p)_x)ᵀ + Σ_{j=0}^{p−1} ζ_j f_xᵀ(S^(j)_x)ᵀ    (148)
Putting (148) in (146) results in

dλ̄/dt = −L_xᵀ − f_xᵀλ̄ − ζ_{p−1}(S^(p)_x)ᵀ − Σ_{j=1}^{p−1} ζ_{j−1}(S^(j)_x)ᵀ
         + Σ_{j=0}^{p−1} ζ_j [S^(j)_xx f + f_xᵀ(S^(j)_x)ᵀ + (S^(j)_xt)ᵀ]    (149)

But

S^(j+1)_x = ∂/∂x [dS^(j)(x,t)/dt] = ∂/∂x [S^(j)_x f + S^(j)_t]

so

(S^(j+1)_x)ᵀ = S^(j)_xx f + f_xᵀ(S^(j)_x)ᵀ + (S^(j)_xt)ᵀ    (150)

Therefore (149) reduces to

dλ̄/dt = −L_xᵀ − f_xᵀλ̄    (151)

which is the differential equation for λ̄ given in [6].
The numerical relationship between the Hamiltonians may be obtained by substituting (142) into the expression for H̄ in Table 4 to get

H̄ = L + λᵀf + Σ_{j=0}^{p−1} ζ_j S^(j)_x f    (152)

By using ẏ_j = S^(j+1) one can rewrite (121) as

H = L + λᵀf + Σ_{j=0}^{p−1} ζ_j S^(j+1)    (153)

Solving (153) for (L + λᵀf) and putting the result in (152) yields

H̄ = H + Σ_{j=0}^{p−1} ζ_j [S^(j)_x f − S^(j+1)]
   = H − Σ_{j=0}^{p−1} ζ_j S^(j)_t    (154)
As a check on the consistency of (154), use ζ_j(t₁) = η_j and the continuity of H to evaluate (154) at time t₁ to obtain

H̄(t₁⁻) = H(t₁⁺) − Σ_{j=0}^{p−1} η_j S^(j)_t(t₁)    (155)

which agrees with the value of H̄(t₁⁻) from reference [6] as stated in Table 4.
The two sets of necessary conditions are thus compatible and their first subarc Lagrange multipliers and Hamiltonians are related by (142) and (154). The conditions are identical on the remaining subarcs. The alternate set may also be used to solve simpler SVIC problems by analytical means. An example taken from [6] is given in the following section.
The Analytical Solution of an SVIC Problem Using
the Alternate Necessary Conditions
The system is described by

ẋ = v,   x(0) = 0
v̇ = a,   v(0) = 1    (156)

and the object is to find the control a(t) which minimizes

J = ∫₀¹ [a²(t)/2] dt    (157)

subject to the inequality constraint

S(x,t) = x − ℓ ≤ 0    (158)

and the terminal constraints

x(1) = 0
v(1) = −1    (159)

Since S^(1) = v and S^(2) = a this is a second order SVIC problem which happens to be simple enough that an analytical solution is possible.
The interior point constraints of (96) take the form

x(t₁) = ℓ
v(t₁) = 0    (160)
From (128), (133), (137), and (140) it follows that on all subarcs

λ̇_x = 0,   λ̇_v = −λ_x

so

λ_x = ν_x,   λ_v = ν_v − ν_x(t − 1)    (161)

From (127), (132), and (136)

a = −λ_v − η₁ − η₀(t₁ − t)      0 ≤ t ≤ t₁
  = 0                           t₁ ≤ t ≤ t₂
  = −λ_v                        t₂ ≤ t ≤ 1    (162)

The values of H at the corners are given by (129), (131), and (135) as

H(t₁⁻) = −[ν_v − ν_x(t₁ − 1) + η₁]²/2
H(t₁⁺) = H(t₂⁻) = 0
H(t₂⁺) = −[ν_v − ν_x(t₂ − 1)]²/2    (163)

Then continuity of H at t₁ and t₂ requires that

ν_v = ν_x(t₂ − 1)
η₁ = ν_x(t₁ − t₂)    (164)
But on the first subarc

v(t) = 1 + (ν_x + η₀)(t² − 2t₁t)/2      0 ≤ t ≤ t₁    (165)

so v(t₁) = 0 gives

ν_x + η₀ = 2/t₁²    (166)

Likewise on the first subarc

x(t) = (t³ − 3t₁t² + 3t₁²t)/(3t₁²)      0 ≤ t ≤ t₁    (167)

so x(t₁) = ℓ gives

t₁ = 3ℓ    (168)

On the last subarc

v(t) = ν_x(t − t₂)²/2      t₂ ≤ t ≤ 1    (169)

so v(1) = −1 implies

ν_x = −2/(1 − t₂)²    (170)

Then on the last subarc

x(t) = ℓ − (t − t₂)³/[3(1 − t₂)²]      t₂ ≤ t ≤ 1    (171)

so from x(1) = 0 it follows that

t₂ = 1 − 3ℓ    (172)
Finally from (171), (166), and (168)

ν_x = −2/(3ℓ)²
ν_v = 2/(3ℓ)
η₁ = 2(1 − 6ℓ)/(3ℓ)²
η₀ = 4/(3ℓ)²    (173)

The solution then becomes

a(t) = −2[1 − t/(3ℓ)]/(3ℓ)              0 ≤ t ≤ 3ℓ
     = 0                                3ℓ ≤ t ≤ 1 − 3ℓ
     = −2[1 − (1 − t)/(3ℓ)]/(3ℓ)        1 − 3ℓ ≤ t ≤ 1    (174)

and the trajectory satisfies

v(t) = [1 − t/(3ℓ)]²                    0 ≤ t ≤ 3ℓ
     = 0                                3ℓ ≤ t ≤ 1 − 3ℓ
     = −[1 − (1 − t)/(3ℓ)]²             1 − 3ℓ ≤ t ≤ 1

x(t) = ℓ{1 − [1 − t/(3ℓ)]³}             0 ≤ t ≤ 3ℓ
     = ℓ                                3ℓ ≤ t ≤ 1 − 3ℓ
     = ℓ{1 − [1 − (1 − t)/(3ℓ)]³}       1 − 3ℓ ≤ t ≤ 1    (175)
Since t₁ = 3ℓ and t₂ = 1 − 3ℓ, t₂ will exceed t₁ so long as ℓ < 1/6, in which case the trajectory will follow the constraint for a nonzero period of time.
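As an aside (not in the dissertation), the closed-form trajectory (175) for the constraint-following case is easy to check numerically; the sketch below takes ℓ = 0.1 < 1/6 and verifies the boundary conditions and the constraint x ≤ ℓ:

```python
# Check of the "following" solution (175) with l = 0.1 (< 1/6).
l = 0.1
t1, t2 = 3 * l, 1 - 3 * l

def x_of(t):
    if t <= t1:
        return l * (1.0 - (1.0 - t / (3 * l))**3)
    if t <= t2:
        return l
    return l * (1.0 - (1.0 - (1.0 - t) / (3 * l))**3)

def v_of(t):
    if t <= t1:
        return (1.0 - t / (3 * l))**2
    if t <= t2:
        return 0.0
    return -(1.0 - (1.0 - t) / (3 * l))**2

print(abs(x_of(0.0)) < 1e-12, abs(v_of(0.0) - 1.0) < 1e-12)   # x(0)=0, v(0)=1
print(abs(x_of(1.0)) < 1e-12, abs(v_of(1.0) + 1.0) < 1e-12)   # x(1)=0, v(1)=-1
print(max(x_of(k / 1000) for k in range(1001)) <= l + 1e-12)  # x <= l throughout
```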
In this problem there is also a range of values of ℓ for which the trajectory will simply touch the constraint boundary at a point without following the constraint for a nonzero time period. This case can also be treated using
the alternate conditions. To do so one replaces t₂ with t₁ and drops the condition S^(p) = 0 and the second of (162) and of (163). Then the last of (163) gives H(t₁⁺) instead of H(t₂⁺). Comparing the first and last of (163) and recalling that the Hamiltonian must be continuous leads to the conclusion that η₁ = 0. Then the remaining values to be found are those of ν_x, ν_v, η₀, and t₁. These may be obtained from the conditions x(t₁) = ℓ, v(t₁) = 0, x(1) = 0, and v(1) = −1. The resulting solution is

a(t) = −8(1 − 3ℓ) + 24(1 − 4ℓ)t                          0 ≤ t ≤ 0.5
     = −8(1 − 3ℓ) + 24(1 − 4ℓ)(1 − t)                    0.5 ≤ t ≤ 1

v(t) = 1 − 8(1 − 3ℓ)t + 12(1 − 4ℓ)t²                     0 ≤ t ≤ 0.5
     = −1 + 8(1 − 3ℓ)(1 − t) − 12(1 − 4ℓ)(1 − t)²        0.5 ≤ t ≤ 1

x(t) = t − 4(1 − 3ℓ)t² + 4(1 − 4ℓ)t³                     0 ≤ t ≤ 0.5
     = (1 − t) − 4(1 − 3ℓ)(1 − t)² + 4(1 − 4ℓ)(1 − t)³   0.5 ≤ t ≤ 1

H = −8(1 − 6ℓ)²    (176)
By analyzing the unconstrained problem one can conclude that the constraint has no effect for ℓ ≥ 1/4, so the above applies for 1/6 ≤ ℓ ≤ 1/4.
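A similar check (again not part of the text) can be made for the touch-point solution (176); here ℓ = 0.2, which lies in (1/6, 1/4):

```python
# Check of the touch-point solution (176) for l = 0.2: the trajectory
# should satisfy v(0) = 1, v(1) = -1, x(0) = x(1) = 0, and should touch
# the boundary x = l exactly at t = 1/2 without exceeding it.
l = 0.2

def x_touch(t):
    s = t if t <= 0.5 else 1.0 - t   # the solution is symmetric about 1/2
    return s - 4 * (1 - 3 * l) * s**2 + 4 * (1 - 4 * l) * s**3

def v_touch(t):
    s = t if t <= 0.5 else 1.0 - t
    v = 1 - 8 * (1 - 3 * l) * s + 12 * (1 - 4 * l) * s**2
    return v if t <= 0.5 else -v

print(abs(v_touch(0.0) - 1.0) < 1e-12, abs(v_touch(1.0) + 1.0) < 1e-12)
print(abs(x_touch(0.5) - l) < 1e-12)                         # touches x = l
print(max(x_touch(k / 1000) for k in range(1001)) <= l + 1e-12)
```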
Other Simplifications
For problems having fixed final time it is possible to modify the definitions of the added states and obtain
additional simplifications. Instead of defining the y states as in (97) and (98) one may also use

ẏ = Wy + S^(p)v      t₀ ≤ t ≤ t₁
ẏ = Wy               t₁ ≤ t ≤ tf    (177)

and impose the terminal constraint y(tf) = 0. In this case

ζ(t) = K(t,tf)η      t₀ ≤ t ≤ tf    (178)

so if tf is fixed

δζ = K(t,tf)dη      t₀ ≤ t ≤ tf    (179)
But the partitioned forms of (47) and (48) are

[δλ; δζ] = [P_xx  P_xy; P_yx  P_yy][δx; δy] + [R_xx  R_xy; R_yx  R_yy][dν; dη] + [q_x; q_y]    (180)

[dψ; dy(tf)] = [R_xxᵀ  R_yxᵀ; R_xyᵀ  R_yyᵀ][δx; δy] + [M_xx  M_xy; M_yx  M_yy][dν; dη] + [h_x; h_y]    (181)

so comparing (179) and (180) leads to the conclusion that

P_xy = P_yxᵀ = 0
P_yy = 0
R_yx = 0
q_y = 0
R_yy = K(t,tf)      t₀ ≤ t ≤ tf    (182)
Furthermore from (177)

y(tf) = Kᵀ(t,tf) y(t)      t₁ ≤ t ≤ tf    (183)

so if tf is fixed

dy(tf) = Kᵀ(t,tf) δy(t)      t₁ ≤ t ≤ tf    (184)

Then from (181) and (184) it follows that on the second and final subarcs

R_xy = 0
M_yx = M_xyᵀ = 0
M_yy = 0
h_y = 0      t₁ ≤ t ≤ tf    (185)
By partitioning the matrix differential equations and jump conditions it may be seen that as before the constraint does not influence the x components of the matrices except on the first subarc.
Note that these simplifications do not apply if the terminal time is free since in that case (179) and (184) are not correct. Also all of the components of the matrices would then have to be integrated from tf to t₀ and considerable complication would result.
One other step may be very helpful in many
instances. Instead of solving for the control one may define a new variable, say α, by
α = S^(p)(x,u,t)    (186)

Then (186) may be solved for u and the result used to eliminate the control, after which α becomes a new control whose first and last subarc values may be determined by the algorithm. (Obviously α = 0 on the second subarc.) This use of (186) will in many cases simplify the matrix jump conditions and is therefore recommended.
As in the case of a CVIC problem, it may be possible to have the algorithm determine the sequence of subarcs for an SVIC problem. If an initial guess for the control which yields a feasible trajectory can be found, the sign of the constraint function, S(x,t), can be examined while iterating and the occurrence of corners detected as they are generated. This possibility has not been investigated so it is not possible to say what degree of success might be expected. For conditions under which the constrained subarc will be a "touching" subarc as opposed to a "following" subarc see [17]. This information will sometimes aid in predicting the sequence of subarcs to be expected in an SVIC problem.
CHAPTER V
APPLICATION TO PROBLEMS HAVING SINGULAR SUBARCS
For some problems the optimal trajectory has so-called singular subarcs along which the Hamiltonian is independent of the control and H_uu is singular [13], [15]. These problems require special treatment since neither the necessary condition H_u = 0 nor the maximum principle gives information about the control along singular subarcs. Furthermore these problems often have nonsingular subarcs along which the Hamiltonian is linear in the control, and along these subarcs the maximum principle, rather than H_u = 0, determines the control. Since H_u = 0 was used in developing the algorithm, the results of Chapter II do not apply directly to either singular subarcs or to subarcs along which H is linear in u. The algorithm may be applied to singular problems if an alternate relation, say

w(x,u,λ,t) = 0    (187)

can be found to take the part played by H_u = 0 in Chapter II. The determination of the function w will be discussed in the following paragraph. First, however, note that once w has been found, one may rederive the results of Chapter II, using w = 0 in place of H_u = 0. This has been done and the
resulting equations are listed in the Appendix. Note that w has the same dimensions as u.
The way in which w is determined depends upon which of the three possible types of subarcs is being treated. The possibilities are

1. Nonsingular subarcs along which H_u = 0 determines the control.
2. Nonsingular subarcs along which a control constraint is active.
3. Singular subarcs along which H_u vanishes identically.

In the first case one simply uses w = H_uᵀ in the equations of the Appendix. The second case occurs when either the application of H_u = 0 gives a control which violates a constraint, or when the Hamiltonian is linear in u. In either event, for the control to remain finite (which is the only case considered here) there must be a control constraint, perhaps of the form
uᵀu ≤ 1    (188)

Then one determines the desired control using the maximum principle by solving

min_{u ∈ U} H(x,u,λ,t)    (189)

where U is the set of admissible controls. For example, if H is linear in u and (188) applies, the desired control is

u = −H_uᵀ/‖H_u‖    (190)

and a suitable w is

w(x,u,λ,t) = u + H_uᵀ/‖H_u‖    (191)

The point is that the solution to the minimization problem stated in (189) implies some relation of the form given in (187), and it is this relation that can be used to replace H_u = 0 along subarcs of the second type.
Singular subarcs may often be treated by using the fact that H_u, and hence its time derivatives, vanish identically along such subarcs. Therefore, an expression for w for singular subarcs can usually be obtained by setting successively higher time derivatives of H_u equal to zero until an expression containing the control appears. This expression is the desired form of w. For examples in which this is done see [13].
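For a concrete illustration (a hypothetical scalar system, not one of the dissertation's examples), the procedure can be carried out symbolically. With ẋ = u and H = x²/2 + λu, differentiating H_u along the trajectory, substituting λ̇ = −H_x and ẋ = u, produces the control after two differentiations:

```python
# Successive time derivatives of Hu along the trajectory for the
# hypothetical system x' = u with H = x**2/2 + lam*u.  Here Hu depends
# only on lam, and its derivative only on x, so the chain rule reduces
# to single substitutions lam' = -Hx and x' = u.
import sympy as sp

x, u, lam = sp.symbols('x u lam')
H = x**2 / 2 + lam * u

Hu = sp.diff(H, u)                            # lam: no control yet
Hu_dot = sp.diff(Hu, lam) * (-sp.diff(H, x))  # -x: still no control
Hu_ddot = sp.diff(Hu_dot, x) * u              # -u: the control appears
print(Hu, Hu_dot, Hu_ddot)  # lam -x -u
```

Setting the last expression to zero gives the singular control u = 0, which agrees with the first example treated later in this chapter.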
In some problems w will be found to be independent of λ for all subarcs. This is a particularly fortunate situation since the control can then be eliminated along all subarcs and the algorithm applied directly as described in Chapter II to find the corner times. In these cases there is no need to use the equations given in the Appendix and the situation is very similar to the bangbang case treated in Chapter III.
Comments on Symmetry
In Chapter II equations (18) and (19) were

δẋ = Aδx + Bδλ + c
δλ̇ = Dδx − Aᵀδλ + e

Notice that these are symmetric in that the coefficient of δx in the first equation is the negative transpose of the coefficient of δλ in the second, and in that B and D are symmetric. Because of this symmetry and the symmetry of the jump conditions it was possible to express (47) and (48) in the symmetric forms

δλ = Pδx + Rdν + q
dψ = Rᵀδx + Mdν + h

where P and M were symmetric. This symmetry is fortunate since because of it not all elements of P and M need be integrated and no additional coefficient matrix for δx is needed in the second equation. The symmetry came about through the use of H_u = 0 as a necessary condition for the control at convergence. It may easily be shown that the symmetry will remain on subarcs for which H is linear in u and (191) applies. Along singular subarcs where w is found in the manner described following (191), this symmetry
may not be present in some problems. If the symmetry is not present, (18) and (19) will have the form

δẋ = A₁δx + Bδλ + c
δλ̇ = Dδx − Aᵀδλ + e

where A₁ ≠ A and B and D may no longer be symmetric. Then (47) and (48) will have to be replaced by

δλ = Pδx + Rdν + q
dψ = R₁ᵀδx + Mdν + h

where R₁ is an additional matrix that will be needed. The equations listed in the Appendix apply to the asymmetric case and will reduce to the correct symmetric form if the symmetry is present.
Two Examples of Problems Having Singular Subarcs
The following examples illustrate the solution of problems having singular subarcs and give insight into the effect of the step size control method for the corner time calculations recommended in Chapter II. The first example is simple enough that the algorithm reduces to a single equation so its convergence properties with and without step size control are clearly evident. These are examples one and two from reference [15].
Example 1
The system is described by
ẋ = u,   x(0) = 1

where x and u are scalars. The control must satisfy

|u| ≤ 1

and the object is to find u(t) for 0 ≤ t ≤ 2 to minimize

J = ∫₀² (x²/2) dt

The solution may easily be seen to be

u = −1      0 ≤ t ≤ 1
u = 0       1 ≤ t ≤ 2

where the last subarc is singular. In this case

H = x²/2 + λu

and

λ̇ = −x,   λ(2) = 0

so it follows that
H_u = λ
Ḣ_u = −x
Ḧ_u = −u

Then Ḧ_u = 0 gives u = 0 as the desired control on the singular subarc. The equations of the algorithm may be solved analytically for this simple problem and the result found to be

dt₁ = (1 − t₁)(2 − t₁)/(3 − 2t₁)

It is not difficult to see that the algorithm can converge either to the minimizing value t₁ = 1 or to the stationary value t₁ = 2, since both these values are equilibrium points of the above equation. Convergence will be to t₁ = 1 for initial guesses less than 3/2 and to t₁ = 2 for initial guesses greater than 3/2.
As was mentioned in Chapter II, the step size for the changes in the corner times may be controlled by adding a positive constant, say a, to the denominator of (46). Doing so for this problem yields

dt₁ = (1 − t₁)(2 − t₁)/(a + 3 − 2t₁)

The value of a is to be zero so long as 3 − 2t₁ > 0 and otherwise large enough to ensure that the denominator of the above expression remains positive on all iterations. It is clear from the above that the estimate t₁ = 2 is now an unstable
equilibrium point, so that convergence to this value is very unlikely. It is also obvious that convergence to the correct t₁ will occur for all initial guesses less than 2, so long as a is chosen large enough on each iteration to prevent the estimate from exceeding 2. This use of step size control makes convergence to other than locally minimizing solutions quite unlikely and improves the range of convergence. In the present instance the range was increased from t₁ < 3/2 to t₁ < 2.
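The effect is easy to demonstrate (a sketch, not from the text): iterating the corner-time update with the guarded denominator recovers t₁ = 1 even from a starting guess close to 2:

```python
# Corner-time iteration for Example 1 with step size control: a = 0
# while 3 - 2*t1 > 0, otherwise a is chosen to keep the denominator
# positive (here simply forced to 1.0, one of many admissible choices).
def step(t1):
    den = 3.0 - 2.0 * t1
    if den <= 0.0:
        den = 1.0  # equivalent to adding a = 1 - (3 - 2*t1)
    return t1 + (1.0 - t1) * (2.0 - t1) / den

t1 = 1.9  # without step size control this guess would converge to t1 = 2
for _ in range(60):
    t1 = step(t1)
print(abs(t1 - 1.0) < 1e-6)  # converged to the minimizing corner time
```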
Example 2
The state equations are

ẋ₁ = x₂,   x₁(0) = 0
ẋ₂ = u,    x₂(0) = 1

and the scalar control must satisfy

|u| ≤ 1

It is desired to find a u(t) satisfying the inequality constraint which will minimize

J = ∫ (x₁² + x₂²) dt

The Hamiltonian is

H = x₁² + x₂² + λ₁x₂ + λ₂u

Full Text 
PAGE 1
COMPUTATION OF OPTIMAL CONTROLS FOR DISCONTINUOUS SYSTEMS By EDWARD R SCHUIZ A DISSERTATION PRESENTED TO THE GRADUATE COUNOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSl1Y OF FLORIDA 1970
PAGE 2
ACKNOWLEDGMENTS The author would like to express his gratitude to the members of his supervisory committee for their supervision and guidance. He specifically wishes to thank Dr. T. E. Bullock for ~timulating discussions and penetrating questions which were helpful in guiding the direction of this effort. Without the patience and under standing of the author's family this work could not have been accomplished. Thanks are al$O due to Mr. E. C. Fehner and others of the General Electric Company for their kind encouragement and support. ii
PAGE 3
TABLE OF CONTENTS ACKNOWLEDGMENTS . . . . . . . . . Page ii LIST OF LIST OF ABSTRACT Chapter I. II. TABLES FIGURES .................. ............. ............... ............ INTRODUCTION DEVELOPMENT OF THE ALGORilHM Mathematical Description of V vi vii 1 6 the Problem . . . . . . 6 Necessary Conditions 8 Expansion of the Necessary Conditions ........... ................ 11 Jump Conditions 15 The Carner Times 18 Solution of the Boundary Value Prob Zem 19 Implementation of the Algorithm 26 Zermello's Problem 27 III. APPLICATION TO SYSTEMS SUBJECT TO CONTROL VARIABLE INEQUALITY CONSTRAINTS 3 4 BangBang Problems 37 A BangBang Example . . . 39 Estimating the Number of Corners for CVIC Problems 45 IV. APPLICATION TO SYSTEMS SUBJECT TO STATE VARIABLE INEQUALITY CONSTRAINTS 48 Transforming Interior Point Constraints into Terminal Constraints 52 The Final Subarc .................. 54 iii
PAGE 4
TABLE OF CONTENTSContinued Chapter Page IV. The Constrained Su bar a 5 7 The First Subara 59 A First Order SVIC Example ........... 61 Alternate Neaessaiy Conditions for SVIC Problems . 68 The Analytiaal Solution of an SVIC Problem Using the Alternate Neaessary Conditions . . 77 Other Simplifiaations . 81 V. APPLICATION TO PROBLEMS HAVING SINGULAR SUBARCS 85 Comments on Symmetry 88 Two Examples of Problem$ Having Singular Suba~as 89 E::camp Z.e 1 .90 Example 2 . . . . . 92 VI CONCLUDING REMARKS 9 9 Conalusions 99 Reaommendations for Future Work 100 APPEND I X l O 4 REFERENCES . . . 108 BIOGRAPHICAL SKETCH . . . . . . . . 110 iv
PAGE 5
Table 1. 2. 3. LIST OF TABLES Page Progress of the Algorithm in Solving Zermello's Problem...................... 31 Progress of the Algorithm in Solving the BangBang Example 41 Progress of the Algorithm in Solving the First Order SVIC 63 4. Comparison of First Subarc Necessary s. 6. 7. Conditions . . . . . . . 7 2 Progress of the Algorithm in Solving the Second Singular ExampleRun 1 .......................... Progress of the Algorithm in Solving the Second Singular ExampleRun 2 Progress of the Algorithm in Solving the Second Singular ExampleRun 3 V 94 97 98
PAGE 6
LIST OF FIGURES Figure Page 1. An illustration of the jump in ox(t) at a corner . . . . . . . 16 2. Zermello' s Problem with an island . 29 3. Phase plane plots of the trajectories generated in solving the bangbang ex amp le . . . . . 4 3 4. Terminal radius for feedback control of the bangbang problem 44 5. The assumed sequence of subarcs for SVIC problems ............. . . . 50 6. Phase plane plo~s of the trajectories generated in solving the first order SVIC example . . 6 5 7. Final control history ~or the second singular example . . 95 vi
PAGE 7
Abstract of Disserta.t.1.on Presented to the Graduate Council of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy COMPUTATION OF OPTIMAL CONTROLS FOR DISCONTINUOUS SYSTEMS By Edward Ross Schulz December, 1970 Chairman: Dr. T. E. Bullock Major Department: Electrical Engineering Optimization problems of interest to control engineers often involve physical systems described by ordinary differential equations subject to discontinui ties. Furthermore, problems having inequality constraints and s~rygular subarcs m ay be treated as members of this class so methods of solving such problems are of great importance in engineering. Analytical solutions are possible only in the simplest cases so problems of practi cal interest must generally be solved by numerical tech niques. The work described herein is concerned with the derivation of an iterative algorithm for the numerical solution of these problems. The NewtonRaphson method is used to obtain an algorithm that will generate a vii
PAGE 8
sequence of corner time~ and controls which converges to values satisfying the necessary conditions of the calculus of variations. The method features quadratic final con vergence and yields feedback gains which may be used to find controls and corner times for trajectories near the optimum. Applications of the algorithm to control variable inequality constrained problems, state variable inequality constrained problem,s and problems havirig singular subarcs are developed and examples are given. viii
PAGE 9
CHAPTER I INTRODUCTION Numerical methods for the solution of optimum control problems have received much attention in the technical liter ature and many algorithms for their solution have been pro posed. The emphasis has generally been on problems involving systems described by ordinary differential equations not subject to discontinuities. However problems with systems having discontinuous state equations are also important, particularly since inequality constrained and singular problems may be treated as members of this class. In the present study, a Newton:Raphson algorithm for the numerical solution of discontinuous problems is developed and its application to inequality constrained and singular problems is demonstrated. One of the earlier approaches f_or systems described by ordinary differential equations not subject to discon tinuities is due to Bryson and Denham [1)~ The method, a steepest ascent or gradient technique, is relatively simple to apply and is relatively insensitive to a poor initial guess of the control history, but suffers from slow convergence as the final solution is approached. The difficulty with slow final convergence was later overcome by investigators 1
PAGE 10
2 such as Merriam [2] and McReynolds and Bryson [3]. Merriam's algorithm minimizes a second order expansion of the objective function subject to linearized constraints, the goal being to generate a sequence of trajectories with an improved objective function on each iteration. McReynolds and Bryson used linear expansions to obtain an algorithm which generates a sequence of trajectories converging to values satisfying the necessary conditions of the calculus of variations. These methods are analogous to the NewtonRaphson approach to ordinary parameter optimization problems and yield rapid final convergence, if convergence is obtained. However, for successful convergence the methods often require better initial guesses o f the control history than steepest ascent techniqu e s. Bullo c k [4] and Bullock and Franklin [5] extended these results and developed a method featuring rapid final convergenc e s op histicated step size control which improves convergence with a poor initial guess, and protection against conjugate points in the auxiliary probl&m. The prec e ding investigations are all concerned with systems not subject to discontinuities nor to inequality constraints. Necess a ry conditions for p r obl e ms with astute variabl~ inequality constraint, (SVIC), and a control variable inequality constraint, (CVIC), a:;e g i ven in [6] and [7] and I a ne w s e t o f SV I C conditions will soon b e published [17]. Denh a n t and Ery s on d e sc ribe a steep es t ascent algorith m for
PAGE 11
3 the numerical solution of SVIC and CVIC problems in [8]. The algorithm, being a steepest ascent method, suffers from the slow final convergence mentioned earlier. In later work, SVIC problems on the one hand, and CVIC problems on the other, hav~ usually been considered apart from one another and different approaches used for each. A very ingenious technique for numerically solving SVIC problems has been suggested by Jacobson and Lele [9~. A transformation is employed which makes it impossible for the system to violate the inequality constraint, but which transforms the subarc along the constraint into a singular subarc. The problem can then be solved using any existing algorithm which is not sensitive to singular subarcs. Unfor tunately the transformation increases the dimensionality of the problem and hence the number of differential equations which must be integrated. Discontinuous systems have been considered by Jacobson in [10] and by Dyer and McReynolds in [12]. The approach used in both cases is to derive algorithms which find trajectories satisfying the dynamic programming con ditions of optimality~ Jacobson allows large changes in the control to occur from iteration to iteration and obtains step size control by optimizing over a limited region of the latter part of the time interval, [to,tf]. Dyer and McReynolds are silent on the question of step size control
but comment on the need for a good initial guess. The examples given by these authors are of the bang-bang type. In a later paper, [11], Dyer and McReynolds present a modified version of their approach using piecewise constant feedback gains to reduce the computational burden.

The approach taken in the present case is similar to that of [3]. The Newton-Raphson method is used to develop an algorithm which seeks to find a trajectory satisfying the necessary conditions of the calculus of variations. The major difference here is that the discontinuities make it necessary to introduce jumps into the variations of the state and adjoint variables at the corner times. The estimates of the corner times themselves are adjusted in such a way as to force the Hamiltonian to be continuous at the corners.

The following chapter is devoted to the derivation of the algorithm. A detailed mathematical description of the problem is given and the necessary conditions, as applicable to this problem, are reviewed. The equations on which the algorithm is based are then derived. This leads to the usual linear two point boundary value problem which may be solved by the backward sweep method. Because of the jumps which must be introduced into the variations of the state and adjoint variables, it becomes necessary to jump the backward sweep matrices at the corners. A detailed summary of the steps involved in implementing the algorithm is then
given and an example which illustrates a possible difficulty is discussed.

In Chapter III the application of the algorithm to CVIC and bang-bang problems is described. Certain modifications to the approach are recommended to avoid possible difficulty with controllability in the case of bang-bang problems. This yields a simpler form of the algorithm. A numerical example is then given.

Chapter IV covers applications to SVIC problems. Additional states are introduced in order to transform the problem into one in the class for which the algorithm was designed. Possible simplifications in the computations are discussed, an example is given, and the relation between the necessary conditions to which the algorithm converges and those of [6] is found.

Chapter V is concerned with the solution of singular problems. Success depends upon finding an expression for the desired control along the singular subarc. This can often be done by using the fact that the partial derivative of the Hamiltonian with respect to the control vanishes identically on singular subarcs.

Concluding remarks and recommendations for future work are given in Chapter VI.
CHAPTER II
DEVELOPMENT OF THE ALGORITHM

Mathematical Description of the Problem

The problems of interest here are those involving physical systems whose state vector, x, satisfies a differential equation of the form

    \dot{x} = f_i(x,u,t),    x(t_0) = x_0    (1)

on the interval t_{i-1} < t < t_i with i = 1, 2, ..., m and t_f = t_m. The vector functions f_i and the values t_0 and x_0 are given and a dot above a quantity indicates differentiation with respect to time, t. The control, u, is in general a vector function of t \in [t_0, t_f] and may or may not be continuous at the corner times, t_i. In the usual physical situation the state is continuous so take x(t_i-) = x(t_i+) for all i. The object is to find m values of t_i and a control, u(t) for all t \in [t_0, t_f], such that

    J = [\phi(x,t)]_{t_f} + \sum_{i=1}^{m} \int_{t_{i-1}}^{t_i} L_i(x,u,t) \, dt    (2)

is minimized subject to (1) and such that the terminal constraint
    [\psi(x,t)]_{t_f} = 0    (3)

is satisfied. Here \phi, \psi, and the L_i are given functions of their arguments, \psi generally being a vector and the remaining functions scalars. The subscript t_f denotes evaluation at t = t_f.

An iterative numerical technique for solving this problem will be developed by using the Newton-Raphson method to derive an algorithm which will find a trajectory satisfying the necessary conditions of the calculus of variations [3]. The same technique may be obtained by using the approach of [4] and [5] in which a second order expansion of the cost is minimized to obtain the algorithm. In either case it should be recognized that without step size control the resulting method will find stationary trajectories rather than strictly minimizing ones. The reason in the case of [4] and [5] is that the auxiliary problem is solved using the necessary conditions of the calculus of variations, which merely yields stationary solutions if step size control is not used. This point will be illustrated with an example in Chapter V.

It is assumed that the number of subarcs (the value of m), and the nature of the subarcs is known a priori. This is not felt to be a severe limitation since second order algorithms of this type are notorious for
their tendency to diverge if not carefully employed. A good initial guess and careful application of step size control is usually required for success, so a great deal of information about the problem is apt to be required at the outset in any case. In the solution of CVIC and SVIC problems it may be possible to arrange the computations in such a way that the algorithm will determine the sequence of subarcs as the iterations are carried out. The possibility is discussed in Chapters III and IV.

Necessary Conditions

Necessary conditions for this problem are readily obtained from the calculus of variations so only a brief outline of their derivation will be given here [13]. Use Lagrange multipliers \lambda and \nu to write the augmented form of (2) as

    J' = \Phi + \sum_{i=1}^{m} \int_{t_{i-1}}^{t_i} [H(x,u,\lambda,t) - \lambda^T \dot{x}] \, dt    (4)

where

    \Phi \triangleq [\phi(x,t) + \nu^T \psi(x,t)]_{t_f}    (5)

    H(x,u,\lambda,t) \triangleq L(x,u,t) + \lambda^T f(x,u,t)    (6)

The subscripts i have been dropped in the above and will not appear in subsequent expressions unless they are
necessary for clarity. Taking the first variation of (4), integrating the result by parts, and using \delta x = dx - \dot{x} \, dt at each time t_i gives

    dJ' = [\Phi_x dx + \Phi_t dt]_{t_f} + \sum_{i=1}^{m} [H \, dt - \lambda^T dx]_{t_{i-1}+}^{t_i-} + \sum_{i=1}^{m} \int_{t_{i-1}}^{t_i} [(H_x + \dot{\lambda}^T)\delta x + H_u \delta u] \, dt    (7)

where subscripts t, x, and u indicate partial differentiation with respect to t, x, and u, respectively. Now use dx(t_0) = 0 and dt_0 = 0 to rearrange the sum in (7) and write

    dJ' = [(\Phi_x - \lambda^T)dx + (\Phi_t + H)dt]_{t_f} - \sum_{i=1}^{m} D_i(H \, dt - \lambda^T dx) + \sum_{i=1}^{m} \int_{t_{i-1}}^{t_i} [(H_x + \dot{\lambda}^T)\delta x + H_u \delta u] \, dt    (8)

where the operator D_i, operating on some function of time, say g(t), is defined by

    D_i(g) = g(t_i+) - g(t_i-)    (9)

Then from (8) the necessary conditions (in addition to (1) and (3)) are

    \dot{\lambda} = -H_x^T    (10)
    \lambda(t_f) = \Phi_x^T    (11)

    D_i(\lambda) = 0    (12)

    D_i(H) = 0    (13)

    \Omega \triangleq [H + \Phi_t]_{t_f} = [L + \phi_t + \nu^T \psi_t]_{t_f} = 0    (14)

    H_u = 0    (15)

The second form of (14) follows from the definitions of H and \Phi. Equations (10), (11), (14), and (15) are the conditions encountered in problems without discontinuities. The additional conditions, (12) and (13), simply require \lambda and H to be continuous at the corners.

An algorithm to find a trajectory which satisfies these conditions can be developed in at least two ways. One approach is to solve (15) for the control and use the result to eliminate u from the remaining conditions. The resulting reduced set of conditions can then be met by an iterative method as is done by Schley and Lee [14]. However in that case step size control may be more difficult to achieve so the approach of McReynolds and Bryson in [3] will be followed. The step size control and conjugate point protection schemes of [4] will then apply. The backward sweep method will be used to solve the associated linear two point boundary value problem and the results will
be put in a form that yields feedback gains for use on neighboring trajectories.

Expansion of the Necessary Conditions

Proceed as in [3] and assume the following steps are taken:

1. Guess u(t), \nu, and m values of t_i.
2. Use these to integrate (1) forward from t_0 to t_f to obtain a nominal trajectory.
3. Integrate (10) backwards from t_f to t_0 using (11), (12), and necessary values from the nominal trajectory.

Then (1) and (10) through (12) will be satisfied but in general (13) through (15) will not. Therefore consider first order expansions of the necessary conditions about the nominal trajectory with the object of finding changes, \delta u and dt_i, which will yield a new trajectory for which (13) through (15) are more nearly satisfied. Expanding H_u gives

    \delta H_u^T = H_{ux} \delta x + H_{uu} \delta u + f_u^T \delta\lambda    (16)

In an attempt to drive H_u to zero and satisfy (15) on the next iteration take \delta H_u = -H_u in (16) and solve the result for \delta u to obtain

    \delta u = -H_{uu}^{-1} (H_{ux} \delta x + f_u^T \delta\lambda + H_u^T)    (17)
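As a purely illustrative sketch of (17): in code the correction is best computed with a linear solve rather than an explicit inverse. The function and argument names below are inventions for the example, and the optional matrix N anticipates the step size control scheme discussed next.

```python
import numpy as np

def control_correction(Huu, Hux, fu, HuT, dx, dlam, N=None):
    """Evaluate (17), du = -(Huu + N)^{-1}(Hux dx + fu^T dlam + Hu^T),
    with an optional positive definite N for step size control.
    HuT is passed as the column H_u^T; a linear solve replaces the
    explicit inverse of H_uu."""
    lhs = Huu if N is None else Huu + N
    return -np.linalg.solve(lhs, Hux @ dx + fu.T @ dlam + HuT)

# scalar check: Huu = 2, all gradients zero except Hu = 0.5, gives du = -0.25;
# adding N = 2 shrinks the step to -0.125
du = control_correction(np.array([[2.0]]), np.array([[1.0]]),
                        np.array([[1.0]]), np.array([0.5]),
                        np.array([0.0]), np.array([0.0]))
```

The linear solve also keeps the computation well behaved when H_uu is poorly conditioned, which is one practical motivation for the additive N described below.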
To ensure that \delta u is in the direction of decreasing cost, and to obtain control of the magnitude of \delta u, it is recommended that the step size control scheme of [4] be used. This is accomplished by adding a positive definite symmetric matrix, say N, to H_{uu} in (17). The matrix N should be chosen to ensure that (H_{uu} + N) remains positive definite and that \delta u remains small enough for the expansions to be sufficiently accurate for all iterations. As convergence is approached and H_u is driven to zero, the value of N should be decreased until it reaches zero for the last few iterations. See [4] for further details. For convenience the matrix N will not be written in any of the equations which follow. For purposes of implementation one simply adds N to H_{uu} wherever H_{uu} appears in the following equations.

If this step size control scheme is implemented the term H_{uu}^{-1} H_u^T in (17) will be forced to remain small, either because N is large for the earlier iterations, or else because H_u becomes small as convergence is approached. As a result the product H_{uu}^{-1} H_u^T will be treated as a first order term throughout the remainder of this development. This is desirable since it results in great simplifications in the expressions which follow. Putting (17) into first order expansions of (1) and (10) gives
    \delta\dot{x} = A \delta x + B \delta\lambda + c,    \delta x(t_0) = 0    (18)

    \delta\dot{\lambda} = D \delta x - A^T \delta\lambda + e    (19)

where

    A = f_x - f_u H_{uu}^{-1} H_{ux}    (20)

    B = -f_u H_{uu}^{-1} f_u^T    (21)

    c = -f_u H_{uu}^{-1} H_u^T    (22)

    D = -(H_{xx} - H_{xu} H_{uu}^{-1} H_{ux})    (23)

    e = H_{xu} H_{uu}^{-1} H_u^T    (24)

Next expanding (11), (14), and (3) to first order gives

    d\lambda(t_f) = [\Phi_{xx} dx + \psi_x^T d\nu + \Phi_{xt} dt]_{t_f}    (25)

    d\Omega = [\Omega_x dx + \dot{\psi}^T d\nu + \Omega_t dt]_{t_f}    (26)

    d\psi = [\psi_x dx + \psi_t dt]_{t_f}    (27)

With certain modifications to follow, (25) will provide terminal conditions for (19), (26) will yield an expression for dt_f, and (27) will aid in eliminating d\nu. Note that on any given iteration \psi and \Omega may not be zero so it will be desirable to assign values to d\psi and d\Omega which will force \psi and \Omega to be more nearly equal to zero
on the next iteration. Hence d\psi and d\Omega in (26) and (27) should be regarded as known.

To obtain an expression which is useful in transforming (25) through (27) into more desirable forms use (14) and (5) to write

    \Omega = [\Phi_t + L + \Phi_x f]_{t_f}    (28)

whose partial derivative with respect to x is

    \Omega_x = [\Phi_{tx} + L_x + f^T \Phi_{xx} + \Phi_x f_x]_{t_f}    (29)

Now using (10) and (11) gives

    \Omega_x = [\Phi_{tx} + \dot{x}^T \Phi_{xx} - \dot{\lambda}^T]_{t_f}    (30)

Next note that the final total variations in x and \lambda are given by

    dx(t_f) = [\delta x + \dot{x} \, dt]_{t_f}    (31)

    d\lambda(t_f) = [\delta\lambda + \dot{\lambda} \, dt]_{t_f}    (32)

Putting (31) into (26) and solving for dt_f gives

    dt_f = [(d\Omega - \Omega_x \delta x - \dot{\psi}^T d\nu)/\dot{\Omega}]_{t_f}    (33)
Now use (30) through (33) to eliminate dt_f, dx(t_f), and d\lambda(t_f) from (25) and (27) to obtain

    \delta\lambda(t_f) = [(\Phi_{xx} - \Omega_x^T \Omega_x/\dot{\Omega})\delta x + (\psi_x - \dot{\psi}\Omega_x/\dot{\Omega})^T d\nu + \Omega_x^T d\Omega/\dot{\Omega}]_{t_f}    (34)

    d\psi = [(\psi_x - \dot{\psi}\Omega_x/\dot{\Omega})\delta x - \dot{\psi}\dot{\psi}^T d\nu/\dot{\Omega} + \dot{\psi} \, d\Omega/\dot{\Omega}]_{t_f}    (35)

These are the desired forms for \delta\lambda(t_f) and d\psi for use in the backward sweep solution of the two point boundary value problem given by (18) and (19).

Jump Conditions

Though (18) and (19) describe the behavior of \delta x and \delta\lambda along any subarc, correct results cannot be obtained from these unless jumps are introduced into their solutions at the corner times. The need for these is illustrated by Figure 1 which shows the behavior of one element of x(t) for the nominal trajectory and for a neighboring trajectory near a corner. Both the nominal and neighboring trajectory as well as their difference are continuous. However (18) is an expansion of (1) and (1) is discontinuous at t_i. Hence (18) may have one form to the left of t_i and another to the right. Then if its solution is to be correct beyond t_i it is necessary that the jump shown in the figure as D_i(\delta x) be introduced into \delta x(t) at t_i.
Figure 1. An illustration of the jump in \delta x(t) at a corner (one element of x(t) is shown for the nominal and neighboring trajectories near t_i and t_i + dt_i).
To determine D_i(\delta x) note that at point 1 of the figure

    x(point 1) = x_i + \delta x_i^-    (36)

with x_i \triangleq x(t_i) and \delta x_i \triangleq \delta x(t_i). Equation (36) may be extrapolated to point 2 with first order accuracy as

    x(point 2) = x_i + \delta x_i^- + \dot{x}_i^- dt_i = x_i + \delta x_i^- + f_i^- dt_i    (37)

For point 3

    x(point 3) = x_i + \delta x_i^+

which extrapolates to point 2 as

    x(point 2) = x_i + \delta x_i^+ + f_i^+ dt_i    (38)

Equating (37) to (38) to find the required jump in \delta x gives

    D_i(\delta x) = -D_i(f) \, dt_i    (39)

Proceeding in the same way for \delta\lambda gives

    D_i(\delta\lambda) = D_i(H_x^T) \, dt_i    (40)

Note that as the algorithm converges, the dt_i must go to zero so the jumps will also.
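In an implementation, (39) and (40) amount to a single array update applied to the stored variations each time the integration crosses a corner. A minimal sketch (function and argument names are inventions for the example):

```python
import numpy as np

def corner_jumps(f_minus, f_plus, Hx_minus, Hx_plus, dt_i):
    """Jumps (39)-(40) to add to dx and dlam when crossing corner i,
    given the left/right limits of f and H_x (as 1-D arrays) and the
    current corner-time change dt_i:
        D_i(dx)   = -D_i(f)    * dt_i
        D_i(dlam) = +D_i(Hx^T) * dt_i,   where D_i(g) = g(t_i+) - g(t_i-)."""
    return -(f_plus - f_minus) * dt_i, (Hx_plus - Hx_minus) * dt_i

# e.g. a dynamics jump from (1,0) to (0,1) with dt_i = 0.1
# shifts dx by (0.1, -0.1)
jdx, jdl = corner_jumps(np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                        np.array([0.0, 0.0]), np.array([2.0, 0.0]), 0.1)
```

Consistent with the remark above, both jumps scale with dt_i, so they vanish as the corner-time estimates converge.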
The Corner Times

Now condition (13), which requires the Hamiltonian to be continuous at the corners, will be used to derive expressions for computing improved values of the corner times. Changes in the t_i will be made so as to force H to be more nearly continuous on the next iteration. On the neighboring trajectory the Hamiltonian will satisfy

    H_i^- = [H + H_x \delta x + H_u \delta u + f^T \delta\lambda + \dot{H} \, dt_i]_i^-    (41)

    H_i^+ = [H + H_x \delta x + H_u \delta u + f^T \delta\lambda + \dot{H} \, dt_i]_i^+    (42)

Then to first order the jump in H on the neighboring trajectory will be

    D_i(H + H_x \delta x + H_u \delta u + f^T \delta\lambda + \dot{H} \, dt_i)    (43)

Using (17) to eliminate \delta u gives

    D_i(H + H_x \delta x + f^T \delta\lambda - H_u H_{uu}^{-1} H_u^T + \dot{H} \, dt_i)    (44)

where the term H_u H_{uu}^{-1} H_u^T has been treated as being of first order in making the substitution. If a and b are functions of time it may easily be verified that

    D_i(ab) = a_i^+ D_i(b) + D_i(a) b_i^-    (45)
Using (39), (40), and (45) in setting (44) equal to zero gives the desired expression for the change in t_i as

    [(H_x)_i^+ D_i(f) - (f^T)_i^+ D_i(H_x^T) - D_i(\dot{H})] \, dt_i = D_i(H_x)\delta x_i^- + D_i(f^T)\delta\lambda_i^- + D_i(H - H_u H_{uu}^{-1} H_u^T)    (46)

Step size control for the corner times can be obtained by adding a positive constant to the denominator of (46). The constant should be large enough to ensure that the resulting denominator will be positive. This is analogous to the step size control scheme used for the control in [4] and [5]. The method will be illustrated with an example in Chapter V.

Solution of the Boundary Value Problem

The solution of (18) and (19) subject to the various terminal conditions can be obtained by backward sweep using the usual Riccati transformation

    \delta\lambda = P \delta x + R \, d\nu + q    (47)

    d\psi = R^T \delta x + M \, d\nu + h    (48)

where P, R, q, M, and h are time varying matrices of appropriate dimensions. In some approaches dt_f is also swept backwards, as for example in [3], but this requires the evaluation of more matrices and will not be done here.
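As a numerical sketch of the backward sweep this transformation leads to, the fragment below integrates a matrix Riccati equation of the form \dot{P} = -PA - A^TP - PBP + D backwards in time. The Euler rule, the function names, and the scalar check are all illustrative assumptions chosen for clarity, not the integration scheme of the dissertation.

```python
import numpy as np

def backward_sweep_P(A, B, D, P_tf, tf, steps=2000):
    """Integrate Pdot = -P A - A^T P - P B P + D backwards from t_f to 0
    with a plain Euler rule.  A, B, D are callables of t returning (n,n)
    arrays and P_tf is the terminal condition.  Returns P on the grid,
    index 0 corresponding to t = 0."""
    h = tf / steps
    P = np.asarray(P_tf, dtype=float)
    out = [P]
    for k in range(steps):
        t = tf - k * h
        Pdot = -P @ A(t) - A(t).T @ P - P @ B(t) @ P + D(t)
        P = P - h * Pdot          # one Euler step backwards in time
        out.append(P)
    return out[::-1]

# scalar check: A = 0, B = -1, D = -1 gives Pdot = P**2 - 1, whose
# backward solution from P(1) = 0 is P(t) = tanh(1 - t)
A = lambda t: np.zeros((1, 1))
B = lambda t: np.array([[-1.0]])
D = lambda t: np.array([[-1.0]])
Ps = backward_sweep_P(A, B, D, np.zeros((1, 1)), 1.0)
```

The closed-form tanh solution of the scalar case makes a convenient unit test for any sweep routine before the full matrix problem is attempted.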
Putting (47) and (18) into (19) to eliminate \delta\lambda and \delta\dot{x} leads to

    (\dot{P} + PA + A^T P + PBP - D)\delta x + [\dot{R} + (A^T + PB)R] d\nu + \dot{q} + (A^T + PB)q + Pc - e = 0    (49)

Equating the time derivative of (48) to zero and using (47) and (18) to eliminate \delta\lambda and \delta\dot{x} from the result yields

    (\dot{R}^T + R^T A + R^T BP)\delta x + (\dot{M} + R^T BR) d\nu + \dot{h} + R^T(c + Bq) = 0    (50)

Then (49) and (50) are satisfied for all \delta x and d\nu if

    \dot{P} = -PA - A^T P - PBP + D,    P(t_f) = [\Phi_{xx} - \Omega_x^T \Omega_x/\dot{\Omega}]_{t_f}    (51)

    \dot{R} = -(A^T + PB)R,    R(t_f) = [(\psi_x - \dot{\psi}\Omega_x/\dot{\Omega})^T]_{t_f}    (52)

    \dot{q} = -(A^T + PB)q - Pc + e,    q(t_f) = [\Omega_x^T d\Omega/\dot{\Omega}]_{t_f}    (53)

    \dot{M} = -R^T BR,    M(t_f) = -[\dot{\psi}\dot{\psi}^T/\dot{\Omega}]_{t_f}    (54)

    \dot{h} = -R^T(c + Bq),    h(t_f) = [\dot{\psi} \, d\Omega/\dot{\Omega}]_{t_f}    (55)

where the terminal conditions for (51) through (55) have been selected such that (47) and (48) agree with (34) and (35) at t = t_f. By retracing the previous steps it may be verified that for problems with fixed terminal time the
conditions become P(t_f) = \Phi_{xx}, R(t_f) = \psi_x^T, M(t_f) = 0, and h(t_f) = 0.

Solving (48) for d\nu gives

    d\nu = M^{-1}(d\psi - R^T \delta x - h)    (56)

so (47) can be written as

    \delta\lambda = (P - RM^{-1}R^T)\delta x + RM^{-1}(d\psi - h) + q    (57)

Putting (57) into (17) then gives \delta u in feedback form as

    \delta u = -H_{uu}^{-1}\{[H_{ux} + f_u^T(P - RM^{-1}R^T)]\delta x + f_u^T[RM^{-1}(d\psi - h) + q] + H_u^T\}    (58)

while using (56) in (33) yields

    dt_f = [(d\Omega - \Omega_x \delta x - \dot{\psi}^T M^{-1}(d\psi - R^T\delta x - h))/\dot{\Omega}]_{t_f}    (59)

Note from the terminal condition for (54) that M(t_f) has a rank of at most one and will be singular if the dimension of \psi exceeds one, if \dot{\psi} = 0, or if the terminal time is fixed. If this is the case equations (56) through (59) are useless at t = t_f. The simplest solution is to apply (56) at t = t_0 to find d\nu. Then (47) and (17) may be combined to give
    \delta u = -H_{uu}^{-1}[(H_{ux} + f_u^T P)\delta x + f_u^T(R \, d\nu + q) + H_u^T]    (60)

which may be used to obtain \delta u when d\nu is known. Then (33) can be used for dt_f. For the computation of optimal trajectories the results obtained from (58) and (59) should be the same as those obtained from (60) and (33) in those cases where M(t_f) is nonsingular. However if the results are to be used for neighboring optimal control then (58) and (59) are the desirable equations since these make use of an up to date estimate of d\nu. Of course if M(t_f) is singular then (58) and (59) cannot be used for neighboring optimal control except in those cases where the terminal time is fixed and high feedback gains can be tolerated near the terminal time. The problem can be avoided by using penalty functions to meet the terminal constraint since matrix M will then not be needed. The point will be discussed again in the following chapter.

The remaining expressions to be found are those giving the dt_i in feedback form and jump conditions for the matrices in (47) and (48). To obtain these rewrite (46) as

    [(H_x)_i^+ D_i(f) - (f^T)_i^+ D_i(H_x^T) - D_i(\dot{H})] \, dt_i = D_i(H_x)\delta x_i^- + D_i(f^T)\delta\lambda_i^- + D_i(H - H_u H_{uu}^{-1} H_u^T)    (61)

From (47), (39), and (40)

    \delta\lambda_i^- = P_i^+ \delta x_i^- - [P_i^+ D_i(f) + D_i(H_x^T)] \, dt_i + R_i^+ d\nu + q_i^+    (62)
Putting (62) into (61) then gives an expression for dt_i which can be written as

    dt_i = [g^T \delta x_i^- + D_i(f^T)(R_i^+ d\nu + q_i^+) + D_i(H - H_u H_{uu}^{-1} H_u^T)]/\alpha    (63)

where

    \alpha = (H_x)_i^+ D_i(f) - (f^T)_i^- D_i(H_x^T) - D_i(\dot{H}) + D_i(f^T) P_i^+ D_i(f)    (64)

and

    g = D_i(H_x^T) + P_i^+ D_i(f)    (65)

But from (45) and (47)

    D_i(\delta\lambda) = P_i^+ D_i(\delta x) + D_i(P)\delta x_i^- + D_i(R) d\nu + D_i(q)    (66)

which can be rewritten as

    g \, dt_i = D_i(P)\delta x_i^- + D_i(R) d\nu + D_i(q)    (67)

by using (39) and (40). Substituting (63) into (67) then gives

    g[g^T \delta x_i^- + D_i(f^T)(R_i^+ d\nu + q_i^+) + D_i(H - H_u H_{uu}^{-1} H_u^T)] = \alpha[D_i(P)\delta x_i^- + D_i(R) d\nu + D_i(q)]    (68)

Then for (68) to be satisfied for all \delta x_i^- and d\nu the matrices P, R, and q must jump at the corner times by amounts
    D_i(P) = g g^T/\alpha    (69)

    D_i(R) = g D_i(f^T) R_i^+/\alpha    (70)

    D_i(q) = g[D_i(f^T) q_i^+ + D_i(H - H_u H_{uu}^{-1} H_u^T)]/\alpha    (71)

To obtain jump conditions for M and h use (39) and (45) to equate D_i(d\psi) as given by (48) to zero and write the result as

    D_i(R^T)\delta x_i^- + D_i(M) d\nu + D_i(h) = (R_i^+)^T D_i(f) \, dt_i    (72)

Again using (63) for dt_i gives

    \alpha[D_i(R^T)\delta x_i^- + D_i(M) d\nu + D_i(h)] = (R_i^+)^T D_i(f)[g^T \delta x_i^- + D_i(f^T)(R_i^+ d\nu + q_i^+) + D_i(H - H_u H_{uu}^{-1} H_u^T)]    (73)

Then for (73) to be satisfied for all \delta x_i^- and d\nu the jumps in M and h must be

    D_i(M) = (R_i^+)^T D_i(f) D_i(f^T) R_i^+/\alpha    (74)

    D_i(h) = (R_i^+)^T D_i(f)[D_i(f^T) q_i^+ + D_i(H - H_u H_{uu}^{-1} H_u^T)]/\alpha    (75)

Notice that the jump in R given by (70) is the same as that required to satisfy (73). From (51), (54), (69), and (74) it may be seen that matrices P and M are symmetric.
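In code the five jump relations can be applied in one step during the backward sweep. The helper below is a sketch with invented names; it takes the right-side (+) values of the sweep matrices together with the corner quantities g, D_i(f), D_i(H - H_u H_uu^{-1} H_u^T), and the scalar \alpha, and returns the left-side (-) values.

```python
import numpy as np

def corner_sweep_jumps(P, R, q, M, h, g, Df, Dterm, alpha):
    """Apply (69)-(71) and (74)-(75) at a corner during the backward
    sweep.  Inputs are the right-side (+) values; outputs are the
    left-side (-) values, X_minus = X_plus - D_i(X)."""
    s = Df @ q + Dterm                  # common scalar of (71) and (75)
    RtDf = R.T @ Df                     # (R+)^T D_i(f), a p-vector
    DP = np.outer(g, g) / alpha         # (69)
    DR = np.outer(g, Df @ R) / alpha    # (70)
    Dq = g * s / alpha                  # (71)
    DM = np.outer(RtDf, RtDf) / alpha   # (74)
    Dh = RtDf * s / alpha               # (75)
    return P - DP, R - DR, q - Dq, M - DM, h - Dh

# small dimensional check with n = 2 states and a scalar constraint
Pm, Rm, qm, Mm, hm = corner_sweep_jumps(
    np.eye(2), np.array([[1.0], [1.0]]), np.array([0.5, 0.0]),
    np.array([[2.0]]), np.array([1.0]),
    g=np.array([1.0, 2.0]), Df=np.array([1.0, 0.0]), Dterm=0.0, alpha=2.0)
```

Note that D_i(P) and D_i(M) are outer products of a single vector with itself, which is the computational face of the symmetry remark above.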
To derive an expression for dt_i in feedback form put (57) into (46) to get

    [(H_x)_i^+ D_i(f) - (f^T)_i^+ D_i(H_x^T) - D_i(\dot{H})] \, dt_i = [D_i(H_x) + D_i(f^T)(P - RM^{-1}R^T)_i^-]\delta x_i^- + D_i(f^T)[RM^{-1}(d\psi - h) + q]_i^- + D_i(H - H_u H_{uu}^{-1} H_u^T)    (76)

In most cases M(t_i) will be nonsingular but if this is not the case a useful expression for dt_i can be obtained by combining (47) and (46) to get

    [(H_x)_i^+ D_i(f) - (f^T)_i^+ D_i(H_x^T) - D_i(\dot{H})] \, dt_i = [D_i(H_x) + D_i(f^T)P_i^-]\delta x_i^- + D_i(f^T)(R_i^- d\nu + q_i^-) + D_i(H - H_u H_{uu}^{-1} H_u^T)    (77)

The required d\nu for use in (77) may be obtained from (56) using conditions at t = t_0. Equation (76) is the preferred relation for use in neighboring optimal control since it, like (58) and (59), makes use of an up-to-date estimate of d\nu. This completes the derivation of the equations needed for the algorithm. Implementation of the algorithm for the solution of a general problem is discussed in the next section.
Implementation of the Algorithm

The algorithm may be implemented as described in the following steps:

1. Guess u(t), \nu, and m values of t_i.

2. Integrate (1) forward from t_0 to t_f to obtain the nominal states. In doing so compute the cost, J, and evaluate \psi and \Omega.

3. Integrate (10) backwards from t_f to t_0 using (11), (12), and the nominal states to obtain \lambda(t).

4. Integrate (51) through (55) backwards from t_f to t_0 to obtain the matrices P, R, q, M, and h. At each corner time, t_i, jump the matrices in accordance with (69), (70), (71), (74), and (75). Since P and M are symmetric all their elements need not be evaluated.

5. Choose values of d\psi and d\Omega which will cause (\psi + d\psi) and (\Omega + d\Omega) to be more nearly zero. Using values at t = t_0 compute d\nu from (56) and replace \nu with (\nu + d\nu).

6. Obtain the new cost and new nominal states by integrating (1) and (2) forward from t_0 to t_f. For the new control use (u + \delta u) where \delta u is given by (60). At each corner time compute dt_i from (77) and jump the states by the amount D_i(\delta x) as given by (39). Then integrate to the next corner.

7. At t = t_f compute dt_f from (33) and replace all t_i by (t_i + dt_i). Evaluate \psi and \Omega. See the comments below for the case where dt_f is positive.

8. Decide whether to continue iterating by comparing the cost with that from the last iteration and examining \psi and \Omega to see that they are sufficiently near zero. If the change in cost is small enough and if \psi and \Omega are satisfactory, stop. To continue go to step 3.
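To make the flow of these steps concrete, here is a minimal self-contained version of the iteration for an illustrative scalar problem that is an assumption of this sketch, not an example from the text: \dot{x} = u, L = (x^2 + u^2)/2, x(0) = 1, fixed t_f = 1, a single subarc, and no terminal constraint, so steps 5 and 7 drop out and only the P and q sweeps survive. Euler integration and the grid size are likewise illustrative choices.

```python
import numpy as np

n, tf, x0 = 200, 1.0, 1.0
h = tf / n
u = np.zeros(n)                                  # step 1: guess u(t)

for _ in range(20):
    # step 2: nominal states and cost by forward Euler integration of (1)
    x = np.empty(n + 1); x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + h * u[k]
    J = 0.5 * h * np.sum(x[:-1] ** 2 + u ** 2)

    # step 3: adjoint from (10)-(11); here lamdot = -x with lam(tf) = 0
    lam = np.empty(n + 1); lam[-1] = 0.0
    for k in range(n, 0, -1):
        lam[k - 1] = lam[k] + h * x[k]

    # step 4: backward sweep; for this problem Pdot = P**2 - 1 and
    # qdot = P*(q + u + lam), with P(tf) = q(tf) = 0
    P = np.empty(n + 1); q = np.empty(n + 1)
    P[-1] = q[-1] = 0.0
    for k in range(n, 0, -1):
        P[k - 1] = P[k] - h * (P[k] ** 2 - 1.0)
        q[k - 1] = q[k] - h * P[k] * (q[k] + u[k - 1] + lam[k])

    # step 6: forward pass applying du = -(dlam + u + lam), dlam = P dx + q
    dx = 0.0
    for k in range(n):
        du = -(P[k] * dx + q[k] + u[k] + lam[k])
        u[k] += du
        dx += h * du                             # xdot = u, so d(xdot) = du
```

On this linear-quadratic problem the iteration settles near the known optimum with cost tanh(1)/2 and H_u = u + \lambda driven toward zero. On a genuinely discontinuous problem, the corner-time and state-jump logic of steps 6 and 7 would be added to the same loop.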
A problem will arise if dt_f computed in step 7 is positive since there will be no nominal states beyond the old t_f for use in steps 3 and 4. The needed states for times beyond the old t_f may be computed by integrating (1) and (10) together forward from the old t_f to the new, after using H_u = 0 to eliminate u. The starting value of \lambda used should be the last \lambda at the old t_f plus \delta\lambda computed from (47) for conditions at the old t_f.

The analytical work and programming effort involved in applying this algorithm to a given problem is likely to be quite tedious so it may be advisable to consider the use of penalty functions for meeting the terminal constraint, \psi = 0. Matrices M, R, and h will then not be needed and considerable simplification will result.

Zermello's Problem

A simple example of a discontinuous problem which illustrates a possible difficulty is the following variation of Zermello's Problem. In Zermello's Problem the object is to find the optimum steering direction to bring a boat from some initial point to a desired final point in minimum time. The boat moves with constant speed relative to the water and there is usually a current, which will be ignored in this case. The x and y coordinates of the boat will therefore be assumed to satisfy
    \dot{x} = \cos(u),    x(0) = 2
    \dot{y} = \sin(u),    y(0) = 0

The object is to bring the boat to the terminal point in minimum time, t_f. In order to introduce a discontinuity assume that the inequality constraint

    x^2 + y^2 \geq 1

must be satisfied. The constraint represents an island, a circle of unit radius centered at the origin, around which the boat must travel to reach its destination. The solution to the problem may readily be seen to be the path shown in Figure 2. The first subarc consists of the straight line from the initial point, (2,0), tangent to the circle at time t_1. The second subarc follows the island and reaches the terminal point, (-1,0), at time t_f.

This problem is an SVIC problem which can be converted to one having a discontinuity by the following artifice. Use x to replace time as the independent variable, let x_1 be the value of x at the corner, and replace the equations of motion by

    dy/dx = u,    x_1 < x < 2

    dy/dx = -x/(1 - x^2)^{1/2},    -1 < x < x_1
Figure 2. Zermello's Problem with an island.
where u is a new control. This choice of the second subarc differential equation has the solution

    y(x) = (1 - x^2)^{1/2} + y(x_1) - (1 - x_1^2)^{1/2}

which may be written as

    x^2 + [y - y(x_1) + (1 - x_1^2)^{1/2}]^2 = 1

This is a circle of unit radius centered on the y axis a distance y(x_1) - (1 - x_1^2)^{1/2} above the origin. Consequently when the terminal constraint y(-1) = 0 is met the second subarc will be the desired portion of the circle. Minimum travel time may be achieved by minimizing distance traveled so take

    J = \int_{-1}^{2} [1 + (dy/dx)^2]^{1/2} \, dx

Application of the algorithm to this problem gave the sequence of values shown in Table 1. The initial guesses used were u = 0 for the control, x_1 = 0.7 for the value of x at the corner, and \nu = 0 for the terminal value of the Lagrange multiplier. It happens that if the initial guess for the first subarc control is a constant, then the algorithm yields a sequence of constant u's for
TABLE 1
Progress of the Algorithm in Solving Zermello's Problem

Iteration      u          x1         v          y(x1)      y(-1)        Cost
    0       0           0.7        0           0          0.714143     4.44159
    1       0.510187    0.751928   0.510187    0.636750   0.022495     3.82290
    2       0.557062    0.645859   0.487589    0.754340   9.2x10^-3    3.82302
    3       0.628268    0.534122   0.534122    0.920964   7.5x10^-2    3.86544
    4       0.596101    0.534868   0.512458    0.873366   2.8x10^-2    3.84084
    5       0.577285    0.511788   0.500106    0.859123   1.1x10^-5    3.82645
    6       0.577700    0.500227   0.500227    0.866418   5.2x10^-4    3.82671
    7       0.577423    0.500179   0.500047    0.866031   1.1x10^-4    3.83650
    8       0.577350    0.500000   0.500000    0.866025   1.7x10^-8    3.82645
the first subarc. Hence the control values shown in the table apply throughout the first subarc. As may be seen from the table, convergence was obtained in eight iterations. No step size control was used in this run.

This problem has a behavior that may be rather common and which may lead to difficulty if precautions are not taken. The denominator of the expression which gives the change in the corner location (dt_i in general, or dx_1 in the present case) has as a factor the term (\nu - x_1). Note from the table that \nu = x_1 at convergence so the denominator (as well as the numerator) of one of the equations is going to zero as the algorithm converges. Difficulty from this source was avoided by testing the denominator and setting it equal to a small value (10^{-7}) if it was found to be zero. It is believed that such a simple precaution will be adequate in most cases since the difficulty does not occur until convergence has essentially been obtained.

It is also felt that this occurs rather frequently in discontinuous problems. For example Dyer and McReynolds in [11] devote almost an entire column of a transactions paper to a discussion of a similar problem with their algorithm. The effect seems to arise because of smoothness of the Hamiltonian along the optimal trajectory at the corner times. As a result of this smoothness the
expansions for the jump in H, which were used in deriving expression (46) for the dt_i, break down at convergence. This point will be discussed again in connection with an example in Chapter IV.
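The converged values in Table 1 can be checked independently against the geometry of the solution path: the straight subarc must be tangent to the island, which fixes x_1 = 1/2, y(x_1) = \sqrt{3}/2, and a total length of \sqrt{3} + 2\pi/3. The fragment below is purely a geometric check, not part of the algorithm.

```python
import numpy as np

# Tangency of the straight subarc from (2,0) to the island fixes the
# corner: the chord to the touch point (x1, y1) must be perpendicular
# to the radius there, i.e. (x1 - 2)*x1 + y1*y1 = 1 - 2*x1 = 0.
x1 = 0.5
y1 = np.sqrt(1.0 - x1 ** 2)             # sqrt(3)/2 = 0.866025..., cf. Table 1

tangent_len = np.hypot(2.0 - x1, y1)    # straight subarc length, sqrt(3)
arc_len = np.pi - np.arctan2(y1, x1)    # along the circle to (-1, 0): 2*pi/3
total = tangent_len + arc_len           # 3.826446, cf. the converged cost
```

The values x_1 = 0.5, y(x_1) = 0.866025, and cost 3.82645 in the final row of Table 1 agree with this geometry to the precision shown.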
CHAPTER III
APPLICATION TO SYSTEMS SUBJECT TO CONTROL VARIABLE INEQUALITY CONSTRAINTS

Systems which are not otherwise subject to discontinuities may become so if there is a requirement that a control variable inequality constraint, (CVIC), of the form

    C(x,u,t) \leq 0    (78)

be satisfied. In general C and u may be vectors but for simplicity both will be assumed to be scalars herein. The vector case is not fundamentally more complicated although the "bookkeeping" problems might become formidable. These problems are commonly approached by noting that on subarcs which lie along the constraint (78) becomes an identity

    C(x,u,t) = 0    (79)

which may be used to eliminate the control. After this elimination the state equations become discontinuous at the entry and exit corners of constrained subarcs and all equations become independent of u along such subarcs. The application of the algorithm to CVIC problems formulated
in this way is rather straightforward but the following comments are felt to be in order.

Along constrained subarcs all partials with respect to u vanish and the equations become much simpler. In particular (21) and (22) give B = 0 and c = 0 so (54) and (55) become

    \dot{M} = 0    (80)

    \dot{h} = 0    (81)

Hence it is not necessary to integrate M and h along constrained subarcs. However it has previously been mentioned that M(t_f) may be singular. If this is the case, and if the last subarc is along the constraint, then M will be singular throughout the last subarc.

In the case of bang-bang problems no control is involved (either because of the design of the system, or because the control has been eliminated using the maximum principle) and (80) and (81) hold for all subarcs. Then on the last subarc M will have a rank of at most one. Because of jump condition (74) the rank of M may increase by one at each corner moving back from t_f to t_0 until M attains full rank on some earlier subarc. This reflects the physical fact that the system is not controllable when there are insufficient corner times, or switch points, to satisfy the terminal constraint, \psi = 0.
This behavior of M is unfortunate if results from the algorithm are to be used for neighboring optimal control, particularly in the case of bang-bang problems. The algorithm may still be applied to determine the optimal trajectory by using (56) to determine d\nu at some time, say t', when M is nonsingular. (In the previous chapter t' = t_0 was suggested.) Then (60), (33), and (77) can be used to determine \delta u and the dt_i at times greater than t'. The same thing can be done for neighboring optimal control except that any disturbance to the system which occurs subsequent to t' cannot influence an estimate of d\nu which is made at t'. Then since \nu is the Lagrange multiplier associated with the terminal constraint, the control will be open loop after t', at least as far as meeting the terminal constraint is concerned. As previously noted equations (58), (59), and (76), which require the inverse of M, make use of a current estimate of d\nu and hence are preferable to (60), (33), and (77) for controller design.

In bang-bang problems with terminal constraints this unfortunate situation is likely to occur quite frequently. The simplest solution is to drop the philosophy of exactly meeting terminal constraints adopted in Chapter II and use penalty functions instead. Therefore a revised version of the algorithm for bang-bang problems using penalty functions to meet terminal constraints is given in the following section.
Bang-Bang Problems

No control, u, is involved so the only choice variables available to the designer are the corner times. In the absence of a control (18) and (19) reduce to

    \delta\dot{x} = f_x \delta x    (82)

    \delta\dot{\lambda} = -H_{xx}\delta x - f_x^T \delta\lambda    (83)

If penalty functions are used to meet terminal constraints then \psi and \nu do not appear, \Phi = \phi, and (34) becomes

    \delta\lambda(t_f) = [(\phi_{xx} - \Omega_x^T \Omega_x/\dot{\Omega})\delta x + \Omega_x^T d\Omega/\dot{\Omega}]_{t_f}    (84)

Then (47) takes the form

    \delta\lambda = P \delta x + q    (85)

and P and q are the only matrices needed. These may readily be shown to be given by

    \dot{P} = -P f_x - f_x^T P - H_{xx},    P(t_f) = [\phi_{xx} - \Omega_x^T \Omega_x/\dot{\Omega}]_{t_f}    (86)

    \dot{q} = -f_x^T q,    q(t_f) = [\Omega_x^T d\Omega/\dot{\Omega}]_{t_f}    (87)

with jump conditions

    D_i(P) = [D_i(H_x) + D_i(f^T)P_i^+]^T [D_i(H_x) + D_i(f^T)P_i^+] / [(H_x)_i^+ D_i(f) - (f^T)_i^- D_i(H_x^T) - D_i(\dot{H}) + D_i(f^T)P_i^+ D_i(f)]    (88)
    D_i(q) = [D_i(H_x) + D_i(f^T)P_i^+]^T [D_i(f^T) q_i^+ + D_i(H)] / [(H_x)_i^+ D_i(f) - (f^T)_i^- D_i(H_x^T) - D_i(\dot{H}) + D_i(f^T)P_i^+ D_i(f)]    (89)

Equations (76) and (33) for dt_i and dt_f reduce to

    [(H_x)_i^+ D_i(f) - (f^T)_i^+ D_i(H_x^T) - D_i(\dot{H})] \, dt_i = [D_i(H_x) + D_i(f^T)P_i^-]\delta x_i^- + D_i(f^T) q_i^- + D_i(H)    (90)

    dt_f = [(d\Omega - \Omega_x \delta x)/\dot{\Omega}]_{t_f}    (91)

The steps to be followed in applying this form of the algorithm to bang-bang problems become:

1. Use penalty functions to eliminate any terminal constraints which may be present.

2. Guess m values of t_i and the control polarities for each subarc.

3. Integrate (1) and (2) forward from t_0 to t_f to obtain the nominal states and cost. Evaluate \Omega.

4. Integrate (10) backwards from t_f to t_0 using (11), (12), and the nominal states to obtain \lambda(t).

5. Integrate (86) and (87) backwards from t_f to t_0 using jump conditions (88) and (89) at each corner to obtain P(t) and q(t).

6. Obtain the new cost and new nominal states by integrating (1) and (2) forward from t_0 to t_f. At each corner compute dt_i from (90), jump the states in accordance with (39), and continue to the next corner.

7. At t_f compute dt_f from (91), choosing d\Omega such that the transversality condition, \Omega = 0, is more nearly met. If dt_f is positive continue the integration to the new t_f = t_f + dt_f.
8. Replace all t_i with t_i + dt_i. Decide whether to continue iterating by comparing the new cost with that from the last iteration and noting whether \Omega is sufficiently near zero. If the change in cost is small enough and if \Omega has a satisfactory value, stop. If not continue by returning to step 4.

A Bang-Bang Example

As an illustration of the application of the algorithm to a bang-bang problem consider the following simple example. Let the state equations be

    \dot{x}_1 = x_2,    x_1(0) = 5/4
    \dot{x}_2 = u,    x_2(0) = 0

subject to |u| \leq 1. Find u(t) to minimize

    J = \int_0^3 |u| \, dt

subject to the terminal constraint

    x_1(3) = x_2(3) = 0

This is a simple double integrator plant which is to be driven to the origin at the terminal time, t_f = 3, with minimum fuel expenditure, \int |u| \, dt, subject to a control variable inequality constraint. The optimal control is of a bang-bang nature and may readily be shown to be
    u = -1        0 ≤ t < t1
    u =  0        t1 < t < t2
    u = +1        t2 < t ≤ 3

To apply the algorithm let K be the penalty function gain and add the penalty to the cost to obtain

    J = (K/2)[x1² + x2²]_{t_f} + ∫_{t_0}^{t_f} |u| dt

Now as K becomes large the terminal values of x1 and x2 should approach zero and approximately satisfy the terminal constraint. It can be shown that the solution to this modification of the problem is

    t1 = 0.5 - 0.875/K
    t2 = 2.5 + 2.375/K

at least to first order in K⁻¹.

Application of the algorithm to the modified problem yielded the sequence of values shown in Table 2. A penalty function gain of K = 10⁵ was used and the initial guesses were t1 = 0 and t2 = 3. Convergence was obtained in five iterations with t1 and t2 going to values that agree with
TABLE 2
Progress of the Algorithm in Solving the Bang-Bang Example

    Iteration Number    t1           t2           Cost
    0                   0            3            78125
    1                   0.347042     2.84576      5806.88
    2                   0.452092     2.61776      482.656
    3                   0.492628     2.51613      10.3454
    4                   0.499802     2.50044      1.00604
    5                   0.499991     2.50002      0.999984
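The arithmetic behind Table 2 can be spot-checked in closed form, since the control is piecewise constant. The sketch below (the `propagate` helper is illustrative) drives the plant with the control sequence u = -1, 0, +1 from x1(0) = 5/4 and checks both the exact switch times and the penalty-corrected cost:

```python
# Closed-form check of the bang-bang example.  With switch times 0.5 and
# 2.5 the double integrator reaches the origin exactly with fuel cost 1;
# with the first-order corner times t1 = 0.5 - 0.875/K, t2 = 2.5 + 2.375/K
# the penalized cost reproduces the converged value of Table 2.
def propagate(t1, t2, tf=3.0):
    x1, x2 = 1.25, 0.0
    x1 += x2 * t1 - 0.5 * t1 ** 2        # subarc 1: u = -1
    x2 -= t1
    x1 += x2 * (t2 - t1)                 # subarc 2: u = 0 (coast)
    tau = tf - t2
    x1 += x2 * tau + 0.5 * tau ** 2      # subarc 3: u = +1
    x2 += tau
    fuel = t1 + tau                      # integral of |u|
    return x1, x2, fuel

x1f, x2f, fuel = propagate(0.5, 2.5)
assert abs(x1f) < 1e-12 and abs(x2f) < 1e-12 and abs(fuel - 1.0) < 1e-12

K = 1e5
x1f, x2f, fuel = propagate(0.5 - 0.875 / K, 2.5 + 2.375 / K)
J = fuel + 0.5 * K * (x1f ** 2 + x2f ** 2)
assert abs(J - 0.999984) < 1e-5
```

With K = 10⁵ the computed cost is about 0.99998, in agreement with the final row of Table 2.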
their correct values as given above. The cost shown includes the penalty term. No step size control was used in this run. Phase plane plots for the trajectories generated on the first three iterations are shown in Figure 3.

Equation (90), the feedback control equation for t1 and t2, gave the following relation for dt1 at convergence (the relation for dt2 is analogous):

    dt1 = 0.500136 δx1(t1) + 1.00052 δx2(t1)

These may be used for feedback control on neighboring trajectories by forming switching functions defined by

    S1(t) = t - 0.5 - 0.5 δx1(t) - δx2(t)         t ≤ 0.5
          = t - 0.5 - 0.5 δx1(t1) - δx2(t1)       t > 0.5

    S2(t) = t - 2.5 - 0.4 δx1(t) - δx2(t)         t ≤ 2.5
          = t - 2.5 - 0.4 δx1(t2) - δx2(t2)       t > 2.5

where the gains have been rounded off to one-place accuracy. The first switch should occur when S1 becomes positive and the second when S2 does. The effectiveness of this feedback control in overcoming displacements in the initial state is illustrated in Figure 4. The curves give terminal radius,
[Figure 3. Phase plane plots (x2 versus x1) of the trajectories generated on iterations 1, 2, and 3 in solving the bang-bang example.]
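The switch-time feedback can be exercised in a simple simulation. The following sketch (Euler integration with an illustrative step size; δx is frozen at the nominal switch times, following the definitions of S1 and S2) displaces x1(0) by 0.1 and measures the terminal radius:

```python
# Neighboring-trajectory feedback for the bang-bang example: switch the
# control when S1, S2 cross zero instead of at fixed times.  Gains 0.5,
# 1.0 and 0.4, 1.0 are the rounded converged values.
def nominal(t):
    # nominal optimal trajectory: u = -1, 0, +1 with switches at 0.5, 2.5
    if t <= 0.5:
        return 1.25 - 0.5 * t * t, -t
    if t <= 2.5:
        return 1.125 - 0.5 * (t - 0.5), -0.5
    tau = t - 2.5
    return 0.125 - 0.5 * tau + 0.5 * tau * tau, -0.5 + tau

def run_feedback(x1, x2, dt=1e-4):
    t, u, phase = 0.0, -1.0, 1
    f1 = f2 = None                      # dx frozen at t = 0.5 and t = 2.5
    while t < 3.0:
        n1, n2 = nominal(t)
        dx1, dx2 = x1 - n1, x2 - n2
        if t >= 0.5 and f1 is None:
            f1 = (dx1, dx2)
        if t >= 2.5 and f2 is None:
            f2 = (dx1, dx2)
        a1, b1 = f1 if f1 is not None else (dx1, dx2)
        a2, b2 = f2 if f2 is not None else (dx1, dx2)
        if phase == 1 and t - 0.5 - 0.5 * a1 - b1 > 0.0:
            u, phase = 0.0, 2
        elif phase == 2 and t - 2.5 - 0.4 * a2 - b2 > 0.0:
            u, phase = 1.0, 3
        x1 += x2 * dt
        x2 += u * dt
        t += dt
    return (x1 * x1 + x2 * x2) ** 0.5

# displaced initial state: open loop switching would leave radius ~ 0.1
r = run_feedback(1.25 + 0.1, 0.0)
assert r < 0.01
```

The resulting terminal radius is a few thousandths, consistent with the order-of-magnitude improvement over open loop switching reported with Figure 4.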
[Figure 4. Terminal radius versus initial displacement for the bang-bang example: (a) terminal radius versus δx1(0) for δx2(0) = 0; (b) terminal radius versus δx2(0) for δx1(0) = 0; open loop control compared with feedback control of the switch times.]
as a function of displacement in x1(0) alone and in x2(0) alone. The terminal radius which would result from open loop control (e.g., switching as a function of time alone) is also shown for comparison. Feedback control of the switch times may be seen to give about an order of magnitude improvement over a rather wide range of initial displacements.

Estimating the Number of Corners for CVIC Problems

It is difficult to give general rules for estimating the number of corners and the sequence of subarcs comprising an optimal trajectory, but a few suggestions can be given. In the case of bang-bang problems the controllability of (18) and (19), that is the rank of matrix M(t_0), may be used to obtain a probable lower bound for m, say m_p, which in many cases will be the correct value of m. It has been pointed out that M(t_f) will often be singular and that in the bang-bang case the rank of M may increase by one at each corner moving back from t_f to t_0. Therefore take m_p to be the number of corners required for matrix M to reach full rank, assuming its rank increases by one at each corner between t_f and t_0. In the example just given ψ is of dimension (2 x 1) so from (54) M is of dimension (2 x 2). Then since M(t_f) = 0 it follows that m_p = 2 is the probable number of corners required if M(t_0) is to have full rank. This is the correct value of m for the example and such an
estimate may be correct in many cases. However m may exceed m_p since the rank of M may not increase at each corner. This can be detected by computing M(t_0) and examining its rank, but at the cost of more computations relative to the penalty function approach. On the other hand m may be less than m_p depending on the initial state. (A little reflection on the phase plane plot for the example will reveal that there are points in the phase plane from which the system can be driven to the origin in the available time using less than two switch points.) It is unlikely that this latter situation will be encountered very often but the possibility must be kept in mind. In any event m_p is an intelligent initial guess with which to begin numerical experiments to determine the correct value of m.

For CVIC problems which are not bang-bang in nature it is probably possible to arrange the calculations in such a way that the algorithm can determine the sequence of subarcs as the computations are carried out. One approach is to assume a control which does not violate the constraint and begin iterating. The value of the constraint function, C(x,u,t), can then be calculated at each time point on each iteration and its sign examined to detect the occurrence of entry corners. Beyond each entry corner the control should be taken as that implied by C(x,u,t) = 0. To locate the corresponding exit corner the value of the control which
minimizes H, say u*, should be calculated and the sign of C(x,u*,t) examined. When C(x,u*,t) goes negative the exit corner has been reached. In this way the algorithm might be made to generate the proper sequence of subarcs. In the case of bang-bang problems it may be possible to accomplish the same thing by using the maximum principle to determine the sign of the control during the calculations. These measures were not investigated during the course of this work but probably deserve attention in the future.
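Both suggestions can be put in algorithmic form. The sketch below uses illustrative helpers: `jump_direction` stands for whatever column a corner contributes to M in a particular problem, and the corner scan works on any sampled constraint history:

```python
import numpy as np

# estimate_corners implements the m_p rule: count corners until the
# columns contributed to M reach full rank (rank assumed to grow by one
# per corner).  find_corners monitors the sign of sampled constraint
# values: negative-to-positive marks an entry corner, positive-to-
# negative (of C evaluated with the H-minimizing control) an exit corner.
def estimate_corners(jump_direction, dim, max_corners=10):
    cols = []
    for i in range(1, max_corners + 1):
        cols.append(jump_direction(i))
        if np.linalg.matrix_rank(np.column_stack(cols)) == dim:
            return i
    return None

def find_corners(ts, c_vals):
    corners = []
    for k in range(1, len(ts)):
        if c_vals[k - 1] < 0.0 <= c_vals[k]:
            corners.append(("entry", ts[k]))
        elif c_vals[k - 1] >= 0.0 > c_vals[k]:
            corners.append(("exit", ts[k]))
    return corners

# toy data: two independent jump directions in R^2 give m_p = 2, and a
# sampled constraint history crossing zero gives one entry/exit pair
dirs = {1: np.array([0.0, 1.0]), 2: np.array([1.0, 0.5])}
assert estimate_corners(lambda i: dirs[i], dim=2) == 2
assert find_corners([0, 1, 2, 3, 4], [-1.0, -0.2, 0.3, 0.1, -0.4]) == \
    [("entry", 2), ("exit", 4)]
```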
CHAPTER IV
APPLICATION TO SYSTEMS SUBJECT TO STATE VARIABLE INEQUALITY CONSTRAINTS

A state variable inequality constraint, (SVIC), is a constraint of the form

    S(x,t) ≤ 0        (92)

and differs from a CVIC in that the constraint function is not a function of the control. As in the case of a CVIC it is assumed that the constraint function and the control are both scalars. The comments made in the previous chapter about the vector case apply here also.

As in the CVIC case, an SVIC introduces discontinuities into a problem for which the system would otherwise be described by continuous state equations. Since the constraint function is independent of the control it often happens that some of its time derivatives are also independent of u. If the superscript j applied to S(x,t) is taken to denote the j'th time derivative of S, the situation will be as follows:

    dS/dt = S¹(x,t)        (93)
    dS^j/dt = S^{j+1}(x,t)        j = 1, ..., p-2        (94)

    dS^{p-1}/dt = S^p(x,u,t)        (95)

In these circumstances the p'th time derivative of S is the lowest order derivative to be an explicit function of the control and the constraint is said to be of order p. Necessary conditions for these problems are given in [6] and [7]. Denham and Bryson describe a steepest ascent algorithm for numerical solution of SVIC problems in [8] and Jacobson and Lele use an interesting transformation technique in [9].

For present purposes it will be assumed that the initial state, x_0, satisfies S(x_0,t_0) < 0 and that the optimal trajectory consists of three subarcs as shown in Figure 5. On the first subarc the state starts at x_0 at time t_0 and meets the constraint boundary, S = 0, at time t1. On the second subarc the state follows the constraint from time t1 to time t2, at which point it leaves the boundary and reaches the terminal manifold, ψ = 0, at time t_f.

These problems are usually approached by observing that S is identically zero along the constrained second subarc and hence all time derivatives of S vanish along
[Figure 5. The assumed sequence of subarcs for SVIC problems: the state leaves x_0 at t_0, meets the constraint boundary S = 0 at t1, follows it until t2, and then proceeds to the terminal manifold ψ = 0 at t_f.]
this subarc. Therefore (95) is set equal to zero and the result is used to eliminate the control on the second subarc. To ensure that S remains equal to zero on this subarc it is then necessary to impose the additional constraints that S and its first (p - 1) time derivatives vanish at t1:

    S(t1) = 0
    S¹(t1) = 0
    ...
    S^{p-1}(t1) = 0        (96)

The system is then described by discontinuous state equations but there are also the interior point constraints given by (96) to consider. The algorithm, as derived herein, cannot be directly applied to problems with such constraints. Therefore the following section is devoted to the description of methods of transforming the interior point constraints into terminal constraints, so that the algorithm may be applied to SVIC problems.

At this point a word of caution may be in order. As will be demonstrated in a later section this application of the algorithm to an SVIC problem will yield the solution implied by the necessary conditions of [6]. It has recently been shown that these conditions may give an incorrect
result if they are applied to a problem for which the constraint is not active [17]. Therefore before proceeding in the present manner one should examine the unconstrained optimal trajectory to be certain that it does indeed violate the constraint and that the problem is a true SVIC problem. Otherwise erroneous results may be obtained.

Transforming Interior Point Constraints into Terminal Constraints

The constraints given by (96) may be transformed into terminal constraints by adding p additional states, say y_0 through y_{p-1}, defined as follows: On the first subarc let the y's be solutions of

    ẏ_0 = y_1                    y_0(t_0) = S(x_0,t_0)
    ẏ_1 = y_2                    y_1(t_0) = S¹(x_0,t_0)
    ...
    ẏ_{p-2} = y_{p-1}            y_{p-2}(t_0) = S^{p-2}(x_0,t_0)
    ẏ_{p-1} = S^p(x,u,t)         y_{p-1}(t_0) = S^{p-1}(x_0,t_0)        (97)

with

    ẏ_j = 0        (98)

for all j on the remaining subarcs. The y's are to be continuous at t1 and t2 so these states will be given by
    y_j(t) = S^j(t)        t_0 ≤ t ≤ t1        (99)

for all j, with the understanding that S⁰ = S. Then since y_j(t_f) = S^j(t1) the point constraints may be satisfied by imposing the terminal constraints

    y_j(t_f) = 0        j = 0, ..., p-1        (100)

on the y's. In this manner the interior point constraints given by (96) may be transformed into terminal constraints and an SVIC problem reduced to one of the type for which the algorithm was designed. Hence to apply the algorithm to an SVIC problem add states given by (97) and (98) with terminal conditions (100), set (95) equal to zero and use the result to eliminate u on the second subarc, and use the algorithm to solve the resulting problem.

Some simplifications are possible. Integration of the added state equations is obviously unnecessary since the y's are given by (99) and therefore can be computed from the x states. Also integration of the Lagrange multipliers associated with the y states is not necessary, and parts of the matrices P, R, q, M, and h need not be integrated on the second and final subarcs. These statements are verified in the following three sections which are devoted to detailed examinations of the situation on each of the three subarcs.
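The derivatives S^j appearing in (97) and (99), and the order p itself, can be generated mechanically by differentiating S along the state equations until the control appears. A sketch, using the second order constraint S = x - ℓ with ẋ = v, v̇ = a that is solved analytically later in this chapter:

```python
import sympy as sp

# Determine the order p of an SVIC by repeated total time differentiation:
# substitute the state equations after each derivative and stop when the
# control appears explicitly.  Here S = x - ell with xdot = v, vdot = a,
# so S^1 = v and S^2 = a, giving p = 2.
t = sp.symbols("t")
x, v = sp.Function("x")(t), sp.Function("v")(t)
a, ell = sp.symbols("a ell")
dynamics = {sp.Derivative(x, t): v, sp.Derivative(v, t): a}

expr = x - ell            # S(x, t)
p = 0
while a not in expr.free_symbols:
    expr = sp.diff(expr, t).subs(dynamics)
    p += 1

assert p == 2 and sp.simplify(expr - a) == 0
```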
The Final Subarc

Let the Lagrange multipliers associated with the y state equations be ζ_0 through ζ_{p-1} and those associated with the y terminal conditions be η_0 through η_{p-1}. For convenience define the vectors

    y = [y_0, y_1, ..., y_{p-1}]^T
    ζ = [ζ_0, ζ_1, ..., ζ_{p-1}]^T
    η = [η_0, η_1, ..., η_{p-1}]^T        (101)

Then on the final subarc the x and y state equations may be written in the partitioned form

    [ẋ]   [f(x,u,t)]
    [ẏ] = [   0    ]        (102)

and (5), (6), and (14) become

    Φ = [φ(x,t) + ν^T ψ(x,t) + η^T y]_{t_f}
    H = L(x,u,t) + λ^T f(x,u,t)
    Ω = [L + Φ_t]_{t_f} = 0        (103)

Consequently from (10) and (11)

    λ̇ = -H_x^T        λ(t_f) = [Φ_x^T]_{t_f}        (104)
    ζ̇ = 0             ζ(t_f) = η        (105)
Hence on the last subarc λ is unaffected by the presence of the SVIC and ζ = η, a constant, throughout the subarc.

The behavior of the matrices P through h given by (51) through (55) may be investigated by writing these in partitioned form to separate the elements associated with the x states from those associated with the y states. For example P may be partitioned as

    P = [P_xx   P_xy]
        [P_yx   P_yy]        (106)

where P_xx through P_yy are matrices containing the elements of P associated with x alone, y alone, and those in common with x and y. If (51) is written in partitioned form it will be found that throughout the last subarc matrix P has the form

    P = [P_xx   0]
        [0      0]        (107)

where P_xx is the matrix P that would apply in the absence of the SVIC. That is, P_xx can be computed from (51) by ignoring the SVIC. In a similar manner it may be shown that on the final subarc the remaining matrices have the forms
    R = [R_xx   0]
        [0      I]        (108)

    q = [q_x^T, 0]^T        (109)

    M = [M_xx   0]
        [0      0]        (110)

    h = [h_x^T, 0]^T        (111)

where the symbols 0 and I represent null and identity matrices of appropriate dimensions. The matrices R_xx, q_x, M_xx, and h_x have the values that R, q, M, and h would have in the absence of the SVIC.

Note that matrix M is singular throughout the last subarc. This merely reflects the fact that the y states are uncontrollable after time t1 and introduces no additional difficulty. One may trace through the steps involved in deriving (58) and (59) using partitioned matrices and verify that these may still be used to compute δu and dt_f if only the x components of the matrices are used. The x states are still controllable although M_xx(t_f) will often be singular, as will M(t_f) in other cases. The conclusion is that on the last subarc the algorithm may be applied by simply ignoring the existence of the SVIC.
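The singularity of M noted above is immediate from the block form (110); a short numerical illustration with an arbitrary (invertible) M_xx:

```python
import numpy as np

# Illustration of (110): with the y rows and columns of M null, M is
# singular on the final subarc regardless of M_xx (values are arbitrary).
M_xx = np.array([[2.0, 1.0],
                 [1.0, 3.0]])
p = 2                                    # two added y states
M = np.block([[M_xx,              np.zeros((2, p))],
              [np.zeros((p, 2)),  np.zeros((p, p))]])

assert np.linalg.matrix_rank(M) == np.linalg.matrix_rank(M_xx) == 2
assert np.linalg.matrix_rank(M) < M.shape[0]      # M itself is singular
```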
The Constrained Subarc

On this subarc (95) is set equal to zero and the result used to eliminate the control, so there is no δu to be determined. As in the case of the final subarc an examination of the partitioned forms of (1), (10), and (12) leads to the conclusion that λ̇ = -H_x^T and ζ = η on this subarc, where of course H includes the effect of eliminating the control. Furthermore since there is no u after the elimination, B and c defined in (21) and (22) vanish so (54) and (55) become

    Ṁ = 0        ḣ = 0        (112)

and there is no need to integrate these matrices on this subarc.

Partitioning the jump conditions, (69) through (71) and (74) and (75), and using (107) through (111) indicates that at t2 the jumps in the matrices have the form

    D2(P) = [D2(P_xx)   0]
            [0          0]        (113)

    D2(R) = [D2(R_xx)   0]
            [0          0]        (114)
    D2(q) = [D2(q_x)^T, 0]^T        (115)

    D2(M) = [D2(M_xx)   0]
            [0          0]        (116)

    D2(h) = [D2(h_x)^T, 0]^T        (117)

where D2(P_xx), etc., are the jumps the respective matrices would undergo if there were no SVIC. These jumps come about only because of the discontinuity introduced by the use of (95) to eliminate the control on this subarc and their values are not influenced by the SVIC in any other way.

If (51) through (55) are partitioned and use is made of (107) through (111) and (113) through (117) it will be found that the matrices P through h have the forms given in (107) through (111) throughout this subarc also. Hence the constrained subarc may be treated by using (95) to eliminate u, imposing the jumps on the matrices, but otherwise ignoring the constraint. Thus the SVIC does not increase the dimensionality of the problem on this or on the last subarc. It merely introduces discontinuities at times t1 and t2.
The First Subarc

On the first subarc the state equations may be partitioned in the form

    [ẋ]   [f(x,u,t)           ]
    [ẏ] = [Wy + S^p(x,u,t)v   ]        (118)

where the matrix W and the vector v are defined as

        [0 1 0 0 ... 0]
        [0 0 1 0 ... 0]
    W = [0 0 0 1 ... 0]        (119)
        [     ...     ]
        [0 0 0 0 ... 1]
        [0 0 0 0 ... 0]

    v = [0 0 0 ... 0 1]^T        (120)

the dimensions being (p x p) for W and (p x 1) for v. The Hamiltonian may be expressed as

    H = L(x,u,t) + λ^T f(x,u,t) + ζ^T [Wy + S^p(x,u,t)v]        (121)

Then partitioning (10) accordingly gives

    λ̇ = -L_x^T - f_x^T λ - ζ_{p-1}(S^p_x)^T        (122)
    ζ̇ = -W^T ζ        (123)

Since ζ = η on the second subarc and since the Lagrange multipliers are continuous, it follows from (123) that on the first subarc

    ζ = K(t,t1)η        (124)

where K(t,t1) is the transition matrix of (123) and is given by

              [1             0            0         0   ...]
              [(t1-t)        1            0         0   ...]
    K(t,t1) = [(t1-t)²/2     (t1-t)       1         0   ...]        (125)
              [(t1-t)³/6     (t1-t)²/2    (t1-t)    1   ...]
              [                  ...                       ]

Then the Lagrange multipliers associated with the y states can be computed from (124) rather than by integration of (123).

The computation of y from (99) and ζ from (124) appears to be all that can be done to avoid an increase in dimensionality along the first subarc, at least for problems with free terminal time. While (99) and (124) allow direct computation of y and ζ without the need of integration, they are of little use for direct computation of δy, δζ, and the elements of matrices P through h,
principally because (124) involves t1, which is not fixed. Therefore it appears to be necessary to integrate all elements of the matrices along the first subarc if t_f is free. In a later section it will be shown that if the terminal time is fixed a slight change in the approach will allow additional simplifications.

A First Order SVIC Example

In Chapter II it was mentioned that care must be exercised in using the algorithm since for some problems certain denominators may go to zero at convergence. The following example gives further insight into the behavior of the algorithm for such problems. Consider a system described by

    ẋ1 = x2        x1(0) = -7
    ẋ2 = u         x2(0) = 0

for which it is desired to find the scalar control u which minimizes

    J = (1/2) ∫₀⁹ u² dt

subject to the terminal constraints

    x1(9) = x2(9) = 0
and to the inequality constraint

    x2(t) ≤ 1

Since S = x2(t) - 1 the problem is of first order, (p = 1), as S¹ = u. It may easily be shown that the solution consists of the three subarcs given below:

    u = 2(3 - t)/9
    x1 = -7 + 3(t/3)² - (t/3)³        0 ≤ t ≤ t1 = 3
    x2 = (2 - t/3)(t/3)

    u = 0
    x1 = -8 + t                       t1 ≤ t ≤ t2 = 6
    x2 = 1

    u = 2(6 - t)/9
    x1 = -8 + t + (2 - t/3)³          t2 ≤ t ≤ 9
    x2 = 1 - (t/3 - 2)²

The optimum cost is J = 4/9.

Application of the algorithm to this problem in the manner just described resulted in the sequence of values shown in Table 3. This table lists the terminal values of the three Lagrange multipliers, the corner times, and the cost as a function of the iteration number. The optimal control for the first and last arcs is given by
TABLE 3
Progress of the Algorithm in Solving the First Order SVIC Problem

    Iteration   ν1           ν2          η0          t1         t2         Cost
    0            0.000000    0.000000    0.000000    4.00000    5.00000    0.000000
    1           -0.187500    0.625000    0.437500    4.00000    5.00000    0.437500
    2           -0.212402    0.664062    0.595215    3.28125    5.65625    0.458649
    3           -0.234021    0.685162    0.737163    2.99068    6.00590    0.457924
    4           -0.220261    0.665405    0.652363    3.04991    5.94744    0.445034
    5           -0.221485    0.665929    0.661504    3.02276    5.97781    0.444449
    6           -0.222219    0.666670    0.666640    3.00798    5.99224    0.444446
    7           -0.222222    0.666667    0.666662    3.00399    5.99614    0.444445
    8           -0.222223    0.666668    0.666671    3.00199    5.99808    0.444444
    9           -0.222222    0.666667    0.666668    3.00099    5.99904    0.444444
    10          -0.222222    0.666667    0.666667    3.00050    5.99952    0.444444
    11          -0.222222    0.666667    0.666667    3.00025    5.99976    0.444444
    12          -0.222222    0.666667    0.666667    3.00012    5.99988    0.444444
    13          -0.222222    0.666667    0.666667    3.00006    5.99994    0.444444
    14          -0.222222    0.666667    0.666667    3.00003    5.99997    0.444444
    15          -0.222222    0.666667    0.666667    3.00002    5.99999    0.444444
    16          -0.222222    0.666667    0.666667    3.00001    5.99999    0.444444
    17          -0.222222    0.666667    0.666667    3.00000    6.00000    0.444444
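Two features of this example can be checked directly from the subarc formulas and the table: the analytic solution satisfies all boundary, corner, and constraint conditions with cost 4/9, and the last rows of the table move t1 halfway toward 3 on each iteration, one bit of accuracy per pass. A sketch:

```python
# (i) Check the analytic three-subarc solution of the first order SVIC
# example; (ii) reproduce the halving of the corner-time error seen in
# the final rows of Table 3.
def u(t):
    if t <= 3.0:
        return 2.0 * (3.0 - t) / 9.0
    return 0.0 if t <= 6.0 else 2.0 * (6.0 - t) / 9.0

def x2(t):
    if t <= 3.0:
        return (2.0 - t / 3.0) * (t / 3.0)
    return 1.0 if t <= 6.0 else 1.0 - (t / 3.0 - 2.0) ** 2

def x1(t):
    if t <= 3.0:
        return -7.0 + 3.0 * (t / 3.0) ** 2 - (t / 3.0) ** 3
    return -8.0 + t if t <= 6.0 else -8.0 + t + (2.0 - t / 3.0) ** 3

assert abs(x1(0.0) + 7.0) < 1e-12 and abs(x2(0.0)) < 1e-12
assert abs(x1(9.0)) < 1e-12 and abs(x2(9.0)) < 1e-12
assert max(x2(0.01 * k) for k in range(901)) <= 1.0 + 1e-12

n = 90000                     # midpoint rule for J = (1/2) int u^2 dt
J = sum(0.5 * u((k + 0.5) * 9.0 / n) ** 2 for k in range(n)) * 9.0 / n
assert abs(J - 4.0 / 9.0) < 1e-6

t1 = 3.00050                  # iteration 10 of Table 3
for expected in [3.00025, 3.00012, 3.00006, 3.00003, 3.00002, 3.00001]:
    t1 = t1 - (t1 - 3.0) / 2.0          # move halfway toward 3
    assert abs(t1 - expected) < 1e-5
```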
    u = -9ν1 - ν2 - η0 + ν1 t        (first arc)
      = -9ν1 - ν2 + ν1 t             (last arc)

so it may be seen that the algorithm converged to the correct values. The initial guesses used to start the run were u = 0 for the control and the values given in the table for iteration zero.

The algorithm was not allowed to adjust the corner times on the first iteration since it cannot with the initial guess used. The values of the Lagrange multipliers are zero for this guess, and these appear as the denominators in the matrix jump conditions and in the expressions for dt1 and dt2. Therefore difficulty was avoided by not jumping the matrices and not changing the corner times on iteration one. No step size control was used in this run.

Phase plane plots of the trajectories generated on the first two iterations are given in Figure 6. Since the system is linear the terminal conditions were satisfied to within plotting accuracy on all iterations. Consequently the second subarc lay near the constraint boundary, x2 = 1, on all iterations.

Notice from the table that convergence was very slow for this problem, particularly in contrast with that for the bang-bang example of Chapter III. Ten iterations were required for convergence of the terminal values of
[Figure 6. Phase plane plots (x2 versus x1) of the trajectories generated on iterations 0 and 1 in solving the first order SVIC example.]
the Lagrange multipliers while seventeen were required for convergence of the corner times. Notice also that in the final few iterations the corner times were moved halfway from their present values to their correct final values. Their final convergence was thus at the rate of one bit of accuracy per iteration.

This unusual behavior may be verified as follows. It was remarked earlier that this algorithm may also be derived by minimizing a second order expansion of the cost, as for example in [5]. Therefore consider doing so for conditions existing after iteration ten, at which point correct terminal values of the Lagrange multipliers had been found. Under these conditions the terminal constraints are very nearly satisfied and the first subarc contribution to the augmented cost is approximately

    J1 = (1/2) ∫_{t_0}^{t1} u² dt = (1/2) ∫₀^{t1} [2(3 - t)/9]² dt = 2[1 + (t1/3 - 1)³]/9

whose second order expansion in t1 is

    dJ1 = (2/9)(t1/3 - 1)² dt1 + (2/27)(t1/3 - 1) dt1²
Setting the partial of J1 with respect to dt1 equal to zero and solving for dt1 gives

    dt1 = -(t1 - 3)/2

which verifies the behavior of the values of t1 shown in the table for the last few iterations. Since the problem is symmetric with respect to time it follows that t2 should behave in a like manner.

The dt1 coefficients of a second order expansion for dJ1 vanish at convergence, and the third partial of J1 with respect to t1 is the lowest derivative which is not zero for the optimal trajectory. Therefore the optimal cost is such a smooth function of t1 that a second order expansion is inadequate. From the point of view used in deriving the algorithm, the Hamiltonian is such a smooth function of the corner times that a first order expansion of its jump, as used in deriving (46), is inadequate for some problems. This suggests the possibility of employing higher order expansions, but quite apart from the analytical difficulty involved there is still the question of how far one should go. If higher order terms had been used in Chapter II the results for this example might have been quite different, but this of course does not mean that the dilemma would not arise again in other problems having
smoother Hamiltonians. Therefore it seems simplest to recognize that the difficulty may occur and guard against division by zero as suggested in connection with the Zermelo example. The price may be somewhat slower convergence, but in general faster convergence than that obtainable from gradient algorithms should result.

Alternate Necessary Conditions for SVIC Problems

The purpose of this section is to investigate the necessary conditions to which the algorithm converges when applied to an SVIC problem in the manner previously described. This set of conditions will herein be called the alternate necessary conditions. The relation between this set and previous sets, for example those of [6], is of interest. It is shown that the first subarc Lagrange multipliers and Hamiltonian differ from those of [6], but that the two sets both imply the same solution.

Consider the first subarc necessary conditions first and note that (124) may be rewritten as

    ζ_j = Σ_{k=0}^{j} [(t1 - t)^{j-k} / (j-k)!] η_k        (126)

Then the necessary condition H_u = 0 may be written from (121) and (126) as
    L_u + λ^T f_u + Σ_{k=0}^{p-1} [(t1 - t)^{p-k-1} / (p-k-1)!] η_k S^p_u = 0        (127)

This is the relationship from which the first subarc control is found. Now putting (121) into (122) and using (126) and (127) to rewrite the result gives

    λ̇ = -L_x^T - f_x^T λ - Σ_{k=0}^{p-1} [(t1 - t)^{p-k-1} / (p-k-1)!] η_k (S^p_x)^T        (128)

Then since y(t) = 0 for t ≥ t1, the term ζ^T Wy vanishes and the Hamiltonian at the corner is

    H(t1-) = [L + λ^T f + ζ_{p-1} S^p]_{t1-}        (129)

Note that the y states and their multipliers do not enter into the problem on subsequent subarcs. The first and second subarc conditions are coupled by

    λ(t1-) = λ(t1+)        H(t1-) = H(t1+)        (130)

On the second subarc the Hamiltonian becomes

    H = L(x,u,t) + λ^T f(x,u,t)        (131)

and u must satisfy

    S^p(x,u,t) = 0        (132)

so the control is a function of the state. Then since
λ̇ = -H_x^T, (131) and (132) imply that

    λ̇ = -H_x^T        (133)

where H_x includes the dependence of the control on the state implied by (132). The second and final subarcs are coupled by

    λ(t2-) = λ(t2+)        H(t2-) = H(t2+)        (134)

On the final subarc

    H = L(x,u,t) + λ^T f(x,u,t)        (135)

so the control is implied by

    L_u + λ^T f_u = 0        (136)

and λ̇ = -H_x^T yields

    λ̇ = -L_x^T - f_x^T λ        (137)

Terminal conditions (11) and (14) give

    λ(t_f) = [Φ_x^T]_{t_f}        (138)

    Ω = [L + Φ_t]_{t_f} = 0        (139)

where the definition of Φ given in (103) may for present purposes be replaced by

    Φ = [φ(x,t) + ν^T ψ(x,t)]_{t_f}        (140)
The alternate set of necessary conditions to which the algorithm converges is then given by the x state equations and their terminal conditions, interior point constraints (96), and equations (127) through (140). Note that the added states and their Lagrange multipliers have been eliminated and only the η's remain. These p constants are needed in order to satisfy the p interior point constraints.

Comparing these conditions with those of reference [6], allowing for the fact that [6] uses a Mayer formulation instead of the Bolza form used here, leads to the conclusion that the second and third subarc conditions are identical with those of [6]. The conditions differ only on the first subarc. Table 4 shows a comparison of the pertinent parts of both sets of conditions as they apply to the first subarc. The first subarc Hamiltonian and Lagrange multipliers of [6] have been denoted H̃ and λ̃ respectively. All other terms except the μ's of [6] are in the present notation. Note that the Hamiltonian and Lagrange multipliers of [6] jump at the entry corner while those to which the algorithm converges do not. This difference comes about because of the way in which the interior point constraints were handled here. The other differences are that the differential equations satisfied
TABLE 4
Comparison of First Subarc Necessary Conditions

Reference [6]:

    L_u + λ̃^T f_u = 0  determines u
    λ̃̇ = -L_x^T - f_x^T λ̃
    H̃ = L + λ̃^T f
    λ̃(t1-) = λ̃(t1+) + Σ_{k=0}^{p-1} μ_k (S^k_x)^T |_{t1}
    H̃(t1-) = H̃(t1+) - Σ_{k=0}^{p-1} μ_k S^k_t |_{t1}

Present approach:

    L_u + λ^T f_u + ζ_{p-1} S^p_u = 0  determines u
    λ̇ = -L_x^T - f_x^T λ - ζ_{p-1} (S^p_x)^T
    H = L + λ^T f + Σ_{j=0}^{p-1} ζ_j S^{j+1}
    λ and H continuous at t1

with ζ_j given in terms of the η_k by (126).
by the Lagrange multipliers are not the same, and the present control equation and Hamiltonian contain the terms involving polynomials in time, as shown in the table.

The relationship between the two sets of conditions may be seen by recalling that the Lagrange multipliers relate the change in cost to changes in state along the trajectory. Because of the derivation used in [6] the multiplier λ̃ relates the total change in cost to changes in x. Here however the multiplier λ relates part of the change in cost to changes in x while ζ relates the remaining change in cost to changes in y. But since y_j = S^j, variations in the y's satisfy

    δy_j = S^j_x δx        (141)

so the total change in cost is related to changes in x by

    λ̃ = λ + Σ_{j=0}^{p-1} ζ_j (S^j_x)^T        (142)

This is the relationship between the two sets of first subarc multipliers. Using (126) and (130) to evaluate (142) at time t1 gives

    λ̃(t1) = λ(t1) + Σ_{k=0}^{p-1} η_k (S^k_x)^T |_{t1}        (143)
Comparing (143) with λ̃(t1-) as given in Table 4 leads to the conclusion that η = μ. Hence the terminal values of ζ are the Lagrange multipliers associated with the interior point constraint in [6].

To show that λ̃ as expressed by (142) satisfies the differential equation specified in [6], differentiate (142) using (123) and write

    λ̃̇ = λ̇ - Σ_{j=1}^{p-1} ζ_{j-1} (S^j_x)^T + Σ_{j=0}^{p-1} ζ_j (d/dt)(S^j_x)^T        (144)

But

    (d/dt)(S^j_x)^T = S^j_{xx} f + (S^j_{xt})^T        0 ≤ j ≤ p-1        (145)

so (144) becomes

    λ̃̇ = λ̇ - Σ_{j=1}^{p-1} ζ_{j-1} (S^j_x)^T + Σ_{j=0}^{p-1} ζ_j [S^j_{xx} f + (S^j_{xt})^T]        (146)

Now use (126) and (127) to rewrite (128) as

    λ̇ = -L_x^T - f_x^T λ - ζ_{p-1} (S^p_x)^T        (147)

Using (142) to eliminate λ from the right hand side above yields

    λ̇ = -L_x^T - f_x^T λ̃ - ζ_{p-1} (S^p_x)^T + Σ_{j=0}^{p-1} ζ_j f_x^T (S^j_x)^T        (148)
Putting (148) in (146) results in

    λ̃̇ = -L_x^T - f_x^T λ̃ - Σ_{j=1}^{p-1} ζ_{j-1} (S^j_x)^T - ζ_{p-1} (S^p_x)^T
         + Σ_{j=0}^{p-1} ζ_j [S^j_{xx} f + f_x^T (S^j_x)^T + (S^j_{xt})^T]        (149)

But

    S^{j+1} = dS^j(x,t)/dt   so   (S^{j+1}_x)^T = S^j_{xx} f + f_x^T (S^j_x)^T + (S^j_{xt})^T        (150)

Therefore (149) reduces to

    λ̃̇ = -L_x^T - f_x^T λ̃        (151)

which is the differential equation for λ̃ given in [6].

The numerical relationship between Hamiltonians may be obtained by substituting (142) into the expression for H̃ in Table 4 to get

    H̃ = L + λ^T f + Σ_{j=0}^{p-1} ζ_j S^j_x f        (152)

By using y_j = S^j one can rewrite (121) as

    H = L + λ^T f + Σ_{j=0}^{p-1} ζ_j S^{j+1}        (153)
Solving (153) for (L + λ^T f) and putting the result in (152) yields

    H̃ = H - Σ_{j=0}^{p-1} ζ_j S^j_t        (154)

since S^{j+1} - S^j_x f = S^j_t. At the entry corner, where ζ_j = η_j = μ_j, this difference is exactly the jump in H̃ shown in Table 4, while H itself remains continuous.
The Analytical Solution of an SVIC Problem Using the Alternate Necessary Conditions

The system is described by

    ẋ = v        x(0) = 0
    v̇ = a        v(0) = 1        (156)

and the object is to find the control a(t) which minimizes

    J = ∫₀¹ [a²(t)/2] dt        (157)

subject to the inequality constraint

    S(x,t) = x - ℓ ≤ 0        (158)

and the terminal constraints

    x(1) = 0        v(1) = -1        (159)

Since S¹ = v and S² = a this is a second order SVIC problem which happens to be simple enough that an analytical solution is possible. The interior point constraints of (96) take the form

    x(t1) = ℓ        v(t1) = 0        (160)
From (128), (133), (137), and (140) it follows that on all subarcs

    λ̇_x = 0        λ_x = ν_x
    λ̇_v = -λ_x     λ_v = ν_v - ν_x(t - 1)        (161)

From (127), (132), and (136)

    a = -λ_v - η1 - η0(t1 - t)        0 ≤ t < t1
      = 0                             t1 < t < t2
      = -λ_v                          t2 < t ≤ 1        (162)

The values of H at the corners are given by (129), (131), and (135) as

    H(t1-) = -[ν_v - ν_x(t1 - 1) + η1]²/2
    H(t1+) = H(t2-) = 0
    H(t2+) = -[ν_v - ν_x(t2 - 1)]²/2        (163)

Then continuity of H at t1 and t2 requires that

    ν_v - ν_x(t1 - 1) + η1 = 0        ν_v = ν_x(t2 - 1)        (164)

But on the first subarc
    v(t) = 1 - (ν_x + η0)(2t1t - t²)/2        (165)

so v(t1) = 0 gives

    ν_x + η0 = 2/t1²        (166)

Likewise on the first subarc

    x(t) = (t³ - 3t1t² + 3t1²t)/(3t1²)        (167)

so x(t1) = ℓ gives

    t1 = 3ℓ        (168)

On the last subarc

    v(t) = ν_x(t - t2)²/2        (169)

so v(1) = -1 implies

    ν_x(1 - t2)²/2 = -1        (170)

Then on the last subarc

    x(t) = ℓ + ν_x(t - t2)³/6        (171)

so from x(1) = 0 it follows that

    t2 = 1 - 3ℓ        (172)

Finally from (171), (166), and (168)
    ν_x = -2/(3ℓ)²
    ν_v = 2/(3ℓ)
    η1 = 2(1 - 6ℓ)/(3ℓ)²
    η0 = 4/(3ℓ)²        (173)

The solution then becomes

    a(t) = -2[1 - t/(3ℓ)]/(3ℓ)              0 ≤ t ≤ 3ℓ
         = 0                                3ℓ ≤ t ≤ (1 - 3ℓ)
         = -2[1 - (1-t)/(3ℓ)]/(3ℓ)          (1 - 3ℓ) ≤ t ≤ 1        (174)

and the trajectory satisfies

    v(t) = (1 - t/3ℓ)²                      0 ≤ t ≤ 3ℓ
         = 0                                3ℓ ≤ t ≤ (1 - 3ℓ)
         = -[1 - (1-t)/3ℓ]²                 (1 - 3ℓ) ≤ t ≤ 1

    x(t) = ℓ[1 - (1 - t/3ℓ)³]               0 ≤ t ≤ 3ℓ
         = ℓ                                3ℓ ≤ t ≤ (1 - 3ℓ)
         = ℓ{1 - [1 - (1-t)/3ℓ]³}           (1 - 3ℓ) ≤ t ≤ 1        (175)

Since t1 = 3ℓ and t2 = (1 - 3ℓ), t2 will exceed t1 so long as ℓ < (1/6), in which case the trajectory will follow the constraint for a nonzero period of time.

In this problem there is also a range of values of ℓ for which the trajectory will simply touch the constraint boundary at a point without following the constraint for a nonzero time period. This case can also be treated using
the alternate conditions. To do so one replaces t2 with t1 and drops the condition S^p = 0 and the second of (162) and of (163). Then the last of (163) gives H(t1+) instead of H(t2+). Comparing the first and last of (163) and recalling that the Hamiltonian must be continuous leads to the conclusion that η1 = 0. Then the remaining values to be found are those of ν_x, ν_v, η0, and t1. These may be obtained from the conditions x(t1) = ℓ, v(t1) = 0, x(1) = 0, and v(1) = -1. The resulting solution is

    a(t) = -8(1 - 3ℓ) + 24(1 - 4ℓ)t                       0 ≤ t ≤ 0.5
         = -8(1 - 3ℓ) + 24(1 - 4ℓ)(1 - t)                 0.5 ≤ t ≤ 1

    v(t) = 1 - 8(1 - 3ℓ)t + 12(1 - 4ℓ)t²                  0 ≤ t ≤ 0.5
         = -1 + 8(1 - 3ℓ)(1 - t) - 12(1 - 4ℓ)(1 - t)²     0.5 ≤ t ≤ 1

    x(t) = t - 4(1 - 3ℓ)t² + 4(1 - 4ℓ)t³                  0 ≤ t ≤ 0.5
         = (1-t) - 4(1 - 3ℓ)(1-t)² + 4(1 - 4ℓ)(1-t)³      0.5 ≤ t ≤ 1

    H = -8(1 - 6ℓ)²        (176)

with the touching point at t1 = 0.5. By analyzing the unconstrained problem one can conclude that the constraint has no effect for ℓ > (1/4), so the above applies for (1/6) ≤ ℓ ≤ (1/4).

Other Simplifications

For problems having fixed final time it is possible to modify the definitions of the added states and obtain
additional simplifications. Instead of defining the y states as in (97) and (98) one may also use

    ẏ = Wy + S^p v        t_0 ≤ t ≤ t1
    ẏ = Wy                t1 ≤ t ≤ t_f        (177)

and impose the terminal constraint y(t_f) = 0. In this case

    ζ̇ = -W^T ζ        ζ(t_f) = η        t_0 ≤ t ≤ t_f        (178)

so if t_f is fixed

    ζ(t) = K(t,t_f)η        (179)

where K is the transition matrix given by (125) with t_f in place of t1. But the partitioned forms of (47) and (48), written with the x and y elements separated, give the corresponding expansion of δζ (180), so comparing (179) and (180) leads to the conclusion that

    P_xy = P_yx^T = 0
    P_yy = 0                t_0 ≤ t ≤ t_f        (181)
    R_yx = 0

    q_y = 0        R_yy = 0        (182)
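The transition matrix K used in (124), (125), and (179) is easily constructed and checked; the sketch below verifies that it reduces to the identity at the corner and satisfies the variational equation ζ̇ = -W^T ζ:

```python
import numpy as np
from math import factorial

# The transition matrix of zeta-dot = -W^T zeta: lower triangular with
# entries (t1 - t)^(j-k)/(j-k)!, as in (125).
def K(t, t1, p):
    M = np.zeros((p, p))
    for j in range(p):
        for k in range(j + 1):
            M[j, k] = (t1 - t) ** (j - k) / factorial(j - k)
    return M

p = 4
W = np.diag(np.ones(p - 1), k=1)         # the shift matrix of (119)
assert np.allclose(K(1.0, 1.0, p), np.eye(p))   # identity at t = t1

# finite-difference check that dK/dt = -W^T K
h = 1e-6
dK = (K(0.3 + h, 1.0, p) - K(0.3 - h, 1.0, p)) / (2 * h)
assert np.allclose(dK, -W.T @ K(0.3, 1.0, p), atol=1e-6)
```

Since ζ(t) = K(t,t1)η, the same routine also evaluates the sums (126) for all components of ζ at once.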
PAGE 91
Furthermore, from (177),

                       t₁ ≤ t ≤ t_f                                      (183)

so if t_f is fixed

                                                                          (184)

Then from (181) and (184) it follows that on the second and final subarcs

    R_xy = 0      M_xy = M_yxᵀ = 0      M_yy = 0      h_y = 0      t₁ ≤ t ≤ t_f      (185)

By partitioning the matrix differential equations and jump conditions it may be seen that, as before, the constraint does not influence the x components of the matrices except on the first subarc. Note that these simplifications do not apply if the terminal time is free, since in that case (179) and (184) are not correct. Also, all of the components of the matrices would then have to be integrated from t_f to t₀, and considerable complication would result.

One other step may be very helpful in many instances. Instead of solving for the control one may define a new variable, say a, by
                                                                          (186)

Then (186) may be solved for u and the result used to eliminate the control. Then a becomes a new control whose first and last subarc values may be determined by the algorithm. (Obviously a = 0 on the second subarc.) This use of (186) will in many cases simplify the matrix jump conditions and is therefore recommended.

As in the case of a CVIC problem, it may be possible to have the algorithm determine the sequence of subarcs for an SVIC problem. If an initial guess for the control which yields a feasible trajectory can be found, the sign of the constraint function, S(x,t), can be examined while iterating and the occurrence of corners detected as they are generated. This possibility has not been investigated, so it is not possible to say what degree of success might be expected. For conditions under which the constrained subarc will be a "touching" subarc as opposed to a "following" subarc see [17]. This information will sometimes aid in predicting the sequence of subarcs to be expected in an SVIC problem.
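The constrained-arc solution (173)-(175) and the touching-case solution (176) above can be cross-checked numerically. The following Python sketch is illustrative only (the dissertation contains no computer code, and the function names are ours); it evaluates both piecewise solutions and verifies the boundary and junction conditions:

```python
# Numerical check of the state-constrained solutions (173)-(176).
# "ell" is the constraint level x <= ell: the boundary is followed for
# ell < 1/6 and only touched for 1/6 <= ell <= 1/4.

def following_arc(ell, t):
    """Piecewise solution (174)-(175): boundary followed on [3*ell, 1-3*ell]."""
    s = 3.0 * ell
    if t <= s:
        r = 1.0 - t / s
        return -2.0 * r / s, r ** 2, ell * (1.0 - r ** 3)
    if t <= 1.0 - s:
        return 0.0, 0.0, ell
    r = 1.0 - (1.0 - t) / s
    return -2.0 * r / s, -r ** 2, ell * (1.0 - r ** 3)

def touching_arc(ell, t):
    """Solution (176): boundary touched once, at t = 0.5 (by symmetry)."""
    s = t if t <= 0.5 else 1.0 - t
    sign = 1.0 if t <= 0.5 else -1.0
    a = -8.0 * (1 - 3 * ell) + 24.0 * (1 - 4 * ell) * s
    v = sign * (1.0 - 8.0 * (1 - 3 * ell) * s + 12.0 * (1 - 4 * ell) * s ** 2)
    x = s - 4.0 * (1 - 3 * ell) * s ** 2 + 4.0 * (1 - 4 * ell) * s ** 3
    return a, v, x

# Following case (ell = 0.1 < 1/6): boundary values and corner conditions.
for t, v_ref, x_ref in [(0.0, 1.0, 0.0), (0.3, 0.0, 0.1), (1.0, -1.0, 0.0)]:
    _, v, x = following_arc(0.1, t)
    assert abs(v - v_ref) < 1e-12 and abs(x - x_ref) < 1e-12

# Touching case (ell = 0.2): touch point x = ell, v = 0 at t = 0.5.
_, v, x = touching_arc(0.2, 0.5)
assert abs(v) < 1e-12 and abs(x - 0.2) < 1e-12
```

At ℓ = 1/6 the two solutions coincide (the following arc shrinks to a point), which provides a further check.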
CHAPTER V

APPLICATION TO PROBLEMS HAVING SINGULAR SUBARCS

For some problems the optimal trajectory has so-called singular subarcs, along which the Hamiltonian is independent of the control and H_uu is singular [13], [15]. These problems require special treatment since neither the necessary condition H_u = 0 nor the maximum principle gives information about the control along singular subarcs. Furthermore, these problems often have nonsingular subarcs along which the Hamiltonian is linear in the control, and along these subarcs the maximum principle, rather than H_u = 0, determines the control. Since H_u = 0 was used in developing the algorithm, the results of Chapter II do not apply directly either to singular subarcs or to subarcs along which H is linear in u.

The algorithm may be applied to singular problems if an alternate relation, say

    w(x,u,λ,t) = 0                                                        (187)

can be found to take the part played by H_u = 0 in Chapter II. The determination of the function w will be discussed in the following paragraph. First, however, note that once w has been found, one may rederive the results of Chapter II using w = 0 in place of H_u = 0. This has been done and the
resulting equations are listed in the Appendix. Note that w has the same dimensions as u.

The way in which w is determined depends upon which of the three possible types of subarcs is being treated. The possibilities are

1. Nonsingular subarcs along which H_u = 0 determines the control.

2. Nonsingular subarcs along which a control constraint is active.

3. Singular subarcs along which H_u vanishes identically.

In the first case one simply uses w = H_uᵀ in the equations of the Appendix. The second case occurs either when the application of H_u = 0 gives a control which violates a constraint, or when the Hamiltonian is linear in u. In either event, for the control to remain finite (which is the only case considered here) there must be a control constraint, perhaps of the form

    uᵀu ≤ 1                                                               (188)

Then one determines the desired control using the maximum principle by solving

    Minimum H(x,u,λ,t)                                                    (189)
     u ∈ U

where U is the set of admissible controls. For example, if H is linear in u and (188) applies, the desired control is

    u = −H_uᵀ/(H_u H_uᵀ)^(1/2)                                            (190)
and a suitable w is

    w = u (H_u H_uᵀ)^(1/2) + H_uᵀ                                         (191)

The point is that the solution to the minimization problem stated in (189) implies some relation of the form given in (187), and it is this relation that can be used to replace H_u = 0 along subarcs of the second type.

Singular subarcs may often be treated by using the fact that H_u, and hence its time derivatives, vanishes identically along such subarcs. Therefore, an expression for w for singular subarcs can usually be obtained by setting successively higher time derivatives of H_u equal to zero until an expression containing the control appears. This expression is the desired form of w. For examples in which this is done see [13].

In some problems w will be found to be independent of λ for all subarcs. This is a particularly fortunate situation since the control can then be eliminated along all subarcs and the algorithm applied directly as described in Chapter II to find the corner times. In these cases there is no need to use the equations given in the Appendix, and the situation is very similar to the bang-bang case treated in Chapter III.
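The second subarc type can be made concrete. If H is linear in u and (188) applies, minimizing H over the unit ball gives u = −H_uᵀ/(H_u H_uᵀ)^(1/2), which is presumably the content of (190). The Python check below (illustrative only; the sample gradient is ours) compares this candidate against random feasible controls:

```python
# For H linear in u, H = H0 + Hu.u, the minimum over the ball u.u <= 1 is
# attained at u* = -Hu'/|Hu| (the form assumed for (190) above).
import random

random.seed(1)
Hu = [0.6, -0.8, 0.5]                      # a sample gradient row H_u
norm = sum(h * h for h in Hu) ** 0.5
u_star = [-h / norm for h in Hu]           # candidate minimizer
H_star = sum(h * u for h, u in zip(Hu, u_star))   # linear part of H at u*

for _ in range(1000):
    u = [random.uniform(-1, 1) for _ in Hu]
    s = sum(v * v for v in u) ** 0.5
    if s > 1.0:                            # project infeasible samples onto the ball
        u = [v / s for v in u]
    # no feasible control gives a smaller Hamiltonian than u*
    assert sum(h * v for h, v in zip(Hu, u)) >= H_star - 1e-12

print(H_star)   # equals -|Hu|
```

The same check applies componentwise when the constraint is a box |u_k| ≤ 1 instead of a ball, in which case the minimizer is u_k = −sgn(H_u)_k.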
Comments on Symmetry

In Chapter II equations (18) and (19) were

    δẋ = Aδx + Bδλ + c

    δλ̇ = Dδx − Aᵀδλ + e

Notice that these are symmetric in that the coefficient of δx in the first equation is the negative transpose of the coefficient of δλ in the second, and in that B and D are symmetric. Because of this symmetry and the symmetry of the jump conditions it was possible to express (47) and (48) in the symmetric forms

    δλ = Pδx + Rdν + q

    dψ = Rᵀδx + Mdν + h

where P and M were symmetric. This symmetry is fortunate since, because of it, not all elements of P and M need be integrated, and no additional coefficient matrix for δx is needed in the second equation. The symmetry came about through the use of H_u = 0 as a necessary condition for the control at convergence. It may easily be shown that the symmetry will remain on subarcs for which H is linear in u and (191) applies. Along singular subarcs where w is found in the manner
described above, the symmetry may not be present in some problems. If the symmetry is not present, (18) and (19) will have the form

    δẋ = Aδx + Bδλ + c

    δλ̇ = Dδx − A₁ᵀδλ + e

where A₁ ≠ A, and B and D may no longer be symmetric. Then (47) and (48) will have to be replaced by

    δλ = Pδx + Rdν + q

    dψ = R₁δx + Mdν + h

where R₁ is an additional matrix that will be needed. The equations listed in the Appendix apply to the asymmetric case and will reduce to the correct symmetric form if the symmetry is present.

Two Examples of Problems Having Singular Subarcs

The following examples illustrate the solution of problems having singular subarcs and give insight into the effect of the step size control method for the corner time calculations recommended in Chapter II. The first example is simple enough that the algorithm reduces to a single equation, so its convergence properties with and without step size control are clearly evident. These are examples one and two from reference [15].
Example 1

The system is described by

    ẋ = u      x(0) = 1

where x and u are scalars. The control must satisfy

    |u| ≤ 1

and the object is to find u(t) for 0 ≤ t ≤ 2 to minimize

    J = ∫₀² (x²/2) dt

The solution may be easily seen to be

    u = −1      0 ≤ t ≤ 1
      = 0       1 < t ≤ 2

where the last subarc is singular. In this case

    H = x²/2 + λu

and

    λ̇ = −x      λ(2) = 0

so it follows that
    H_u = λ
    Ḣ_u = −x
    Ḧ_u = −u

Then Ḧ_u = 0 gives u = 0 as the desired control on the singular subarc. The equations of the algorithm may be solved analytically for this simple problem and the result found to be

    dt₁ = (1 − t₁)(2 − t₁)/(3 − 2t₁)

It is not difficult to see that the algorithm can converge either to the minimizing value t₁ = 1 or to the stationary value t₁ = 2, since both these values are stationary equilibrium points of the above equation. Convergence will be to t₁ = 1 for initial guesses less than 3/2 and to t₁ = 2 for initial guesses greater than 3/2.

As was mentioned in Chapter II, the step size for the changes in the corner times may be controlled by adding a positive constant, say a, to the denominator of (46). Doing so for this problem yields

    dt₁ = (1 − t₁)(2 − t₁)/(a + 3 − 2t₁)

The value of a is to be zero so long as 3 − 2t₁ > 0, and otherwise large enough to ensure that the denominator of the above expression remains positive on all iterations. It is clear from the above that the estimate t₁ = 2 is now an unstable
equilibrium point, so that convergence to this value is very unlikely. It is also obvious that convergence to the correct t₁ will occur for all initial guesses less than 2, so long as a is chosen large enough on each iteration to prevent the estimate from exceeding 2. This use of step size control makes convergence to other than locally minimizing solutions quite unlikely and improves the range of convergence. In the present instance the range was increased from t₁ < 3/2 to t₁ < 2.

Example 2

The state equations are

    ẋ₁ = x₂      x₁(0) = 0
    ẋ₂ = u       x₂(0) = 1

and the scalar control must satisfy

    |u| ≤ 1

It is desired to find a u(t) satisfying the inequality constraint which will minimize

    J = ∫₀⁵ (x₁² + x₂²) dt

The Hamiltonian is

    H = x₁² + x₂² + λ₁x₂ + λ₂u
so

    λ(5) = 0

Then setting successively higher time derivatives of H_u equal to zero gives the singular subarc conditions

    H_u = λ₂ = 0
    Ḣ_u = −2x₂ − λ₁ = 0
    Ḧ_u = −2u + 2x₁ = 0

Consequently the desired control for the singular subarc is the feedback control

    u = x₁

and the sequence of subarcs for this problem is

    u = −1      0 ≤ t ≤ t₁
    u = x₁      t₁ < t ≤ 5

Three runs were made for this problem. Run 1, shown in Table 5, was made starting with the initial guess t₁ = 0, and convergence was obtained in six iterations. No step size control was used in this run. The control history obtained at convergence is plotted in Figure 7. This
TABLE 5

Progress of the Algorithm in Solving the Second Singular Example, Run 1

Iteration      Cost         t1          u(t1+)        u(5)
    0       5506.616     0.000000    0.0000000    74.20321
    1        186.8779    0.999909    0.5000000    13.65783
    2         22.60283   1.248885    0.4690281     4.694012
    3          1.841486  1.373948    0.4300813     1.064997
    4          0.781201  1.407328    0.4170418     0.187793
    5          0.766057  1.409472    0.4161664     0.132734
    6          0.766043  1.409474    0.4161654     0.132672
[Figure 7. Final control history for the second singular example: control versus time, 0 ≤ t ≤ 5.]
result differs somewhat from that given in [15] (particularly near the terminal time), probably due to the rather crude Euler integration scheme used there. In the present case the equations were integrated analytically, so there was no integration error.

Run 2, shown in Table 6, was started with the initial guess t₁ = 2 and converged in six iterations. No step size control was used in this run, so for this initial guess convergence was to a local maximum rather than to the desired solution. Consequently the run was repeated as run 3 of Table 7, this time using step size control. As may be seen from the table, successful convergence to the desired solution was obtained.
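The stabilizing effect of step size control can also be exercised directly on Example 1, whose corner-time update is the scalar iteration dt₁ = (1 − t₁)(2 − t₁)/(a + 3 − 2t₁). In the Python sketch below (illustrative only; clamping the denominator to a fixed positive floor is one way to realize the stated rule that a keep the denominator positive):

```python
# Corner-time iteration for Example 1: t1 <- t1 + (1 - t1)(2 - t1)/(a + 3 - 2*t1).
# Without step size control (a = 0), guesses above 3/2 are attracted to the
# non-minimizing stationary value t1 = 2; with control they reach t1 = 1.

def iterate(t1, control, steps=50, floor=1.0):
    for _ in range(steps):
        den = 3.0 - 2.0 * t1
        if control and den < floor:
            den = floor            # i.e. choose a = floor - (3 - 2*t1) > 0
        t1 += (1.0 - t1) * (2.0 - t1) / den
    return t1

print(iterate(1.8, control=False))   # converges to the stationary value 2
print(iterate(1.8, control=True))    # converges to the minimizing value 1
print(iterate(0.2, control=False))   # below 3/2 no control is needed
```

This reproduces, in miniature, the difference between Run 2 and Run 3 above: the uncontrolled iteration can lock onto a non-minimizing stationary point, while the controlled one cannot.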
TABLE 6

Progress of the Algorithm in Solving the Second Singular Example, Run 2

Iteration      Cost         t1          u(t1+)        u(5)
    0        101.7899    2.000000    0.000000     10.01787
    1        166.4058    3.034519    1.569634     12.83128
    2        174.8824    2.712313    0.966008     13.15570
    3        174.6804    2.779922    1.084062     13.14086
    4        174.6280    2.784859    1.092861     13.14608
    5        174.6266    2.784977    1.093071     13.14603
    6        174.6266    2.784979    1.093075     13.14603
TABLE 7

Progress of the Algorithm in Solving the Second Singular Example, Run 3

Iteration      Cost          t1          u(t1+)         u(5)
    0        101.7899     2.000000    0.0000000    10.0178700
    1          3.061673   1.475717    0.3868466     1.4949520
    2          0.8953464  1.399174    0.4203300     0.3986386
    3          0.7670177  1.409305    0.4162348     0.1370205
    4          0.7660404  1.409475    0.4161652     0.1326604
    5          0.7660430  1.409474    0.4161654     0.1326723
    6          0.7660430  1.409474    0.4161654     0.1326723
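The converged values in Tables 5 and 7 admit a simple consistency check: on the initial subarc u = −1, so x₁(t) = t − t²/2, and at the corner the singular feedback law u = x₁ must reproduce the tabulated u(t₁+). In Python (the numbers are taken directly from the tables above):

```python
# On the first subarc u = -1 with x1(0) = 0, x2(0) = 1:
#   x2(t) = 1 - t,  x1(t) = t - t**2/2.
# At the corner the singular feedback u = x1 must match the tabulated u(t1+).

t1 = 1.409474                 # converged corner time (Tables 5 and 7)
x1 = t1 - t1 ** 2 / 2.0       # state at the corner under u = -1
x2 = 1.0 - t1

print(x1)                     # close to the tabulated u(t1+) = 0.4161654

assert abs(x1 - 0.4161654) < 1e-5
assert x2 < 0                 # the velocity has already changed sign at the corner
```

The agreement to the printed precision of the tables supports the subarc sequence u = −1 followed by u = x₁ stated in Chapter V.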
CHAPTER VI

CONCLUDING REMARKS

Conclusions

An algorithm for the numerical solution of optimal control problems having discontinuous state equations has been developed by applying the Newton-Raphson method to the necessary conditions of the calculus of variations. The algorithm may be regarded as an extension of that of [3]. The major differences are that the discontinuities make it necessary to introduce jumps into the variations of the state and adjoint variables at the corner times. These jumps then make it necessary to jump the matrices of the backward sweep method as they are integrated backward past the corners. The values of the corner times are found from the necessary condition which requires that the Hamiltonian be continuous at the corners.

To apply the algorithm one must know the sequence of subarcs in advance, or else program the method to detect and treat corners as they are generated. A possible difficulty which may occur because of smoothness of the Hamiltonian at the corners has been pointed out and a remedy suggested.

Applications of the method to control and state variable inequality constrained problems and to singular
problems have been given. The relations between the necessary conditions to which the algorithm converges in solving an SVIC problem and those of [6] have been found.

Recommendations for Future Work

Noton, Dyer, and Markland in [16] have suggested an approach to the computation of optimal controls for continuous systems which has apparently not received the attention it may well deserve. Given the problem of minimizing

    J = ∫ from t₀ to t_f of L(x,u,t) dt                                   (192)

subject to

    ẋ = f(x,u,t)      x(t₀) = x₀                                          (193)

they derive an algorithm by minimizing a second order expansion of (192) subject to the constraint obtained by linearizing (193). The more common approach, used in [4], [5], and elsewhere (and equivalent to that used here), is to minimize the augmented form of (192), namely

    J' = ∫ from t₀ to t_f of [L + λᵀ(f − ẋ)] dt                           (194)

subject to linearized constraints. The Noton, Dyer, and Markland approach leads to a problem of smaller dimensions, requires fewer integrations, and is much easier to implement, both from the standpoint of the numerical work involved and
the analytical preparation for a solution. It may be argued that the method is not a full second order method, but the basic ideas involved are intuitively appealing and the algorithm will achieve one step convergence with problems having linear state equations and quadratic costs.

In the earlier phases of the present work the method was modified to treat problems with terminal constraints, and the result was compared with the method described in [5]. Bullock and Franklin in [5] give two example problems which were solved using a full second order method. For successful convergence the second order technique required

1. Step size control of the change in the control.

2. Step size control of the called-for change in the terminal constraint.

3. Protection against conjugate points in the auxiliary problem.

The modified Noton, Dyer, and Markland algorithm was used to solve each of these example problems starting from the same initial guesses used by Bullock and Franklin. Convergence was easily attained in both cases in essentially the same number of iterations required in [5]. The remarkable thing was that this was accomplished without using any of the three protective measures listed above. Thus it appears that full second order methods may not be so desirable as has sometimes been supposed.

A valid objection to the Noton, Dyer, and Markland approach is that it breaks down for problems in which L(x,u,t)
does not have a second order expansion. (For example, in a minimum time problem L = 1 and no expansion is possible.) Recently, however, Jacobson, Gershwin, and Lele in [15] have described what they call the ε algorithm and the ε-α algorithm for the solution of a class of singular problems. The ε algorithm is obtained by adding a term, εu², to the cost and solving the resulting problem for successively smaller values of ε until numerical difficulties occur. The solution for the smallest value of ε successfully used, say ε_N, is then used for an initial guess to start the ε-α algorithm. The ε-α algorithm is obtained by adding the term ε_N(u − α)² to the cost and repeatedly solving the resulting problem. In this case ε is held fixed at the value ε_N, and for each solution α is the control from the previous solution. It is shown in [15] that this artifice leads to the correct solution of the singular problem.

It seems likely that the same or a similar artifice might be employed to extend the Noton, Dyer, and Markland algorithm to the solution of problems for which L does not have a second order expansion. It should then be a relatively simple matter to develop an algorithm for solving discontinuous problems based on their approach. The result should be a single algorithm with a wide range of convergence which can treat CVIC problems, SVIC problems, and singular problems. Furthermore, that algorithm will require the integration of fewer equations
than is usually required, it will be simpler to implement, and the analytical work required for the setup of a given problem will be much simpler.
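The ε idea can be illustrated on a crude discretization of the first singular example of Chapter V (ẋ = u, x(0) = 1, |u| ≤ 1, J = ∫₀² x²/2 dt). The Python sketch below adds εu² to the running cost, minimizes by projected gradient descent, and shrinks ε, warm-starting each stage. The discretization, step size, and iteration counts are choices made here for illustration only, not taken from [15] or [16]. As ε decreases, the control approaches the bang-singular structure u = −1 on [0,1], u = 0 on [1,2]:

```python
# epsilon-algorithm sketch: minimize sum_k h*(x_k^2/2 + eps*u_k^2) subject to
# x_{k+1} = x_k + h*u_k, x_0 = 1, |u_k| <= 1, by projected gradient descent,
# re-solving for successively smaller eps (warm-started from the previous eps).

N, T = 100, 2.0
h = T / N

def cost_grad(u, eps):
    x = [1.0]                          # forward rollout of the state
    for uk in u:
        x.append(x[-1] + h * uk)
    # dC/du_k = 2*eps*h*u_k + h^2 * sum_{j>k} x_j  (suffix sums of x)
    suffix, s = [0.0] * N, 0.0
    for k in range(N - 1, -1, -1):
        s += x[k + 1]
        suffix[k] = s
    return [2.0 * eps * h * u[k] + h * h * suffix[k] for k in range(N)]

u = [0.0] * N
for eps in (0.1, 0.01, 0.001):         # shrink the regularization
    for _ in range(2500):
        g = cost_grad(u, eps)
        u = [min(1.0, max(-1.0, uk - 0.3 * gk)) for uk, gk in zip(u, g)]

print(u[0], u[-1])                     # roughly -1 and roughly 0
```

The regularized problem has nonsingular H_uu for every ε > 0, which is exactly what makes a second-order expansion of the cost well defined at each stage.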
APPENDIX

EQUATIONS FOR SINGULAR SUBARCS AND SUBARCS ALONG WHICH THE HAMILTONIAN IS LINEAR IN THE CONTROL

This appendix lists the modified forms of the pertinent equations of the algorithm as applicable to singular subarcs and to subarcs along which the Hamiltonian is linear in the control. Their application is discussed in Chapter V.

Equations (17), (18), and (19) become

    δu = −w_u⁻¹(w_x δx + w_λ δλ + w)                                      (A1)

    δẋ = Aδx + Bδλ + c                                                    (A2)

    δλ̇ = Dδx − A₁ᵀδλ + e                                                  (A3)

where the matrices A through e are now

    A = f_x − f_u w_u⁻¹ w_x                                               (A4)

    B = −f_u w_u⁻¹ w_λ                                                    (A5)

    c = −f_u w_u⁻¹ w                                                      (A6)

    D = −(H_xx − H_xu w_u⁻¹ w_x)                                          (A7)

    A₁ᵀ = f_xᵀ − H_xu w_u⁻¹ w_λ                                           (A8)

    e = H_xu w_u⁻¹ w                                                      (A9)
Equations (39) and (40) remain

    Dᵢ(δx) = −Dᵢ(f) dtᵢ                                                   (A10)

    Dᵢ(δλ) = Dᵢ(H_xᵀ) dtᵢ                                                 (A11)

Because of the possible asymmetry mentioned in Chapter V, equations (47) and (48) take the form

    δλ = Pδx + Rdν + q                                                    (A12)

    dψ = R₁δx + Mdν + h                                                   (A13)

and the matrices P through h satisfy

    Ṗ = −PA − A₁ᵀP − PBP + D         P(t_f) = Φ_xx − Ω_xᵀΩ_x/Ω̇           (A14)

    Ṙ = −(A₁ᵀ + PB)R                 R(t_f) = (ψ_x − ψ̇ Ω_x/Ω̇)ᵀ           (A15)

    q̇ = −(A₁ᵀ + PB)q − Pc + e        q(t_f) = Ω_xᵀ dΩ/Ω̇                  (A16)

    Ṙ₁ᵀ = −(Aᵀ + PᵀBᵀ)R₁ᵀ            R₁(t_f) = ψ_x − ψ̇ Ω_x/Ω̇             (A17)

    Ṁ = −R₁BR                        M(t_f) = −ψ̇ψ̇ᵀ/Ω̇                    (A18)

    ḣ = −R₁(c + Bq)                  h(t_f) = ψ̇ dΩ/Ω̇                     (A19)

Equations (46) and (56) become

    [(H_x)ᵢ⁺Dᵢ(f) − (fᵀ)ᵢ⁺Dᵢ(H_xᵀ) − Dᵢ(H)] dtᵢ
        = Dᵢ(H_x)δxᵢ + Dᵢ(fᵀ)δλᵢ + Dᵢ(H − H_u w_u⁻¹ w)                    (A20)

    dν = −M⁻¹(R₁δx + h − dψ)                                              (A21)
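The backward sweep in (A14) can be illustrated on a scalar case. With A = A₁ = 0 and B = D = 1, (A14) reduces to Ṗ = −P² + 1, and for the terminal value P(t_f) = 0 (chosen here purely for illustration) the closed form is P(t) = −tanh(t_f − t). A Python sketch using RK4 to integrate backward from t_f, as the algorithm does:

```python
# Backward RK4 integration of the scalar sweep equation Pdot = -P**2 + 1
# (equation (A14) with A = A1 = 0, B = D = 1), terminal value P(tf) = 0.
# Closed form for comparison: P(t) = -tanh(tf - t).
import math

def sweep(tf=2.0, n=200):
    f = lambda p: -p * p + 1.0
    h = -tf / n                      # negative step: integrate from tf down to 0
    p = 0.0                          # terminal condition P(tf) = 0
    for _ in range(n):
        k1 = f(p)
        k2 = f(p + 0.5 * h * k1)
        k3 = f(p + 0.5 * h * k2)
        k4 = f(p + h * k3)
        p += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return p                         # P at t = 0

p0 = sweep()
print(p0, -math.tanh(2.0))           # the two values agree closely
```

In the full algorithm the same backward pass carries R, q, R₁, M, and h as well, with the jump conditions (A28)-(A33) applied as each corner is crossed.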
while (58), (59), and (60) become

                                                                          (A22)

    {[ ⋯ + dΩ − ψ̇ᵀM⁻¹(dψ − h)]/Ω̇}, evaluated at t_f                      (A23)

                                                                          (A24)

Equation (63) should be replaced by

    dtᵢ = [gᵀδxᵢ + Dᵢ(fᵀ)(Rdν + q)ᵢ + Dᵢ(H − H_u w_u⁻¹ w)]/a              (A25)

where the definitions of g and a are still

    g = Dᵢ(H_xᵀ) + PᵢᵀDᵢ(f)                                               (A26)

    a = (H_x)ᵢ⁺Dᵢ(f) − (fᵀ)ᵢ⁺Dᵢ(H_xᵀ) − Dᵢ(H) + Dᵢ(fᵀ)PᵢDᵢ(f)            (A27)

Then the jump conditions for the matrices become

    Dᵢ(P) = ggᵀ/a                                                         (A28)

    Dᵢ(R) = g Dᵢ(fᵀ)Rᵢ⁺/a                                                 (A29)

    Dᵢ(q) = g[Dᵢ(fᵀ)qᵢ⁺ + Dᵢ(H − H_u w_u⁻¹ w)]/a                          (A30)
    Dᵢ(R₁) = (R₁)ᵢ⁺Dᵢ(f) gᵀ/a                                             (A31)

    Dᵢ(M) = (R₁)ᵢ⁺Dᵢ(f) Dᵢ(fᵀ)Rᵢ⁺/a                                       (A32)

    Dᵢ(h) = (R₁)ᵢ⁺Dᵢ(f)[Dᵢ(fᵀ)qᵢ⁺ + Dᵢ(H − H_u w_u⁻¹ w)]/a                (A33)

and (76) should be replaced by

    [(H_x)ᵢ⁺Dᵢ(f) − (fᵀ)ᵢ⁺Dᵢ(H_xᵀ) − Dᵢ(H)] dtᵢ
        = [Dᵢ(H_x) + Dᵢ(fᵀ)(P − RM⁻¹R₁)ᵢ]δxᵢ
          + Dᵢ(fᵀ)[RM⁻¹(dψ − h) + q]ᵢ + Dᵢ(H − H_u w_u⁻¹ w)               (A34)

Then (77) becomes

    [(H_x)ᵢ⁺Dᵢ(f) − (fᵀ)ᵢ⁺Dᵢ(H_xᵀ) − Dᵢ(H)] dtᵢ
        = [Dᵢ(H_x) + Dᵢ(fᵀ)Pᵢ]δxᵢ
          + Dᵢ(fᵀ)(Rdν + q)ᵢ + Dᵢ(H − H_u w_u⁻¹ w)                        (A35)
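The corner-time update (A25) is, structurally, one Newton step on the Hamiltonian-continuity condition Dᵢ(H) = 0. Stripped of the state and multiplier corrections, that structure is just Newton's method on a scalar residual, which the Python sketch below demonstrates on a stand-in residual function (the function and its root are inventions for illustration, not quantities from the dissertation):

```python
# Newton iteration on a scalar residual r(t1) = 0, the skeleton of the
# corner-time update (A25): dt1 = -r(t1)/r'(t1) at each pass.

def newton(r, rprime, t1, steps=20):
    for _ in range(steps):
        t1 -= r(t1) / rprime(t1)
    return t1

# Stand-in residual with root at t1 = 1 (hypothetical, for illustration only).
r = lambda t: (t - 1.0) * (t + 2.0)
rp = lambda t: 2.0 * t + 1.0

t_star = newton(r, rp, 0.3)
print(t_star)   # converges to the root 1.0
```

The step size control of Chapter II corresponds to inflating the denominator r'(t₁) when it is small or of the wrong sign, exactly as in the scalar iteration of Example 1.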
REFERENCES

[1] A. E. Bryson and W. F. Denham, "A Steepest Ascent Method for Solving Optimal Programming Problems," J. Appl. Mech., Series E, Vol. 29, No. 2, June 1962, pp. 247-257.

[2] C. W. Merriam, "A Computational Method for Feedback Control Optimization," Information and Control, Vol. 8, No. 2, April 1965, pp. 215-232.

[3] S. R. McReynolds and A. E. Bryson, "A Successive Sweep Method for Solving Optimal Programming Problems," Proceedings of the Joint Automatic Control Conference, Rensselaer Polytechnic Institute, Troy, New York, June 1965.

[4] T. E. Bullock, "Computation of Optimum Controls by a Method Based on Second Variations," Ph.D. Dissertation, Stanford University, Stanford, California, 1966.

[5] T. E. Bullock and G. F. Franklin, "A Second Order Feedback Method for Optimal Control Computations," IEEE Trans. Automatic Control, Vol. AC-12, No. 6, December 1967, pp. 666-673.

[6] A. E. Bryson, W. F. Denham, and S. E. Dreyfus, "Optimal Programming Problems with Inequality Constraints I: Necessary Conditions for Extremal Solutions," AIAA J., Vol. 1, No. 11, November 1963, pp. 2544-2550.

[7] W. F. Denham, "On Numerical Optimization with State Variable Inequality Constraints," AIAA J., Vol. 4, No. 3, March 1966, pp. 550-552.

[8] W. F. Denham and A. E. Bryson, "Optimal Programming Problems with Inequality Constraints II: Solution by Steepest Ascent," AIAA J., Vol. 2, No. 1, January 1964, pp. 25-34.

[9] D. H. Jacobson and M. M. Lele, "A Transformation Technique for Optimal Control Problems with a State Variable Inequality Constraint," IEEE Trans. on Automatic Control, Vol. AC-14, No. 5, October 1969, pp. 457-464.
[10] D. H. Jacobson, "Differential Dynamic Programming Methods for Solving Bang-Bang Control Problems," IEEE Trans. on Automatic Control, Vol. AC-13, No. 6, December 1968, pp. 661-675.

[11] P. Dyer and S. R. McReynolds, "Optimization of Control Systems with Discontinuities and Terminal Constraints," IEEE Trans. on Automatic Control, Vol. AC-14, No. 3, June 1969, pp. 223-229.

[12] P. Dyer and S. R. McReynolds, "On Optimal Control Problems with Discontinuities," J. Math. Anal. Appl., Vol. 23, September 1968, pp. 585-603.

[13] A. E. Bryson and Y. C. Ho, Applied Optimal Control: Optimization, Estimation, and Control, Waltham, Mass.: Blaisdell Publishing Company, 1969.

[14] C. H. Schley, Jr., and I. Lee, "Optimum Control Computation by the Newton-Raphson Method and the Riccati Transformation," IEEE Trans. on Automatic Control, Vol. AC-12, No. 2, April 1967, pp. 139-144.

[15] D. H. Jacobson, S. B. Gershwin, and M. M. Lele, "Computation of Optimum Singular Controls," IEEE Trans. on Automatic Control, Vol. AC-15, No. 1, February 1970, pp. 67-73.

[16] A. R. M. Noton, P. Dyer, and C. A. Markland, "Numerical Computation of Optimum Control," IEEE Trans. on Automatic Control, Vol. AC-12, No. 1, February 1967, pp. 59-66.

[17] D. H. Jacobson, M. M. Lele, and J. L. Speyer, "New Necessary Conditions of Optimality for Control Problems with State Variable Inequality Constraints," to appear.
BIOGRAPHICAL SKETCH

Edward Ross Schulz was born in Lockhart, Texas, on December 21, 1929, and graduated from Lockhart High School in May, 1948. He received the Bachelor of Science degree in Electrical Engineering from the University of Texas in January, 1952, and the Master of Engineering degree from the University of Florida in April, 1967. Mr. Schulz has been employed by the General Electric Company, primarily in the aerospace field, since February, 1952.

Mr. Schulz is married to the former Margaret Sue Willms and is the father of two boys. He is a member of Eta Kappa Nu, Tau Beta Pi, Phi Kappa Phi, and the Institute of Electrical and Electronics Engineers.
I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Thomas E. Bullock, Chairman
Assoc. Prof. in Elec. Eng.

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

William A. Walter
Assoc. Prof. in Elec. Eng.

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Allen E. Durling
Prof. in Elec. Eng.

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Assoc. Prof. in Mathematics
This dissertation was submitted to the Dean of the College of Engineering and to the Graduate Council, and was accepted as partial fulfillment of the requirements for the degree of Doctor of Philosophy.

December, 1970

Dean, Graduate School