Dynamics and optimization of a human motion problem


Material Information

Title:
Dynamics and optimization of a human motion problem
Physical Description:
x, 152 leaves. : illus. ; 28 cm.
Language:
English
Creator:
Ghosh, Tushar Kanti, 1945-
Publication Date:

Subjects

Subjects / Keywords:
Human mechanics   ( lcsh )
Engineering Mechanics thesis Ph. D
Dissertations, Academic -- Engineering Mechanics -- UF
Genre:
bibliography   ( marcgt )
non-fiction   ( marcgt )

Notes

Thesis:
Thesis -- University of Florida.
Bibliography:
Bibliography: leaves 149-151.
General Note:
Typescript.
General Note:
Vita.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 000580737
oclc - 14079183
notis - ADA8842
System ID:
AA00004925:00001

Full Text
















DYNAMICS AND OPTIMIZATION OF A HUMAN MOTION PROBLEM


By

TUSHAR KANTI GHOSH





















A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL OF
THE UNIVERSITY OF FLORIDA IN PARTIAL
FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY




UNIVERSITY OF FLORIDA
1974



































TO MY FATHER AND MY MOTHER













ACKNOWLEDGMENTS


It has been a very happy experience to work with Dr. W. H.

Boykin, Jr., during my stay at the University of Florida. I am

grateful to him for his help and guidance throughout this work, from

suggesting the topic of the dissertation to proofreading the manuscript.

I wish to express my deepest gratitude to him, both as an efficient and

enthusiastic research counselor and as a human being.

I wish to thank Dr. T. E. Bullock for the many discussions

I had with him about the theory and numerical methods of optimization.

These discussions provided me with understanding of many of the concepts

that were used in this work.

I chose Drs. L. E. Malvern, U. H. Kurzweg, and O. A. Slotterbeck

to be on my supervisory committee as a way of paying tribute to them as

excellent teachers. It is a pleasure to thank them. Special gratitude

is expressed to Professor Malvern for going through the dissertation

thoroughly and making corrections.

I am thankful to Drs. T. M. Khalil and R. C. Anderson for

examining this dissertation when Dr. O. A. Slotterbeck left the University.

I am thankful to Mr. Tom Boone for his interest in this work and

for volunteering his services as the test subject of the experiments.

Thanks are also due to the National Science Foundation which

provided financial support for most of this work.

Finally, it is a pleasure to thank my friend Roy K. Samras for

his help in the experiments and Mrs. Edna Larrick for typing the

dissertation.















TABLE OF CONTENTS


Page

ACKNOWLEDGMENTS ........................................... iii

LIST OF FIGURES ........................................... vi

NOTATION .................................................. viii

ABSTRACT .................................................. ix

CHAPTER 1  INTRODUCTION ................................... 1

1.0.  Why and What ........................................ 1
1.1.  Dynamics and Optimization in Human Motion ........... 1
1.2.  Previous Work ....................................... 3
1.3.  The Problem Statement ............................... 7
1.4.  The Kip-Up Maneuver ................................. 8
1.5.  Present Work ........................................ 8

CHAPTER 2  EXPERIMENTATION AND CONSTRUCTION OF THE
           MATHEMATICAL MODEL ............................. 10

2.0.  Introduction ........................................ 10
2.1.  Mathematical Model of the Kip-Up .................... 10
2.2.  The Equations of Motion ............................. 13
2.3.  The Equations of Motion for the Experiment
      and the Integration Scheme .......................... 21
2.4.  Experimental Procedure .............................. 23
2.5.  Results and Discussion .............................. 26
2.6.  Sources of Errors ................................... 31

      2.6.1.  Imperfections in the Model .................. 31
      2.6.2.  Errors in Filming and Processing the Data ... 31
      2.6.3.  The Integration Scheme ...................... 32

CHAPTER 3  ANALYTIC DETERMINATION OF THE MINIMUM-TIME
           KIP-UP STRATEGY ................................ 34

3.0.  Introduction ........................................ 34
3.1.  Mathematical Formulation of the Kip-Up Problem ...... 35
3.2.  Bounds on the Controls .............................. 36
3.3.  Torsional Springs in the Shoulder and
      Hip Joints .......................................... 38
3.4.  Boundary Conditions ................................. 39











TABLE OF CONTENTS (Continued)


Page


CHAPTER 3 (Continued)


3.5.  The Necessary Conditions of Time
      Optimal Control
3.6.  The Solution Methods
3.7.  A Quasilinearization Scheme for Solving
      the Minimum-Time Problem

      3.7.1.  Derivation of the Modified
              Quasilinearization Algorithm
      3.7.2.  Approximation of the Optimal Control
              for the Kip-Up Problem
      3.7.3.  A Simple Example Problem for the
              Method of Quasilinearization
      3.7.4.  The Results With the Kip-Up Problem

3.8.  Steepest Descent Methods for Solving the
      Minimum-Time Kip-Up Problem

      3.8.1.  Derivations for Formulation 1
      3.8.2.  Derivations for Formulation 2
      3.8.3.  Derivations for Formulation 3
      3.8.4.  The Integration Scheme for the
              Steepest Descent Methods
      3.8.5.  Initial Guess of the Control Function

3.9.  Results of the Numerical Computations
      and Comments
3.10. Comparison of the Minimum-Time Solution
      With Experiment


CHAPTER 4  CONCLUSIONS AND RECOMMENDATIONS FOR FUTURE WORK  130

4.1.  Conclusions ......................................... 130
4.2.  Recommendations for Future Work ..................... 134

APPENDIX A  DETERMINATION OF THE INERTIA PARAMETERS OF THE
            KIP-UP MODEL FROM THE HANAVAN MODEL ........... 138

APPENDIX B  AN INVESTIGATION OF A STEEPEST DESCENT SCHEME
            FOR FINDING OPTIMAL BANG-BANG CONTROL SOLUTION
            FOR THE KIP-UP PROBLEM ........................ 143

LIST OF REFERENCES ........................................ 149

BIOGRAPHICAL SKETCH ....................................... 152


















LIST OF FIGURES


Figure Page

1. Hanavan's Mathematical Model of a Human Being ...... 12

2. Mathematical Model for Kip-Up . ... 14

3. The Three-Link System . . ... .15

4. Sketch of Kip-Up Apparatus Configuration ... .24

5. Measured and Smoothed Film Data of Angles θ and ψ
   for the Kip-Up Motion .................................. 27

6. Measured and Computed Values of φ for Swinging Motion .. 29

7. Measured and Computed Values of φ for Kip-Up Motion .... 30

8. Unmodified Control Limit Functions . ... 37

9. Approximation of Bang-Bang Control by Saturation Control 64

10. Graphs of Optimal and Nearly Optimal Solutions Obtained
via Quasilinearization for Simple Example ... 71

11. Modification of Corner Point Between Constrained and
Unconstrained Arcs After Changes in the Unconstrained
Arcs . . . ... 90

12. Solution of Example Problem by the Method of Steepest
Descent . . . 105

13. Initial Guess for the Control Functions ... .118

14. A Non-Optimal Control Which Acquires Boundary Conditions 119

15. Approximate Minimum Time Solution by Formulation 2
of the Method of Steepest Descent .. ... 122

16. Approximate Minimum Time Solution by Formulation 3
of the Method of Steepest Descent . ... 124











LIST OF FIGURES (Continued)


Figure Page

17. Angle Histories for Solution of Figure 16 .. 125

18. Difference Between Measured Angle and Mathematical
Angle for Human Model Due to Deformation of Torso 129

19. Construction of Kip-Up Model from Hanavan Model .... 139















NOTATION

Usage            Meaning

ẋ                ẋ ≜ dx/dt; the total derivative of the quantity x with respect
                 to time t.

x (underbar)     An underbar beneath a symbol denotes that x is a column vector.

xᵀ, (x)ᵀ         Transpose of x; defined only when x is a vector or a matrix.

x,y              x,y ≜ ∂x/∂y; the partial derivative of x with respect to y.

x,y (x a vector, Partial derivative of the column vector x with respect to the
 y a scalar)     scalar y. The result is a column vector whose ith component is
                 the partial derivative of the ith component of x with respect
                 to y.

x,y (x a scalar, Partial derivative of the scalar x with respect to the column
 y a vector)     vector y; also called the gradient of x with respect to y. It
                 is a row vector whose ith element is the partial derivative of
                 x with respect to the ith element of y.

x,y (x a vector, Partial derivative of a column vector x with respect to a
 y a vector)     column vector y. The result is a matrix whose (i,j)th element
                 is the partial derivative of the ith element of x with respect
                 to the jth element of y.

‖x‖              Norm of the column vector x, defined as either xᵀx or Σᵢ Kᵢxᵢ²
                 (to be specified which one), where x is a vector of dimension
                 n, the Kᵢ are given positive numbers, and xᵢ is the ith element
                 of x.












Abstract of Dissertation Presented to the
Graduate Council of the University of Florida in Partial
Fulfillment of the Requirements for the Degree of Doctor of Philosophy


DYNAMICS AND OPTIMIZATION OF A HUMAN MOTION PROBLEM

By

Tushar Kanti Ghosh

March, 1974



Chairman: Dr. William H. Boykin, Jr.
Major Department: Engineering Science, Mechanics,
and Aerospace Engineering


Questions about applicability of analytical mechanics and

usefulness of optimal control theory in determining optimal human

motions arise quite naturally, and especially, in the context of man's

increased activities in outer space and under water. So far very

little work has been done to answer these questions. In this disserta-

tion investigations to answer these questions are presented.

A particular gymnastic maneuver, namely, the kip-up maneuver is

examined experimentally and theoretically. A mathematical model for

a human performer is constructed for this maneuver from the best per-

sonalized inertia and joint centers model of a human being available

today. Experiments with the human performer and photographic data col-

lection methods are discussed. Comparisons of the observed motion with

solutions of the mathematical model equations are presented. Discrepan-

cies between the actual motion and the computed motion are explained in

terms of principles of mechanics and errors in measurements. Some

changes in the model are suggested.











An approximate analytic solution of the kip-up maneuver performed

in minimum time is obtained for the model via optimal control theory.

Several numerical methods are used to determine the solution, which is

compared with the observed performance of the human subject. Difficul-

ties in solving human motion problems by existing numerical algorithms

are discussed in terms of fundamental sources of these difficulties.

Finally, recommendations for immediate future work have been

made.














CHAPTER 1


INTRODUCTION




1.0. Why and What

Man's increased interest in the exploration of space and the

oceans was an impetus for a better understanding of the mechanics of

large motion maneuvers performed by human beings. Experience in space

walking and certain athletic events brought out the fact that human

intuition does not always give correct answers to questions on human

motion. For certain problems the solutions must be found by analytical

methods such as methods of analytical mechanics and optimal control.

Broadly speaking, this work deals with the application of the

principles of mechanics and optimal control theory in the analytical

determination of human motion descriptors.


1.1. Dynamics and Optimization in Human Motion

Dynamics provides the basic foundation of the analytic problem

while optimal control theory completes the formulation of the mathemat-

ical problem and provides means to solve the problem. In any endeavor

of analytic determination of a human motion the first steps are construc-

tion of a workable model having the same dynamics of the motion as that

of the human performer, and obtaining the equations of motion for the

human model. However, the principles of mechanics alone do not give

enough information for analytical determination of a desired maneuver

whenever there are sufficient degrees of freedom of movement. Without











knowing what goes on in the human motor system, optimal control is

presently the only known analytical method which can provide the remain-

ing necessary information.

After a workable dynamic model of the human performer has been

obtained, the remaining part of the analytic determination of a physical

maneuver is a problem of control of a dynamic system where the position

vectors and/or orientation of the various elements of the model and

their rate of change with respect to time (representing the "states" of

the "dynamic system" being considered) is to be determined. The state

components change from one set of (initial) values to another set of

(final) values at a later time. The "control variables," that is, the

independent variables whose suitable choice will bring the change are

torques of the voluntary muscle forces at the joints of the various

limbs. However, the problem formulation is not mathematically complete

with the above statements because there would, in general, be more than

one way of transferring a system from one state to another when such

transfers are possible. Constraints are required in the complete for-

mulation. The concept of optimization of a certain physically meaning-

ful quantity during the maneuver arises naturally at this point. A cost

functional to be maximized or minimized gives a basic and needed struc-

ture to the scheme for exerting the control torques. It may be expected,

logically, that, unless given special orders, a human being selects its

own performance criterion for optimization while doing any physical

activity. Some of the physically meaningful quantities that may be

optimized during a physical activity are the total time to perform the

activity, and the total energy spent during the activity.











1.2. Previous Work

Very little work has been reported in the literature so far

where the optimization considerations have been used in the study of

human motion. Earlier work in the study of the mechanics of motion of

living beings was done primarily from the view of grossly explaining

certain maneuvers modeling the applicability of principles of rigid body

mechanics. Most of this work was done under either free fall or zero

gravity conditions. The models were made up of coupled rigid bodies

and conservation of angular momentum in the absence of external torques

was the most often used principle of mechanics.

The righting maneuver of a free falling cat in midair attracted

the attention of several authors in the early days of studies of living

objects. Marey's [1] photographs of a falling cat evoked discussions

in 1894 in the French Academie des Sciences on whether an initial angu-

lar velocity was necessary in order to perform the righting maneuver.

Guyou [2] modeled the cat by two coupled rigid bodies and explained the

phenomenon with the aid of the angular momentum principle with the angu-

lar momentum of the entire cat identically equal to zero. Later, more

photographic studies were made by Magnus [3] and McDonald [4,5,6].

McDonald made an extensive study of the falling cats with a high speed

(1500 frames/second) motion picture camera. His description of their

motion added many details to previous explanations. McDonald found no



Numbers in brackets denote reference numbers listed in the
List of References.











evidence for the simple motion of Magnus. In addition he studied the

eyes and the vestibular organ as motion sensors.

Amar [7] made one of the most complete of the early studies of

human motor activities in 1914. This study of the relative motion of

the head, limbs and major sections of the trunk was made with a view

to study the efficiency of human motion in connection with industrial

labor.

Fischer [8] considered the mechanics of a body made up of n links

and obtained equations of motion without introducing coordinates. He

made discussions of applications of his theory to models of the human

body, but did not give applications for the equations of motion he had

obtained.

In recent years most of the analytical studies of human motion

have been associated with human beings in free fall as applied to

astronauts maneuvering in space with and without external devices.

McDonald [9] made extensive experimental studies of human motions such

as springboard diving and the "cat-drop" maneuver. McCrank and Segar

[10] considered the human body to be composed of nine connected parts.

They developed a procedure for the numerical solution of their very

complex equation of motion. Although some numerical results were pre-

sented, no general conclusions were drawn.

The most significant contribution to the application of rational

mechanics to problems in the reorientation of a human being without the

help of external torques was made by Smith and Kane [11]. Specifically,

they considered a man under free fall. In this paper the authors pointed

out that the number of the unknown functions exceeded the number of the












equations of motions that were obtained for the system and recognized

the need for optimization considerations. In order to get the adequate

number of equations, they introduced a cost functional to be optimized

which consisted of an integral over the total time interval of some

suitable functional of the undetermined generalized coordinates.

Optimization of this functional became a problem of calculus of varia-

tions, which yielded the necessary number of additional equations (the

Euler-Lagrange equations) to solve the original problem completely.

The approach of Smith and Kane suffers from one major drawback--

it ignores the internal forces of the system. The internal forces due

to muscle groups at the various joints of the body segments are mostly

voluntary, have upper bounds in their magnitudes and are responsible

for the partially independent movements of the various limbs. Because

they are internal forces, it is possible to eliminate them completely

in any equations of motion. These can be obtained, for example, if the

entire system is considered as a whole. However, such equations will

be limited in number (at most six, three from the consideration of

translation and three from the consideration of rotation) and will, in

general, be less than the number of the unknown functions. The intro-

duction of a cost functional will yield via the calculus of variations

the proper number of equations. However, the maneuver obtained may be

beyond the physical ability of the individual. It is therefore essen-

tial to recognize the role of all the internal voluntary forces that

come into play during a physical maneuver.

Ayoub [12] considered an optimal performance problem of the

human arm transferring a load from one point on a table to another point.











The motion considered was planar. Internal forces and constraints on

the stresses were considered. A two-link model was considered for the

arm and numerical solutions were obtained, using the methods of Linear

Programming, Geometric Programming, Dynamic Programming and simulation.

The performance criterion was a mathematical expression for the total

effort spent during the activity. The motion considered was simple

from the point of view of dynamics. Also, it required less than ten

points to describe the entire motion. This allowed consideration of

many physical constraints but the dynamics of the human body motion con-

sidered was quite different than when large motion of the various

limbs are involved. A list of references of works on human performance

from the point of view of Industrial Engineering is given in Ayoub's

thesis.

The research in the area of free fall problems showed that

analytical solutions agreed qualitatively with observations in those

cases in which rational mechanics has been applied with care. Examples

of such problems are the cat-drop problem and the jack-knife-diver

maneuver (Smith and Kane). Attempts at analytical solutions of activ-

ity problems which are not in the free fall category have not been com-

pletely successful. Such problems are in athletic activities (such as

running, part of the pole vault maneuver and part of the high jump

maneuver) and in activities associated with working on earth.

Simultaneously with the study of the dynamics of motion, several

investigations were made for the determination of the inertia parameters

of human beings at various configurations. Knowledge of inertia param-

eters is essential for performing any dynamic analysis. A list of the











research activities in this area is given in References 13, 14 and 15.

However, only Hanavan [14,15] has proposed a personalized model of a

human being. This inertia model has been used in the present investi-

gation.


1.3. The Problem Statement

The present work belongs to a broader program whose objective

is to investigate the basic aspects of the applicability of rational

mechanics to the solutions of any human activity problem. The aspects

presently being considered are:

1. Construction of an appropriate personalized model for the

individual's maneuver under consideration

2. Formulation of a well-posed mathematical problem for the

analytic description of the maneuver

3. Solution of the mathematical problem by a suitable analytical

method

4. Comparison of the analytical solution with an actual motion

conducted in an experiment with a human subject

5. Determination of muscle activity and comparison with computed

muscle torque histories for the maneuver.

If a complete analysis based on the above steps results in the

correct motion as compared with the experiment, then the results can be

used for training purposes and design of man-machine systems, but, more

importantly, the results will establish the applicability of rational

mechanics to the solution of problems of human activity.

In the present investigation, a particular gymnastic maneuver,

the kip-up, has been selected for analysis as outlined above. The methods











developed will, of course, be valid for other maneuvers but the ana-

lytical model will be personalized to the maneuver and individual.

Among all the common physical and athletic activities, the kip-up

maneuver was found to be particularly well suited for analysis. The

motion involves large motions, and is continuous and smooth. Also, it

is planar and needs relatively fewer generalized co-ordinates for its

complete description, since a correctly executed kip-up exhibits three

"rigid" links. At the same time it is not a trivial problem from the

point of view of our basic objective.

The physical quantity (the performance criterion) chosen for

optimization (minimization, in this case) is the total time to do the

maneuver.


1.4. The Kip-Up Maneuver

The kip-up maneuver is an exercise that a gymnast performs on

a horizontal bar. The gymnast starts from a position hanging vertically

down from the horizontal bar and rises to the top of the bar by swinging

his arms and legs in a proper sequence. During the maneuver the motion

is symmetric and does not involve bending of the elbows and the knees.

Normally, the grip on the bar is loose most of the time. For an inex-

perienced person, the maneuver is not easy to perform.


1.5. Present Work

In Chapter 2, a mathematical model for the kip-up motion and

results of laboratory experiments to test the accuracy of the model

are presented. First, a mathematical model of a professional gymnast

for the kip-up motion has been constructed. The dynamic equations of











motion for the mathematical model were then obtained. Two sets of

equations of motion were derived: one for the purpose of verifying the

accuracy of the mathematical model from experiments and one for the

purpose of optimization of the kip-up maneuver. Next, the results of

laboratory experiments with the gymnast are presented. The gymnast was

told to perform symmetric maneuvers on the horizontal bar, including the

kip-up. The maneuvers were photographically recorded. Two of the many

records were selected, one of a simple swinging motion with relatively

small oscillations, and another of his quickest kip-up maneuver.

The angle measurements from these film records were then used in the

equations of motion to check the accuracy of the mathematical model.

An error analysis was then performed to explain the disagreement between

the experimental and the computed results.

In Chapter 3, an analytic solution of the minimum-time kip-up

for the mathematical model was obtained by numerical computations.

First, the analytical problem of the determination of the kip-up in mini-

mum time was stated in precise mathematical terms. This involved repre-

senting the equations of motion in the state variable form, specifying

the boundary conditions on the state variables, establishing the bounds

on the control variables, and modeling the stiffness of the shoulder

and the hip joints at extreme arm and leg movements. A survey of the

necessary conditions for time optimality given by the optimal control

theory has been presented. Finally, several numerical schemes used for

solving the kip-up maneuver problem are presented and the results of the

numerical computations are discussed.

In Chapter 4, final conclusions of the present investigation

and recommendations for future work have been presented.

















CHAPTER 2


EXPERIMENTATION AND CONSTRUCTION
OF THE MATHEMATICAL MODEL




2.0. Introduction

In this chapter, modeling of a human being for the kip-up

maneuver is considered. In Section 2.1, a mathematical model for a

professional gymnast is constructed, using the personalized model of

Hanavan [14]. In Section 2.2, equations of motion for the mathematical

model are derived for the purpose of using them in the analytic deter-

mination of the subject's optimal kip-up motion. An equivalent system

of first-order differential equations are derived in Section 2.3 for

testing the inertia properties and structure of the model. In Sec-

tion 2.4, the laboratory experiments performed are described. The

results of the experiments are discussed in Section 2.5. An analysis

of the comparisons between the observed and computed results is also

presented.


2.1. Mathematical Model of the Kip-Up

The equations of motion of a deformable body such as the human

body are usually partial differential equations. Presently, not enough

is known or measurable about the deformation of the human body under

voluntary motion to determine a partial differential equation model.

Also, such models are quite difficult to handle. However, for situations












where the deformation is small compared to the displacement of a body,

the deformable body may be considered as a rigid body in writing the

equations of motion. The rigid link assumption has been used widely

in modeling human beings. The personalized model of Hanavan [14] is

based on this assumption. The elements of the Hanavan model are shown

in Figure 1 and consist of fifteen simple homogeneous geometric solids.

This construction allows a large number of degrees of freedom for the

model and minimizes the deformation of the elements without undue com-

plexity. As the number of degrees of freedom increases to the maximum,

it is likely that a mathematical model based on the Hanavan model

becomes more accurate. However, increases in the degrees of freedom

increase the model's complexity and make it difficult to analyze mathe-

matically.

Observation of a correctly performed kip-up indicates that the

human performer might be modeled quite accurately as a system of only

three rigid links. The two arms form one link, the head-neck-torso

system forms another, and the two legs form the third link. The

shoulder and hip joints may be approximated as smooth hinges where

these links are joined together. Deformation of the link consisting

of the head, neck, and torso during certain periods of the maneuver

is detectable when observed via high speed filming. It was felt that

the effect of the deformation of the torso would be no more significant

than the standard deviation in the Hanavan inertia parameters because

this part of the orientation of the torso is determined by the line

joining the shoulder and the hip joint centers.












Figure 1. Hanavan's Mathematical Model
of a Human Being.










The muscle forces acting at the hip and shoulder to cause link

motion have been replaced with their rigid body equivalent resultant

forces and couples. If the masses of the muscles causing the forces

are small compared to the other portions of the links, then the net

effects of the forces are the torques of the couples. The resultants

do not appear in the equations of motion.

The three-link kip-up model is shown in Figure 2. It is con-

structed with elements of the Hanavan model. Twenty-five anthropometric

dimensions of the gymnast were taken and used in a computer program for

calculating first the inertia properties of the Hanavan model and then

those of the kip-up model. The determination of the inertia properties

of the kip-up model from those of the Hanavan model is presented in

Appendix A.


2.2. The Equations of Motion

The mathematical model for the kip-up motion is a three-link

system executing plane motion under gravity. The active forces in the

system are the pull of gravity acting on each of the elements of

the system and the two muscle torques. The muscle forces at the shoulder

acting at the joint between links 1 and 2 are replaced by the torque u1.

Likewise, u2 is the torque for the hip joint between links 2 and 3.

The system is suspended from a hinge at the upper end of link 1, repre-

senting the fists gripping the horizontal bar which is free to rotate

on its spherical bearings. All joint hinges are assumed to be friction-

less. The general three-link system for which the equations of motion

will be derived is shown with nomenclature in Figure 3. Equations of

motion of the system were obtained via Lagrange's equations as follows:









[Figure 2. Mathematical Model for Kip-Up.]




















ℓ1 = length of element 1 = distance between the hinges O and A

ℓ2 = length of element 2 = distance between the hinges A and B

r1 = distance between the center of gravity of element 1 and the hinge O

r2 = distance between the center of gravity of element 2 and the hinge A

r3 = distance between the center of gravity of element 3 and the hinge B

CG1, CG2, CG3 = locations of centers of gravity of elements 1, 2, and 3,
respectively

φ = angle of element 1 with the vertical at the hinge O

θ = relative angle at the hinge A between elements 1 and 2

ψ = relative angle at the hinge B between elements 2 and 3

α = angle between O-CG1 and OA

β = angle between A-CG2 and AB

m1, m2, m3 = mass of elements 1, 2, and 3, respectively

I1, I2, I3 = moments of inertia of elements 1, 2, and 3, respectively,
about axes perpendicular to the xz plane through their respective
centers of gravity

Ir = moment of inertia of the horizontal bar at the hinge O about its
longitudinal axis

g = acceleration due to gravity


Figure 3. The Three-Link System










If we define

    A = ½(m1r1² + m2r2² + I1 + I2 + m2ℓ1² + I3 + m3r3² + m3ℓ1² + m3ℓ2² + Ir)

    B = m2r2ℓ1

    C = m3ℓ1ℓ2

    D = m3r3ℓ1

    E = m3r3ℓ2

    F = ½(I2 + m2r2² + I3 + m3r3² + m3ℓ2²)
                                                                  (2.2.1)
    J = I3 + m3r3²

    M = m1r1g

    N = (m2 + m3)ℓ1g

    V = m3ℓ2g

    W = m2r2g

    R = m3r3g

with Equations (2.2.1), we can express the Lagrangian of the system as

    L = φ̇²[A + B cos(θ+β) + C cosθ + D cos(θ+ψ) + E cosψ] + θ̇²[F + E cosψ]

      + φ̇θ̇[2F + B cos(θ+β) + C cosθ + D cos(θ+ψ) + 2E cosψ] + ½ψ̇²J

      + θ̇ψ̇[J + E cosψ] + φ̇ψ̇[J + D cos(θ+ψ) + E cosψ] + M cos(φ+α)

      + N cosφ + V cos(φ+θ) + W cos(φ+θ+β) + R cos(φ+θ+ψ).        (2.2.2)

For the Hanavan model of our system, the angles α and β are zero.

If we define u1 and u2 as positive when they tend to increase the angles θ and ψ, respectively, we can write the equations of motion as:











    d/dt (∂L/∂φ̇) - ∂L/∂φ = 0                                     (2.2.3)

    d/dt (∂L/∂θ̇) - ∂L/∂θ = u1                                    (2.2.4)

    d/dt (∂L/∂ψ̇) - ∂L/∂ψ = u2                                    (2.2.5)


Let us define

    a1 = 2[A + (B+C) cosθ + D cos(θ+ψ) + E cosψ]

    a2 = 2F + (B+C) cosθ + D cos(θ+ψ) + 2E cosψ

    a3 = J + D cos(θ+ψ) + E cosψ

    b1 = a2

    b2 = 2(F + E cosψ)

    b3 = J + E cosψ

    c1 = a3

    c2 = b3

    c3 = J

    d1 = φ̇θ̇[2(B+C) sinθ + 2D sin(θ+ψ)] + φ̇ψ̇[2D sin(θ+ψ) + 2E sinψ]

         + θ̇ψ̇[2D sin(θ+ψ) + 2E sinψ] + θ̇²[(B+C) sinθ + D sin(θ+ψ)]

         + ψ̇²[D sin(θ+ψ) + E sinψ] - (M+N) sinφ - (V+W) sin(φ+θ)

         - R sin(φ+θ+ψ)

    d2 = u1 + ψ̇[2φ̇ + 2θ̇ + ψ̇] E sinψ - φ̇²[(B+C) sinθ + D sin(θ+ψ)]

         - (V+W) sin(φ+θ) - R sin(φ+θ+ψ)

    d3 = u2 - φ̇²[D sin(θ+ψ) + E sinψ] - θ̇² E sinψ - 2φ̇θ̇ E sinψ

         - R sin(φ+θ+ψ).
                                                                  (2.2.6)










With Equations (2.2.6) we obtain from Equations (2.2.2) - (2.2.5) the equations of motion

    a1φ̈ + b1θ̈ + c1ψ̈ = d1                                        (2.2.7)

    a2φ̈ + b2θ̈ + c2ψ̈ = d2                                        (2.2.8)

    a3φ̈ + b3θ̈ + c3ψ̈ = d3.                                       (2.2.9)

It will be helpful to express the equations of motion in

normal form in formulating optimal control problems for the system.

For that purpose we define the state variables as

    X1 = φ    X2 = φ̇    X3 = θ    X4 = θ̇    X5 = ψ    X6 = ψ̇      (2.2.10)

    Δ  = | a1  b1  c1 |
         | a2  b2  c2 |                                           (2.2.11)
         | a3  b3  c3 |

    Δ1 = | d1  b1  c1 |
         | d2  b2  c2 |                                           (2.2.12)
         | d3  b3  c3 |

    Δ2 = | a1  d1  c1 |
         | a2  d2  c2 |                                           (2.2.13)
         | a3  d3  c3 |

    Δ3 = | a1  b1  d1 |
         | a2  b2  d2 |                                           (2.2.14)
         | a3  b3  d3 |

Using these definitions, the equations of motion (2.2.7) - (2.2.9) can then be written in the normal form as

    Ẋ1 = X2        Ẋ2 = Δ1/Δ

    Ẋ3 = X4        Ẋ4 = Δ2/Δ

    Ẋ5 = X6        Ẋ6 = Δ3/Δ.
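The normal form above is what a simulation routine evaluates at each time step. As an added illustration only (the dissertation's own computer program is not reproduced here), the Python sketch below assembles the coefficient rows a_i, b_i, c_i and right-hand sides d_i of (2.2.7) - (2.2.9) and applies Cramer's rule exactly as in (2.2.11) - (2.2.14); the routines supplying the coefficients from (2.2.6) are assumed to exist elsewhere.

```python
import numpy as np

def state_derivative(X, coeff_rows, rhs):
    """Normal-form state derivative (2.2.10)-(2.2.14) for the three-link model.

    X          : state (phi, phi_dot, theta, theta_dot, psi, psi_dot)
    coeff_rows : 3x3 array [[a1,b1,c1],[a2,b2,c2],[a3,b3,c3]] from (2.2.6)
    rhs        : [d1, d2, d3] from (2.2.6); d2 and d3 already contain u1, u2
    """
    M = np.asarray(coeff_rows, dtype=float)
    d = np.asarray(rhs, dtype=float)

    delta = np.linalg.det(M)                 # Delta of Eq. (2.2.11)

    # Delta_k of Eqs. (2.2.12)-(2.2.14): column k of M replaced by d
    accel = []
    for k in range(3):
        Mk = M.copy()
        Mk[:, k] = d
        accel.append(np.linalg.det(Mk) / delta)

    # X1. = X2, X2. = Delta1/Delta, X3. = X4, X4. = Delta2/Delta, ...
    return np.array([X[1], accel[0], X[3], accel[1], X[5], accel[2]])
```

An equivalent and numerically safer alternative is a single linear solve, np.linalg.solve(M, d), in place of the four determinants.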

To write down the equations of motion in more convenient forms, we shall

further define the following quantities:

    f(X,u) = [X2, Δ1/Δ, X4, Δ2/Δ, X6, Δ3/Δ]ᵀ                      (2.2.15)

    e1 = d1

    e2 = d2 - u1                                                  (2.2.16)

    e3 = d3 - u2

    Δ̄1 = | e1  b1  c1 |
         | e2  b2  c2 |                                           (2.2.17)
         | e3  b3  c3 |

    Δ̄2 = | a1  e1  c1 |
         | a2  e2  c2 |                                           (2.2.18)
         | a3  e3  c3 |

    Δ̄3 = | a1  b1  e1 |
         | a2  b2  e2 |                                           (2.2.19)
         | a3  b3  e3 |

    A(X) = [X2, Δ̄1/Δ, X4, Δ̄2/Δ, X6, Δ̄3/Δ]ᵀ                        (2.2.20)

    B(X) = [B1(X)  B2(X)]                                         (2.2.21)

where

    B1(X) = (1/Δ)[0, -b1c3 + b3c1, 0, a1c3 - a3c1, 0, -a1b3 + a3b1]ᵀ
                                                                  (2.2.22)
    B2(X) = (1/Δ)[0, b1c2 - b2c1, 0, -a1c2 + a2c1, 0, a1b2 - a2b1]ᵀ.

Using the definitions (2.2.16) - (2.2.22), the equations of motion (2.2.15) may now be expressed by either of the equivalent forms

    Ẋ = A(X) + B(X)u

or

    Ẋ = A(X) + B1(X)u1 + B2(X)u2.
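The control-affine form just written can be cross-checked numerically: with e = (d1, d2 - u1, d3 - u2), the vector A(X) is the response at zero control, and B1(X), B2(X) are, up to the factor 1/Δ, the cofactor columns of the coefficient matrix. A short Python sketch of this check (an added illustration, not part of the original text; the coefficient rows are again assumed given):

```python
import numpy as np

def affine_decomposition(X, coeff_rows, e):
    """Return A(X), B1(X), B2(X) of Eqs. (2.2.20)-(2.2.22).

    coeff_rows : 3x3 array [[a1,b1,c1],[a2,b2,c2],[a3,b3,c3]]
    e          : [e1, e2, e3] = [d1, d2 - u1, d3 - u2], Eq. (2.2.16)
    """
    Minv = np.linalg.inv(np.asarray(coeff_rows, dtype=float))   # adj(M)/Delta

    acc0 = Minv @ np.asarray(e, dtype=float)   # accelerations with u1 = u2 = 0
    col1 = Minv[:, 1]                          # sensitivity to u1 (enters d2)
    col2 = Minv[:, 2]                          # sensitivity to u2 (enters d3)

    A  = np.array([X[1], acc0[0], X[3], acc0[1], X[5], acc0[2]])
    B1 = np.array([0.0, col1[0], 0.0, col1[1], 0.0, col1[2]])
    B2 = np.array([0.0, col2[0], 0.0, col2[1], 0.0, col2[2]])
    return A, B1, B2      # so that Xdot = A + B1*u1 + B2*u2
```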



















2.3. The Equations of Motion for the Experiment and the Integration Scheme

For testing the inertia properties and structure of the model against the film data, only the angle φ is computed; the measured histories of θ and ψ are treated as known functions of time. The generalized momentum conjugate to φ is

    p = ∂L/∂φ̇ = a1φ̇ + b1θ̇ + c1ψ̇                                  (2.3.1)

and the corresponding Hamiltonian is

    H(p,φ,t) = pφ̇ - L                                             (2.3.2)











with φ̇ expressed in terms of p using Equation (2.3.1). Hamilton's canonic equations are then given by

    φ̇ = ∂H/∂p (p,φ,t)                                             (2.3.3)

and

    ṗ = -∂H/∂φ (p,φ,t).                                           (2.3.4)

From Equations (2.3.2) - (2.3.4) we obtain

    φ̇ = {p - θ̇[2F + (B+C) cosθ + D cos(θ+ψ) + 2E cosψ] - ψ̇[J + D cos(θ+ψ) + E cosψ]}
        / {2[A + (B+C) cosθ + D cos(θ+ψ) + E cosψ]}               (2.3.5)

    ṗ = -[(M+N) sinφ + (V+W) sin(φ+θ) + R sin(φ+θ+ψ)].            (2.3.6)

If we wish to introduce the effects of the friction at the hinge O, the equation for ṗ becomes

    ṗ = -[(M+N) sinφ + (V+W) sin(φ+θ) + R sin(φ+θ+ψ)] + Fr        (2.3.7)

where Fr is the generalized friction torque at the hinge O.


The Integration Scheme


The integration scheme integrates Equations (2.3.5) and (2.3.7).

The initial condition of φ is obtained from the measured values of φ. The initial value of p is obtained from the definition of p given in Equation (2.3.1). To compute the initial value of p, φ̇ has been computed for this starting point only. The θ and ψ data are differentiated numerically to generate θ̇ and ψ̇ values at every step. ∂H/∂p and ∂H/∂φ are computed at every step by using these values. The value of Fr is not known a priori and it requires a separate experiment for its determination. Integrations were done with Fr = 0 and with Fr = -Cr sgn φ̇











with Cr determined experimentally. The difference between the two cases was found to be insignificant even with Cr = 1 ft-lb, which was

well above the possible friction torque at the bearings.
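A compact sketch of the integration scheme just described is given below (an added illustration only; the original computations were performed with the author's own program, and the inertia constants A, B, ..., R, F, J are assumed to be supplied in a dictionary). Measured θ and ψ histories are differentiated numerically and φ and p are advanced by integrating Equations (2.3.5) and (2.3.7).

```python
import numpy as np

def integrate_phi(t, theta, psi, phi0, p0, c, Fr=lambda phidot: 0.0):
    """Euler integration of Eqs. (2.3.5) and (2.3.7).

    t, theta, psi : measured time and angle histories (1-D arrays)
    phi0, p0      : initial values of phi and of the momentum p
    c             : dict of inertia constants A, B, C, D, E, F, J, M, N, V, W, R
    Fr            : generalized friction torque at hinge O (default zero)
    """
    th_dot = np.gradient(theta, t)    # numerical differentiation of the data
    ps_dot = np.gradient(psi, t)

    phi, p = phi0, p0
    phi_hist = [phi0]
    for k in range(len(t) - 1):
        th, ps = theta[k], psi[k]
        den = 2.0 * (c['A'] + (c['B'] + c['C']) * np.cos(th)
                     + c['D'] * np.cos(th + ps) + c['E'] * np.cos(ps))
        num = (p
               - th_dot[k] * (2 * c['F'] + (c['B'] + c['C']) * np.cos(th)
                              + c['D'] * np.cos(th + ps) + 2 * c['E'] * np.cos(ps))
               - ps_dot[k] * (c['J'] + c['D'] * np.cos(th + ps) + c['E'] * np.cos(ps)))
        phi_dot = num / den                                          # Eq. (2.3.5)
        p_dot = -((c['M'] + c['N']) * np.sin(phi)
                  + (c['V'] + c['W']) * np.sin(phi + th)
                  + c['R'] * np.sin(phi + th + ps)) + Fr(phi_dot)    # Eq. (2.3.7)
        dt = t[k + 1] - t[k]
        phi += phi_dot * dt
        p += p_dot * dt
        phi_hist.append(phi)
    return np.array(phi_hist)
```

A forward-Euler step is used here for brevity; Section 2.6.3 notes that differentiation and integration schemes of various orders were tried with insignificant differences for smoothed data.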


2.4. Experimental Procedure

The test rig consisted of a horizontal bar and two motion

picture cameras. The horizontal bar was made of a short solid round

steel bar 1-3/16 inches in diameter and 58 inches long supported by two

very rigid vertical columns through a pair of self-aligning spherical

bearings. The bearings allowed free rotation of the bar with the arm

of the subject.

One movie camera was placed with its line of sight aligning

with the horizontal bar and about 30 feet away from the bar as shown

in Figure 4. Alignment of the camera's line of sight with the horizontal

bar would give the correct vertical projection of the two arms on the

film for determining the angle c directly. The second camera was placed

in front of the horizontal bar at the same elevation as the bar. The

film taken in this camera showed whether a particular motion was sym-

metric or not about the vertical plane of motion and also, the angle

between the two arms, which is required for computing the moment of

inertia of the arms. The film speed was determined from the flashes of

a strobe light regulated by a square wave generator.


Experiments


The experiments were done during two separate periods with the

same subject, a professional gymnast. An average of 15 experiments

of the subject's performance on the bar were recorded on film each day.















[Figure 4. Sketch of Kip-Up Apparatus Configuration.]











The subject was told to avoid bending his arms and legs and to maintain

symmetric motion. In the early days of experimentation he was told to

just swing on the bar by moving his stiff arms and legs relative to the

torso. These experiments were done with the idea of obtaining small

angle data for verifying the inertia properties of the Hanavan model.

The filming of the camera was done at speeds of 32 frames per second and

62.1 frames per second (as determined from stroboscopic measurements).

In later experiments the subject was told to perform the maneuver with

some specific objectives. He was told to perform what he thought would

be the kip-up (1) in minimum time, (2) with minimum expenditure of

energy, and (3) putting in "least effort." Each of these maneuvers was repeated several times. Between any two subsequent maneuvers, the

subject was given adequate rest periods to avoid fatigue. This exper-

iment was conducted with the idea of making optimization studies as well

as testing the model.

Four white tapes were stuck to the subject, on the sides of

his upper and lower arms, sides of his torso, and on the sides of his

legs. These tapes were aligned between joint centers as suggested by

the Hanavan inertia model.


Processing the Data


The film speed was measured with the aid of a stroboscope by

running the camera with a developed film with the same speed setting.

After removing the lens the shutter was exposed to the stroboscope

flash. By arresting the shutter in the stroboscope light, the shutter

speed was obtained.












The films were run on an L-W Photo Optical Data Analyzer.

For the purpose of testing the inertia properties, two maneuvers were

selected, one from each day's filming. The films were projected

plumb line perpendicularly on paper fixed to a vertical wall. As each

frame was projected, the white tapes fixed on the subject, now clearly

visible in the image, were marked out on the paper of the pad by means

of a pencil and a straight edge. In this way each frame was "trans-

ferred" on separate sheets of paper. The angles 8 and were then

measured from these traces. The angle y was measured with respect to

a vertical reference. The vertical reference was obtained from a sharp

window wall in the background.

Of the two sets of data processed, one was smoothed before

using it in the integration scheme. This was the one filmed at the

higher speed for the fast kip-up motion. A plot of the raw data and

those after preliminary smoothing for this set are shown in Figure 5.


2.5 Results and Discussion

The results of the integration of the equations of motion for

two of the data sets analyzed are shown in Figures 6 and 7. Figure 6

shows data for swinging motion, while Figure 7 is for the kip-up.

Also given in Figure 7 are the computed results corresponding to reinitializing the integration program with the measured data. Differ-

ent starting points of integration were selected to eliminate the

errors that were generated before these points.

In Figure 6, curve 2 of the computed values for the unsmoothed

data agrees well with curve 1 of the measured values for a little more










[Figure 5. Measured and Smoothed Film Data of Angles θ and ψ for the Kip-Up Motion.]



















































[Figure 6. Measured and Computed Values of φ for Swinging Motion.]

[Figure 7. Measured and Computed Values of φ for Kip-Up Motion.]










2.6. Sources of Errors


The following are considered to be the sources of errors

responsible for the disagreement between the computed and measured

values.


2.6.1. Imperfections in the Model

The human being for the motion studies was modeled as a system

of rigid bodies. The response of the system to be compared to that of

the rigid body model was a single generalized coordinate of the system.

The errors in modeling can be lumped into the overlapping categories of

(1) definition of the generalized coordinates of the rigid elements of

the system, (2) deformations of link lines during motions from prior

joint center measurements, and (3) significant variations during motion

in the inertia properties of the torso with respect to the fixed coor-

dinate system. In several experiments the torso deformed with signif-

icant movement of the shoulder joint centers. In these cases the con-

stant inertia properties model is obviously incorrect. These variations

not only cause inaccuracies in the inertia parameters but also result in

errors in the link lines or additional errors in the mass center of

element 2. These errors are reflected in errors in the angles θ and ψ, which in turn cause a time-varying "phase shift" in the computed angle φ.


2.6.2. Errors in Filming and Processing the Data

The primary source of errors in filming was considered to be

caused by inaccuracies in knowing precisely where the link lines were

in relation to the film plane. As mentioned in Section 2.4, care was











taken to minimize this error. That is, the cameras were aligned with

the horizontal bar so that the link line of the arms projected in the

plumb line plane of the film was very nearly the correct model refer-

ence line. Since the link lines of the torso and legs were nearly

plumb line vertical, no corrections of the image data were required

for these links. Filming of static thin rods which were connected to

the bar at known angles to each other and the vertical plane produces

overall measurement errors of about 1 degree standard deviation.

Errors up to 3 degrees can be expected in data from films of the motion

experiments. These errors can cause the rates of the angles to be in

error by more than 30 percent. This is the primary cause of the error

in the amplitude of the angle φ computed from the dynamical equations.

(Second derivatives of the data can be in error by more than 100 percent.

This ruled out the use of equations such as Euler's or Lagrange's.)
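The error magnification quoted here is easy to verify with a rough worst-case bound (an added illustration, not a calculation from the original): at the 62.1 frame/s film speed, a ±3 degree reading error can shift a central-difference rate estimate by about ε/Δt and a second difference by about 4ε/Δt².

```python
import math

eps = math.radians(3.0)    # +/- 3 degree reading error quoted in the text
dt = 1.0 / 62.1            # frame interval at 62.1 frames per second

# Worst-case error bounds for finite differences of the raw angle data
rate_error = eps / dt               # central difference (x[k+1] - x[k-1]) / (2*dt)
accel_error = 4.0 * eps / dt**2     # second difference (x[k+1] - 2x[k] + x[k-1]) / dt**2

print(f"angular-rate error bound  : {rate_error:6.2f} rad/s")
print(f"angular-accel error bound : {accel_error:6.0f} rad/s^2")
```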


2.6.3. The Integration Scheme

The integration scheme uses at any step the measured values of θ and ψ and the computed values of φ and p stored previously. Once a difference between the measured (actual) and the calculated values of φ has developed at a time t due to any of the sources of errors discussed above, the system configuration determined by the calculated value of φ and the measured (actual) values of θ and ψ at the time t will be dif-

ferent from that of the actual system at that time. This will cause

the model to have a different response after time t than that of the

actual system which has a different relative configuration. This

in turn will cause a further deviation between the actual and the

calculated motion that follows this instant of time. To reduce this










effect of propagation of error via the system equations, several

reinitializations of φ and p were done by restarting the integration

at different points.

Various order differentiation and integration schemes were

tested with insignificant differences in the results for smoothed

data. These were done with data from filming at F=62.1 frames per

second with a maximum integration step size of 2/F second. As men-

tioned previously, the main cause of amplitude error was the errors in

the derivatives of the raw, unsmoothed data.

The results obtained from the experiments show that the model

for the kip-up motion constructed from the Hanavan model was reason-

ably good considering its kinematical simplicity. In spite of imper-

fections in the model, its dynamic behavior was quite similar to the

actual motion, so that this model could provide reasonable estimates

of optimal human performance via the theory of optimal processes and

numerical solution methods.

However, the results obtained from the experiments indicate

that the application of rational mechanics to the analysis and design

of man-machine systems could prove inadequate unless the model and the

data gathering techniques can be improved. This is especially true in

the design of high accuracy or low tolerance systems.
















CHAPTER 3


ANALYTIC DETERMINATION OF
THE MINIMUM-TIME KIP-UP STRATEGY




3.0. Introduction

In this chapter the determination of an analytic solution of the

kip-up maneuver is presented. The problem of analytical determination

of the kip-up strategy in minimum time has been cast as a problem of

optimal control of dynamical systems. Before the techniques of the

optimal control theory may be applied to the problem, it is necessary

to state the physical problem in the language of mathematics and to

introduce the physical constraints that must also be considered for the

solution. Thus, the first four sections of this chapter have been

devoted to the formulation of the mathematical problem. In Section 3.5

a survey of the necessary conditions for optimality obtained from the

optimal control theory is presented. Since the problem under consider-

ation cannot be solved in closed form, numerical methods were used to

obtain the solution. In Sections 3.6 - 3.9, the choice of the numerical

methods, their derivations and the results of the numerical computations

are discussed. In Section 3.10, results of the numerical computations

are compared with the actual motion.











3.1. Mathematical Formulation of the Kip-Up Problem

The problem is to determine the minimum time strategy for the

man model to kip-up without violating control constraints. These con-

straints represent the maximum torques the man's muscles can exert for

any given configuration. Formulated mathematically, we have the follow-

ing:

For the system equations

    Ẋ = f(X,u) = A(X) + B(X)u                                     (3.1.1)

and the boundary conditions

    X(0) = Xo (given)                                             (3.1.2)

    ε(X(tf)) ≜ X(tf) - Xf = 0,    Xf given,                       (3.1.3)

where

    tf = final time, to be determined,                            (3.1.4)

and X(t) is the time-dependent state vector, given by

    X1(t) = φ(t),   X2(t) = φ̇(t),   X3(t) = θ(t),
                                                                  (3.1.5)
    X4(t) = θ̇(t),   X5(t) = ψ(t),   X6(t) = ψ̇(t),

find a control

    u(t) = [u1(t), u2(t)]ᵀ                                        (3.1.6)

such that simultaneously Equations (3.1.1) - (3.1.3) are satisfied, tf is minimized, and for all values of t, 0 ≤ t ≤ tf, the inequalities

    S1^1(X) ≤ u1(t) ≤ S1^2(X)
                                                                  (3.1.7)
    S2^1(X) ≤ u2(t) ≤ S2^2(X)

are satisfied. The Si^j(X) are given functions of X and represent the bounds on the control ui(t). The functions f(X,u), A(X), and B(X) were previously given in Section 2.2. ε is the error in meeting the terminal values of the state variables.


3.2. Bounds on the Controls

The control variable u1 is the muscle torque exerted at the

shoulder joint and u2 is that exerted at the hip. For the individual

being modeled, the functions u1 and u2 will have upper and lower limits

which are functions of the state X.

Samras [16] experimentally determined the maximum muscle torques

at the shoulder and hip joints for various limb angles at the joints.

This was done for the same subject modeled in the present study. These

measurements were made under static conditions and the maximums in

either flexion or extension were measured for the shoulder torque for

various values of θ and the hip torque for various values of ψ. The experimental bounds on the shoulder torque were then fitted by polynomials in θ. The hip torque bounds were expressed as polynomials in ψ.

Even though each of these bounds might be expected to depend to

some degree on all four state variables X3, X4, X5, and X6, the bounds

on the shoulder torque u1 depend primarily on X3 and the bounds on the

hip torque u2 depend primarily on X5. The measurements of Samras do not

include the rate dependence X4 and X6. Although the rate effect appears

to be measurable, it is a second-order effect and quite difficult to

obtain. The control limit functions are given in Figure 8. These func-

tions are correct only for a certain range of values of the angles


















[Figure 8. Unmodified Control Limit Functions (Samras [16]): polynomial fits to the measured maximum shoulder and hip torques as functions of θ and ψ.]











X3 and X5. The values of S1^1 and S2^1 can never be positive and those of S1^2 and S2^2 can never be negative. Whenever these sign conditions are violated by extreme values of the states, Si^j is set equal to zero. Also, from extrapolated measurement data an upper limit has been set for S2^2 at 160.0 ft-lb and a lower limit has been set for S2^1 at -100.0 ft-lb.
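As an illustration of how the modified limit functions could be evaluated in a computation (this sketch is not from the dissertation, and the polynomial fits of Figure 8 are represented only by placeholder callables), the raw fits are first forced to respect the sign conditions and the extrapolation limits are then applied to the hip bounds:

```python
def control_bounds(theta, psi, fits):
    """Evaluate the modified control limit functions of Section 3.2.

    `fits` holds the polynomial fits of Figure 8 as callables (placeholder
    names; the actual coefficients are those measured by Samras [16]):
    fits['S1_lo'](theta), fits['S1_hi'](theta)  -- shoulder torque bounds
    fits['S2_lo'](psi),   fits['S2_hi'](psi)    -- hip torque bounds
    """
    s1_lo = min(fits['S1_lo'](theta), 0.0)   # lower bounds may never be positive
    s2_lo = min(fits['S2_lo'](psi), 0.0)
    s1_hi = max(fits['S1_hi'](theta), 0.0)   # upper bounds may never be negative
    s2_hi = max(fits['S2_hi'](psi), 0.0)

    s2_hi = min(s2_hi, 160.0)    # extrapolation limit on the upper hip bound (ft-lb)
    s2_lo = max(s2_lo, -100.0)   # extrapolation limit on the lower hip bound (ft-lb)
    return (s1_lo, s1_hi), (s2_lo, s2_hi)
```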


3.3. Torsional Springs in the Shoulder and Hip Joints

Our dynamical model and the control limit functions of the

shoulder and the hip do not account for the stiffness of the shoulder

and the hip joints at the extremities of shoulder and leg movements.

It has been observed that the shoulder joints produce a resistance to

raising the arm beyond an angle of about θ = 30°. The hip joints resist movement for ψ > 120° or for ψ < -35°. The effects of these "stops" are

important and must be included in the model, since the film data showed

that these limits were reached. There are no data available for the

stiffness of these joint stops. It was observed that, although the

joints were not rigid, they were quite stiff. It was therefore decided

to use stiff torsional spring models at the model's shoulder and the

hip joints. These would be active when the stop angles were exceeded.

For the shoulder the spring is active for θ ≤ 0.5 radian. For the hip joint the spring is active for ψ ≤ -0.6 radian and ψ ≥ 2.1 radians. The springs have equal stiffnesses. One generates a 100 ft-lb torque at the shoulder for a deflection of 0.1 radian. This corresponds to a joint-stop torque of the order of the maximum voluntary torque available at the shoulder for that deflection. This gives a spring constant of Ks = 1000 ft-lb/rad. The spring torques at the shoulders










would therefore be equal to -Ks(θ - 0.5) for θ ≤ 0.5 radian, and those at the hip joints would be -Ks(ψ - 2.1) for ψ ≥ 2.1 radians and -Ks(ψ + 0.6) for ψ ≤ -0.6 radian. These torques at the shoulder and the hip joints were added to the voluntary control torques u1 and u2 when the stops

were activated.
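A short sketch of the joint-stop model just described (an added illustration; Ks, the stop angles, and the torque expressions are exactly those given above):

```python
K_S = 1000.0   # ft-lb/rad, stiffness of both joint-stop springs

def joint_stop_torques(theta, psi):
    """Torsional-spring torques at the shoulder and hip stops (Section 3.3)."""
    t_shoulder = -K_S * (theta - 0.5) if theta <= 0.5 else 0.0  # active for theta <= 0.5 rad

    if psi >= 2.1:                      # forward hip stop
        t_hip = -K_S * (psi - 2.1)
    elif psi <= -0.6:                   # backward hip stop
        t_hip = -K_S * (psi + 0.6)
    else:
        t_hip = 0.0
    return t_shoulder, t_hip            # added to u1 and u2 respectively
```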


3.4. Boundary Conditions

The boundary conditions for the kip-up maneuver were chosen

from the experimental data of Section 2.5. The initial values selected

correspond to motion which has already begun. This is beyond the

initial unsymmetrical motion which occurs on beginning the first swing.

This motion is difficult to model and is not important in this basic

research. The final values of the state variables represent the model

atop the horizontal bar still moving upward and just before body con-

tact with the bar. (Once the torso contacts the bar, the model is no

longer valid.) The actual motion in the experiment terminated shortly

after this point when the gymnast used the impact of the horizontal bar

with his body to stop himself.

The initial and final values of the state variables for the

optimization problem are listed in Table 1.













TABLE 1

BOUNDARY CONDITIONS FOR A MINIMUM TIME KIP-UP MOTION



State Variable        Initial Value        Final Value

     X1                   0.340               -2.84

     X2                  -2.30                -7.05

     X3                   0.305                2.88

     X4                  -0.660                0.163

     X5                  -0.087                0.436

     X6                  -1.20                 0.108


3.5. The Necessary Conditions for Time Optimal Control

In this section, we look into the necessary conditions for the minimum-time problem formulated in the preceding sections. The necessary

conditions for optimality of motion for the case when the constraints

on the control are not a function of the states are given in Reference

17. For the case where control constraints depend on the states, the nec-

essary condition requires a modification in the adjoint equations.

These are obtained through a calculus of variations approach [18].

This approach is used in the following developments.

Writing the state equations of our system as

    Ẋ = f(X,u) = A(X) + B(X)u                                     (3.5.1)










we can construct the cost function as

    J = ∫₀^tf 1 dt                                                (3.5.2)

where tf is free. The Hamiltonian is then given by

    H(X,u,λ) = 1 + λᵀf                                            (3.5.3a)

             = 1 + λᵀA(X) + λᵀB(X)u                               (3.5.3b)

             = 1 + λᵀA(X) + λᵀB1(X)u1 + λᵀB2(X)u2                 (3.5.3c)

where λ(t) is the time-dependent six-dimensional column vector of adjoint variables. A, B, B1, B2, and f are the quantities defined by Equations (2.2.15) - (2.2.22) in Section 2.2.

The minimum-time control policy uᵒ(t) will be given by the one that minimizes the Hamiltonian (3.5.3), provided no singular arcs are present. We note that in this case the Hamiltonian is a linear function of the control u, and therefore the minimum with respect to u occurs only on the upper and the lower bounds of u if there is no singular solution. Thus we have, recalling the definitions of Si^j in Section 3.2:

    (1) If λᵀB1(X) > 0,  u1 = the minimum allowable value of u1 = S1^1(X)    (3.5.4a)

    (2) If λᵀB1(X) < 0,  u1 = the maximum allowable value of u1 = S1^2(X)    (3.5.4b)

    (3) If λᵀB2(X) > 0,  u2 = the minimum allowable value of u2 = S2^1(X)    (3.5.5a)

    (4) If λᵀB2(X) < 0,  u2 = the maximum allowable value of u2 = S2^2(X)    (3.5.5b)










    (5) If λᵀB1(X) = 0 or λᵀB2(X) = 0,  u1 or u2 = possible singular control.  (3.5.6)

u1 and u2 will then be determined by investigating whether or not there is a singular solution with respect to these variables.
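Conditions (3.5.4a) - (3.5.6) amount to a switching law driven by the signs of λᵀB1 and λᵀB2. The following sketch (an added illustration, not the dissertation's code) applies that law pointwise; the singular case is only flagged, since its treatment occupies the remainder of this section.

```python
import numpy as np

def time_optimal_control(lam, B1, B2, bounds, tol=1e-9):
    """Bang-bang control law of Eqs. (3.5.4a)-(3.5.6).

    lam     : adjoint vector lambda(t)
    B1, B2  : control-influence vectors B1(X), B2(X) of Section 2.2
    bounds  : ((S1_lo, S1_hi), (S2_lo, S2_hi)) from the limit functions
    Returns (u1, u2, singular_flags).
    """
    u, singular = [], []
    for Bi, (lo, hi) in zip((B1, B2), bounds):
        switch = float(np.dot(lam, Bi))
        if switch > tol:         # minimize H: take the smallest admissible torque
            u.append(lo); singular.append(False)
        elif switch < -tol:      # take the largest admissible torque
            u.append(hi); singular.append(False)
        else:                    # lambda^T B_i ~ 0: possible singular arc
            u.append(0.0); singular.append(True)   # zero is only a placeholder here
    return u[0], u[1], singular
```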

The adjoint equations will be different for the portions of the trajectories of uᵒ corresponding to constrained and unconstrained arcs. The adjoint equations are, in general, given by

    λ̇ᵀ = -H,X                                                    (3.5.7)

This yields:

(a) When neither u1 nor u2 lies on a constraint,

    λ̇ᵀ = -H,X = -λᵀA,X - λᵀB1,X u1 - λᵀB2,X u2                    (3.5.8)

(b) When one or both of u1 and u2, denoted by ui (i = 1 or 2), lie on a constraint denoted by Si^j (j = 1 or 2), the right side of Equation (3.5.8) for the adjoint variables has the additional term

    -λᵀBi Si^j,X

and the equation can be written as

    λ̇ᵀ = -λᵀA,X - λᵀB1,X u1 - λᵀB2,X u2 - Σᵢ₌₁² δi λᵀBi Si^j,X     (3.5.9)

where

    δi = 0 if λᵀBi = 0

    δi = 1 if λᵀBi ≠ 0.

The boundary conditions on the state and the adjoint variables are

    X(0) = Xo (given),       λ(0) free
                                                                  (3.5.10)
    X(tf) = Xf (given),      λ(tf) free

and

    H(tf) = (1 + λᵀf)|t=tf = 0.                                   (3.5.11)


The state and adjoint equations together with the control laws
and the boundary conditions written above form a two-point boundary
value problem (TPBVP) in the state and adjoint variables. If these
equations can be solved, the optimal control u° will be immediately
obtained from Equations (3.5.4a)-(3.5.6).


Investigation of Singular Solutions

It has been noted that if \lambda^T B_1 = 0, u_1° cannot be determined
from the requirement that the Hamiltonian is to be minimized with
respect to u_1. The same is true for u_2° when \lambda^T B_2 = 0.

Since the treatment for u_2 is the same as for u_1, we shall
investigate a singular control for only u_1.

If the quantity \lambda^T B_1 = 0 only at a single instant of time,
then the situation is not of much concern because the duration of the
interval is not finite, and we can simply choose u_1 = u_1(t^-) or u_1(t^+)
or 0, where u_1(t^-) is the control at the instant just preceding t and
u_1(t^+) is the control at the instant just after t. The situation needs
special attention when \lambda^T B_1 = 0 for a finite interval of time.

If t_1 < t \le t_2 is an interval for which u_1 is singular, it is
clear that, for our system,

    \lambda^T B_1 = 0     for   t_1 < t \le t_2                       (3.5.12)

and therefore,

    \frac{d}{dt}(\lambda^T B_1) = 0     for   t_1 < t \le t_2         (3.5.13)

or

    \dot{\lambda}^T B_1 + \lambda^T \dot{B}_1 = 0     for   t_1 < t \le t_2   (3.5.14)

or, for the interval t_1 \le t \le t_2, the following results must hold:

Case 1

Only u_1 is singular; u_2 is nonsingular. Since u_2 is not
singular, u_2 is on a constraint boundary and is given by

    u_2 = S_2^j,     j = 1 for the lower constraint,
                     j = 2 for the upper constraint.

The adjoint equations are given by Equation (3.5.9),

    \dot{\lambda}^T = -\lambda^T A,_X - \lambda^T B_1,_X u_1 - \lambda^T B_2,_X S_2^j
                      - \lambda^T B_2 S_2^j,_X                        (3.5.15)

and

    \dot{B}_1 = B_1,_X \dot{X}                                        (3.5.16)

From Equations (3.5.14), (3.5.15), and (3.5.16), we obtain

    [-\lambda^T A,_X - \lambda^T B_2,_X S_2^j - \lambda^T B_2 S_2^j,_X] B_1
      + \lambda^T B_1,_X (A + B_2 S_2^j) = 0                          (3.5.17)

It is to be observed that the necessary condition (3.5.17) is not
explicit in u_1.

Case 2

Both u_1 and u_2 are singular. The value of u_2 is no longer S_2^j,
and the term \lambda^T B_2 S_2^j,_X in Equation (3.5.15) drops out in this
case. Accordingly, one obtains

    -\lambda^T [A,_X B_1 - B_1,_X A] - \lambda^T [B_2,_X B_1 - B_1,_X B_2] u_2 = 0   (3.5.18)

Proceeding from the assumption that u_2 is singular, one would also get,
for this case, when both u_1 and u_2 are singular,

    -\lambda^T [A,_X B_2 - B_2,_X A] - \lambda^T [B_1,_X B_2 - B_2,_X B_1] u_1 = 0   (3.5.19)

From Equations (3.5.17), (3.5.18), and (3.5.19) we can see that
only if both \lambda^T B_1 and \lambda^T B_2 are zero simultaneously is it
possible to find a singular solution by suitable choices of u_1 and u_2 from
the condition (3.5.13). If only one of \lambda^T B_1 and \lambda^T B_2, say
\lambda^T B_1, is zero, the requirement \frac{d}{dt}(\lambda^T B_1) = 0 does
not yield an equation explicit in u_1, as observed in Equation (3.5.17).
It is then required that \frac{d^2}{dt^2}(\lambda^T B_1) = 0, which will be
explicit in u_1, during the interval t_1 \le t \le t_2, together with the
requirement that the relation (3.5.17) is satisfied at t = t_1. These two
conditions will ensure that \lambda^T B_1 = 0 in the interval t_1 < t \le t_2.

It is to be noted that the singular control computed by the above
procedure has not been proved to be the minimizing control. Additional
necessary conditions analogous to the convexity conditions for singular
controls have been obtained by Tait [19] and Kelley, Kopp and Moyer [20]
for scalar control and by Robbins [21] and Goh [22] for vector control.
For the general case of vector control these conditions, summarized by
Jacobson [23], may be stated as follows: on singular subarcs,

    \frac{\partial}{\partial u}\left[\frac{d^q}{dt^q} H,_u\right] = 0     if q is odd     (3.5.20)

and

    (-1)^p \frac{\partial}{\partial u}\left[\frac{d^{2p}}{dt^{2p}} H,_u\right] \ge 0      (3.5.21)

In these equations, \frac{d^{2p}}{dt^{2p}} H,_u(X,\lambda) is the lowest order time
derivative of H,_u in which the control u appears explicitly, and q < 2p.

For a scalar control, Equation (3.5.20) is satisfied identically.

Equations (3.5.20) and (3.5.21) also do not constitute suf-

ficiency conditions for minimality. A complete set of sufficiency

conditions for singular arcs has not yet been established in the

literature of optimal control theory for a general nonlinear system.

We can see that there are quite severe restrictions on the

existence of singular arcs in the human motion problem. In the

numerical methods used in the present work to determine the optimal

solution, only in the method of quasilinearization is it necessary to

express the control (its optimal value) in terms of the state and

adjoint variables, while in the gradient methods where successive

improvements are made in the control variables, this is not so. In

the attempts with the quasilinearization method, singular solutions

were not considered in the construction of the two-point boundary value

problem in the state and adjoint variables. It was decided that if

a solution to the TPBVP was obtained by quasilinearization, singular

arcs would be looked for later. The gradient methods exhibit singular

arcs automatically if there are any. The additional necessary condi-

tions for singular arcs should be checked when off-constraint arcs are

exhibited by the gradient method.












3.6. The Solution Methods

The optimal control problem formulated in the preceding section

cannot be solved in closed form. Numerical methods must therefore be

used to find its solution. In the optimal control theory literature

several numerical methods have been proposed for solving the differ-

ential equations and the optimality conditions that arise out of optimal

control problems such as the present one. None of these methods guaran-

tees that a solution will be obtained readily, while some of the methods

do not guarantee that a solution may be obtained at all. The methods

are all iterative, necessitating the use of high-speed computers for

all nontrivial problems. A nominal guessed trajectory is improved

iteratively until the improved solution satisfactorily meets all the

necessary conditions.

Depending on whether the method requires finding the first or

both first and second derivatives of the system equations with respect

to the state and control variables, these methods are called First-Order

or Second-Order methods, respectively. This is so because they, in

effect, make first-order or second-order approximations of the system

equations with respect to the state and control variables. The first-

order methods, in general, have the property that they can start from

a poor guess and make fast improvements in the beginning. They need

fewer computations in each iteration. But their performance is not

good near the optimal solution where the convergence rate becomes very

poor. The second-order methods, on the other hand, need a good ini-

tial guess to be able to start but have excellent convergence prop-

erties near the optimal solution. Because the second-order methods











need computation of the second derivative of the system equations,

they need more computing time per iteration, which may be excessive

for some problems.

Apart from the first- and second-order methods mentioned above,

there is another class of methods which tries to combine the advantages

of both of these methods while eliminating the disadvantages of both.

The Conjugate Gradient Method, Parallel Tangent Method, and the Davidon-

Fletcher-Powell Method fall into this class. These methods work very

much like the first-order methods except that, in the first-order
expansion, the coefficient of the first-order term, or gradient
term, is modified by some transformations. These transformations are

generated from the modified gradient term of the previous iteration and

the gradient term of the current iteration. This has the effect of

using the information that is obtained from a second derivative.
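As one concrete instance of such a transformation, the Fletcher-Reeves
conjugate gradient recursion forms the new search direction from the current
gradient and the previous direction. The sketch below is illustrative only
and is not one of the methods applied in this work:

    import numpy as np

    def conjugate_gradient_direction(grad, grad_prev=None, dir_prev=None):
        """Fletcher-Reeves update: the new search direction is the negative
        gradient plus beta times the previous direction, where beta is built
        from the current and previous gradients."""
        if grad_prev is None or dir_prev is None:
            return -grad                    # first iteration: plain steepest descent
        beta = (grad @ grad) / (grad_prev @ grad_prev)
        return -grad + beta * dir_prev

The modified direction thus carries curvature information accumulated from
earlier iterations without ever forming second derivatives explicitly.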

It is not known which of the several methods used for solving

optimal control problems is good for a given problem and one may have

to try more than one method in order to obtain the solution. In the

published literature, most of the illustrations of these methods are

simple. In these simple problems control or state variable histories

do not have wide oscillations or the system equations themselves are

not complicated. This makes it truly difficult for someone without

previous experience to decide upon the merits of these methods.

There is no preference list, and it seems certain that there cannot

be one whereby a decision can be made as to which method should be

tried first so that a solution of a given problem will be obtained

most efficiently. In this respect, deciding upon a computing












method for a given problem is still an art and depends largely on the

previous experience of the individual trying to solve the problem.

In the attempts to solve the minimum time problem, the method

of quasilinearization was taken up first. This choice was based on

several factors. This is the only method where the two-point boundary

value problem obtained from the necessary conditions of optimality is

solved directly, and this feature was found very attractive. As a start-

ing guess, this method requires the time histories of the state and

adjoint variables. Time histories of the state variables were available

from the experiments. (It was decided that if the method was success-

ful for this guess, an arbitrary and less accurate initial guess would

be tried later.) When it converges, the method has a quadratic conver-

gence rate. Also, in spite of its being a well-known method for solv-

ing nonlinear two-point boundary value problems since it was first

introduced by Bellman and Kalaba [24], its applications in solving

optimal control problems have been very few. There was thus an added

incentive for using this method--to determine its usefulness in solv-

ing fairly complicated optimal control problems.

Sylvester and Meyer [25] proposed, with demonstrations, an

efficient scheme for solving a nonlinear TPBVP using the method of

quasilinearization. This scheme was available in the IBM SHARE program

ABS QUASI and was used by Boykin and Sierakowski [26], who reported

excellent convergence properties of the scheme for some structural

optimization problems.

With this record of success, the program QUASI was taken up for

our problem. But with our problem several difficulties were encountered












from the very beginning. First, the bang-bang control law obtained

from the necessary conditions had to be replaced by a suitably steep

saturation type control law. Second, a slight modification in computa-

tion scheme was necessary when it was found that the method was unable

to solve a simple example problem. The example problem could be solved

with these modifications. But, in spite of all these changes and sub-

sequently, many attempts to generate a guess of the adjoint variables,

the method could not be made to work for the human motion problem.

Reasons for the difficulties encountered are discussed in detail in

Section 3.7.

During the attempts with quasilinearization, it was found that

computations of the second derivatives of the system equations were

taking an exorbitant amount of time and this was the deciding factor

for the next choice of a computing method. Also, the appearance of the

control function linearly in the Hamiltonian put restrictions on the use

of most of the other second-order methods.

The next attempts were based on the first-order steepest descent

method proposed by Bryson and Denham [27,28]. The most attractive

feature of this method is that the various steps involved in it lend

themselves to clear physical interpretation. This method directly reduces

the cost function in a systematic way and one obtains good insight into

the basic steps in the iterative computations and can make adjustments

to improve convergence and/or stability with relative ease. These

features of the method of steepest descent may more than offset the

advantages of other methods for some complicated problems. In the

attempts with this method, three different formulations of the minimum












time problem were tried. In the first formulation the computations were

not pursued beyond a certain point due to computational difficulties.

The solution was obtained by the second formulation and verified by the

third formulation. These attempts are discussed in Section 3.8.


3.7. A Quasilinearization Scheme for Solving
the Minimum-Time Problem

In Section 3.5 the adjoint equations and the optimal control

laws (Equations (3.5.9), (3.5.4a)-(3.5.6)) have been derived for the

minimum time kip-up problem. The system equations and the boundary

conditions on the state and the adjoint variables are given by Equa-

tions (3.5.1) and (3.5.10), respectively. From these equations we can

readily see that if the control variables ul and u2 appearing in the

system and adjoint equations are replaced by their optimal expressions

in terms of the state and the adjoint variables, one obtains a non-

linear TPBVP in the state and adjoint variables. If these equations

are solved, the optimal state and adjoint variable trajectories will

be obtained and the optimal controls can be constructed by using the

state and adjoint variables and the optimal control laws.

In the TPBVP in the state and the adjoint variables, the final

time is not a given constant and is to be determined from the implicit

relation (3.5.11). This makes the problem one with a variable end

point.

The method of quasilinearization is formulated primarily for

a fixed-end-point TPBVP. In problems with variable end points, the

adjustment of the final time is usually done by a separate scheme, not

integral with the quasilinearization scheme. Long [29] proposed a











scheme for converting a variable end point problem into a fixed-end-

point problem with the adjustment of the final time built into the

quasilinearization process. For the present system, however, this

scheme was not practicable because the boundary condition (3.5.11)

becomes too complicated to handle in this formulation.

It was decided that with a separate algorithm for adjusting

the final time, described later in this section, the nonlinear TPBVP

with free final time would be converted to a sequence of nonlinear

TPBVP's with fixed final times. Each of these fixed final time problems

would then be solved by the modified quasilinearization algorithm until

the correct final time was obtained. The derivation of the modified

quasilinearization algorithm is described below.


3.7.1 Derivation of the Modified
Quasilinearization Algorithm

The fixed final time nonlinear TPBVP to be solved falls in the

general class of problems given by
    \frac{dy}{dt} = g(y,t)                                            (3.7.1)

with the boundary condition

    B_l\, y(0) + B_r\, y(t_f) + c = 0,      t_f = given               (3.7.2)

Here y, g, and c are of dimension n, and B_l and B_r are matrices of
dimension (n x n). It is assumed that the TPBVP has been defined for the
interval 0 \le t \le t_f for some given t_f > 0.

In the state and adjoint equations, if the expressions for the optimal
control in terms of the state and the adjoint variables are used for the
control variables, one obtains

    \dot{X} = f(X, u°(X,\lambda)) = F(X,\lambda)   (say)              (3.7.3a)

    \dot{\lambda} = -H,_X^T(X, u°(X,\lambda), \lambda) = G(X,\lambda)   (say)   (3.7.3b)

In the formulation of the TPBVP given by Equations (3.7.1) and (3.7.2),
it may be seen that for the kip-up system n = 12 and

    y = [X; \lambda],    g = [F; G],
    B_l = [I  0; 0  0],    B_r = [0  0; I  0],    c = [-X_o; -X_f]

The 0 and I appearing in the matrices B_l and B_r represent 6 x 6 null
and unit matrices, respectively.

Let z(t) be an initial guess vector for y(t) which satisfies
the boundary conditions (3.7.2). If g(y,t) is approximated by its
Taylor series expansion about g(z,t), keeping only the first-order term,
one obtains

    g(y,t) \approx g(z,t) + W(z,t)(y - z)

where

    W = \frac{\partial g}{\partial y}\bigg|_{y=z}                     (3.7.4)

that is, W_{ij} is the partial derivative of the i-th element of g with
respect to the j-th element of y, evaluated at y = z.










With the above approximation of g(y,t), Equation (3.7.1) becomes

    \frac{dy}{dt} = g(z,t) + W(z,t)(y - z)

or,

    \frac{de}{dt} = -\frac{dz}{dt} + g(z,t) + W(z,t)\, e

where

    e = y(t) - z(t) = error in the guess z(t).

Rearranging the above equation, one obtains

    \frac{de}{dt} - W(z,t)\, e = -\frac{dz}{dt} + g(z,t).             (3.7.5)

Since z(t) is chosen to satisfy the boundary conditions,

    B_l\, z(0) + B_r\, z(t_f) + c = 0.

Subtracting this equation from Equation (3.7.2), one obtains the
boundary conditions on the error e(t) as

    B_l\, e(0) + B_r\, e(t_f) = 0.                                    (3.7.6)

Equations (3.7.5) and (3.7.6) form a linear TPBVP in e(t) which, when
solved, will give the values of the error between the guessed solution
z(t) and the actual solution y(t), based on the linearized expressions
of the right side of Equations (3.7.1) about the guessed solution z(t).
Because the linearized equations are used instead of the full nonlinear
equations, the values of e(t) obtained by solving Equations (3.7.5) and
(3.7.6) will not be the actual error between the guess z(t) and the
solution. However, a new guess of y(t) will be obtained from e(t) by

    z'(t) = z(t) + \eta\, e(t),      0 < \eta \le 1.                  (3.7.7)










The algorithm of Sylvester and Meyer uses \eta = 1 at all times,
which is the usual quasilinearization algorithm. It was found,
while solving a simple example problem, that without the incorporation
of a multiplier \eta in the expression (3.7.7), i.e., using

    z'(t) = z(t) + e(t),

the method was unstable. The convergence property of the scheme with
the incorporation of the multiplier \eta can be understood for small values
of \eta by comparison with the step-size adjustment procedure of the usual
steepest descent algorithms. The mathematical proof of the convergence
property follows the proof of Miele and Iyer [30] and is now given.

The integral squared norm of the error in the guessed solution
z(t) can be expressed by the integral

    J = \int_0^{t_f} [\dot{z} - g(z,t)]^T [\dot{z} - g(z,t)]\, dt.

Similarly, the error in the solution z'(t) = z(t) + \eta\, e(t) is given by

    J' = \int_0^{t_f} [\dot{z}' - g(z',t)]^T [\dot{z}' - g(z',t)]\, dt.

If \eta is sufficiently small, one can write

    g(z',t) = g(z,t) + g,_y(z,t)\, \eta\, e

where g,_y = W(z,t) (from Equation (3.7.4)).

Also, for all values of \eta,

    \dot{z}' = \dot{z} + \eta\, \dot{e}.

From these results, one obtains, for small values of \eta,

    J' - J = 2\eta \int_0^{t_f} [\dot{z} - g(z,t)]^T [\dot{e} - W e]\, dt.

Since e(t) satisfies the differential equation (3.7.5), this finally
yields

    J' - J = -2\eta \int_0^{t_f} \|\dot{z} - g(z,t)\|^2\, dt

           = a negative quantity.

Thus, for sufficiently small values of \eta, the reduction in the cost is
assured.

In the quasilinearization algorithm z'(t) takes the role of z(t)

as the new guess of y(t) and the process is continued until the error in

satisfying the differential equations is reduced to an acceptable value.

The linear TPBVP of the error, Equations (3.7.5) and (3.7.6), is
solved as follows.

The time interval t = 0 to t = t_f is divided into m small inter-
vals. This results in m+1 values of t at which the solution will be
computed. Equation (3.7.5) can be written in a central finite difference
scheme as

    \frac{e_{i+1} - e_i}{h_i}
      - W\!\left(\frac{z_{i+1}+z_i}{2}, \frac{t_{i+1}+t_i}{2}\right)\frac{e_{i+1}+e_i}{2}
      = -\frac{z_{i+1}-z_i}{h_i}
        + g\!\left(\frac{z_{i+1}+z_i}{2}, \frac{t_{i+1}+t_i}{2}\right)

where h_i = t_{i+1} - t_i. The subscript i denotes values at the i-th
station, i = 1,2,...,m+1. Rearranging and simplifying the above expres-
sion one obtains

    (I + \tfrac{1}{2} h_i W_i)\, e_i + (-I + \tfrac{1}{2} h_i W_i)\, e_{i+1} = r_i,
        i = 1,2,...,m                                                 (3.7.8)

where

    r_i = z_{i+1} - z_i - h_i\, g\!\left(\frac{z_{i+1}+z_i}{2}, \frac{t_{i+1}+t_i}{2}\right)   (3.7.9)

    W_i = W\!\left(\frac{z_{i+1}+z_i}{2}, \frac{t_{i+1}+t_i}{2}\right)                         (3.7.10)

and

    I = unit matrix of dimension n x n.

The boundary conditions, Equation (3.7.6), reduce to

    B_l\, e_1 + B_r\, e_{m+1} = 0                                     (3.7.11)

Equation (3.7.8) can be cast into the following convenient recursive
expression

    e_i + D_i\, e_{i+1} = s_i                                         (3.7.12)

where

    D_i = (I + \tfrac{1}{2} h_i W_i)^{-1} (-I + \tfrac{1}{2} h_i W_i)
                                                                      (3.7.13)
    s_i = (I + \tfrac{1}{2} h_i W_i)^{-1}\, r_i

By repeated substitution, Equation (3.7.12) yields the following rela-
tionship between e_1 and e_{m+1}:

    e_1 = T + (-1)^m \left(\prod_{i=1}^{m} D_i\right) e_{m+1}         (3.7.14)

where

    T = s_1 + \sum_{i=2}^{m} (-1)^{i-1} \left(\prod_{j=1}^{i-1} D_j\right) s_i   (3.7.15)

or, multiplying by B_l on both sides of Equation (3.7.14),

    B_l\, e_1 = B_l\, T + (-1)^m B_l \left(\prod_{i=1}^{m} D_i\right) e_{m+1}    (3.7.16)

Equations (3.7.16) and (3.7.11) can be solved simultaneously for e_1
and e_{m+1} to give










    e_{m+1} = -C^{-1} B_l\, T                                         (3.7.17)

where

    C = (-1)^m B_l \left(\prod_{i=1}^{m} D_i\right) + B_r             (3.7.18)

With e_{m+1} determined, e_m, e_{m-1}, ..., e_1 are determined in succession
by using the recursive relation (3.7.12). Then, the new iterate is given by

    z'(t) = z(t) + \eta\, e(t).


The stopping condition of the algorithm is given by the fact that
the e_i should be small. When they are small, it may be seen from Equation
(3.7.8) that the quantities r_i will also be small. From Equation (3.7.9)
it is also seen that small r_i correspond to satisfying the central differ-
ence expression of the differential equations. That is, the finite dif-
ference equation error must be small. This does not mean that the dif-
ferential equation error is small unless the intervals for the differences
are sufficiently small.
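The sweep defined by Equations (3.7.8) through (3.7.18) can be summarized in
the following sketch, written in Python rather than the FORTRAN of the actual
program; g and W are assumed to be supplied as callables returning the right
side of (3.7.1) and the Jacobian (3.7.4).

    import numpy as np

    def quasilin_sweep(z, t, g, W, Bl, Br):
        """One central-difference sweep, Eqs. (3.7.8)-(3.7.18): returns the
        correction e at the m+1 stations and the largest residual E2MAX."""
        m, n = len(t) - 1, z.shape[1]
        I = np.eye(n)
        D, s, E2MAX = [], [], 0.0
        for i in range(m):
            h  = t[i+1] - t[i]
            zm = 0.5 * (z[i] + z[i+1])             # midpoint values, Eq. (3.7.10)
            tm = 0.5 * (t[i] + t[i+1])
            Wi = W(zm, tm)
            ri = z[i+1] - z[i] - h * g(zm, tm)     # residual r_i, Eq. (3.7.9)
            E2MAX = max(E2MAX, np.max(np.abs(ri)))
            Ainv = np.linalg.inv(I + 0.5 * h * Wi)
            D.append(Ainv @ (-I + 0.5 * h * Wi))   # D_i, Eq. (3.7.13)
            s.append(Ainv @ ri)                    # s_i, Eq. (3.7.13)
        T, P = s[0].copy(), np.eye(n)              # T of Eq. (3.7.15)
        for i in range(1, m):
            P = P @ D[i-1]                         # running product D_1 ... D_{i-1}
            T = T + (-1)**i * P @ s[i]
        P = P @ D[m-1]                             # full product D_1 ... D_m
        C = (-1)**m * Bl @ P + Br                  # Eq. (3.7.18)
        e = np.zeros((m + 1, n))
        e[m] = -np.linalg.solve(C, Bl @ T)         # e_{m+1}, Eq. (3.7.17)
        for i in range(m - 1, -1, -1):
            e[i] = s[i] - D[i] @ e[i+1]            # back-substitution, Eq. (3.7.12)
        return e, E2MAX

The damped update z' = z + \eta e of Equation (3.7.7) is then applied outside
this routine and the sweep repeated until E2MAX is acceptably small.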

The IBM SHARE program ABS QUASI is a program of the procedure
outlined above without the provision of the multiplier \eta in Equation
(3.7.7), and therefore is for \eta = 1. The program was modified to intro-
duce and to adjust \eta to get the desired convergence. The algorithm may
be described by the following steps:

 1. Set up the matrices B_l and B_r and guess a nominal trajectory
    z(t) that satisfies the boundary conditions (3.7.2).
    Set ITER = 0 (ITER for Iteration).

 2. Do the following for i = 1,2,...,m:

    a. Find r_i and W_i as defined in Equations (3.7.9) and
       (3.7.10). Find the largest element of r_i, searching
       among the elements of each r_i, for i = 1,2,...,m.
       Call it E2MAX. If E2MAX < the specified maximum allowable
       error, print out the z_i and stop the computations.

    b. Using Equation (3.7.13), find D_i and s_i.

 3. Calculate T and C according to Equations (3.7.15) and (3.7.18).
    Calculate the integral norm of the error

        J2 = \sum_{i=1}^{m} \|r_i\|^2

    (here the norm is defined by the sum of squares of the elements
    of the vector r_i). Set J1 = J2.

 4. Find e_{m+1} using Equation (3.7.17).

 5. Decide upon a value of \eta. Discussions on the choice of \eta will
    be presented in a later section.

 6. Generate and store e_m, e_{m-1}, e_{m-2}, ..., e_1 using Equation (3.7.12).
    Generate the new guess z_i, i = 1,2,...,m+1, by doing

        z_i = z_i + \eta\, e_i.

    Set ITER = ITER + 1.

 7. Find J2 and r_i, i = 1,2,...,m, and find E2MAX as in step 2.

 8. If J2 > J1 (unstable), go to step 11.

 9. Stop if E2MAX < a prescribed value.

10. If J2 < J1, go to step 12 to continue to the next iteration.

11. If this step has been performed more than a specified number
    of times in this situation, go to step 13. If not, store the
    value of the current \eta and J2. Recover the values of
    z_i of the previous iteration by doing

        z_i = z_i - \eta\, e_i.

    Reduce the value of \eta. Generate the new z_i by doing

        z_i = z_i + \eta\, e_i.

    Go to step 7.

12. If this step has been performed more than a specified number
    of times, or if step 11 has been performed at least once in
    this iteration, go to step 13. If not, store the value of J2
    and \eta. Recover the new z_i as in step 11, this time increas-
    ing \eta to increase the speed of convergence. Go to step 7.

13. Find the minimum value of J2 that has been obtained in
    steps 11 and 12. If this value is greater than J1, stop com-
    putations and look for the cause of the instability. If this
    value is less than J1, recall the \eta corresponding to this J2 and
    regenerate the z_i for this case. Go to step 2.


In this program the value of \eta was selected from the consider-
ation that at the station m+1 the maximum value of

    ABS[\eta\, e_{m+1}^{(j)} / z_{m+1}^{(j)}],      j = 7,8,...,12

should be less than a prescribed value (the superscript (j) represents the
j-th element of the vector). This procedure limits the changes in the
existing trajectory by limiting the magnitude of the maximum fractional
change in the terminal values of the variables not specified at the
final time.










Whenever an iteration was found unstable, \eta was reduced by
half. When there was an improvement, a linear extrapolation formula
was used to increase the value of \eta so that the norm of the error J2
would decrease to a desired value. In such an attempt, however, \eta
was not allowed to increase beyond a certain multiple of its existing
value.
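In outline, this adjustment logic is the following. The fragment is
illustrative only: the halving factor is stated in the text, while the
particular linear extrapolation formula and the cap on the growth of \eta
shown here are assumptions.

    def adjust_eta(eta, J1, J2, J2_target, max_growth=4.0):
        """Step-size logic of steps 11 and 12: halve eta when the error norm
        grows, otherwise extrapolate it upward, but never beyond a fixed
        multiple (an assumed value) of its current value."""
        if J2 > J1:                          # unstable iteration: retreat
            return 0.5 * eta
        # improvement: assume the error norm falls roughly linearly with eta
        # (J = J1 at eta = 0), and extrapolate toward the desired norm
        eta_new = eta * (J1 - J2_target) / max(J1 - J2, 1e-12)
        return min(eta_new, max_growth * eta)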


3.7.2. Approximations of the Optimal Control
for the Kip-Up Problem

The method of quasilinearization disregards the question of
singular solutions in the present investigation. It was found in
Section 3.5 that there are many requirements for the existence of
a singular control arc, and the necessary conditions for the existence
of singular controls are quite complicated. It was decided that, before
going into those cases, extremals without singular arcs would be looked
for first, and, if the computational method was successful, singular sub-
arcs would be looked for later.

If singular solutions are not considered, the optimal
controls u_1° and u_2° become bang-bang and are given by Equations (3.5.4a)
through (3.5.5b).

It is convenient to rewrite these equations at this point
in the following way:

    If -\lambda^T B_1(X)   < 0,   u_1° = S_1^1(X)
                           = 0,   u_1° = 0                            (3.7.19)
                           > 0,   u_1° = S_1^2(X)

    If -\lambda^T B_2(X)   < 0,   u_2° = S_2^1(X)
                           = 0,   u_2° = 0                            (3.7.20)
                           > 0,   u_2° = S_2^2(X)

Let these control laws be expressed by the following expressions:

    u_1° = S_1 \, sgn(-\lambda^T B_1)                                 (3.7.21)

and

    u_2° = S_2 \, sgn(-\lambda^T B_2)                                 (3.7.22)

where

    sgn(x) = sign of x                                                (3.7.23)

and

    S_i = S_i^1   when (-\lambda^T B_i) < 0
                                                     i = 1 or 2       (3.7.24)
    S_i = S_i^2   when (-\lambda^T B_i) > 0

With these expressions for the control laws, the state equations (3.5.1)
become

    \dot{X} = A(X) + B_1 S_1 \, sgn(-\lambda^T B_1)
                   + B_2 S_2 \, sgn(-\lambda^T B_2)                   (3.7.25)

            = F(X,\lambda)      (cf. Equation (3.7.3a)).


In the present quasilinearization algorithm the derivatives of
F(X,\lambda) with respect to both X and \lambda are needed in constructing the
matrix W_i (Equation (3.7.4)). One can see that the derivative of F(X,\lambda)
with respect to \lambda will occur only as a generalized function with no
numerical value for computation in the limit, because \lambda appears only in
the argument of a sign function. Instead of proceeding to the limit, the
following approximation of the bang-bang control was made. For i = 1 and 2,

    u_i° \approx u_i^* =  S_i^1(X)                 if  AA_i(-\lambda^T B_i) < S_i^1(X)

                          AA_i(-\lambda^T B_i)     if  S_i^1(X) \le AA_i(-\lambda^T B_i) \le S_i^2(X)    (3.7.26)

                          S_i^2(X)                 if  AA_i(-\lambda^T B_i) > S_i^2(X)

where AA_1 and AA_2 are two positive constants. This function u_i^* of
AA_i(-\lambda^T B_i) is called the saturation function (sat) when S_i^1 and S_i^2 are
unity. The change made in the optimal control is shown graphically in
Figure 9 for u_i° and u_i^*, i = 1 or 2.

The controls u_i° and u_i^* have been plotted against the function
-\lambda^T B_i near a switch point in Figures 9a and 9b, respectively. It can
be seen that the approximation u_i^* differs from the optimal control u_i°
only in the portion KL. By increasing the value of AA_i, u_i^* can be made
closer to u_i°.
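In computational terms the approximation (3.7.26) is simply a clipping of
AA_i(-\lambda^T B_i) between the two constraint values, for example (a minimal
sketch; the constraint values are assumed already evaluated at the current X):

    import numpy as np

    def saturated_control(lam, B_i, S_lo, S_hi, AA=1.0):
        """Saturation approximation u_i* of Eq. (3.7.26): linear in
        -lambda^T B_i on the arc KL, clipped to the constraint values
        S_i^1 = S_lo and S_i^2 = S_hi elsewhere."""
        return np.clip(AA * (-(lam @ B_i)), S_lo, S_hi)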

With the above approximation of the bang-bang control by a
"saturation" control represented by Equation (3.7.26), one is
able to find the derivative of the control u_i^* (and hence, of F) with
respect to \lambda. The derivative will be zero when the control is on a
constraint S_i^j(X) and will be nonzero on the arc KL which appears near
a switching time. In order to represent the saturation control (3.7.26)
by its linearization, it is necessary that at least one of the W_i,
i = 1,2,...,m, which contains the information of the first derivatives,
be computed on the arc KL. Otherwise, this vital portion of the control
would go unaccounted for in the linearized equations.





















    [Figure 9. The optimal bang-bang control u_i° (Figure 9a) and its
    saturation approximation u_i^* (Figure 9b), plotted against -\lambda^T B_i
    near a switch point.]

This may happen



if the arc KL is too steep for a given selection of integration stations.
In such a case, when none of the W_i is computed on the switching por-
tions of the control variables like the arc KL, the TPBVP, Equations
(3.7.5) and (3.7.6), cannot be solved, as explained below.

First, from a physical reasoning, it can be seen that the linear-
ized state equations would get decoupled from the adjoint equations if
all of the W_i, i = 1,2,...,m, show zero derivatives of F with respect
to \lambda. This means that the first six equations of (3.7.5) would get
decoupled from the last six. But the boundary conditions (3.7.6) are such
that they are for the first six variables of e(t) only. Therefore, this
results in a situation where there are six first-order equations with
twelve boundary conditions, a situation which, in general, does not
have a solution.

From the point of view of computations with the present algorithm,
it can be shown that the matrix C in Equation (3.7.17) cannot be used to
solve for e_{m+1}.

With the system equations defined as (Equations (3.7.3a) and
(3.7.3b))

    \dot{X} = F(X,\lambda)

    \dot{\lambda} = G(X,\lambda)

one obtains

    [W]_i = [ F,_X      F,_\lambda ]
            [ G,_X      G,_\lambda ]_i

where the subscript i represents the i-th interval.

Let P_j, Q_j, E_j, DD_j, R_j and M_j, j = 1,2,3, and 4, represent
6 x 6 matrices. Let

    [W]_i = [ P_1   P_2 ]
            [ P_3   P_4 ]

where P_2 = [F,_\lambda]_i = [0] at all points other than those lying on
arcs such as KL in Figure 9b. Then

    I + \tfrac{1}{2} h_i W_i = [ I + \tfrac{1}{2} h_i P_1      \tfrac{1}{2} h_i P_2 ]   =   [ Q_1   Q_2 ]   (say).
                               [ \tfrac{1}{2} h_i P_3      I + \tfrac{1}{2} h_i P_4 ]       [ Q_3   Q_4 ]

It may be seen that Q_2 = [0] if P_2 = [0].

From Equation (3.7.13),

    D_i = (I + \tfrac{1}{2} h_i W_i)^{-1} (-I + \tfrac{1}{2} h_i W_i)
        = [ R_1   R_2 ] [ -I + \tfrac{1}{2} h_i P_1      \tfrac{1}{2} h_i P_2 ]   =   [ E_1   E_2 ]   (say),
          [ R_3   R_4 ] [ \tfrac{1}{2} h_i P_3      -I + \tfrac{1}{2} h_i P_4 ]       [ E_3   E_4 ]

where [R_1 R_2; R_3 R_4] denotes the inverse (I + \tfrac{1}{2} h_i W_i)^{-1}.
When Q_2 = [0], this inverse also has a null upper right block, R_2 = [0].

It may again be seen that

    E_2 = R_1 (\tfrac{1}{2} h_i P_2) + R_2 (-I + \tfrac{1}{2} h_i P_4) = [0]
        since R_2 = [0] and P_2 = [0].


Now, consider the product D_i D_{i+1} D_{i+2} of any three succes-
sive D_i in the product \prod_{i=1}^{m} D_i. Let this multiplication be
expressed as

    [ E_11   E_21 ] [ E_12   E_22 ] [ E_13   E_23 ]   =   [ DD_1   DD_2 ]
    [ E_31   E_41 ] [ E_32   E_42 ] [ E_33   E_43 ]       [ DD_3   DD_4 ]

The expression for DD_2 is

    DD_2 = E_11 E_12 E_23 + E_21 E_32 E_23 + E_11 E_22 E_43 + E_21 E_42 E_43.

If the upper right elements E_21 = E_22 = E_23 = [0], it may clearly be
seen that DD_2 = [0].

In the product

    \prod_{i=1}^{m} D_i = [ M_1   M_2 ]
                          [ M_3   M_4 ]

one would therefore obtain

    M_2 = [0]    if    P_2 = [0] for all i, i = 1,2,...,m.

With M_2 = [0], the expression for C (Equation (3.7.18)),

    C = (-1)^m B_l \prod_{i=1}^{m} D_i + B_r,

becomes, after using the values

    B_l = [ I   0 ],      B_r = [ 0   0 ],
          [ 0   0 ]             [ I   0 ]

    C = (-1)^m [ M_1   M_2 ] + [ 0   0 ]   =   [ (-1)^m M_1    0 ].
               [  0     0  ]   [ I   0 ]       [      I        0 ]

Thus, C becomes singular if F,_\lambda = 0 for all i, i = 1,2,...,m.


For a given step size or subdivision in the t-interval (0, t_f),
the singularity of C sets an upper limit on the steepness of the arc KL
of Figure 9b which can be used for approximating the bang-bang control.
This steepness of the arc KL depends on AA_1 (or AA_2) for a given X(t)
and \lambda(t). So, in selecting AA_1 and AA_2, we have to make sure that they

are not too large. When the solution is obtained, however, the slope of

KL does not depend on the choice of AA1 and AA2 if we decide to choose

AA_1 = AA_2 = AA, say. This is because AA will be "absorbed" within \lambda(t),
acting as a scale factor on \lambda. In fact, if we redefine \lambda as AA\lambda, the state
and adjoint equations of our system do not change, and we could therefore
select AA_1 = AA_2 = 1. The slope of the arc KL in the solution depends
on the value of AA\lambda and not on AA alone. However, when the solution

has not yet been obtained, the best value of AA need not be 1.

If the final time, tf, selected for our problem is much larger

than the minimum time, for a bang-bang optimal solution, it can be

expected that the arc KL will have a relatively small slope. If the

final time is gradually reduced, this slope will increase until it

becomes so large that the matrix C becomes singular as explained above.

At that point the solution obtained for the smallest t_f will very











nearly be a bang-bang control and will represent the approximate solu-

tion of the minimum time problem.

The computation of the optimal final time via the quasilinear-

ization method was done according to the stopping condition outlined

above. A final time would be guessed, and the TPBVP (Equations (3.7.1)

and (3.7.2)) would be solved. The final time would then be reduced by

reducing the integration step size, and the TPBVP would again be solved.

The process would be continued until the matrix C becomes singular and

the TPBVP could not be solved any further.
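In outline, the final-time search is the following loop (a sketch only;
solve_fixed_tf is a hypothetical name standing for the entire fixed-final-time
quasilinearization described above, and the reduction factor is an arbitrary
illustrative choice):

    import numpy as np

    def minimum_time_search(solve_fixed_tf, tf_guess, shrink=0.95):
        """Reduce the fixed final time until the matrix C of Eq. (3.7.18)
        becomes singular; the last solvable tf approximates the minimum time."""
        tf, last_tf, last_solution = tf_guess, None, None
        while True:
            try:
                last_solution = solve_fixed_tf(tf)   # fixed-final-time quasilinearization
                last_tf = tf
            except np.linalg.LinAlgError:            # C singular: stop reducing
                return last_tf, last_solution
            tf *= shrink                             # shorten the interval and try again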


3.7.3. A Simple Example Problem for
the Method of Quasilinearization

A simple problem was first taken up to explore the various

features of the quasilinearization algorithm constructed above. The

system was defined as

    \dot{X}_1 = X_2
                                                                      (3.7.27)
    \dot{X}_2 = u

    X_1(0) = 0
    X_2(0) = 0
                                                                      (3.7.28)
    X_1(t_f) = 1
    X_2(t_f) = 0

The constraints on u were

    -1 \le u \le 1                                                    (3.7.29)

The cost function to be minimized is the final time t_f.










The adjoint equations for the system are

    \dot{\lambda}_1 = 0
                                                                      (3.7.30)
    \dot{\lambda}_2 = -\lambda_1

The optimal control u° is

    u° = sgn(-\lambda_2)                                              (3.7.31)

With the approximation of the optimal control

    u^* = sat(-AA\,\lambda_2)                                         (3.7.32)

the state and adjoint equations become

    \dot{X}_1 = X_2

    \dot{X}_2 = sat(-AA\,\lambda_2)
                                                                      (3.7.33)
    \dot{\lambda}_1 = 0

    \dot{\lambda}_2 = -\lambda_1

The boundary conditions are given by Equations (3.7.28). The analytical
solution of the optimal control problem is t_f = 2 with

    u = 1,     X_1 = \tfrac{1}{2} t^2,                X_2 = t        for 0 \le t < 1

    u = -1,    X_1 = -1 + 2t - \tfrac{1}{2} t^2,      X_2 = 2 - t    for 1 \le t \le 2

The solution is shown graphically in Figure 10.

The problem was solved numerically by solving Equations (3.7.33)
and (3.7.28) for different final times t_f by quasilinearization.
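For this example the quantities required by the quasilin_sweep routine
sketched in Section 3.7.1 take the following concrete form. This is a Python
sketch only; the value of AA, the damping \eta, and the crude starting guess
shown here are illustrative choices and are not the ones used to produce
Figure 10.

    import numpy as np

    AA = 20.0                                   # illustrative slope constant

    def g(y, t):                                # right side of Eqs. (3.7.33); y = (X1, X2, l1, l2)
        X1, X2, l1, l2 = y
        return np.array([X2, np.clip(-AA * l2, -1.0, 1.0), 0.0, -l1])

    def W(y, t):                                # Jacobian of g, Eq. (3.7.4)
        X1, X2, l1, l2 = y
        dsat = -AA if abs(AA * l2) < 1.0 else 0.0   # zero on the saturated portions
        return np.array([[0.0, 1.0, 0.0,  0.0],
                         [0.0, 0.0, 0.0, dsat],
                         [0.0, 0.0, 0.0,  0.0],
                         [0.0, 0.0, -1.0, 0.0]])

    # Boundary conditions (3.7.28) written in the form (3.7.2)
    Bl = np.zeros((4, 4)); Bl[0, 0] = Bl[1, 1] = 1.0    # picks X1(0), X2(0)
    Br = np.zeros((4, 4)); Br[2, 0] = Br[3, 1] = 1.0    # picks X1(tf), X2(tf)

    tf, m, eta = 2.25, 25, 0.5                  # eta is an assumed damping value
    t = np.linspace(0.0, tf, m + 1)
    z = np.zeros((m + 1, 4))
    z[:, 0] = np.linspace(0.0, 1.0, m + 1)      # crude guess meeting the boundary values

    for it in range(30):
        e, E2MAX = quasilin_sweep(z, t, g, W, Bl, Br)   # sweep from Section 3.7.1
        z = z + eta * e                                 # damped update, Eq. (3.7.7)
        if E2MAX < 1.0e-6:
            break

Whether such an iteration converges from a given guess depends on \eta and AA,
as the discussion in the remainder of this section makes clear.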









    [Figure 10. Graphs of Optimal and Nearly Optimal Solutions Obtained
    via Quasilinearization for Simple Example. Curve 1: X_1; Curve 2: X_2;
    Curve 3: \lambda_1; Curve 4: \lambda_2; Curve 5: u.]











The solutions for t_f = 2.25, 2.05, and 2.005 were obtained and are shown
in Figures 10a, b, and c, respectively. With t_f = 2.00 the matrix C
became singular in iteration 10. The theoretical solution (t_f = 2.00)
is shown in Figure 10d. The results for t_f = 2.005 are a reasonably good
approximation of the optimal solution.

For these problems, 25 time subdivision intervals were used.
The initial guess was deliberately poor, as shown in Figure 10e. The
solutions were obtained in 9 to 11 iterations from this guess in all
these cases.

The program originally written, according to Sylvester and
Meyer [25], did not have the provision for the amount of adjustment \eta
in the iterations. Their method was found to be unstable for some of
the initial guesses.


3.7.4. The Results With the Kip-Up Problem

The kip-up problem was taken up after the method of quasilinear-

ization was found successful in the case of the example problem. In the

kip-up problem, however, many difficulties were faced from the very

beginning and the problem could not finally be solved by this method.

A major difference between the human motion problem and the example

problem or the problem solved by Boykin and Sierakowski [26] is very

prominent. In the latter problems, the control variables switched only

once from one boundary to another in the entire trajectory, whereas,

in the human motion problem, there were many such switchings. This

made the human motion problem less amenable to iterative methods.











The program for the human motion problem was extremely lengthy,

taking more than eleven hundred statements and requiring the use of the

large core of the computer (IBM 360-65). The subroutine (DIFEQ), which

generated the right side of the state and the adjoint equations and

their derivatives, turned out to be quite lengthy and required an

exorbitant amount of computing time. The quasilinearization program

called this subroutine at every station of the total interval and at

every iteration. As a result, the program required a tremendous amount

of computing time--about 40 seconds per iteration. All computations

were done in double precision.

Several sources of difficulty were detected in the unsuccessful

attempts to solve the human motion problem by quasilinearization. The

central issue was the tremendous amount of computation required and the

accuracy with which it had to be done.

The total time interval was divided into 100 equal parts (seg-

ments). As the first (and only) guess of the final time, these

segments were chosen to be 0.0216 second each. This made the final

time (2.16 seconds) equal to the time taken by the gymnast to do the

actual maneuver. With 100 segments, the program required the large

core of the computer. Even though a larger number of smaller segments

would have been preferred, this could not be done because it was

intended not to go beyond the large core. It was later found that

numerical integration of the nonlinear state equations needed time

steps at least as small as one-sixth of what was taken for quasilinear-

ization. It was initially hoped that, since the quasilinearization

algorithm solves the linearized equations instead of the nonlinear











equations, the central difference solution would be stable for larger

integration step sizes. This was not the case.

It was proved theoretically that with perfect precision the
method was convergent for all initial guesses. However, it is well
known for the unmodified method of quasilinearization (\eta = 1) that the
method is convergent only for certain initial guesses. It may therefore
be fair to expect that, even with the modified method, the rate of con-
vergence should depend on the initial guess and, for some initial guesses,
this convergence may be extremely poor. For this reason, several initial
guesses were tried. The guess of the state variables was taken from the
experimental data, which were shown to agree well with the computed motion
in Chapter 2. Different initial guesses for the adjoint variables were
tried. The first attempts simply used constant values for all the
adjoint variables. Convergence from this guess was nonexistent. Next,
attempts were made to generate the adjoint variables from forward inte-
gration of the adjoint equations with the guessed values of the state
variables and a guessed value of the adjoint variables at time t = 0. In
these cases the integrations were unstable, with large numbers generated.
The method was not pursued further.

Lastly, the method suggested by Miele, Iyer and Well [31] was

tried for generating the initial guess of the adjoint variables. In

this method, an auxiliary optimization problem is constructed from

the original problem. It tries to make an optimal choice of the adjoint

variables such that the cumulative norm of the error in satisfying the

adjoint equations for a given state variable trajectory is minimized.

This is performed as follows.












Suppose u°(X,\lambda) is the optimal control. The state and adjoint
equations may then be written as

    \dot{X} = f(X, u°(X,\lambda)) = F(X,\lambda)                      (3.7.34)

    \dot{\lambda} = -f,_X^T(X, u°(X,\lambda))\, \lambda = G(X,\lambda)   (3.7.35)

Suppose we have a guess of the state and the adjoint variables
given by X(t) and \lambda(t). Since X(t) and \lambda(t) do not satisfy Equations
(3.7.34) and (3.7.35), let

    e = \dot{\lambda} + f,_X^T(X, u°(X,\lambda))\, \lambda

so that e is the error in the adjoint equations (3.7.35). The above
equation can be rewritten as

    \dot{\lambda} = e - f,_X^T(X, u°(X,\lambda))\, \lambda             (3.7.36)


Now, consider the optimal control problem where

    a. X(t) is a given function of time t

    b. \lambda(t) is the state variable

    c. e is the control variable

    d. The system equation is given by Equation (3.7.36)

    e. The cost function to be minimized is

           J = \tfrac{1}{2} \int_0^{t_f} e^T e \, dt

    f. t_f is the final time (fixed) of the original problem

    g. Boundary conditions: \lambda(0) and \lambda(t_f) are free.











For the optimal control problem posed above, we can construct
the Hamiltonian, using the Lagrange multipliers e_* (a 6-dimensional
vector):

    H = \tfrac{1}{2} e^T e + e_*^T (e - f,_X^T \lambda)               (3.7.37)

The necessary conditions for optimality are

    H,_e = 0

and

    \dot{e}_* = -(H,_\lambda)^T

or, after performing the differentiations,

    H,_e = e + e_* = 0                                                (3.7.38)

and

    \dot{e}_* = f,_X\, e_*                                            (3.7.39)

The boundary conditions are

    e_*(0) = e_*(t_f) = 0                                             (3.7.40)

because \lambda(t) is free at t = 0 and at t = t_f.

Using Equation (3.7.38) in (3.7.39) and (3.7.40), one obtains

    \dot{e} = f,_X\, e                                                (3.7.41)

and

    e(0) = e(t_f) = 0                                                 (3.7.42)

Clearly, Equations (3.7.36), (3.7.41) and (3.7.42) form a linear TPBVP
in \lambda and e. This problem should be easier to solve, in theory, than the
original TPBVP of the state and adjoint variables and should give an
optimal choice for the multipliers \lambda.











In several attempts this linear TPBVP could not be solved for

the guessed state trajectory by either the quasilinearization program or

by the IBM scientific subroutine package program DLBVP.

The prime reason for the failure of the method of quasilineariza-
tion was finally found to be the numerical inaccuracies in the compu-
tations, which were dominant in spite of the use of double precision
arithmetic. The problem was perpetuated and amplified by the large
numbers in the right side of the state and adjoint equations and in the
derivatives of these equations. The matrices to be inverted at the
various stages, one at every step of integration and another at the end
of the integration, were ill conditioned for inversion. When a dif-
ferent subroutine for inversion than the one that came with the QUASI
program was used, different numbers resulted. The entries of T and of
the product \prod_{i=1}^{m} D_i were very large and resulted in large numbers
for some entries of C. This made the matrix C ill conditioned for
inversion. Any error in inverting the matrix C would be amplified in
the values of e_{m+1}. This amplification was due to the multiplication
of the inverse of C by T, the quantity used in generating e_{m+1}. If
e_{m+1} was in error, all the other e_i would be in error because they were
generated from e_{m+1}.

If the inversion of C was accurate, then the first six entries
of e_1 computed from Equations (3.7.16)-(3.7.18) should be almost zero.
But, instead, large values of the order of 10^3 were obtained! This
obviously indicated that e_1, and hence all the e_i, were being computed
inaccurately.

The quasilinearization program failed to solve the human motion

problem due to the above three reasons, and primarily due to the last

one, the numerical inaccuracies. This was felt to be rather difficult











to overcome since it was intimately related to the method used to

solve the linear TPBVP and the structure of the original TPBVP. So

far as the method was concerned, the key point was that the matrix C was

becoming ill conditioned. This occurred because the recursive rela-
tionship between e_i and e_{i+1} had been used to generate a relationship
between e_1 and e_{m+1}, which resulted in the product \prod_{i=1}^{m} D_i
with large entries. A look at the expression for C in terms of
\prod_{i=1}^{m} D_i, B_l, and B_r (Equation (3.7.18)) makes it clear that
with such a value of \prod_{i=1}^{m} D_i, C would automatically be ill
conditioned.
The other standard methods of solving the linear TPBVP (Equations (3.7.5),
(3.7.6)), for example the transition matrix algorithm, might have been
numerically more stable. However, other difficulties arose because these
methods require several forward integrations in one iteration. This

means calling the subroutine DIFEQ (the subroutine to generate the right

side of the differential equations and its derivatives) many more times.

This increases the computing time enormously.

With a step size small enough for the integration to be stable,

the storage requirements, computing time, and, therefore, the cost of

computing increase considerably. Even if these factors are absorbed,

it may still be necessary to try several initial guesses of the adjoint

variables to get the method to converge.

In view of the above problems, it was concluded that the

standard method of quasilinearization was not suitable for the human

motion problem and so should not be pursued further.











3.8. Steepest Descent Methods for Solving
the Minimum-Time Kip-Up Problem

Three different formulations of first-order steepest descent

methods were used after the method of quasilinearization was unsuccess-

ful in solving the minimum-time kip-up problem. These formulations

differ from each other in the construction of the cost functional,

handling of the terminal constraints, treatment of the control incre-

ments and in the method of adjusting the final time. The basic features

of these three formulations are described below.


Formulation 1

a. The cost functional is the final time. The terminal errors

and the cost functional are reduced simultaneously.

b. The adjustment of the final time is done by extending or

truncating the final end of the trajectories.

c. The control functions take the form of a sequence of constrained

and unconstrained arcs. Improvements are made at the uncon-

strained parts only, including the junctions of the constrained

and unconstrained arcs.

The method is based on the works of Bryson and Denham [27,28]

and Bryson and Ho [32].


Formulation 2

a. The cost functional is the sum of a scalar representing the

final time and a norm of the terminal error.

b. A change in the independent variable t is introduced by defin-

ing the transformation











       t = \alpha \tau

   where \alpha is a constant, or

       \frac{d\alpha}{d\tau} = 0,

   so that \alpha is treated as an additional state variable. The final
   time t_f is directly proportional to \alpha when \tau_f is held fixed.
   Long [29] used this transformation of the independent variable
   to convert a free end point TPBVP to a fixed end point TPBVP for
   solution by the method of quasilinearization.

   The cost functional is reconstructed as

       J = K_o \alpha^2 + \sum_{i=1}^{6} K_i \psi_i^2,      K_o, K_1, ..., K_6 > 0.

   No terminal constraints were introduced in this formulation
   since they (the \psi_i's) are included in the cost functional.

c. The control functions are assumed to be free to change in any
   direction while computing the gradient of the cost function.
   That is, the control constraints were not considered when com-
   puting the gradient of the cost functional used to find a suit-
   able increment in the control and \alpha. The control constraints
   were imposed in the next iteration during forward integration
   of the state equations. When the computed control violated
   a constraint in a subarc, it was set equal to its limit, the
   constraint function on the subarc. This approach for treating
   constraints on the control has been used by Wong, Dressler and
   Luenberger [33].











Formulation 3

This formulation consisted of the features (a) and (b) of

Formulation 2 and (c) of Formulation 1.

The derivations of the numerical algorithms for these formula-

tions are now presented. The basic concepts on which these algorithms

are based are available in the literature [28,32]. The results are

derived in a manner suitable for analysis of the outcome of the numer-

ical computations, and, thus, the derivations presented here are slightly

different from those found in the literature for any gradient method

approach to the computation of optimally controlled motion.


3.8.1. Derivations for Formulation 1

Suppose we have a continuous nominal control u (t) and u (t)

and a nominal final time t These control histories have some parts

lying on the constraints S_i^j(X), i = 1 or 2, and the remaining parts lie

away from the constraints. The parts lying on constraints will be

called "constrained arcs" and the parts lying off the constraints will

be called "unconstrained arcs." The intersections of constrained and

unconstrained arcs will be called "corner points." A nominal guess for

the control variables consists of specifying the corner points and

values of the control at unconstrained portions. On the constrained

arcs control variables are generated from constraint functions. An

initial choice of the control history and the final time will not, in

general, satisfy the boundary condition and will not do so in minimum

time either. One can improve upon the trajectories in the following way.

At first, we establish how a particular state variable X^i (the
i-th component of the state vector X) changes at the final time due to









a small change in the control history and the final time. For that
purpose, let a cost functional be defined as

    J^i = X^i(t_f)                                                    (3.8.1)

Let \lambda^i(t) be an arbitrary time-varying vector of dimension six.
Since the system satisfies Equation (3.1.1), the final value of the
state variable will also be given by

    J^i = X^i(t_f) + \int_0^{t_f} (\lambda^i)^T (f - \dot{X})\, dt    (3.8.2)

If the control variables u_1(t) and u_2(t) change by small
amounts \delta u_1(t) and \delta u_2(t), there will also be a small change in the
state variable X(t), denoted by \delta X(t), throughout the trajectory.
It is clear that these changes in the control, denoted by \delta u(t), and
in the state variables, denoted by \delta X(t), will not be independent of
each other. Apart from changing the control history, an increment to
the final time t_f by a small amount dt_f and a small increment \delta X(0) to
the initial state vector are also prescribed. The first-order change in
J^i due to the changes in the control and the final time is given by

    \delta J^i = \dot{X}^i(t_f)\, dt_f + \{\delta X^i - (\lambda^i)^T \delta X\}_{t=t_f}
                 + \{(\lambda^i)^T \delta X\}_{t=0}
                 + \int_0^{t_f} \{(\lambda^i)^T f,_X + (\dot{\lambda}^i)^T\}\, \delta X\, dt
                 + \int_0^{t_f} (\lambda^i)^T f,_u\, \delta u\, dt                (3.8.3)

The \delta u is chosen in the following way: for the unconstrained parts,
\delta u is completely free. The parts presently on the constraints will
remain on the constraints for the same periods of time as before.
For these portions, the change in control is given by the shift of the
constraint due to state changes, according to the relation

    \delta u_i(t) = S_i^j,_X\, \delta X(t)                            (3.8.4)

Let C_i^1 denote all those portions of the trajectory of the
control u_i(t), i = 1 or 2, which lie on any of the constraints S_i^1 or
S_i^2. Also, let C_i^o denote all those portions of the control u_i which
do not lie on a constraint. If the expressions for \delta u on the con-
strained arcs given by Equation (3.8.4) are substituted in Equa-
tion (3.8.3) and the integration of the last term of the right side of
Equation (3.8.3) is split into integrations over the intervals C_1^o, C_1^1,
C_2^o, and C_2^1, one obtains

    \delta J^i = \dot{X}^i(t_f)\, dt_f + \{\delta X^i - (\lambda^i)^T \delta X\}_{t=t_f}
                 + \{(\lambda^i)^T \delta X\}_{t=0}
                 + \int_0^{t_f} \{(\lambda^i)^T f,_X + (\dot{\lambda}^i)^T\}\, \delta X\, dt
                 + \int_{C_1^o} (\lambda^i)^T f,_{u_1}\, \delta u_1\, dt
                 + \int_{C_1^1} (\lambda^i)^T f,_{u_1}\, S_1^j,_X\, \delta X\, dt
                 + \int_{C_2^o} (\lambda^i)^T f,_{u_2}\, \delta u_2\, dt
                 + \int_{C_2^1} (\lambda^i)^T f,_{u_2}\, S_2^j,_X\, \delta X\, dt         (3.8.5)

If \lambda^i(t) is computed such that

    \lambda_j^i(t_f) = 1   for i = j
                                                                      (3.8.6)
                     = 0   for i \neq j

where \lambda_j^i is the j-th element of \lambda^i, and

    -\dot{\lambda}^{iT} = (\lambda^i)^T \left[ f,_X
        + \delta_1 f,_{u_1} S_1^j,_X + \delta_2 f,_{u_2} S_2^j,_X \right]         (3.8.7)









where

    \delta_1 = 0 on C_1^o,     \delta_1 = 1 on C_1^1

    \delta_2 = 0 on C_2^o,     \delta_2 = 1 on C_2^1

and the terms S_1^j,_X and S_2^j,_X used in Equation (3.8.7) are computed only
when the controls u_1 and u_2 are on a constraint, and u_1 and u_2 are replaced
there by the constraint expressions S_i^j(X), one obtains

    \delta J^i = \dot{X}^i(t_f)\, dt_f + (\lambda^i(0))^T \delta X(0)
                 + \int_{C_1^o} (\lambda^i)^T f,_{u_1}\, \delta u_1\, dt
                 + \int_{C_2^o} (\lambda^i)^T f,_{u_2}\, \delta u_2\, dt

              = f^i(X(t_f), u(t_f))\, dt_f + (\lambda^i(0))^T \delta X(0)
                 + \int_{C_1^o} (\lambda^i)^T f,_{u_1}\, \delta u_1\, dt
                 + \int_{C_2^o} (\lambda^i)^T f,_{u_2}\, \delta u_2\, dt          (3.8.8)

where f^i is the i-th element of the vector f(X,u).


Equation (3.8.8) is the desired expression for the change of
the state variable X^i at the final time t_f due to (1) a small arbitrary
increment \delta u(t) given to the control variable u(t) over the unconstrained
portions, (2) a small increase dt_f of the final time t_f, and (3) an
arbitrary small change \delta X(0) of the initial state vector X_o.

Similarly, one can find the expressions for the change in the
terminal value of any other state variable X^j. It can be seen that if
one constructs the (6 x 6) matrix











    R(t) = [\lambda^1(t), \lambda^2(t), \lambda^3(t), \lambda^4(t), \lambda^5(t), \lambda^6(t)]

so that R(t) satisfies

    R(t_f) = I   (the 6 x 6 unit matrix)                              (3.8.9)

and

    \dot{R}(t) = -\left[ f,_X + \delta_1 f,_{u_1} S_1^j,_X
                        + \delta_2 f,_{u_2} S_2^j,_X \right]^T R(t)   (3.8.10)

where the meaning of the various terms in the brackets on the right
side of Equation (3.8.10) is the same as that of the same terms appear-
ing in Equation (3.8.7), then the change in the terminal value of the
state vector is given by

    \delta X(t_f) = f\, dt_f + R^T(0)\, \delta X(0)
                    + \int_{C_1^o} R^T f,_{u_1}\, \delta u_1\, dt
                    + \int_{C_2^o} R^T f,_{u_2}\, \delta u_2\, dt     (3.8.11a)

If we choose \delta X(0) = 0, we obtain

    \delta X(t_f) = f\, dt_f + \int_{C_1^o} R^T f,_{u_1}\, \delta u_1\, dt
                    + \int_{C_2^o} R^T f,_{u_2}\, \delta u_2\, dt     (3.8.11b)
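The backward integration of Equation (3.8.10), needed in step 4 of the
algorithm given later in this section, can be sketched as follows. Simple
Euler steps are used for brevity, and A_eff is a hypothetical callable
standing for the bracketed matrix of Equation (3.8.7) evaluated along the
stored nominal trajectory.

    import numpy as np

    def influence_matrix(t, A_eff, n=6):
        """Backward integration of Eq. (3.8.10):
           dR/dt = -A_eff(t)^T R,   R(t_f) = I,
        returning R at every station of the time grid t."""
        N = len(t)
        R = np.zeros((N, n, n))
        R[-1] = np.eye(n)                                 # R(t_f) = I, Eq. (3.8.9)
        for k in range(N - 1, 0, -1):                     # march backward in time
            h = t[k] - t[k-1]
            R[k-1] = R[k] + h * (A_eff(t[k]).T @ R[k])    # R(t-h) = R(t) + h A^T R
        return R

In practice only the products R^T f,_{u_1} and R^T f,_{u_2} on the
unconstrained arcs need to be stored, as noted in step 4 below.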


Following the method prescribed by Bryson and Ho [32], it will
now be attempted to make improvements in the terminal errors given by

    \psi = X(t_f) - X_f

and at the same time reduce the final time t_f. Thus, since it is being
sought to minimize t_f, one would maximize -dt_f, or minimize dt_f, with
respect to \delta u_1 and \delta u_2 subject to the constraint

    \delta X(t_f) = f(t_f)\, dt_f + \int_{C_1^o} R^T f,_{u_1}\, \delta u_1\, dt
                    + \int_{C_2^o} R^T f,_{u_2}\, \delta u_2\, dt     (3.8.12)

with

    \delta X(t_f) = \delta X^p(t_f)

where \delta X^p(t_f) is a chosen decrement in the terminal error, small enough
that \delta u maintains the first-order approximations.

In this incremental minimization problem, the incremental cost
functional dt_f to be minimized and the constraints are linear in the
incremental control \delta u. Such a problem does not have an
extremum. However, since these are linearized relations obtained from
a nonlinear system, the increments \delta u_1, \delta u_2, and dt_f should be small
for the first-order approximation to be valid. To limit the increments
\delta u_1, \delta u_2, and dt_f, the following quadratic penalty term is added to the
incremental cost dt_f:

    \tfrac{1}{2} b\, dt_f^2 + \tfrac{1}{2} \int_{C_1^o} W_1(t)\, \delta u_1^2(t)\, dt
      + \tfrac{1}{2} \int_{C_2^o} W_2(t)\, \delta u_2^2(t)\, dt,      (3.8.13)

where b is a positive scalar quantity and W_1(t) and W_2(t) are positive
scalar quantities specified as functions of time.

Adding these quantities to the cost functional and adjoining
the constraint relations (3.8.12) to the resulting expression by a
multiplier \nu (a six-dimensional vector), the following problem is
obtained: minimize, with respect to \delta u_1, \delta u_2, and dt_f,

    dt_f + \tfrac{1}{2} b\, dt_f^2
      + \tfrac{1}{2} \int_{C_1^o} W_1\, \delta u_1^2\, dt
      + \tfrac{1}{2} \int_{C_2^o} W_2\, \delta u_2^2\, dt
      + \nu^T \Big\{ f(t_f)\, dt_f
            + \int_{C_1^o} R^T f,_{u_1}\, \delta u_1\, dt
            + \int_{C_2^o} R^T f,_{u_2}\, \delta u_2\, dt
            - \delta X^p(t_f) \Big\}                                  (3.8.14)











If the derivatives of this functional with respect to 6u 6u2, and

dt are set to zero, one obtains


1 T
dt [1 + f(t )]
f b -


1
6u 1
1 IVW1


T
- U1


and


1 fT
2 -2 U2


Rv on C
- 1



o
Rv on C
2


(3.8.15)


(3.8.16)


for an extremal.

Using Equations


b

or


(3.8.15) and (3.8.16)


+ -
f1+ v f3 Ig v-
f


in (3.8.12), one obtains

(3.8.17)


where


=- f f (tf)] [ P(t) + f(tf



T -1 2T T -1 T
I R f, f, Rdt+ 0 R f, W
C 11 1 C 2 2
1 2


(3.8.18)


R dt.


(3.8.19)

The value of \delta X^p(t_f), the desired change in the terminal values of
the state variables, may be chosen as a decrement in the terminal error \psi:

    \delta X_i^p(t_f) = -e_i\, \psi_i                                 (3.8.20)

where

    0 < e_i \le 1

and \delta X_i^p(t_f) is the i-th element of \delta X^p(t_f).
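Equations (3.8.15) through (3.8.20) translate almost directly into code. The
sketch below assumes that the vectors R^T f,_{u_i} have been stored on the
unconstrained arcs during the backward integration, approximates the
quadratures by simple sums, and takes W_1 = W_2 = W constant as suggested in
step 5 of the algorithm below; the argument names are illustrative.

    import numpy as np

    def descent_increments(f_tf, psi, RTfu1, RTfu2, w1, w2, b, W, e_frac):
        """Formulation 1 increments: nu from (3.8.18)-(3.8.19), then dtf from
        (3.8.15) and du1, du2 from (3.8.16) on the unconstrained arcs.
        RTfu1[k], RTfu2[k] are the stored vectors R^T f,u_i at the stations
        of C1^o and C2^o; w1, w2 are the corresponding quadrature weights."""
        n = len(f_tf)
        I_psi = np.zeros((n, n))
        for v, w in zip(RTfu1, w1):                  # first integral of Eq. (3.8.19)
            I_psi += np.outer(v, v) * w / W
        for v, w in zip(RTfu2, w2):                  # second integral of Eq. (3.8.19)
            I_psi += np.outer(v, v) * w / W
        dXp = -e_frac * psi                          # desired decrement, Eq. (3.8.20)
        M = I_psi + np.outer(f_tf, f_tf) / b
        nu = -np.linalg.solve(M, dXp + f_tf / b)     # Eq. (3.8.18)
        dtf = -(1.0 + nu @ f_tf) / b                 # Eq. (3.8.15)
        du1 = [-(v @ nu) / W for v in RTfu1]         # Eq. (3.8.16) on C1^o
        du2 = [-(v @ nu) / W for v in RTfu2]         # Eq. (3.8.16) on C2^o
        return nu, dtf, du1, du2

The returned increments are then applied exactly as in steps 7 and 8 of the
algorithm that follows.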











The steepest descent algorithm for Formulation 1 can now be
described as follows:

1. Guess a nominal control history u_1(t) and u_2(t) and a final
   time t_f.

2. Integrate the state equations forward with the nominal control
   and the nominal final time, with the initial values of the state
   vector given by X(0) = X_o. Store X(t). Compute and store \psi.
   Find the norm

       \|\psi\| = \left\{ \sum_{i=1}^{6} K_i \psi_i^2 \right\}^{1/2}

   for some positive K_i, i = 1,...,6 (to be specified).

3. Save the controls u_1 and u_2 in another variable UOLD, the corner
   times N in the variable NOLD, and the final time t_f in TFOLD.

4. Set R(t_f) = I, the (6 x 6) unit matrix. Integrate Equations
   (3.8.10) backward. At the same time compute and store R^T f,_{u_1}
   and R^T f,_{u_2}. It is not necessary to store R(t).

5. Select the positive constant b and the positive quantities
   W_1(t) and W_2(t). Unless there is some special reason, W_1(t)
   and W_2(t) may be taken as a positive constant equal to W (to be
   specified). In such a case, the storage for W_1(t) and W_2(t)
   will not be needed. Select e_i, i = 1,2,...,6, such that
   0 < e_i \le 1.

6. Compute I_\psi using Equation (3.8.19).

7. Compute \nu, \delta u_1, \delta u_2, and dt_f from Equations (3.8.18), (3.8.15),
   and (3.8.16). Note that \delta u_1 and \delta u_2 are defined for the
   unconstrained parts C_1^o and C_2^o only, and are to be computed only
   on those parts.









Change the final time t_f to t_f + dt_f. If the integration step
size is h, this means that the total interval is to be increased or
decreased at the final end (depending upon whether dt_f is positive or
negative) by dt_f/h, rounded to the nearest integer number of integra-
tion steps. In the case of a positive dt_f, the controls u_1(t) and
u_2(t) in the interval TFOLD \le t \le t_f (new) will be given by extrapola-
tion of the existing curves. If a control function is on a constraint
at the final time, TFOLD, it should remain on the same constraint in
this period and is to be generated during the forward integration in the
next steps.

8. Set the control u_i = UOLD_i + \delta u_i, i = 1 or 2, for the uncon-
   strained portions. Integrate the state equations forward with
   the new control. The corner times N are to be generated again
   during the forward integration, as described graphically in
   Figure 11.

Figure 11 shows the different cases which may arise during the
forward integration. It is seen that a new corner time is generated at
the point where the unconstrained u_i + \delta u_i curve, extrapolated if neces-
sary, meets the constraint. This part of the forward integration makes
the programming complicated, with many logical program statements.

9. Find the errors in the boundary conditions \psi and find the norm
   \|\psi\| as in step 2. If this norm is less than that of the
   previous iteration, go to step 3 to continue the iterations.
   If it is not reduced, then do the following:

   a. If dt_f is too large, increase b, and go to step 9c below.



















































































    [Figure 11. Generation of new corner times during the forward
    integration when an unconstrained u_i + \delta u_i arc meets a control
    constraint (several cases illustrated).]