
MOTION PLANNING AND CONTROL OF ROBOT MANIPULATORS
VIA APPLICATION OF A COMPUTER GRAPHICS ANIMATED DISPLAY









BY

CARL DAVID CRANE III


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN
PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF DOCTOR OF PHILOSOPHY


UNIVERSITY OF FLORIDA


1987














ACKNOWLEDGEMENTS


The author wishes to thank his wife, Sherry, and his children Theodore, Elisabeth, and Stephanie for their patience and support. Also, special thanks go to his committee chairman, Professor Joseph Duffy, for his encouragement and guidance. He is also grateful to the members of his graduate committee who all showed great interest and provided considerable insight.

This work was funded by the U.S. Army Belvoir Research and Development Center, McDonnell Douglas Astronautics Company, and Honeywell Corporation.

















TABLE OF CONTENTS

                                                         Page

ACKNOWLEDGEMENTS . . . . . . . . . . . . . . . . . . . .  ii

LIST OF TABLES . . . . . . . . . . . . . . . . . . . . .  vi

LIST OF FIGURES  . . . . . . . . . . . . . . . . . . . . vii

ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . .  ix

CHAPTERS

I    INTRODUCTION  . . . . . . . . . . . . . . . . . . .   1

     1.1  Background . . . . . . . . . . . . . . . . . .   1
     1.2  Review of Previous Efforts . . . . . . . . . .   4

II   DEVELOPMENT OF ANIMATED DISPLAYS OF
     INDUSTRIAL ROBOTS . . . . . . . . . . . . . . . . .  11

     2.1  Introduction and Objective . . . . . . . . . .  11
     2.2  Database Development . . . . . . . . . . . . .  12
     2.3  Coordinate Transformations . . . . . . . . . .  18
     2.4  Backface Polygon Removal . . . . . . . . . . .  29
     2.5  Sorting of Plane Segments  . . . . . . . . . .  30
     2.6  Description of Hardware System . . . . . . . .  38
     2.7  Program Modifications  . . . . . . . . . . . .  40
     2.8  Results and Conclusions  . . . . . . . . . . .  49

III  PATH GENERATION . . . . . . . . . . . . . . . . . .  51

     3.1  Introduction and Objective . . . . . . . . . .  51
     3.2  Notation . . . . . . . . . . . . . . . . . . .  52
     3.3  Mechanism Dimensions of the T3-776
          Manipulator  . . . . . . . . . . . . . . . . .  55
     3.4  Reverse Displacement Analysis  . . . . . . . .  59
     3.5  Forward Displacement Analysis  . . . . . . . .  83
     3.6  Path Generation  . . . . . . . . . . . . . . .  88
     3.7  Results and Conclusions  . . . . . . . . . . . 101

IV   ROBOTIC TELEPRESENCE  . . . . . . . . . . . . . . . 103

     4.1  Introduction and Objective . . . . . . . . . . 103
     4.2  Telepresence Concept . . . . . . . . . . . . . 104
     4.3  System Components  . . . . . . . . . . . . . . 106
     4.4  Method of Operation  . . . . . . . . . . . . . 110
     4.5  Problems Encountered . . . . . . . . . . . . . 127
     4.6  Conclusions  . . . . . . . . . . . . . . . . . 128

V    INTERACTIVE PATH PLANNING AND EVALUATION  . . . . . 129

     5.1  Introduction and Objective . . . . . . . . . . 129
     5.2  Robot Animation  . . . . . . . . . . . . . . . 130
     5.3  Program Structure  . . . . . . . . . . . . . . 136
     5.4  Workspace Considerations . . . . . . . . . . . 142
     5.5  Path Evaluation  . . . . . . . . . . . . . . . 147
     5.6  Calculation of Intermediate Points . . . . . . 153
     5.7  Robot Configurations . . . . . . . . . . . . . 155
     5.8  Singularity Analysis . . . . . . . . . . . . . 157
     5.9  Interpretation of Singularity Results  . . . . 163
     5.10 Selection of Configuration . . . . . . . . . . 164
     5.11 Preview of Motion  . . . . . . . . . . . . . . 165
     5.12 Communication with Robot Controller  . . . . . 167
     5.13 Roll, Pitch, Yaw Calculations  . . . . . . . . 168
     5.14 Results and Conclusions  . . . . . . . . . . . 172

VI   DISCUSSION AND CONCLUSIONS  . . . . . . . . . . . . 174

REFERENCES . . . . . . . . . . . . . . . . . . . . . . . 177

BIOGRAPHICAL SKETCH  . . . . . . . . . . . . . . . . . . 180















LIST OF TABLES

                                                         Page

Table 2-1  T3-776 Mechanism Dimensions . . . . . . . . .  22

Table 2-2  Plane Segment Sorting Cases . . . . . . . . .  35

Table 3-1  Sample Angles for j and j+1 Positions . . . .  99

Table 5-1  T3-726 Mechanism Dimensions . . . . . . . . . 132

Table 5-2  Direction Cosines . . . . . . . . . . . . . . 162














LIST OF FIGURES

                                                         Page

Fig. 2- 1  Cincinnati Milacron T3-776 Manipulator  . . .  13
Fig. 2- 2  Conceptual Sketch . . . . . . . . . . . . . .  14
Fig. 2- 3  Collection of Rigid Bodies  . . . . . . . . .  15
Fig. 2- 4  Graphics Data Structure . . . . . . . . . . .  17
Fig. 2- 5  Skeletal Model of T3-776 Manipulator  . . . .  20
Fig. 2- 6  Transformation to Viewing Coord. System . . .  25
Fig. 2- 7  Parallel and Perspective Transformation . . .  28
Fig. 2- 8  Wire Frame Model of T3-776 Manipulator  . . .  31
Fig. 2- 9  Backfacing Polygons Removed . . . . . . . . .  32
Fig. 2-10  Plane Segments with Infinite Planes . . . . .  36
Fig. 2-11  Animated Representation of T3-776 Manipulator  50
Fig. 2-12  Animated Representation of T3-776 Manipulator  50
Fig. 3- 1  Spatial Link  . . . . . . . . . . . . . . . .  53
Fig. 3- 2  Revolute Pair . . . . . . . . . . . . . . . .  54
Fig. 3- 3  Cincinnati Milacron T3-776 Manipulator  . . .  56
Fig. 3- 4  Skeletal Model of T3-776 Manipulator  . . . .  57
Fig. 3- 5  Hypothetical Closure Link . . . . . . . . . .  61
Fig. 3- 6  Hypothetical Closure when S1 is Parallel to S7 65
Fig. 3- 7  Location of Wrist Point . . . . . . . . . . .  68
Fig. 3- 8  Determination of 2nd and 3rd Joint Angles . .  71
Fig. 3- 9  Three Roll Wrist  . . . . . . . . . . . . . .  76
Fig. 3-10  Moving and Fixed Coordinate Systems . . . . .  77
Fig. 3-11  Forward Analysis  . . . . . . . . . . . . . .  85
Fig. 3-12  Displacement Profile  . . . . . . . . . . . .  90
Fig. 4- 1  Telepresence System . . . . . . . . . . . . . 107
Fig. 4- 2  Nine String Joystick  . . . . . . . . . . . . 109
Fig. 4- 3  Scissor Joystick  . . . . . . . . . . . . . . 109
Fig. 4- 4  System Configuration  . . . . . . . . . . . . 111
Fig. 4- 5  Animated Representation of MBA Manipulator  . 114
Fig. 4- 6  Obstacle Locations Deter. by Vision System  . 114
Fig. 4- 7  Display of Objects in Manipulator Workspace . 116
Fig. 4- 8  Warning of Imminent Collision . . . . . . . . 119
Fig. 4- 9  Operation in Man-Controlled Mode  . . . . . . 119
Fig. 4-10  Determination of Intersection . . . . . . . . 121
Fig. 4-11  Generation of Alternate Path  . . . . . . . . 121
Fig. 4-12  Display of Computer Generated Path  . . . . . 125
Fig. 5- 1  Cincinnati Milacron T3-726 Manipulator  . . . 131
Fig. 5- 2  Animated Representation of T3-726 Robot . . . 134
Fig. 5- 3  Collection of Rigid Bodies  . . . . . . . . . 135
Fig. 5- 4  Data Structure for Precision Points . . . . . 138
Fig. 5- 5  Skeletal Model of T3-726 Manipulator  . . . . 139
Fig. 5- 6  Manipulator Workspace . . . . . . . . . . . . 143
Fig. 5- 7  Top and Side Views of Workspace . . . . . . . 143
Fig. 5- 8  Three Roll Wrist  . . . . . . . . . . . . . . 145
Fig. 5- 9  Orientation Limits  . . . . . . . . . . . . . 146
Fig. 5-10  Motion Behind Base  . . . . . . . . . . . . . 151
Fig. 5-11  Intersection of Planar Line Segments  . . . . 151
Fig. 5-12  Coordinate System for Singularity Analysis  . 160
Fig. 5-13  Display of Singularity Results  . . . . . . . 166
Fig. 5-14  Calculation of Roll Parameter . . . . . . . . 171














Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy


MOTION PLANNING AND CONTROL OF ROBOT MANIPULATORS
VIA APPLICATION OF A COMPUTER GRAPHICS ANIMATED DISPLAY

By

CARL D. CRANE III

MAY 1987



Chairman: Dr. Joseph Duffy
Major Department: Mechanical Engineering

It is often necessary in a hazardous environment for an operator to effectively control the motion of a robot manipulator which cannot be observed directly. The manipulator may be either directly guided via use of a joystick or similar device, or it may be autonomously controlled, in which case it is desirable to preview and monitor robot motions. A computer graphics based system has been developed which provides an operator with an improved method of planning, evaluating, and directly controlling robot motions.

During the direct control of a remote manipulator with a joystick device, the operator requires considerable sensory information in order to perform complex tasks. Visual feedback which shows the manipulator and surrounding workspace is clearly most important. A graphics program which operates on a Silicon Graphics IRIS workstation has been developed which provides this visual imagery. The graphics system is capable of generating a solid color representation of the manipulator at refresh rates in excess of 10 Hz. This rapid image generation rate is important in that it allows the user to zoom in, change the vantage point, or translate the image in real time. Each image of the manipulator is formed from joint angle data that are supplied continuously to the graphics system. In addition, obstacle location data are communicated to the graphics system so that the surrounding workspace can be accurately displayed.

A unique obstacle collision warning feature has also been incorporated into the system. Obstacles are monitored to determine whether any part of the manipulator comes close to or strikes the object. The collision warning algorithm utilizes custom graphics hardware to change the color of the obstacle and produce an audible sound as a warning if any part of the manipulator approaches closer than some established criterion. The obstacle warning calculations are performed continuously and in real time.

The graphics system which has been developed has advanced man-machine interaction, in that improved operator efficiency and confidence have resulted. Continued technological developments and system integration will result in much more advanced interface systems in the future.














CHAPTER I
INTRODUCTION




1.1 Background

There have been significant advances in the broad range of technologies associated with robot manipulators in such areas as kinematics and dynamics, control, vision, pattern recognition, obstacle avoidance, and artificial intelligence. A major objective is to apply these technologies to improve the precision of operation and the control of manipulators performing various tasks.

Just as significant an advance has been made recently in the field of computer graphics hardware. Application of VLSI technology has resulted in a dramatic increase in graphics performance of up to 100 times faster than conventional hardware. Low cost workstations ($20-50K) have been developed which can generate real time raster images, which are formed by the illumination of discrete picture elements. Although raster generated images may have less picture resolution than images produced by vector refresh devices, they do allow for the generation of solid color images with shading and hidden surface removal. Application of these and other computer graphics techniques has resulted in improved image generation and realism and allows for a wide variety of new applications in the robotics field. This dissertation addresses the use of such computer graphics hardware in the following two areas:

a) telepresence system development
b) robotic workcell modeling

Telepresence systems deal with the direct man-controlled and autonomous operation of remote robot manipulators. During man-controlled operation, the user controls the manipulator directly by guiding the end effector via use of a joystick or similar device. The operator moves the joystick as a "master" and the robot follows correspondingly as a "slave." The graphics system aids the operator by providing a real time visualization of the manipulator and surrounding work area. Critical information such as approaching a joint limit or exceeding payload capabilities can be displayed immediately as an aid to the user. For autonomous operations of a remote manipulator, the graphics system is used to plan manipulator tasks. Once a task is specified, the user can preview the task on the graphics screen in order to verify motions and functions. Modifications to the previewed task can be made prior to the execution of the task by the actual manipulator.

The modeling of robotic workcells is a second application for the animation system. In a manufacturing environment it is desirable to plan new manipulator tasks off line. In this manner the manipulators can continue 'old production' during the planning phase. Assembly line down time is minimized as the new tasks can be quickly communicated to the manipulator. The graphics system offers a convenient and rapid method of planning workcell layouts and manipulator tasks. The ability to interact with the system allows the user to reposition objects within the workspace to verify that all important points can be reached by the robot. Cycle times can be calculated and compared in order to improve productivity.

Following a review of previous work dealing with the animation of industrial robots, subsequent chapters will detail the development and application of an interactive computer graphics animation system. A brief description of each chapter is as follows:

Chapter 2: Development of Animated Displays of Industrial Robots. This chapter describes the development of the interactive animation program. The database development is detailed for the particular case of modeling the Cincinnati Milacron T3-776 industrial robot. Graphics techniques are described with emphasis on the removal of backfacing polygons and the sorting of solid objects.

Chapter 3: Path Generation. A method of generating sets of joint angles which will move a manipulator along a user specified path is described. Specific issues deal with motion near singularity positions and the selection of the robot configuration at each point along the path.

Chapter 4: Robotic Telepresence. A telepresence system was developed to allow a user to control a remote robot manipulator in two distinct modes, viz. man-controlled and autonomous. This chapter details the use of the graphics system as an aid to the user. Visual feedback of the work area is provided together with real time warning of an imminent collision between the robot and an object in the workspace.

Chapter 5: Interactive Path Planning and Evaluation. An interactive computer graphics software system was developed which assists an operator who is planning robot motions. The user must specify the path of the manipulator together with velocity and function information. Once a task is previewed on the graphics screen, the path data are communicated to the actual robot controller. Specific application to the Cincinnati Milacron T3-726 manipulator is described in detail.





1.2 Review of Previous Efforts

Early work dealing with the computer graphics animation of industrial robots occurred in the 1970s. Indicative of these efforts were reports published from England [1-3], West Germany [4-5], France [6], and Japan [7]. Common to this work was the use of computer graphics storage tube terminals. Hardware limitations resulted in slow animation rates, with bright flashes occurring as the screen was cleared for each image.








More recently, a program named GRASP was developed by S. Derby [8]. The program was written in FORTRAN on a Prime computer with an Imlac Dynagraphics graphics terminal. Vector images (wire frames) were generated, as raster technology had not yet developed to the point of being able to produce rapid images. This program allowed an experienced user to design and simulate a new robot, or modify existing robot geometries. Robot motions were calculated and displayed based on closed form kinematic solutions for certain robot geometries. A generic iterative technique was used for arms having a general geometry.

The animation programming of M.C. Leu [9] is indicative of the current work in the field. A hardware configuration consisting of two DEC VAX computers, vector graphics terminals, and raster graphics terminals was utilized to produce wire frame and solid color images. The program allows for off line programming of new or existing robot designs. In addition, a swept volume method was utilized to detect collisions of the robot arm with any object in the workspace.

Further improvements in the simulation of robotic workcells have been made by B. Ravani [10]. Animations have been developed on an Evans and Sutherland graphics terminal which can rapidly produce color vector images. Significant improvements in database development and user interaction with the computer have made this a versatile simulation program.








C. Anderson [11] modeled a workcell consisting of the ESAB MA C2000 arc welding robot. Displacement, velocity, acceleration, force, and torque data were utilized in the model as calculated by the Automatic Dynamic Analysis of Mechanical Systems (ADAMS) software package. Rapid wire frame animations were obtained on Evans and Sutherland vector graphics terminals. Solid color representations with shading were also generated; however, real time animation of these images was not possible.

A further off line animation system, entitled WORKMATE, has been developed at the Stanford Research Institute by S. Smith [12]. A goal of this effort is to implement a graphics based simulation program through which an inexperienced user can plan and debug robot workcell programs off line. The program is run on a Silicon Graphics IRIS workstation which has the capability of rapidly rendering solid color images with shading. A significant feature of WORKMATE is that collisions between objects in the workspace can be identified for the user in real time. This feature avoids the need for the user to perform a tedious visual inspection of the robot motion in order to verify that no collision occurs along the path.

Several companies have recently entered the robotic simulation market. Silma, Inc., of Los Altos, California, was formed in 1983 to develop software which would model robotic workcells. This group recognized the problem that each robot manufacturer uses a different robot language to control the robot. To aid the user, Silma, Inc. developed a task planning and simulation language which was independent of the type of robot being modeled. Once a task was planned on the graphics screen, a post processor would translate the task from the generic planning language to the specific language for the robot controller. This approach simplifies operations in a situation where many diverse types of robots must work together in an assembly operation. The software was written for the Apollo computer series, with an IBM PC/AT version to be completed in the near future.

AutoSimulations, Inc., of Bountiful, Utah, offers a software package which runs on the Silicon Graphics IRIS workstation. This system emphasizes total factory integration; the robot workcell is just one component of the factory. General robot tasks can be modeled at each robot workcell and cycle times are recorded. Autonomously guided vehicles (AGVs) are incorporated in the factory plan together with parts storage devices and material handling stations. The user is able to model the entire factory operation and observe the system to identify any bottlenecks or parts backups. Additional robot workcells can be readily added or repositioned, and AGVs can be reprogrammed in order to alleviate any system problems. At the present time, the software package offers an excellent model of the entire factory; however, less emphasis is placed on the individual workcell. Detailed manipulator tasks cannot be planned and communicated to the robot controller. Additional programs will be integrated with the software package in order to address these issues.

Intergraph Corporation of Huntsville, Alabama, offers a hardware and software solution to the workcell modeling problem. The unique feature of the Intergraph hardware is the use of two 19 inch color raster monitors in the workstation. This feature greatly enhances the man-machine interface, which is the primary purpose of the graphics system. Intergraph offers a complete line of CAD/CAM software in addition to the robot workcell modeling software. Applications of the system to workcell planning have been performed at the NASA Marshall Space Flight Center, Huntsville, Alabama.

Simulation software has also been developed by the McDonnell Douglas Manufacturing Industry Systems Company, St. Louis, Missouri. Ninety-four robots have been modeled to date. McDonnell Douglas has acquired considerable experience in planning workcell operations for automobile assembly lines. For example, a robot may be assigned fifteen specific welds on a car body. The user must decide where the car body should stop along the assembly line with respect to the manipulator in order to accomplish these welds. The simulation software allows the user to attach the robot to one of the weld points and then move the auto body with the robot still attached. The user is notified if the weld point goes outside the workspace of the manipulator. By repeating the process, the operator can determine the precise position of the car body with respect to the manipulator so that all weld points are in reach of the manipulator. The software system generates path data which are communicated directly to the robot controller, together with cycle time data which are accurate to within 5%.

McDonnell Douglas has learned from experience that tasks that are taught to the robot in this way must be "touched up." For example, the car body may not stop precisely at the position that was determined during the simulation process. The manual touch up of certain points along the manipulator path can be accomplished with use of the teach pendant. Typically an average of fifteen to twenty minutes is required for this manual touch up operation. A second method of path upgrade can be accomplished by attaching a mechanical probe to the robot. This probe measures the actual position of an object with respect to the robot and then updates positional commands as necessary. Early application of this technique required the user to replace the tool of the robot with the mechanical probe. This was often a time consuming and labor intensive task. New measuring probes have been applied by McDonnell Douglas to remove this problem.

A final example of robot workcell simulation software is that developed by Deneb Robotics, Inc., Troy, Michigan [13]. This software runs on the Silicon Graphics IRIS workstation. The interactive nature of the program allows the user to rapidly build new robot geometries, modify actuator limits, or reposition objects within the workspace. Detailed solid color images with satisfactory animation rates are obtainable. In addition, the user can be warned of collisions or near misses between parts of the robot and objects in the workspace, although the animation rate slows down as a function of the number of collision checks being made. The strength of the Deneb software is its rapid animation of solid shaded images together with its ease of use for the operator. As such, it is one of the more advanced manipulator simulation software packages.














CHAPTER II
DEVELOPMENT OF ANIMATED DISPLAYS OF
INDUSTRIAL ROBOTS




2.1 Introduction and Objective

In the previous chapter, improvements in computer hardware were discussed which have particular application to the problem of real time animation of solid color objects. The goal of this chapter is to detail the development (hardware and software) of real time, interactive computer graphics animations of industrial robots.

Any computer graphics simulation must possess two characteristics. First, it must be realistic: the image on the screen must contain enough detail so as to give the user an accurate and obvious image. Second, the images must be generated rapidly enough so that smooth animation will result. Picture update rates of 10 frames per second have provided satisfactory animation on a 30 Hz video monitor (each picture is shown three times before changing).

Throughout this chapter, all applications will be aimed at the development of a solid color representation of a Cincinnati Milacron T3-776 industrial robot. The first part of this chapter will focus on the development of appropriate data structures, viewing transformations, and hidden surface removal methods. The specific programming techniques utilized and demonstrated on a VAX 11-750 computer will be discussed in detail. The second part of this chapter is concerned with the modification of the initial work in order to apply it to a Silicon Graphics IRIS model 2400 workstation. The modifications take full advantage of the built in hardware capabilities of the IRIS workstation and result in significantly improved performance.





2.2 Database Development

The first step in the generation of a computer graphics simulation of a robot manipulator is the drawing of an artist's concept of what the simulation should look like. Shown in Figure 2-1 is a drawing of the T3-776 robot and in Figure 2-2 is a sketch of the desired graphics simulation. Enough detail is provided for a realistic representation of the robot. Also shown in Figure 2-2 is a coordinate system attached to the base of the robot such that the origin is located at the intersection of the first two joints. The Z axis is vertical and the X axis bisects the front of the base. This coordinate system is named the 'fixed coordinate system' and will be referred to repeatedly.

The robot manipulator is made up of a collection of rigid bodies as shown in Figure 2-3. Also shown in the figure is a local coordinate system attached to each body. The coordinate values of each vertex point of the manipulator are invariant with respect to the coordinate system attached to each body. For this reason, local coordinate data can be collected and permanently stored for future use. It should be noted that the coordinate values of the vertices were obtained from actual measurement and from scale drawings of the robot. In this way, the computer graphics simulation is as accurate as possible.

Figure 2-1: Cincinnati Milacron T3-776 Manipulator

Figure 2-2: Conceptual Sketch

Figure 2-3: Collection of Rigid Bodies

It is apparent from Figure 2-3 that the simulated robot is made up of a series of primitives, i.e. n sided polygons, circles, and cylinders. An n sided polygon was defined to be a plane segment. A circle was defined as a 20 sided polygon, and cylinders as a set of 10 four sided polygons. Thus it is possible to define all parts of the simulation in terms of plane segments.

Each primitive (polygon, circle, cylinder) which makes up the simulation must have certain data items associated with it. The data were managed by placing each primitive into a node of a linked list as shown in Figure 2-4. Each node of the linked list is a variant structure and contains specific information such as the type of element, the body it belongs to, and the number of the vertices which comprise it. The linked list format, which is readily implemented in the C programming language, was used because of its versatility and dynamic memory allocation characteristics.

With every element of the simulation now defined in terms of some local coordinate system, the following three tasks must now be performed in order to obtain a realistic color image:










struct plane_segment
{   int name ;                      /* indicates whether it is a polygon,
                                       cylinder, or circle                */
    int number ;                    /* identifies the object that the
                                       plane segment belongs to           */
    int color ;                     /* the color of the plane segment     */
    float normalpoint[3] ;          /* local coordinates of the normal
                                       point                              */
    union
    {   struct circletype cir ;     /* contains circle data               */
        struct polytype   pol ;     /* contains polygon data              */
        struct cyltype    cyl ;     /* contains cylinder data             */
    } dat ;

    struct plane_segment *next ;    /* pointer to next plane segment      */
} ;

struct circletype
{   int centerpt ;                  /* the index of the center point      */
    float radius ;
} ;

struct polytype
{   int sides ;                     /* the number of sides of the polygon */
    float points[][3] ;             /* array of local coordinate values   */
} ;

struct cyltype
{   int endpts[2] ;                 /* the index of the cylinder
                                       endpoints                          */
    float radius ;
} ;

Figure 2-4: Graphics Data Structure
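The listing in Figure 2-4 gives declarations only. The sketch below shows, in the same spirit but with the union collapsed to a single polygon variant for brevity, how such nodes would be allocated dynamically and chained into a list; the helper names (add_polygon, count_segments) are illustrative and do not come from the dissertation's program:

```c
#include <assert.h>
#include <stdlib.h>

enum element_type { POLYGON, CIRCLE, CYLINDER };

struct polytype { int sides ; };

struct plane_segment {
    enum element_type name ;       /* polygon, cylinder, or circle          */
    int number ;                   /* object this plane segment belongs to  */
    int color ;
    struct polytype pol ;          /* polygon data (union collapsed here)   */
    struct plane_segment *next ;   /* pointer to next plane segment         */
};

/* Prepend a newly allocated polygon node to the list; dynamic allocation
   lets the model grow one primitive at a time. */
struct plane_segment *add_polygon(struct plane_segment *head,
                                  int object, int color, int sides)
{
    struct plane_segment *node = malloc(sizeof *node);
    node->name = POLYGON;
    node->number = object;
    node->color = color;
    node->pol.sides = sides;
    node->next = head;
    return node;
}

/* Count the plane segments that belong to one rigid body. */
int count_segments(const struct plane_segment *head, int object)
{
    int n = 0;
    for (; head != NULL; head = head->next)
        if (head->number == object)
            n++;
    return n;
}
```

Traversals such as count_segments are the pattern used whenever one rigid body's primitives must be visited, e.g. when a joint moves and only that body's vertices need new fixed-system coordinates.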










1. For a given position of the robot and of the viewer, transform all local vertex data to a screen coordinate system so that it can be properly displayed.

2. Delete all plane segments which are non-visible (backward facing).

3. Sort all remaining plane segments so that the proper overlap of surfaces is obtained.

Each of these three tasks will now be discussed in detail.
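Task 2 admits a compact statement: a plane segment is backward facing when its outward normal makes an angle of more than 90 degrees with the direction from the polygon toward the viewer, which reduces to a sign test on a dot product. The code below is a generic illustration of that standard test, not the dissertation's implementation (the program's own method, built on the normal point stored with each plane segment, is developed in Section 2.4):

```c
#include <assert.h>

/* A polygon is backfacing when the dot product of its outward normal
   with the vector from a point on the polygon toward the viewer is
   negative (task 2 above). */
int is_backfacing(const double normal[3],
                  const double on_poly[3],   /* any vertex of the polygon */
                  const double viewer[3])    /* the point to view from    */
{
    double dot = 0.0;
    int i;
    for (i = 0; i < 3; i++)
        dot += normal[i] * (viewer[i] - on_poly[i]);
    return dot < 0.0;
}
```

Culling on this test typically removes roughly half of the plane segments before the sorting of task 3 begins, which is why it is performed first.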





2.3 Coordinate Transformations

In order to produce a drawing of the robot, certain input data must be known. First, the angle of each of the six revolute joints of the robot must be selected. Chapter 3 details a method of calculating joint angles so as to cause the robot to move along some desired trajectory. For the purposes of this chapter, it will be assumed that the set of joint angles is known for the picture to be drawn at this instant.

The second input item which is required is the point to be looked at and the point to view from. Knowledge of these points determines from what vantage point the robot will be drawn. The selection and modification of these items allows the user to view the image from any desired location.










2.3.1 Local to fixed coordinate transformation

As shown in Figure 2-3, the representation of the robot manipulator is made up of a series of rigid bodies. The coordinates of each vertex are known in terms of the coordinate system attached to each of the bodies. The first task to be completed is the determination of the coordinates of every vertex in terms of the fixed coordinate system attached to the base of the robot.

Since it is assumed that the six joint angles of the robot are known, the transformation of local point data to the fixed reference system is a straightforward task. The local coordinate systems shown in Figure 2-3 are named C1 through C6. These local coordinate systems were carefully selected so as to simplify the transformation of data to the fixed reference system.

Shown in Figure 2-5 is a skeletal model of the T3-776 manipulator. The vectors along each of the joint axes are labeled S1 through S6, and the vectors perpendicular to each successive pair of joint axes are labeled a12 through a67 (not all are shown in the figure). The variable joint displacements θ2 through θ6 (θj) are measured as the angles between the vectors aij and ajk in a right handed sense about the vector Sj. The first angular displacement, θ1, is measured as the angle between the X axis of the fixed reference system and the vector a12. As previously stated, it is assumed that the joint displacements θ1 through θ6 are known values.
















Figure 2-5: Skeletal Model of T3-776 Manipulator








Twist angles are defined as the relative angle between

two successive joint axes, measured about their mutual

perpendicular. For example, the twist angle a12 is

measured as the angle between the vectors S and S as seen

in a right handed sense along the vector a12. In general,

all twist angles will be constant for an industrial robot

under the assumption of rigid body motion.

Two additional parameters, link lengths and offset

lengths, are shown in the skeletal model of the manipulator.

A link length, aij, is defined as the perpendicular distance

between the pair axes Si and Sj.  All link lengths are known

constant values for a manipulator.  An offset length, Sjj,

is defined as the perpendicular distance between the two

vectors aij and ajk.  For revolute joints, offsets are
constant values. Shown in Table 2-1 are the specific

constant link length, offset, and twist angle values for the

Cincinnati Milacron T3-776 robot manipulator.

A systematic selection of each local coordinate system

was made based on the skeletal model of the manipulator.

The Ci coordinate system was established such that the Z

axis was aligned with the vector Si and the X axis was along

aij.  With this definition for each coordinate system, the

coordinates of a point known in the Cj system can be found

in the Ci system by applying the following matrix equation:













Table 2-1: T3-776 Mechanism Dimensions


    S11 = *          a12 = 0 in.      α12 = 90 deg.

    S22 = 0 in.      a23 = 44.0       α23 = 0

    S33 = 0          a34 = 0          α34 = 90

    S44 = 55.0       a45 = 0          α45 = 61

    S55 = 0          a56 = 0          α56 = 61


    * to be determined when closing the loop










    [xi]         [xj]   [dxji]
    [yi] = Aij   [yj] + [dyji]                        (2.1)
    [zi]         [zj]   [dzji]


where



          [  cj        -sj        0   ]
    Aij = [  sj cij     cj cij   -sij ]               (2.2)
          [  sj sij     cj sij    cij ]



The vector [dxji, dyji, dzji] represents the

coordinates of the origin of the Cj system as measured in

the Ci coordinate system.  Also the terms sj and cj

represent the sine and cosine of θj and the terms sij and

cij represent the sine and cosine of αij.  This notation

Since all joint angles and twist angles are assumed to

be known, equation (2.1) can be repeatedly used to transform

all vertex data to the first coordinate system, C1. A point

known in terms of the C1 coordinate system, [x1, y1, z1],

can be found in terms of the fixed coordinate system [xf,

Yf, zf], as follows:


    [xf]       [x1]
    [yf] = M   [y1]                                   (2.3)
    [zf]       [z1]










where



        [ cos φ1   -sin φ1   0 ]
    M = [ sin φ1    cos φ1   0 ]                      (2.4)
        [   0          0     1 ]



Proper use of equations (2.1) and (2.3) will result in the

determination of all vertex data for the robot in terms of

the fixed coordinate system attached to the robot base.
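As a concrete sketch of equations (2.1) through (2.4), the transformation chain can be written in Python.  The helper names are illustrative only and are not part of the original software, which predates this notation:

```python
import math

def a_matrix(theta_j, alpha_ij):
    """Rotation A_ij of equation (2.2); cj, sj are the cosine and sine of
    the joint angle, cij, sij those of the twist angle."""
    cj, sj = math.cos(theta_j), math.sin(theta_j)
    cij, sij = math.cos(alpha_ij), math.sin(alpha_ij)
    return [[cj,       -sj,       0.0],
            [sj * cij,  cj * cij, -sij],
            [sj * sij,  cj * sij,  cij]]

def transform(A, p_j, d_ji):
    """Equation (2.1): p_i = A_ij * p_j + d_ji."""
    return [sum(A[r][k] * p_j[k] for k in range(3)) + d_ji[r]
            for r in range(3)]

def fixed_from_c1(p1, phi1):
    """Equation (2.3)/(2.4): rotate C1 data into the fixed system by
    phi1 about the Z axis."""
    c, s = math.cos(phi1), math.sin(phi1)
    M = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    return [sum(M[r][k] * p1[k] for k in range(3)) for r in range(3)]
```

Applying `transform` repeatedly, joint by joint, carries a vertex from its local system down to C1, and `fixed_from_c1` completes the chain to the fixed base system.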



2.3.2 Fixed to viewing coordinate transformation

Assuming that the point to look at and the point to

view from are known in terms of the fixed coordinate system,

all vertices of the robot are now determined in terms of a

viewing coordinate system. The use of this coordinate

system will greatly simplify the eventual projection of the

robot onto the screen.

As shown in Figure 2-6, the origin of the viewing

coordinate system is the point to view from. The Z axis of

the coordinate system is the vector from the point being

looked at to the point being viewed from. With the Z axis

known, the XY plane is defined.

The exact directions of the X and Y axes are immaterial

at this point. Typically, however, the Y axis of the

viewing coordinate system will point "up." In other words,

for the robot to be drawn with the base along the bottom of
























Figure 2-6: Transformation to Viewing Coordinate System








the screen, the Y axis of the viewing coordinate system must

correspond to the Z axis of the fixed coordinate system.

This association is accomplished by selecting a zenith point

(point B in Figure 2-6) which is high above the robot. As

shown in the figure, the direction of the X axis is obtained

as the cross product of the vector along line OA with the

vector along the line OB. With the X and Z axes now known,

the Y axis can be determined.

As described, vectors along the X, Y, and Z axes of the

viewing coordinate system are known in terms of the fixed

coordinate system. A 3x3 matrix, V, can be formed such that

the first column of the matrix is made up of the known

direction cosines of the unit vector along the X axis

(measured in terms of the fixed coordinate system).

Similarly, the second and third columns of V are made up of

the direction cosines of unit vectors along the Y and Z

axes. Recognizing that V is an orthogonal rotation matrix,

the transformation from the fixed coordinate system to the

viewing coordinate system is given by



    [xv]        ( [xf]   [dxfv] )
    [yv] = V^T  ( [yf] - [dyfv] )                     (2.5)
    [zv]        ( [zf]   [dzfv] )



where the vector [dxfv,dyfv,dzfv] represents the coordinates

of the origin of the viewing coordinate system as measured

in the fixed coordinate system.
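The construction of the viewing axes and the application of equation (2.5) can be sketched as follows.  This is a minimal Python illustration; the function names and the zenith-point coordinates in the comments are assumptions, not the dissertation's own code:

```python
import math

def _sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def _cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def _unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def viewing_axes(look_at, view_from, zenith):
    """Build unit X, Y, Z axes of the viewing system (Figure 2-6):
    Z runs from the point looked at toward the viewer, X is the cross
    product of OA with OB, and Y completes the right-handed frame."""
    z = _unit(_sub(view_from, look_at))
    x = _unit(_cross(_sub(look_at, view_from), _sub(zenith, view_from)))
    y = _cross(z, x)
    return x, y, z

def to_viewing(p_fixed, axes, view_from):
    """Equation (2.5): V^T applied to (p - origin); the rows of V^T are
    the unit axes measured in the fixed system."""
    d = _sub(p_fixed, view_from)
    return [sum(a[i] * d[i] for i in range(3)) for a in axes]
```

With a zenith point placed high above the robot, the computed Y axis aligns with the fixed Z axis, so the robot is drawn with its base along the bottom of the screen, as described above.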








At this point, the coordinates of all vertices of the

robot are known in terms of the viewing coordinate system.

All that remains is to transform the data one more time to

the screen coordinate system so that it can be properly

displayed.





2.3.3 Viewing to screen coordinate system transformation

The screen coordinate system is defined such that the

origin of the system is located at the lower left corner of

the terminal screen. The X axis points to the right, the Y

axis points up, and the Z axis points out from the screen.

The scale of the axes is dependent on the type of terminal

being used. All data points must be transformed to this

coordinate system so that they may be properly displayed on

the screen. Two types of projective transformations may be

used to perform the transformation between the coordinate

systems. These projective transformations are perspective

and parallel projections and are shown in Figure 2-7.

A parallel projection is the simplest type of

transformation. The conversion to the screen coordinate

system is simply accomplished by ignoring the Z component of

the data from the viewing coordinate system. In other

words, the X and Y values of points in the viewing

coordinate system are simply plotted on the graphics screen

(accompanied by any desired translation and scaling). The

resulting image is the same as would be obtained if the









































Figure 2-7: Parallel and Perspective Transformations.
a) Parallel Projection;
b) Perspective Projection.









viewer was standing at infinity with respect to the robot.

Parallel lines will remain parallel on the screen.

The perspective transformation is accomplished by

projecting points onto a plane (screen). One point is

selected as shown in Figure 2-7 as the center of the

projection. The screen coordinates of any point are

determined by finding the coordinates of the point of

intersection of the plane (screen) with the line between the

point in question and the center of projection. This

transformation can again be accomplished via matrix

multiplication (coupled with any desired translation on the

screen).

For the purposes of this work, the parallel projection

method was used for determining the data in terms of the

screen coordinate system. This choice was made because of

the reduced number of calculations required to perform

subsequent sorting algorithms used for eventual solid color

representations of the robot.
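The two projections of Figure 2-7 can be contrasted in a few lines of Python.  The placement of the center of projection on the viewing axis at (0, 0, d) is an assumption made for this sketch; the text does not fix its coordinates:

```python
def parallel_project(p, scale=1.0, tx=0.0, ty=0.0):
    """Parallel projection: drop the Z component, then apply any desired
    scaling and translation on the screen."""
    x, y, z = p
    return (scale * x + tx, scale * y + ty)

def perspective_project(p, d):
    """Perspective projection onto the plane z = 0, with the center of
    projection assumed at (0, 0, d) on the viewing axis.  The line from
    (x, y, z) to the center meets z = 0 at parameter t = d / (d - z)."""
    x, y, z = p
    t = d / (d - z)
    return (t * x, t * y)
```

Under the parallel projection, depth has no effect on screen position, which is why parallel lines remain parallel; under the perspective projection, points farther from the screen plane are scaled toward the center.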





2.4 Backface Polygon Removal

The next task that must be accomplished is the

filtering of the linked list of plane segments such that the

elements which are backward facing are removed. In other

words, at any time approximately one half of the plane

segments will not be visible to the viewer. The list of

visible plane segments changes each instant that the robot

or the viewer changes position.









The removal of the backward facing polygons is a quick

and simple task. As indicated in the data structure shown

in Figure 2-4, the coordinates of a normal point are

specified for each plane segment. A vector normal to the

surface (and outward pointing) can be formed by subtracting

the coordinates of the origin of the local coordinate system

from the coordinates of the normal point. Just as all the

vertices were transformed from the local coordinate system

to the screen coordinate system, the normal points are also

transformed. Comparison of the Z coordinate of the normal

point with that of the origin of the local coordinate system

(both now in the screen coordinate system) determines

whether the particular plane segment is visible.
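The visibility test just described reduces to a single comparison once both points are in screen coordinates.  A sketch, assuming (as in section 2.3.3) that the screen Z axis points out of the screen toward the viewer:

```python
def backfacing(origin_screen, normal_point_screen):
    """Backface test of section 2.4.  Both points are assumed already
    transformed to the screen system.  The plane segment faces away from
    the viewer when its outward normal does not have a positive screen-Z
    component, i.e. when the normal point is not in front of the origin."""
    nz = normal_point_screen[2] - origin_screen[2]
    return nz <= 0.0
```

Segments for which `backfacing` returns `True` are simply dropped from the linked list before any sorting is attempted.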

Application of this method results in a greatly

simplified image. Shown in Figure 2-8 is an edge drawing of

the T3-776 manipulator. Figure 2-9 shows the same drawing

with backfacing polygons removed.





2.5 Sorting of Plane Segments

A characteristic of raster type graphics displays is

that whatever is drawn last will appear to be on top. For

example if two polygons, A and B, exist and polygon A is

closer to the viewer and overlaps polygon B, then polygon B

must be drawn on the screen prior to polygon A. The only

other alternative would be to redefine polygon B so that the

regions overlapped by polygon A were subtracted. In this











































Figure 2-8: Wire Frame Model of T3-776 Manipulator














































Figure 2-9: Backfacing Polygons Removed








manner the new polygon B' and the original polygon A would

no longer overlap and it would not matter in what order they

were drawn on the screen.

Numerous techniques exist for sorting polygons for

proper display. Algorithms have been developed based on two

primary techniques. The first involves sorting objects into

the correct order for display [14-16], while the second

technique concentrates on individual pixels (ray tracing

[17-18] and z-buffer algorithms [19-20]).

A sorting technique was used in this work for two

reasons: a rapid algorithm was required, and the algorithm

could not consume a substantial amount of computer memory.  The

algorithm which performs the sort is of necessity of order

n².  In general, every plane segment must be compared

against every other plane segment. To shorten this process,

however, a numbering scheme was employed so that, for

example, the sides of a cube would not be compared since it

would be impossible for them to cover each other. Similarly

it is not necessary to compare the five visible sides of a

ten sided cylinder.

The comparison of two plane elements to determine if

one of them overlaps or covers the other is accomplished by

applying a series of tests. The first test is to determine

whether a bounding box placed around the projection of one

of the objects is disjoint from a bounding box placed around

the projection of a second object. If the bounding boxes do

not overlap, then it is not possible for the two objects to

overlap and the comparison is complete.
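This first rejection test is inexpensive.  A minimal Python sketch (the helper names are illustrative, not taken from the original program):

```python
def bounding_box(points):
    """Axis-aligned box around a polygon's screen projection; only the
    X and Y screen coordinates matter."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def boxes_disjoint(a, b):
    """If the boxes do not overlap, the polygons cannot overlap either
    and no further comparison is needed."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax1 < bx0 or bx1 < ax0 or ay1 < by0 or by1 < ay0
```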









If the two bounding boxes are not disjoint, then all

the points of one object are substituted into the equation

of the infinite plane that contains the second object. If

the resulting value of the equation for all points is

greater than zero (assuming that the viewer's side of the

infinite plane is positive), then the first object may cover

the second object. Similarly, the points of the second

object are substituted into the equation of the infinite

plane containing the first object. Again whether the sign

of the equation is greater or less than zero determines

whether one object may overlap the other. Shown in Table

2-2 are all the possible combinations of signs that may

occur. Figure 2-10 shows a representative sample of the

types of overlap conditions that can occur for two plane

segments.

If it is concluded from the previous test that no

overlap can occur, then the comparison is complete. However

if an overlap may occur, then the projections of the two

objects onto the screen are checked to determine if they do

indeed overlap. This is done by determining whether any

lines of either of the two projections cross. If any of the

lines do cross, then the plane segments do overlap. If none

of the lines cross, then it may be the case that one of the

projections lies completely inside the other. One point of

each of the projected plane segments is checked to determine

whether it is inside the other projected polygon.
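The vertex-substitution step that feeds Table 2-2 amounts to classifying each polygon against the other's infinite plane.  A sketch, assuming the plane is stored as coefficients (a, b, c, d) with the viewer's side positive and using an arbitrary small tolerance for points lying in the plane:

```python
def classify(vertices, plane):
    """Substitute each vertex into the infinite-plane equation
    a*x + b*y + c*z + d (viewer's side positive) and report the sign
    pattern used in Table 2-2: '+', '-', or '+/-'.  Vertices lying in
    the plane (within tolerance) are ignored; if all vertices lie in
    the plane, '+/-' is returned as the conservative answer."""
    a, b, c, d = plane
    signs = set()
    for x, y, z in vertices:
        v = a * x + b * y + c * z + d
        if v > 1e-9:
            signs.add('+')
        elif v < -1e-9:
            signs.add('-')
    if signs == {'+'}:
        return '+'
    if signs == {'-'}:
        return '-'
    return '+/-'
```

One call per polygon pair, in each direction, yields the two signs that index into Table 2-2.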












Table 2-2: Plane Segment Sorting Cases


This table indicates whether all the vertices of plane
segment 1 are on the origin side (+ side) or the opposite
side (- side) of the infinite plane containing plane segment
2.  Similarly, the vertices of plane segment 2 are compared
to the infinite plane containing plane segment 1.


    segment 1    segment 2    result

       +            +         no overlap
       -            -         no overlap
       +            -         1 may overlap 2
       +           +/-        1 may overlap 2
      +/-           -         1 may overlap 2
       -            +         2 may overlap 1
       -           +/-        2 may overlap 1
      +/-           +         2 may overlap 1
      +/-          +/-        overlap may occur


















Figure 2-10: Plane Segments with Infinite Planes.
a) segment 1 (+), segment 2 (+);
b) segment 1 (+), segment 2 (+/-);
c) segment 1 (-), segment 2 (+/-)









Clearly, the comparison task is lengthy and time

consuming. The case of two objects whose bounding boxes are

not disjoint and yet do not actually overlap takes

considerable time. In addition, the equation of the

infinite plane for each plane segment had to be calculated

for each image to be drawn based on the position of the

robot and of the viewer. On the average, for a particular

drawing of the T3-776 robot there are 85 plane segments to

compare and sort. Due to this large number, the execution

time of this algorithm on a VAX 11/750 computer is

approximately 10 seconds per drawing.

Clearly, the sorting of plane segments in software will

not allow images to be generated rapidly enough to provide

proper animation. An improvement by at least a factor of

100 is necessary in order to reach the minimum animation

rate of 10 frames per second. A second drawback of the

algorithm is that it will fail if there exists a cyclic

overlap of plane segments. For example, if segment A

overlaps B which overlaps C which in turn overlaps segment

A, then the algorithm as written will fall into a recursive

trap. This problem can be corrected in software, but the

additional calculations will only serve to further increase

the computation time.

A solution to the problem was found via application of

special purpose computer graphics hardware. The animation

program was modified to run on a Silicon Graphics IRIS model

2400 workstation. Proper modifications of the database and








sorting method to take advantage of the hardware

improvements resulted in the rapid generation of full color,

solid images at a rate of over 10 frames per second. The

hardware system and software modifications will be detailed

in the following sections of this chapter.





2.6 Description of Hardware System

The Silicon Graphics IRIS model 2400 workstation is a

68010 based 3-D system designed to function as a stand-alone

graphics computer. It is capable of generating three

dimensional, solid color images in real time without the

need for a main frame computer.

The unique component of the IRIS is a custom VLSI chip

called the Geometry Engine. A pipeline of twelve Geometry

Engines accepts points, vectors, and polygons in user

defined coordinate systems and transforms them to the screen

coordinate system at a rate of 69,000 3-D floating point

coordinates per second.

The display of the IRIS system is a 19 inch monitor

with a screen resolution of 1024 pixels on each of 768

horizontal lines. The monitor is refreshed at a rate of 60

Hz and provides flicker free images. The image memory

consists of eight 1024 x 1024 bit planes (expandable to 32

bit planes).  An eight bit plane system will allow for 2^8

(256) colors to be displayed simultaneously on the screen.








Animation is obtained by setting the IRIS system into

double buffer mode. In this mode, half of the bit planes

are used for screen display and half for image generation.

In other words, while the user is observing an image

(contained on the front 4 bit planes), the next image is

being drawn on the back 4 bit planes. When the image is

complete, the front and back sets of bit planes are swapped

and the user sees the new picture. The complexity of the

image to be drawn governs the speed at which the bit planes

are swapped. Experience has shown that the swapping should

occur at a rate no slower than 8 Hz in order to result in

satisfactory animation.

The one drawback of double buffer mode is that there

are only half as many bit planes available for generating an

image. The reduced number of bit planes further limits the

number of colors that may be displayed on the screen at

once.  An IRIS system with only 8 bit planes, such as the

system at the University of Florida, can only display 2^4

(16) colors on the screen at once while in double buffer

mode. It should be noted, however, that a fully equipped

system with 32 bit planes can display 2^16 (65,536) colors

simultaneously in double buffer mode. This capability

should far exceed user requirements in almost all instances.









2.7 Program Modifications

It was previously noted that an increase in performance

by at least a factor of 100 was required in order to produce

images rapidly enough to result in pleasing animation. A

brief description of the graphics software library which is

included with the IRIS system will precede the discussion of

the specific data structure and software modifications which

were made.





2.7.1 IRIS coordinate transformations

The primary task in drawing images on the screen is the

transformation of coordinate values from local coordinate

systems to the screen coordinate system. The IRIS

workstation accomplishes this by manipulating data in terms

of homogeneous coordinates. Four coordinate values, [x, y,

z, w] are used to define the coordinates of a point. What

is normally thought of as the X coordinate value can be

calculated as x/w. Similarly, values for the Y and Z

coordinates of a point are readily determined. The

advantage of using homogeneous coordinates is that

rotations, translations, and scaling can all be accomplished

by 4x4 matrix multiplication.

The IRIS system constantly keeps track of the current

transformation matrix, M. This matrix represents the

transformation between some local coordinate system and the

screen coordinate system. When any graphics drawing command









is given, as for example 'pnt(50, 20, 40)', which draws a

point at the local position (50, 20, 40), the coordinate

datum is multiplied by the matrix M in order to determine

the screen coordinate values.  The transformation is

represented by the following equation:



    [x, y, z, w] = [x', y', z', w'] M                 (2.6)


The basic problem then is to make the matrix M represent the

transformation between the coordinate system attached to

each of the rigid bodies of the robot. For example, when M

represents the transformation between the fixed coordinate

system attached to the base of the robot and the screen

coordinate system, the base of the robot can be drawn in

terms of a series of move and draw commands, all of which

will use local coordinate data as input.

When the IRIS system is initialized, the matrix M is

set equal to the identity matrix. Three basic commands,

translate, rotate, and scale, are called to modify M.

Calling one of the three basic commands causes the current

transformation matrix, M, to be pre-multiplied by one of the

following matrices:


Translate (Tx, Ty, Tz) =

    [  1    0    0    0 ]
    [  0    1    0    0 ]                             (2.7)
    [  0    0    1    0 ]
    [  Tx   Ty   Tz   1 ]


Scale (Sx, Sy, Sz) =

    [  Sx   0    0    0 ]
    [  0    Sy   0    0 ]                             (2.8)
    [  0    0    Sz   0 ]
    [  0    0    0    1 ]


Rotx (θ) =

    [  1       0        0      0 ]
    [  0     cos θ    sin θ    0 ]                    (2.9)
    [  0    -sin θ    cos θ    0 ]
    [  0       0        0      1 ]


Roty (θ) =

    [  cos θ   0   -sin θ   0 ]
    [    0     1      0     0 ]                       (2.10)
    [  sin θ   0    cos θ   0 ]
    [    0     0      0     1 ]


Rotz (θ) =

    [   cos θ   sin θ   0   0 ]
    [  -sin θ   cos θ   0   0 ]                       (2.11)
    [     0       0     1   0 ]
    [     0       0     0   1 ]


With these three basic transformations, it is an easy

matter to cause the matrix M to represent the transformation


from the fixed robot coordinate system to the screen

coordinate system. A translate command can be called to

center the image as desired on the screen, a scale command

will allow for zooming in, and a series of rotate commands

will allow for any desired orientation of the robot base

with respect to the screen. The program is written so that

the parameters to these commands are modified by rolling the

mouse device. In this manner, the user can change the

orientation and scale of the drawing as desired. Since the

images will be drawn in real time, the user has the

capability to interact with the system and alter the viewing

position also in real time.

Once the matrix M represents the fixed coordinate

system, the base of the robot can be drawn. A series of

move and draw commands can be called, using local coordinate

data as input. However, since solid color images are

desired, the order that solid polygons are drawn is of

importance. Because of this, the matrix M is simply saved

and given the name Mf. When any part of the base of the

robot is to be drawn, however, the matrix Mf must be

reinstated as the current transformation matrix M.

The transformation from the matrix Mf to the coordinate

system attached to Body 1 (see Figure 2-3) is a simple task.

The transformation matrix for Body 1, M1, is given by the

following equation:


    M1 = Rotz(φ1) Mf                                  (2.12)










Similarly, the transformation matrices for bodies 2 through

6 are given by the following equations:



    M2 = Rotx(90) Rotz(-θ2) M1                        (2.13)

    M3 = Translate (a23, 0, 0) Rotz(θ3) M2            (2.14)

    M4 = Rotz(θ4) Rotx(90) Translate (0, -S44, 0) M3  (2.15)

    M5 = Rotz(θ5) Rotx(61) M4                         (2.16)

    M6 = Rotz(θ6) Rotx(61) M5                         (2.17)

At this point, all transformation matrices are known

and the image of the robot can be drawn. It is important to

note that the method described here is virtually identical

to that discussed in section 2.3. The improvement, however,

is that all the matrix multiplications required to transform

the coordinates of some point from a local coordinate system

to the screen coordinate system will be accomplished by

specially designed chips. In this way the multitude of

matrix multiplications can be accomplished in a minimal

amount of time.
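The row-vector convention of equations (2.6) through (2.11) can be mimicked in plain Python to check a transformation chain before handing it to the hardware.  This is a stand-in for illustration only, not the IRIS library calls themselves; the degree-valued arguments are an assumption of this sketch:

```python
import math

def identity4():
    return [[float(i == j) for j in range(4)] for i in range(4)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(tx, ty, tz):
    M = identity4()
    M[3][0], M[3][1], M[3][2] = tx, ty, tz     # bottom row, equation (2.7)
    return M

def rotx(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    M = identity4()
    M[1][1], M[1][2], M[2][1], M[2][2] = c, s, -s, c   # equation (2.9)
    return M

def rotz(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    M = identity4()
    M[0][0], M[0][1], M[1][0], M[1][1] = c, s, -s, c   # equation (2.11)
    return M

def apply(point, M):
    """Row-vector convention of equation (2.6): [x y z w] = [x' y' z' w'] M."""
    row = list(point) + [1.0]
    out = [sum(row[k] * M[k][j] for k in range(4)) for j in range(4)]
    return [out[0] / out[3], out[1] / out[3], out[2] / out[3]]
```

With these helpers, equation (2.12), for example, becomes `M1 = matmul(rotz(phi1_deg), Mf)`, and the remaining body matrices follow the same pattern.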




2.7.2 IRIS data structure

The data structure of the robot animation program was

also modified in order to take advantage of the unique

capabilities of the IRIS system. As previously noted, the

entire image of the Cincinnati Milacron T3-776 robot can be

formed from a series of n sided polygons. The IRIS graphics








command which draws a solid polygon in the currently defined

color is as follows:



polf (n, parray) (2.18)



where n is an integer which represents the number of sides

of the polygon and parray is an array of size nx3 which

contains the local coordinates of the vertices of the

polygon.

Since all polygons are to be defined in terms of their

local coordinates, all polygons were defined once at the

beginning of the program. For example, there exist 126 four

sided polygons in the representation of the T3-776 robot.

Therefore the variable 'p4array' was declared to be of size

[126][4][3]. Each four sided polygon was given a number

(name) and the local X,Y,Z coordinates of each of the four

points were stored in the array.

An obvious disadvantage of this scheme is that point

data will be duplicated, thereby requiring more computer

memory. For example, a cube is defined by eight points and

six polygons. In the method used, each point will appear in

the variable 'p4array' three times, i.e. as a member of each

of three sides of the cube. The advantage of this method,

however, is that of speed. The datum for a particular

polygon is pre-formatted for immediate use in the 'polf'

command. No additional data manipulation is required.
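The duplication trade-off can be illustrated with a small cube (hypothetical data, not the actual T3-776 database): eight shared vertices are flattened into a draw-ready array in which each corner appears three times, once per face.

```python
# Eight corner points of a unit cube; index = 4x + 2y + z.
cube_vertices = [(x, y, z) for x in (0.0, 1.0)
                           for y in (0.0, 1.0)
                           for z in (0.0, 1.0)]

# Six faces in the compact, indexed representation.
faces = [(0, 1, 3, 2), (4, 5, 7, 6), (0, 1, 5, 4),
         (2, 3, 7, 6), (0, 2, 6, 4), (1, 3, 7, 5)]

# Flatten into the duplicated layout analogous to 'p4array':
# 6 polygons x 4 points x 3 coordinates, ready for a polygon-fill call.
p4array = [[list(cube_vertices[i]) for i in face] for face in faces]

# Every corner point now appears in exactly three faces, trading memory
# for zero per-frame data reformatting.
count = sum(poly.count(list(v)) for v in cube_vertices for poly in p4array)
```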









2.7.3 Backface polygon removal on the IRIS

In section 2.4 a method of backface polygon removal was

discussed. A normal point was selected such that the vector

from the origin of the local coordinate system to the normal

point represented a vector normal to the particular polygon

in question. Transformation of the origin point and the

normal point to the screen coordinate system would determine

if the polygon was facing the viewer.

This method was again used on the IRIS workstation with

slight modification. From observing Figure 2-3, it is

apparent that most of the polygons which form each of the

rigid bodies have one of the coordinate vectors as their

normal vector. Therefore, associated with each polygon is

the character string 'x', 'y', 'z', or 'other'. In this

manner, not every polygon will have to have its normal

vector calculated. Allowing three normal vector

calculations for each of the coordinate axes of each rigid

body (21 total), plus the normal calculations of the 'other'

cases (50 total), the normal vector calculations have been

reduced from a previous total of 237 to the new total of 71.

Knowing the transformation matrix, Mi, for each of the

rigid bodies, the transformation of the normal points could

be carried out in software via matrix multiplication. This

process, however, would be too time consuming and would

greatly slow down the animation rate. An alternative method

was found whereby the Geometry Engine chips of the IRIS

workstation could be used to perform the matrix

multiplication in hardware.









The IRIS workstation is placed in "feedback mode."

When in feedback mode, graphics commands are not drawn on

the screen, but rather data items are stored in a buffer.

The command 'xfpt' accepts the local coordinates of a point

as input. The homogeneous coordinates [x, y, z, w] of the

point in terms of the screen coordinate system are stored as

output in the buffer. The Z value of the normal point (z/w)

is compared with the Z value of the origin point of the

local system after both points have been transformed to the

screen coordinate system by the Geometry Engine.

Comparisons of these Z values determines whether the normal

vector is pointing towards the viewer and thereby determines

if a particular polygon is visible. It should be noted that

when a parallel projection is used, as it is in this

example, that the homogeneous coordinate 'w' will always

equal 1 and that the division is therefore not necessary.





2.7.4 Modified Sorting of Plane Segments

After backfacing polygons are removed, the remaining

plane segments must be drawn on the screen in proper order

so that polygons closer to the viewer are drawn last.

Section 2.5 detailed a method for accomplishing this

sorting.  A lengthy series of tests was made to compare

every pair of plane segments. Although the sorting

algorithm produced correct results, the computational time

was unacceptable.









A new and simplified method was developed for use on

the IRIS workstation. Once all backfacing polygons are

removed, what remains is a collection of objects. It was

desired to compare and sort the objects, not the individual

plane segments which compose the objects. An object is

defined as a collection of plane segments which cannot

overlap each other. An example of an object is the base

plate of the robot. A second example is the large box

shaped object in the base which rests on top of the base

plate. These examples point out that each of the rigid

bodies shown in Figure 2-3 is composed of a collection of

objects.

Once all objects were defined, a series of rules was

generated which describes how the image of the robot is to

be drawn. An example of such a rule is as follows:



If I am looking from above the robot, then the base

plate must be drawn before the box which rests on

top of it.



The 'if clause' of the above rule will be true if the X axis

of the fixed coordinate system is pointing towards the

viewer. This information is already known since it was

required in the determination of which polygons were

backfacing. Similar rules (again based on previously

calculated facts) make it possible to sort the objects

quickly and correctly. A total number of 12 basic rules








were required to produce accurate images of the robot.

These 12 rules form the basis of a forward chaining

production system. It must be re-emphasized that the

correct ordering can be accomplished in a negligible amount

of time because all data required by the 'if clause' of each

rule were calculated previously.
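A forward-chaining pass of this kind can be sketched in a few lines.  The rule and object names below are hypothetical stand-ins for the twelve actual rules, and the single fact shown is the viewer-facing test already computed during backface removal:

```python
def order_objects(facts, rules, objects):
    """Tiny forward-chaining pass: each rule is (condition, first, second);
    when the condition holds on the precomputed facts, 'first' is moved in
    front of 'second' in the drawing order (so 'first' is drawn earlier)."""
    order = list(objects)
    for condition, first, second in rules:
        if condition(facts) and order.index(first) > order.index(second):
            order.remove(first)
            order.insert(order.index(second), first)
    return order

rules = [
    # "If looking from above the robot, the base plate must be drawn
    # before the box which rests on top of it."
    (lambda f: f['x_axis_toward_viewer'], 'base_plate', 'base_box'),
]
```

Because every condition only reads facts computed earlier in the frame, the whole pass costs a handful of comparisons per image.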





2.8 Results and Conclusions

The resulting representation of the Cincinnati Milacron

T3-776 robot is shown in Figures 2-11 and 2-12. Pictures

are generated at a rate of 10 frames per second which

results in satisfactory animation. As previously stated,

the user is capable of interacting with the system in real

time to alter the viewing position, freeze the motion, or to

zoom in.

Many applications exist for such a graphics system.

Two particular applications, the control of teleoperated

robots and the off line planning of robot tasks, will be

presented in Chapters IV and V. Additional applications in

the areas of designing workstations, and the evaluation of

new robot designs (based on factors such as workspace

envelope, dexterity capability, and interference checking)

make such a graphics system a valuable tool.





























Figure 2-11: Animated Representation of T3-776 Manipulator


Figure 2-12: Animated Representation of T3-776 Manipulator














CHAPTER III
PATH GENERATION




3.1 Introduction and Objective

This chapter is concerned with the calculation of a

series of joint angles for a robot manipulator which will

cause the manipulator to move along a user specified path.

These calculations will serve as input to the robot

animation program described in the previous chapter. In

this manner, the user will be able to observe and evaluate

the robot motion on the graphics screen prior to any

movement of the actual robot manipulator. As with the

previous chapter, the specific application to the Cincinnati

Milacron T3-776 manipulator will be presented in detail.

The first problem to be considered will be the reverse

kinematic analysis of the robot manipulator.  This analysis

determines the necessary joint displacements required to

position and orient the end effector of the robot as

desired. The problem of path generation is then reduced to

the careful selection of robot positions and orientations

along some desired path. A reverse kinematic analysis is

then performed for each of these locations.








3.2 Notation

The notation used throughout this analysis is that

developed by J. Duffy as presented in reference [21].

Briefly stated, a manipulator is composed of a series of

rigid links. One such link is shown in Figure 3-1. In this

figure it is shown that the link connects the two kinematic

pair (joint) axes Si and Sj.  The perpendicular distance

between the pair axes is aij and the vector along this

mutual perpendicular is the unit vector aij.  The twist

angle between the pair axes is labelled αij and is measured

in a right handed sense about the vector aij.
The particular kinematic pair under consideration is

the revolute joint which is shown in Figure 3-2. The

perpendicular distance between links, or more specifically

the perpendicular distance between the vectors aij and ajk,

is labelled as the offset distance Sjj.  The relative angle

between two links is shown as θj and is measured in a right

handed sense about the vector Sj.
Four types of parameters, i.e. joint angles (θj), twist

angles (αij), offsets (Sjj), and link lengths (aij), describe

the geometry of the manipulator.  It is important to note

that for a manipulator comprised of all revolute joints,

only the joint displacement angles are unknown

quantities.  The twist angles, offsets, and link lengths

will be known constant values.  Furthermore, the values for

the sine and cosine of a twist angle αij and an angular

joint displacement θj may be obtained from the equations










Figure 3-1: Spatial Link


































Figure 3-2: Revolute Pair



cij = Si · Sj        (3.1)

sij = |Si Sj aij|        (3.2)

cj = aij · ajk        (3.3)

sj = |aij ajk Sj|        (3.4)



Determinant notation is used to denote the scalar triple

product.
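As a concrete illustration, equations (3.1) through (3.4) reduce every required sine and cosine to dot and scalar triple products of unit vectors. A minimal sketch (the helper names are illustrative, not part of the original implementation, and unit-length inputs are assumed):

```python
def dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def triple(u, v, w):
    # Scalar triple product, the determinant |u v w| of the text.
    return dot(u, cross(v, w))

def twist_cos_sin(Si, Sj, aij):
    # (3.1) and (3.2): cosine and sine of the twist angle between pair axes.
    return dot(Si, Sj), triple(Si, Sj, aij)

def joint_cos_sin(aij, ajk, Sj):
    # (3.3) and (3.4): cosine and sine of the joint angle between links.
    return dot(aij, ajk), triple(aij, ajk, Sj)
```

For example, a twist of 90 degrees between a Z-directed and an X-directed axis, measured about the Y-directed mutual perpendicular, yields cosine 0 and sine 1.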




3.3. Mechanism Dimensions of the T3-776 Manipulator

Shown in Figure 3-3 is a sketch of the T3-776 robot.

The robot is described by the manufacturer as consisting of

a three roll wrist connected to ground by an elbow,

shoulder, and base revolute joint. Shown in Figure 3-4 is a

skeletal drawing of the manipulator. The three roll wrist

is modeled by a series of three revolute joints whose axes

of rotation all intersect at a point. The elbow, shoulder,

and base joints are each modeled by a revolute joint such

that the axis of rotation of the shoulder and elbow are

parallel.

In the skeletal model the joint axes are labeled sequentially with the unit vectors Si (i = 1,2,..6). The directions of the common normals between two successive joint axes Si and Sj are labeled with the unit vectors aij (ij = 12,23,..67). It must be noted that only the vectors a12 and a23 are shown in Figure 3-4 for simplicity of the diagram.
























Figure 3-3: Cincinnati Milacron T3-776 Manipulator
(component labels: front wrist gearbox, wrist drive sub-assembly, elbow drive sub-assembly, shoulder housing, shoulder drive sub-assembly, base housing, turntable gearbox drive)















Figure 3-4: Skeletal Model of T3-776 Manipulator









As previously stated, the link lengths aij, the offsets Sjj, and the twist angles αij are constants which are specific to the geometry of a particular manipulator. The values of these constants are tabulated below for the T3-776 robot.



S11 (computed subsequently)    a12 = 0 in.    α12 = 90 deg.
S22 = 0 in.                    a23 = 44.0     α23 = 0
S33 = 0                        a34 = 0        α34 = 90        (3.5)
S44 = 55.0                     a45 = 0        α45 = 61
S55 = 0                        a56 = 0        α56 = 61



In addition to the above constant dimensions, S66 and a67 are selected such that the point at the end of vector a67 is the point of interest of the tool connected to the manipulator. For example, this point may be the tip of a welding rod that the manipulator is moving along a path. Once a particular tool is selected, constant values for S66 and a67 are known.

Furthermore it is noted that the link lengths a12, a34, a45, and a56 equal zero. However it is still necessary to specify the direction of the unit vectors a12, a34, a45, and a56 in order to have an axis about which to measure the corresponding twist angles. The vector aij must be perpendicular to the plane defined by the vectors Si and Sj and as such can have two possible directions. For the vectors a12, a34, a45, and a56 this direction is arbitrarily selected as the direction parallel to the vector Si × Sj. The values for the corresponding twist angles α12, α34, α45, and α56 listed in (3.5) were determined based upon this convention.





3.4. Reverse Displacement Analysis

For the reverse displacement analysis the position and orientation of the hand of the manipulator are specified. It is desired to determine the necessary values for the relative displacements of the joints that will position the hand as desired, i.e. to determine sets of values for the six quantities θ1, θ2, θ3, θ4, θ5, and θ6. The analysis is complicated by the fact that there is most often more than one set of displacements which will place the hand in the desired position. An advantage of this reverse displacement analysis is that all displacement sets will be determined, as opposed to an iteration method which would find only one set of joint displacements.

It turns out that for the T3-776 robot there are a maximum of four possible sets of angular displacements which will position and orient the hand as specified. The unique geometry of the robot, that is S2 parallel to S3 and S4, S5, S6 intersecting at a point, allows for eight possible sets. However the limits of rotation of the first three joints reduce the solution to a maximum of four. The








limits of rotation for the angles φ1, θ2, and θ3 (see Figure 3-4) are as follows:

-135 < φ1 < 135 (degrees)

30 < θ2 < 117

-45 < θ3 < 60



3.4.1 Specification of position and orientation

The first step of the analysis is to establish a fixed

coordinate system. For this analysis a fixed coordinate

system is established as shown in Figure 3-4 such that the

origin is at the intersection of the vectors S1 and S2. The Z axis is chosen to be parallel to the S1 vector and the X axis bisects the allowable range of rotation of the angle φ1. Throughout the rest of this analysis, this coordinate

system will be referred to as the fixed coordinate system.

Using this fixed coordinate system it is possible to specify the location and orientation of the hand by specifying the vector to the tool tip, RP1 (see Figure 3-5), and the direction cosines of the vectors S6 and a67. Although RP1, S6, and a67 have a total of nine components, the latter two are related by the three conditions

S6 · S6 = 1

a67 · a67 = 1

S6 · a67 = 0











Figure 3-5: Hypothetical Closure Link

so that the three vectors (RP1, S6, a67) represent the 9-3=6 independent parameters necessary to locate a rigid body in space.



3.4.2 Closing the loop

Once the position and orientation of the hand are specified, the manipulator is connected to ground by a

hypothetical link. The problem of determining the sets of

joint displacements to position and orient the hand is thus

transformed to the problem of analyzing an equivalent

spatial mechanism with mobility equal to one. The concept

of closing the loop is not new. Pieper and Roth [22] were

the first to point out the implicit correspondence between

manipulators and spatial mechanisms using homogeneous

transfer matrices. The method of closing the loop which is

presented here was published by Duffy and Lipkin in

reference [23].

It is now necessary to determine the five constraint parameters S77, a71, S11, α71, and (θ1−φ1) that complete the loop, together with the input angle of the spatial mechanism, θ7. The first step of the completion algorithm is to establish a direction for the unit vector S7. This vector will act as the axis of rotation of the hypothetical revolute joint which serves to close the loop. The direction of S7 is arbitrary so long as it lies in the plane which is perpendicular to a67. For this analysis S7 is selected such that α67 equals 90 degrees and thus the direction cosines of S7 may be obtained from the equation







S7 = a67 × S6        (3.6)

With S7 now known, application of (3.1) gives the following expression for c71:

c71 = S7 · S1        (3.7)

A value of c71 = ±1 immediately flags a singularity which will be discussed subsequently. The unit vector a71 is now defined by

a71 = (S7 × S1) / |S7 × S1|        (3.8)

and thus by application of (3.2),

s71 = |S7 S1 a71|        (3.9)


Utilizing the vector loop equation (see Figure 3-5),

RP1 + S77S7 + a71a71 + S11S1 = 0        (3.10)

results in explicit expressions for the hypothetical link length a71 and the hypothetical offsets S77 and S11,

S77 = |RP1 a71 S1| / s71        (3.11)

a71 = |S7 RP1 S1| / s71        (3.12)

S11 = |S7 a71 RP1| / s71        (3.13)










Utilizing the result of (3.8) and equations (3.3) and (3.4) gives the following expressions for the sine and cosine of θ7,

c7 = a67 · a71        (3.14)

s7 = |a67 a71 S7|        (3.15)

In addition, by projection the expressions for the sine and cosine of (θ1−φ1) are

cos(θ1−φ1) = a71 · i        (3.16)

sin(θ1−φ1) = |S1 a71 i|        (3.17)

where i is the unit vector in the direction of the X axis.

It was mentioned earlier that a singular condition is flagged when c71 = ±1 (and therefore s71 = 0). This occurs when the vectors S7 and S1 are either parallel or antiparallel, and there is thus an infinity of possible links a71 which are perpendicular to both S7 and S1. However, the constraint S77 = 0 can be imposed to obtain a unique result as shown in Figure 3-6. Forming the inner product of (3.10) with S1 yields

S11 = −RP1 · S1        (3.18)






Figure 3-6: Hypothetical Closure when S1 is Parallel to S7










Further, from equation (3.10),

a71 = |RP1 + S11S1|        (3.19)

and provided that a71 ≠ 0,

a71 = −(RP1 + S11S1) / a71        (3.20)

The remaining angles θ7 and (θ1−φ1) can again be calculated using equations (3.14) through (3.17).

Finally, when the axes of S7 and S1 are collinear, the condition a71 = 0 flags a further singularity. The direction of the unit vector a71 in the plane normal to the axis of S1 is now arbitrary. In this case it is convenient to impose the additional constraint that θ7 = 0, making a71 equal to a67. The remaining angle (θ1−φ1) can again be calculated using (3.16) and (3.17).

Equations (3.7) through (3.17), plus the special analysis developed for S7 parallel to S1, determine all the necessary parameters of the hypothetical closure link which is shown in Figure 3-5. In addition, a unique value for the angle θ7 has been determined. Thus the reverse displacement solution of the open chain manipulator has been transformed to the solution of a closed loop mechanism with a known input angle θ7. Well documented methods for

analyzing the closed loop mechanism can thus be used to








determine all possible sets of joint displacements which can

position the hand as specified.
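The general (non-singular) case of the closure computation, equations (3.6) through (3.13), can be collected into a short sketch. This is an illustrative fragment, not the dissertation's implementation: S1 is taken as the fixed Z axis, and the function and variable names are assumptions of the sketch.

```python
import math

def dot(u, v): return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]
def cross(u, v): return (u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0])
def triple(u, v, w): return dot(u, cross(v, w))   # determinant |u v w|

def close_the_loop(RP1, S6, a67, S1=(0.0, 0.0, 1.0)):
    S7 = cross(a67, S6)                        # (3.6): alpha67 chosen as 90 deg
    c71 = dot(S7, S1)                          # (3.7)
    n = cross(S7, S1)
    nn = math.sqrt(dot(n, n))
    if nn < 1e-9:                              # c71 = +/-1: singular case
        raise ValueError("S7 parallel to S1; impose S77 = 0 per (3.18)-(3.20)")
    a71 = (n[0]/nn, n[1]/nn, n[2]/nn)          # (3.8)
    s71 = triple(S7, S1, a71)                  # (3.9)
    S77 = triple(RP1, a71, S1) / s71           # (3.11)
    a71_len = triple(S7, RP1, S1) / s71        # (3.12)
    S11 = triple(S7, a71, RP1) / s71           # (3.13)
    return S77, a71_len, S11, a71, S7
```

The returned values satisfy the loop equation (3.10), RP1 + S77 S7 + a71 a71 + S11 S1 = 0, which provides a convenient internal check.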





3.4.3 Determination of φ1, θ2, and θ3

At this point the next step of a standard reverse

position analysis would be to analyze the closed loop

mechanism formed by the addition of the hypothetical closure

link to the open chain manipulator. However due to the

relatively simple geometry of the T3-776 robot, a shorter

and more direct approach will be taken.

It should be noted from Figure 3-4 that since the direction cosines of vectors S6 and a67 are known in the fixed coordinate system, together with the length of offset S66 and link a67, the coordinates of point P2, the center of the three roll wrist, are readily known. The vector RP2 (see Figure 3-7) from the origin of the fixed coordinate system to point P2 is given by

RP2 = RP1 − S66S6 − a67a67        (3.21)

It is also shown in Figure 3-7 that the vectors RP2, a12, a23, and S4 all lie in the same plane. This is due to the unique geometry of this robot whereby the vectors S2 and S3 are parallel. Because of this, simple planar relationships can be utilized to determine the three relative displacement angles φ1, θ2, and θ3.



















Figure 3-7: Location of Wrist Point








The angle φ1 is defined as the angle between the fixed X axis and the vector a12, measured as a positive twist about the vector S1. Application of equations (3.3) and (3.4) gives the following expressions for the sine and cosine of φ1,

cos φ1 = i · a12        (3.22)

sin φ1 = |i a12 S1|        (3.23)

The only unknown in these equations is the vector a12. Since the vectors RP2, a12, and S1 all lie in the same plane, it must be the case that a12 is either parallel or antiparallel to the projection of RP2 on the XY plane. Thus the vector a12 is given by

a12 = ±[(RP2·i)i + (RP2·j)j] / [(RP2·i)^2 + (RP2·j)^2]^0.5        (3.24)

Substitution of the two possible values of a12 into (3.22) and (3.23) will result in two possible distinct values for φ1, and it can be shown that these two values will differ by 180 degrees. It is apparent that one of the calculated values of φ1 may not fall in the allowable range of rotation of -135 to +135 degrees. If this occurs then there is an immediate reduction from a maximum of four to a maximum of two possible configurations of the robot which will position the hand as specified.








Utilizing equations (3.16) and (3.17) gives the following expressions for the sine and cosine of θ1,

cos θ1 = cos(θ1−φ1)cos φ1 − sin(θ1−φ1)sin φ1        (3.25)

sin θ1 = sin(θ1−φ1)cos φ1 + cos(θ1−φ1)sin φ1        (3.26)

It must be emphasized that θ1 is defined as the relative angle between the vector a12 and the hypothetical link a71 defined in the previous section (see Figure 3-5). As such, θ1 is calculated at this time for subsequent use in the determination of the angles in the wrist (θ4, θ5, and θ6).

Before proceeding with the analysis it is important to note that the angles φ2 and φ3 (see Figure 3-4) are used in addition to the second and third actuator joint displacements θ2 and θ3. These angles are related by the following equation:

φj = θj + 90 deg.   (j=2,3)        (3.27)

The cosine of angle φ3 is calculated by applying the cosine law to the planar triangle shown in Figure 3-8. The resulting expression is

cos φ3 = (a23^2 + S44^2 − |RP2|^2) / (2 a23 S44)        (3.28)


Two corresponding values for the sine of φ3 are obtained from the equation















Figure 3-8: Determination of 2nd and 3rd Joint Angles








sin φ3 = ±(1 − cos^2 φ3)^0.5        (3.29)

Thus there are two values of φ3 which can position the point P2 as specified, and these two possibilities are referred to as the "elbow up" and "elbow down" orientations. However, due to the limit on the rotation of φ3, i.e. 45 < φ3 < 150 degrees, only one value of φ3 and thus unique values for the sine and cosine of φ3 will be possible.

From (3.27) the sine and cosine of θ3 are given by

c3 = cos(φ3−90)
   = sin φ3        (3.30)

s3 = sin(φ3−90)
   = −cos φ3        (3.31)

Equations (3.30) and (3.31) will be used subsequently in expressions for the angles in the wrist of the manipulator.

A solution for the unique value of θ2, and thereby φ2, which corresponds to each pair of angles φ1 and φ3 is obtained by use of projection. It is shown in Figure 3-8 that the vector RP2 can be written as

a23a23 + S44S4 = RP2        (3.32)


Projecting this equation upon a12 and then S1 gives the following two equations:








a23(a23·a12) + S44(S4·a12) = RP2·a12        (3.33)

a23(a23·S1) + S44(S4·S1) = RP2·S1        (3.34)

The right sides of both the above equations are known. Expanding the scalar products on the left sides of (3.33) and (3.34) gives

a23c2 − S44cos(θ2+φ3) = RP2·a12        (3.35)

a23s2 − S44sin(θ2+φ3) = RP2·S1        (3.36)

Expanding the sine and cosine of (θ2+φ3) and regrouping gives

c2[a23 − S44cos φ3] + s2[S44sin φ3] = RP2·a12        (3.37)

c2[−S44sin φ3] + s2[a23 − S44cos φ3] = RP2·S1        (3.38)

Using Cramer's rule to solve for the sine and cosine of θ2, and recognizing that

|RP2|^2 = a23^2 + S44^2 − 2a23S44cos φ3

gives








c2 = [(a23 − S44cos φ3)(RP2·a12) − (S44sin φ3)(RP2·S1)] / |RP2|^2        (3.39)

s2 = [(a23 − S44cos φ3)(RP2·S1) + (S44sin φ3)(RP2·a12)] / |RP2|^2        (3.40)

Equations (3.39) and (3.40) result in a unique value for θ2 corresponding to each pair of calculated values of φ1 and φ3. From (3.27) the sine and cosine of φ2 can be written as

cos φ2 = cos(θ2+90)
       = −s2        (3.41)

sin φ2 = sin(θ2+90)
       = c2        (3.42)


As before, each calculated value of φ2 must be checked to see if it is in the allowable range of rotation. If it is not, then the maximum number of possible configurations of the robot which can position the hand as specified is further reduced.

At this point up to two sets of the three displacement angles φ1, θ2, and θ3 are known which position the point P2 as required. However, if there were no joint angle limitations at joints 1, 2, and 3, there would be four possible sets of displacements which would position point P2 as required. This reduction from four sets of values to a maximum of two possible sets is significant in that it reduces the computational time involved in the reverse position analysis of the T3-776 robot.
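The chain (3.24), (3.28)-(3.29), and (3.39)-(3.40) can be gathered into a single sketch that, given the wrist point RP2 of (3.21), enumerates the surviving (φ1, θ2, φ3) sets. This is an illustrative fragment under the T3-776 constants a23 = 44.0 in. and S44 = 55.0 in. and the joint ranges quoted in the text; the names are assumptions of the sketch.

```python
import math

A23, S44 = 44.0, 55.0   # link length a23 and offset S44 from set (3.5)

def first_three_joints(RP2):
    x, y, z = RP2
    r = math.hypot(x, y)
    if r < 1e-9:
        return []            # wrist point on the S1 axis: phi1 indeterminate
    rp2_sq = x*x + y*y + z*z
    cphi3 = (A23*A23 + S44*S44 - rp2_sq) / (2.0*A23*S44)     # (3.28)
    if abs(cphi3) > 1.0:
        return []            # point outside the reachable annulus
    solutions = []
    for sphi3 in (math.sqrt(1.0 - cphi3*cphi3), -math.sqrt(1.0 - cphi3*cphi3)):
        phi3 = math.atan2(sphi3, cphi3)                      # (3.29)
        if not (math.radians(45.0) < phi3 < math.radians(150.0)):
            continue         # "elbow" limit keeps only one branch
        for sign in (1.0, -1.0):
            # (3.24): a12 parallel or antiparallel to the XY projection of RP2
            a12 = (sign*x/r, sign*y/r, 0.0)
            phi1 = math.atan2(a12[1], a12[0])                # (3.22)-(3.23), S1 = Z
            if abs(phi1) > math.radians(135.0):
                continue     # base joint limit
            ra = x*a12[0] + y*a12[1]                         # RP2 . a12
            rs = z                                           # RP2 . S1
            k = A23 - S44*cphi3
            c2 = (k*ra - S44*sphi3*rs) / rp2_sq              # (3.39)
            s2 = (k*rs + S44*sphi3*ra) / rp2_sq              # (3.40)
            solutions.append((phi1, math.atan2(s2, c2), phi3))
    return solutions
```

For RP2 = (55, 0, 44) the sketch returns the single set φ1 = 0, θ2 = 90 deg., φ3 = 90 deg., the other candidates being excluded by the joint limits as described above.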



3.4.4 Analysis of wrist

The remaining task of the reverse displacement analysis is to determine the values of the three angles in the wrist which correspond to each set of values of the angles θ1, θ2, and θ3. Complete rotation is possible about each of the three axes S4, S5, and S6, so that the result will not be affected by joint limitations as in the previous section.

Figure 3-9 shows a more detailed view of the three roll wrist. It is important here to reemphasize how θ4, θ5, and θ6 are measured. Each of the angles θj (j=4,5,6) is measured by a right handed rotation of the link aij into the link ajk about the vector Sj.
-jk -3
Two moving coordinate systems are attached to the robot as shown in Figure 3-10, such that x' always points along a12 and z' points along S1, and analogously x" always points along a45 and z" along S4. The relationship between the fixed XYZ system and the moving x'y'z' system is given by the following equations:

x' = x cos φ1 + y sin φ1
y' = −x sin φ1 + y cos φ1        (3.43)
z' = z



































Figure 3-9: Three Roll Wrist

Figure 3-10: Moving and Fixed Coordinate Systems










The direction cosines of the vector S6, which are given quantities in the fixed system, can now be written in the moving x'y'z' system by application of (3.43) as

x6' = x6 cos φ1 + y6 sin φ1
y6' = −x6 sin φ1 + y6 cos φ1        (3.44)
z6' = z6

where x6, y6, z6 and x6', y6', z6' are respectively the direction cosines of S6 in the fixed and moving systems.

The direction cosines of S6 in the second moving coordinate system, x6", y6", z6", are related to the direction cosines of S6 in the first moving coordinate system by three successive applications of the rotation matrix

      | cj    sjcij   sjsij |
Aij = | -sj   cjcij   cjsij |        (3.45)
      | 0     -sij    cij   |

which yields

| x6" |               | x6' |
| y6" | = A34 A23 A12 | y6' |        (3.46)
| z6" |               | z6' |









Substituting the values for α12, α23, and α34 from set (3.5) into (3.46) gives

| x6" |   |  c4c2+3   -s4   c4s2+3 | | x6' |
| y6" | = | -s4c2+3   -c4  -s4s2+3 | | y6' |        (3.47)
| z6" |   |  s2+3      0   -c2+3   | | z6' |

where s2+3 and c2+3 represent the sine and cosine of (θ2+θ3). As already stated, the abbreviations sj and cj in (3.45) denote the sine and cosine of θj, which measures the relative angular displacement between successive links aij and ajk. At this point all parameters on the right side of equation (3.47) are known with the exception of the sine and cosine of θ4.

Alternate expressions for the direction of S6 in the second moving coordinate system may be obtained by simple projection. These relationships are as follows:

| x6" |   | X5 |
| y6" | = | Y5 |        (3.48)
| z6" |   | Z5 |

where

X5 = s56s5
Y5 = −(s45c56 + c45s56c5)        (3.49)
Z5 = c45c56 − s45s56c5








Equating the right sides of (3.47) and (3.48) and rearranging gives the following three equations:

X5 = c4(x6'c2+3 + z6's2+3) + s4(−y6')        (3.50)

Y5 = c4(−y6') − s4(x6'c2+3 + z6's2+3)        (3.51)

Z5 = x6's2+3 − z6'c2+3        (3.52)

Equation (3.52) is significant in that it contains c5 as its only unknown. Substituting for Z5 from set (3.49) and solving for c5 gives

c5 = (c45c56 − x6's2+3 + z6'c2+3) / (s45s56)        (3.53)

Equation (3.53) gives two possible values for θ5, and these two values determine two configurations of the wrist for a specified end effector position.

The next task is to find the corresponding value of θ4 for each value of θ5. This is readily accomplished by utilizing Cramer's rule to solve for s4 and c4 in equations (3.50) and (3.51). The resulting equations are as follows:

c4 = [X5(x6'c2+3 + z6's2+3) − Y5y6'] / [(x6'c2+3 + z6's2+3)^2 + y6'^2]        (3.54)

s4 = −[X5y6' + Y5(x6'c2+3 + z6's2+3)] / [(x6'c2+3 + z6's2+3)^2 + y6'^2]        (3.55)








It is interesting to note that both the denominator and numerator of (3.54) and (3.55) vanish simultaneously when

y6' = 0  and  x6'c2+3 + z6's2+3 = 0        (3.56)

This constitutes what will be defined as an algebraic indeterminacy for θ4. It can be shown that these relationships are only satisfied simultaneously when S6 is colinear with S4, or in other words when θ5 = ±180 degrees. Thus a value of c5 = −1 calculated from (3.53) will act as a flag to signal when equations (3.54) and (3.55) may not be used to determine θ4.

It is readily apparent that when S6 and S4 are colinear, a degree of freedom of the manipulator is lost in that it is possible to spin the vector S5 about the S4, S6 line without changing the position and orientation of the hand. Thus the choice of θ4 is arbitrary. This problem can be overcome by setting θ4 to its last previously calculated value prior to the robot moving into the indeterminate position.
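Equations (3.53) through (3.55) can be sketched directly, with the c5 = −1 indeterminacy surfacing as a vanishing denominator. The fragment below is illustrative only (names assumed), with α45 = α56 = 61 degrees as in set (3.5):

```python
import math

S45 = S56 = math.sin(math.radians(61.0))
C45 = C56 = math.cos(math.radians(61.0))

def wrist_theta5_theta4(S6_prime, s23, c23):
    """S6_prime: direction cosines (x6', y6', z6') of S6 in the x'y'z' system;
    s23, c23: sine and cosine of (theta2 + theta3).
    Returns the two (theta5, theta4) pairs."""
    x6, y6, z6 = S6_prime
    c5 = (C45*C56 - x6*s23 + z6*c23) / (S45*S56)           # (3.53)
    pairs = []
    for s5 in (math.sqrt(1.0 - c5*c5), -math.sqrt(1.0 - c5*c5)):
        X5 = S56*s5                                         # set (3.49)
        Y5 = -(S45*C56 + C45*S56*c5)
        p = x6*c23 + z6*s23
        den = p*p + y6*y6
        if den < 1e-12:
            raise ValueError("S4, S6 colinear (c5 = -1): theta4 indeterminate")
        c4 = (X5*p - Y5*y6) / den                           # (3.54)
        s4 = -(X5*y6 + Y5*p) / den                          # (3.55)
        pairs.append((math.atan2(s5, c5), math.atan2(s4, c4)))
    return pairs
```

Each of the two θ5 branches yields its own θ4, giving the two wrist configurations noted after (3.53).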

The only remaining angle to be determined in the wrist is the corresponding value of θ6. Utilizing the unified notation [21], the following subsidiary expressions may be written for a spherical heptagon:

Z6 = Z4321        (3.57)

X6 = X43217        (3.58)

Expanding the left sides of the above two equations and solving for the cosine and sine of θ6 respectively gives

c6 = (c56c67 − Z4321) / (s56s67)        (3.59)

s6 = X43217 / s56        (3.60)

The right hand sides of equations (3.59) and (3.60) can be expanded in terms of known quantities by use of the following sets of equations:

X43217 = X4321c7 − Y4321s7        (3.61)

X4321 = X432c1 − Y432s1
Y4321 = c71(X432s1 + Y432c1) − s71Z432        (3.62)
Z4321 = s71(X432s1 + Y432c1) + c71Z432



X432 = X43c2 − Y43s2
Y432 = c12(X43s2 + Y43c2) − s12Z43
     = −Z43        (3.63)
Z432 = s12(X43s2 + Y43c2) + c12Z43
     = X43s2 + Y43c2

X43 = X4c3 − Y4s3
Y43 = c23(X4s3 + Y4c3) − s23Z4
    = X4s3 + Y4c3        (3.64)
Z43 = s23(X4s3 + Y4c3) + c23Z4
    = Z4

X4 = s45s4
Y4 = −(s34c45 + c34s45c4)        (3.65)
   = −c45
Z4 = c34c45 − s34s45c4
   = −s45c4
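Sets (3.61) through (3.65) can be written out as one short recursion. The sketch below is illustrative: it assumes the T3-776 twists α12 = α34 = 90, α23 = 0, α45 = α56 = 61 degrees, and takes (3.59)-(3.60) in the form they assume under the α67 = 90 degree choice of section 3.4.2 (c67 = 0, s67 = 1).

```python
import math

S45 = S56 = math.sin(math.radians(61.0))
C45 = C56 = math.cos(math.radians(61.0))

def theta6(t1, t2, t3, t4, t7, c71, s71):
    s1, c1 = math.sin(t1), math.cos(t1)
    s2, c2 = math.sin(t2), math.cos(t2)
    s3, c3 = math.sin(t3), math.cos(t3)
    s4, c4 = math.sin(t4), math.cos(t4)
    s7, c7 = math.sin(t7), math.cos(t7)
    X4, Y4, Z4 = S45*s4, -C45, -S45*c4            # (3.65) with alpha34 = 90
    X43 = X4*c3 - Y4*s3                           # (3.64) with alpha23 = 0
    Y43 = X4*s3 + Y4*c3
    Z43 = Z4
    X432 = X43*c2 - Y43*s2                        # (3.63) with alpha12 = 90
    Y432 = -Z43
    Z432 = X43*s2 + Y43*c2
    X4321 = X432*c1 - Y432*s1                     # (3.62)
    Y4321 = c71*(X432*s1 + Y432*c1) - s71*Z432
    Z4321 = s71*(X432*s1 + Y432*c1) + c71*Z432
    X43217 = X4321*c7 - Y4321*s7                  # (3.61)
    c6 = -Z4321 / S56                             # (3.59) with c67 = 0, s67 = 1
    s6 = X43217 / S56                             # (3.60)
    return math.atan2(s6, c6)
```

Note that the recursion uses θ1 (the angle relative to the hypothetical link a71) computed in (3.25)-(3.26), which is why that angle was carried forward.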
At this point the reverse displacement analysis is complete. For any given location and orientation of the hand, all displacement angles of the manipulator are known. If the hand is in the effective workspace of the robot, then there will be up to four possible configurations. That is, there will be up to two sets of values for the angles θ1, θ2, and θ3. For each of these sets there will be two corresponding sets of values for the angles θ4, θ5, and θ6.



3.5. Forward Displacement Analysis

For the forward displacement analysis it is assumed that the values for the angles θ1, θ2, θ3, θ4, θ5, and θ6 are known. It is desired to determine the location and orientation of the hand, i.e. the coordinates of a reference point of the tool attached to the manipulator and the








direction cosines of the vectors S6 and a67. This analysis

is included for completeness and will be referenced in

Chapter IV although it is not required for the path

generation problem.

The direction cosines of S6 and a67 in the moving x*y*z* coordinate system (see Figure 3-11) are simply (0,0,1) and (1,0,0), since the x* and z* axes are chosen to point along the vectors a67 and S6 respectively. These direction cosines may be expressed in the x'y'z' coordinate system by five successive applications of the rotation matrix

      | cj     -sj     0    |
Aji = | sjcij  cjcij  -sij  |        (3.66)
      | sjsij  cjsij   cij  |

It may be recalled that the x'y'z' coordinate system is attached to the manipulator such that x' points along a12 and z' along S1. Therefore a vector in the x*y*z* system can be expressed in the x'y'z' system using the transformation equation

| x' |                        | x* |
| y' | = A21 A32 A43 A54 A65  | y* |        (3.67)
| z' |                        | z* |

Further, a vector expressed in the x'y'z' system may be written in terms of the fixed XYZ coordinate system via use of the following transformation equation










Figure 3-11: Forward Analysis










| x |     | x' |
| y | = M | y' |        (3.68)
| z |     | z' |

where

    | cos φ1   -sin φ1   0 |
M = | sin φ1    cos φ1   0 |        (3.69)
    | 0         0        1 |

Combination of (3.67) and (3.68) and the substitution of the known direction cosines of S6 and a67 in the x*y*z* coordinate system gives

| x6 |                          | 0 |
| y6 | = M A21 A32 A43 A54 A65  | 0 |        (3.70)
| z6 |                          | 1 |

| x67 |                          | 1 |
| y67 | = M A21 A32 A43 A54 A65  | 0 |        (3.71)
| z67 |                          | 0 |

where x6, y6, z6 and x67, y67, z67 are the direction cosines of S6 and a67 respectively in the fixed coordinate system.

The last parameter to be determined is the position

coordinates of the point Pl, the point of interest of the

tool attached to the manipulator. This is obtained by use

of the following vector equation:









RP1 = RP2 + S66S6 + a67a67        (3.72)

where RP1 is the vector to the tool point of interest and RP2 is the vector to the wrist point P2. Since the direction cosines of S6 and a67 are known in the fixed system, the only unknown in (3.72) is the vector RP2. The components of this vector in the x'y'z' coordinate system are simply given by

RP2 = [a23c2 + S44s2+3]i' + [a23s2 − S44c2+3]k'        (3.73)

where i' and k' are unit vectors along the x' and z' axes. This vector may be transformed to the fixed coordinate system by multiplying it by the rotation matrix M of equation (3.69). With RP2 now known in the fixed system, RP1 can be determined from (3.72).

The forward displacement analysis is now complete in

that the position and orientation of the hand of the

manipulator are uniquely determined for a given set of joint

displacements. It is important to note that the forward and

reverse solutions are complementary. That is, a set of

joint angles determined by the reverse analysis for a given

position and orientation of the hand must produce this same

position and orientation when used as input for the forward

analysis. This serves as a first verification of results.
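The forward analysis of (3.66) through (3.73) can be sketched compactly. The fragment below is illustrative (names and defaults assumed): it uses the constants of set (3.5) and takes the tool offsets S66 and a67 as arguments, since they depend on the tool.

```python
import math

def matmul(A, B):
    return [[sum(A[r][k]*B[k][c] for k in range(3)) for c in range(3)] for r in range(3)]

def matvec(A, v):
    return tuple(sum(A[r][c]*v[c] for c in range(3)) for r in range(3))

def A_rev(theta, alpha):
    # (3.66): the transpose of (3.45); maps outer-frame components inward.
    cj, sj = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[cj, -sj, 0.0], [sj*ca, cj*ca, -sa], [sj*sa, cj*sa, ca]]

def forward(phi1, t2, t3, t4, t5, t6, S66_len=0.0, a67_len=0.0, a23=44.0, S44=55.0):
    d = math.radians
    T = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    for th, al in zip((t2, t3, t4, t5, t6), (d(90), d(0), d(90), d(61), d(61))):
        T = matmul(T, A_rev(th, al))      # builds A21 A32 A43 A54 A65 of (3.67)
    M = [[math.cos(phi1), -math.sin(phi1), 0.0],
         [math.sin(phi1),  math.cos(phi1), 0.0],
         [0.0, 0.0, 1.0]]                 # (3.69)
    MT = matmul(M, T)
    S6 = matvec(MT, (0.0, 0.0, 1.0))      # (3.70)
    a67 = matvec(MT, (1.0, 0.0, 0.0))     # (3.71)
    s2p3, c2p3 = math.sin(t2 + t3), math.cos(t2 + t3)
    RP2p = (a23*math.cos(t2) + S44*s2p3, 0.0, a23*math.sin(t2) - S44*c2p3)  # (3.73)
    RP2 = matvec(M, RP2p)
    RP1 = tuple(RP2[i] + S66_len*S6[i] + a67_len*a67[i] for i in range(3))  # (3.72)
    return RP1, S6, a67
```

Feeding a set of joint angles obtained from the reverse analysis back through this sketch and comparing the resulting pose is exactly the verification mentioned above.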









3.6 Path Generation

The reverse analysis of the manipulator will serve as

the primary tool required to cause the manipulator to move

along a specified path. Simply stated, a set of

intermediate positions and orientations of the robot will be

selected along some path between two user specified end

points. A reverse analysis will be performed at each of the

intermediate positions in order to determine the set of

joint angles which will position the manipulator as

required.

For this analysis, it will be assumed that the user has

defined the initial and final pose of the manipulator.

Specifically, this requires that the user has specified the

coordinates of the tool tip and the directions of the unit

vectors S6 and a67. These nine numbers, six of which are

independent, completely describe the position and

orientation of the manipulator. Throughout the remainder of

this chapter the initial and final positions and

orientations of the manipulator will be assumed to be known

quantities and will be referred to as rI, S6I, a67I and rF, S6F, a67F respectively. Many methods exist for the user to input these values. Alternate methods will be discussed in

Chapter V.

Many strategies can be used in order to determine a

series of intermediate positions and orientations of the

manipulator between two user specified poses. For this

analysis, it was desired to cause the tool point to move








along a straight line. Furthermore, the tool attached to

the end effector should start at rest and accelerate to some

maximum velocity before slowing down and coming to rest at

the final manipulator pose. Due to the desired motion

characteristics, a displacement function based on the cosine

curve was selected. This displacement function is shown in

Figure 3-12.





3.6.1 Determination of number of intermediate points

A first step in the analysis is to determine how many

points should be inserted between the initial and final

poses of the manipulator. Too many points will cause the

motion of the animated manipulator to appear quite slow.

Too few points will result in fast velocities which will

make the animation appear to 'flicker'.

The number of intermediate points was selected based on

two factors, i.e. the total straight line distance travelled

and the total change in orientation of the end effector.

Since the initial and final position of the tool tip are

specified, the total straight line distance is readily

calculated as follows:



dist = |rF − rI|        (3.74)



The number of steps based on translation is found by

dividing this distance by some standard step size,






































Figure 3-12: Displacement Profile (displacement vs. time)
stepstrans = dist / (step size)        (3.75)

For the Cincinnati Milacron T3-776 manipulator, a step size of 1.9 inches produced satisfactory results. For example, if the tool tip must move a total distance of 50 inches, then the number of steps based on this translation is equal to 50/1.9, or 27 steps.

The number of steps based on the change in orientation of the tool was also calculated. It is possible to determine the axis and net angle of rotation about this axis that will cause the tool of the manipulator to change orientation as desired (see reference [24]). The first step of this procedure is to determine the 3x3 rotation matrix, R, which aligns the final orientation of the end effector with the initial orientation. Two additional matrices, CI and CF, are first defined as follows:

CI = [S6I  a67I  (S6I × a67I)]
                                        (3.76)
CF = [S6F  a67F  (S6F × a67F)]

The elements of matrix R can now be determined as follows:
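The element-by-element expansion of R does not survive in this excerpt. A minimal sketch consistent with (3.76), under the assumption that R is the rotation carrying the initial orientation triad into the final one (so that R CI = CF and, the columns being orthonormal, R = CF CI^T):

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def columns_matrix(S6, a67):
    # (3.76): columns are S6, a67, and S6 x a67
    w = cross(S6, a67)
    return [[S6[0], a67[0], w[0]],
            [S6[1], a67[1], w[1]],
            [S6[2], a67[2], w[2]]]

def rotation_between(S6I, a67I, S6F, a67F):
    Ci = columns_matrix(S6I, a67I)
    Cf = columns_matrix(S6F, a67F)
    # R = Cf * Ci^T (Ci^-1 = Ci^T for orthonormal columns)
    return [[sum(Cf[r][k]*Ci[c][k] for k in range(3)) for c in range(3)]
            for r in range(3)]
```

The axis and net angle of rotation mentioned above can then be extracted from R in the standard way.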


command which draws a solid polygon in the currently defined color is as follows:

polf(n, parray)        (2.18)

where n is an integer which represents the number of sides of the polygon and parray is an array of size n×3 which contains the local coordinates of the vertices of the polygon.
Since all polygons are to be defined in terms of their
local coordinates, all polygons were defined once at the
beginning of the program. For example, there exist 126 four
sided polygons in the representation of the T3-776 robot.
Therefore the variable 'p4array' was declared to be of size
[126][4][3]. Each four sided polygon was given a number
(name) and the local X,Y,Z coordinates of each of the four
points were stored in the array.
An obvious disadvantage of this scheme is that point
data will be duplicated, thereby requiring more computer
memory. For example, a cube is defined by eight points and
six polygons. In the method used, each point will appear in
the variable 'p4array' three times, i.e. as a member of each
of three sides of the cube. The advantage of this method,
however, is that of speed. The datum for a particular
polygon is pre-formatted for immediate use in the 'polf'
command. No additional data manipulation is required.
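The duplicated-vertex trade-off described above can be illustrated with the cube example. This sketch (illustrative, not the original GL code) stores each four sided face as its own pre-formatted 4×3 block of local coordinates, ready for a fill-polygon call:

```python
def cube_p4array(s):
    # 6 four sided faces of an axis-aligned cube of side s; every corner
    # appears in three different faces (the memory-for-speed trade-off).
    lo, hi = 0.0, s
    return [
        [(lo, lo, lo), (hi, lo, lo), (hi, hi, lo), (lo, hi, lo)],   # z = lo
        [(lo, lo, hi), (hi, lo, hi), (hi, hi, hi), (lo, hi, hi)],   # z = hi
        [(lo, lo, lo), (hi, lo, lo), (hi, lo, hi), (lo, lo, hi)],   # y = lo
        [(lo, hi, lo), (hi, hi, lo), (hi, hi, hi), (lo, hi, hi)],   # y = hi
        [(lo, lo, lo), (lo, hi, lo), (lo, hi, hi), (lo, lo, hi)],   # x = lo
        [(hi, lo, lo), (hi, hi, lo), (hi, hi, hi), (hi, lo, hi)],   # x = hi
    ]
```

Each corner point is stored three times, once per adjacent face, exactly as described for the 126-polygon T3-776 model.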


pointed straight up (or down), then there is no projection of S6 on the XY plane. In this case, the previous value of E is sent to the robot controller. Communication of actual joint angles rather than roll, pitch, and yaw data would have avoided this problem.

The last orientation datum, roll, is somewhat more difficult to calculate. Shown in Figure 5-14 is the angle R, which is shown as the angle between the vectors a67 and V. The vector V is defined so as to be coplanar with the vectors k and S6 and also perpendicular to S6. V can be determined by the following equation:

V = k − (S6·k)S6        (5.9)
With the vector V now known, the sine and cosine of angle R
can be readily determined as before via application of cross
and dot products.
As with the calculation of the angle E, an algebraic indeterminacy results if the tool is pointed either straight up or straight down. In either of these cases, the vector V will be undefined. For this case, however, an alternate solution method can be found in which equation (5.9) is replaced such that the vector V is found by successive rotations of the i vector by the angles E and D about the respective axes Z and Y.
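Equation (5.9) can be sketched directly, with the straight-up/straight-down indeterminacy surfacing as a vanishing V (the function name is an assumption of this sketch):

```python
import math

def roll_reference_vector(S6, k=(0.0, 0.0, 1.0)):
    d = S6[0]*k[0] + S6[1]*k[1] + S6[2]*k[2]      # S6 . k
    V = tuple(k[i] - d*S6[i] for i in range(3))   # (5.9): V = k - (S6.k)S6
    n = math.sqrt(sum(c*c for c in V))
    if n < 1e-9:
        raise ValueError("tool points straight up or down: V undefined")
    return tuple(c/n for c in V)                  # normalized for the angle R
```

With V in hand, the sine and cosine of the roll angle R follow from cross and dot products as described above.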


Figure 2-8: Wire Frame Model of T3-776 Manipulator


method will yield accurate results when θ is greater than 90 degrees. When θ is less than 90 degrees, equations (3.85) will yield accurate results for k except when θ approaches 0.

The number of intermediate points due to rotation is based on the magnitude of the net angle of rotation. The number of steps is given by

stepsrot = θ / (rotation size)        (3.89)
Satisfactory results were found to occur when the rotation
size was chosen to be equal to 5 degrees.
The actual number of intermediate points inserted
between the user specified initial and final manipulator
poses was chosen as the larger number of steps based on
translation and on rotation (equations (3.75) and (3.89)).
This strategy ensured smooth animation of the manipulator
for all cases. In particular, the special case where the
tool tip position remained unchanged, but the orientation of
the tool changed considerably was handled with little
difficulty. For this case, the number of steps based on
translation was 0, but the number of steps due to the change
in orientation was significant.
3.6.2 Determination of intermediate positions and
orientations
The number of intermediate positions and orientations
of the manipulator is now known. The problem now is how to
distribute the intermediate positions in order to produce
the desired motion characteristics.


If the factor (S44 s2+3 + a23 c2) equals zero, then the
wrist point of the robot intersects the vector S1. This
represents a singularity because at this instant, four of
the screws of the system would intersect at a point. For
the T3-726 robot the lengths S44 and a23 are equal so that
the factor can be simplified to (s2+3 + c2). Again, due to
joint angle limitations, the T3-726 robot cannot achieve
this singularity configuration.
The last factor (s5) indicates that a singularity will
exist if θ5 equals 0 or 180 degrees. In either of these
cases, the vectors S4, S5, and S6 are coplanar and thus the
screws $4, $5, and $6 are dependent. The case when θ5
equals 180 degrees also results in the vectors S4 and S6
being colinear and thus the screws $4 and $6 are identical.
The T3-726 robot is capable of achieving this type of
singularity and this case must be monitored and avoided if
possible. It is important to note however that if
artificial joint limits were placed on θ5 in order to avoid
the singularity position, then there would be a unique
solution to the reverse kinematic analysis rather than the
two solutions that actually exist.
5.10 Selection of Configuration
It was previously mentioned that the user must select
the starting configuration of the manipulator at the first
precision point. The singularity analysis was used to


Z6 = Z4321     (3.57)

X6 = X43217     (3.58)

Expanding the left sides of the above two equations and
solving for the cosine and sine of θ6 respectively gives

c6 = (c56c67 − Z4321) / (s56s67)     (3.59)

s6 = X43217 / s67     (3.60)

The right hand side of equations (3.59) and (3.60) can be
expanded in terms of known quantities by use of the
following sets of equations:

X43217 = X4321c7 − Y4321s7     (3.61)

X4321 = X432c1 − Y432s1
Y4321 = c71(X432s1 + Y432c1) − s71Z432     (3.62)
Z4321 = s71(X432s1 + Y432c1) + c71Z432

X432 = X43c2 − Y43s2
Y432 = c12(X43s2 + Y43c2) − s12Z43     (3.63)
Z432 = s12(X43s2 + Y43c2) + c12Z43

CHAPTER VI
DISCUSSION AND CONCLUSIONS
Advanced computer graphics hardware has been utilized
to generate real time, interactive, solid color images of
industrial robots. This feature offers the user clear
images which assist in direct control (telepresence) and
workcell modeling tasks.
It is apparent that the primary purpose of the graphics
system is to help the user to command the robot manipulator.
The ability to display an abundance of data in a readily
acceptable format significantly improves this man-machine
interaction. Future work must emphasize this factor and new
and better techniques of data display must be incorporated.
For example, during a workcell modeling task, the user may
be trying to position a vision system camera and a light
source. Shadows may impact on the performance of the vision
system and must be displayed. Also it may be helpful to
display objects as being transparent so that hidden items
may be identified and the camera repositioned accordingly.
As previously stated, the additional realism which is
added to the display will slow down image refresh rates.
Slow image generation will result in poor animation and


The animation requirements were realized by writing
efficient computer code which maximizes the graphics
productivity of the Silicon Graphics system. Briefly, a
subroutine was written which accepts the six joint angles of
the robot as parameters. Given these six angles, a
representation of the robot is produced. This subroutine is
repeatedly called until a halt flag is received by the
program. After a set of joint angles is received, certain
parameters are modified depending on signals received from a
rolling mouse device. The modification of these parameters
gives the operator the ability to change viewing positions
and to zoom in as desired in real time. Such a capability
was shown to be of great importance when operating the robot
while looking only at the computer graphics representation.
Figure 4-5 shows the graphics representation of the MBA
manipulator.
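The structure of that animation loop can be sketched in C. Everything here is an illustrative stub, not the actual Silicon Graphics code: draw_robot() stands in for the subroutine that accepts the six joint angles, and apply_mouse() for the viewing-parameter updates driven by the rolling mouse.

```c
/* Conceptual sketch of the animation loop described above.
   All names and routines are illustrative stubs. */

typedef struct { double azimuth, elevation, zoom; } View;

static int frames_drawn = 0;                /* stand-in for real output   */

static void draw_robot(const double joints[6], const View *v) {
    (void)joints; (void)v;                  /* a real version would redraw */
    frames_drawn++;
}

static void apply_mouse(View *v, double dx, double dy, double dz) {
    v->azimuth   += dx;                     /* change viewing position    */
    v->elevation += dy;
    v->zoom      += dz;                     /* zoom in or out             */
}

/* Repeatedly redraw until the halt flag is raised. */
int animate(int max_frames) {
    View view = { 0.0, 0.0, 1.0 };
    double joints[6] = { 0.0 };
    int halt = 0;
    while (!halt) {
        /* in the real system the six angles arrive from the controller */
        apply_mouse(&view, 0.01, 0.0, 0.0);
        draw_robot(joints, &view);
        if (frames_drawn >= max_frames)     /* stand-in for the halt flag */
            halt = 1;
    }
    return frames_drawn;
}
```

The point is the control flow: joint angles and mouse input are consumed once per frame, so the viewpoint can change in real time while the robot moves.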
Of equal importance in the effective operation of the
robot, was the ability to accurately display the obstacles
(spheres and cubes) in the robot workspace. This is
accomplished by activating the vision system which consists
of a camera located directly above the robot and an image
processor. The vision system establishes a threshold such
that the objects of interest appear as white objects on a
black background as shown in Figure 4-6. A spiral search
pattern is initiated to locate all white objects in the
field of view of the camera. Analysis of scan line data
identifies each object as a sphere or a cube and determines


It should be noted that the use of video cameras to
provide vision feedback does have certain disadvantages.
Primarily, more than one camera must be used to provide
sufficient viewing positions for the operator. Each of
these cameras will most likely have two degrees of freedom
in addition to the zooming capability thereby giving the
operator numerous additional parameters to control.
Secondly, environmental conditions may cause the video image
to be blurred or poor lighting and contrast may make the
image confusing and unclear. Because of these and other
limitations, a system has been developed which replaces the
video feedback with a realistic computer graphics
representation of the robot and the workspace.
The use of a computer graphics system offers three
distinct advantages. First, images are clear and sharp.
The use of solid color representations of the robot and work
area with hidden surface removal provides a clear image of
the scene. Colors can be selected to provide obvious
contrast and clarity for the operator. Secondly, the
computer graphics system readily allows for viewing of the
image from any desired vantage point. This ability removes
the requirement for a multitude of video cameras and allows
the operator to focus his attention on only one monitor
screen. Lastly, the computer graphics system can provide
additional feedback to the operator. For example, when the
manipulator is moved close to an obstacle, an audio signal
can be given and the color of the obstacle can be changed.


CHAPTER I
INTRODUCTION
1.1 Background
There have been significant advances in the broad range
of technologies associated with robot manipulators in such
areas as kinematics and dynamics, control, vision, pattern
recognition, obstacle avoidance, and artificial
intelligence. A major objective is to apply these
technologies to improve the precision of operation and the
control of manipulators performing various tasks.
Just as significant an advance has been made recently
in the field of computer graphics hardware. Application of
VLSI technology has resulted in a dramatic increase in
graphics performance of up to 100 times faster than
conventional hardware. Low cost workstations ($20-50K) have
been developed which can generate real time raster images
which are formed by the illumination of discrete picture
elements. Although raster generated images may have less
picture resolution than images produced by vector refresh
devices, they do allow for the generation of solid color
images with shading and hidden surface removal. Application
of these and other computer graphics techniques have
resulted in improved image generation and realism and allow


A. If this is the case, then it can be concluded that line
segments A and B do indeed intersect.
For the case under consideration, one of the points is the
origin and thus the total number of calculations is further
reduced. Application of this algorithm to this special case
results in only 7 additions and 10 multiplications required
to determine whether the robot moves behind its base.
5.6 Calculation of Intermediate Points
Once a series of precision points is entered such that
the wrist of the robot can move along a straight line
between them, a series of intermediate positions and
orientations between these points must be determined. The
method utilized was very similar to that discussed in
Chapter III. Intermediate displacement steps were taken
based on a cosine displacement function. Similarly,
intermediate orientation changes were made based on a cosine
rotation function about the net axis of rotation.
In the previous discussion, the step size taken between
successive intermediate points was based on the total
distance travelled divided by a specified constant and on
the net rotation angle again divided by some constant. In
this case the step size was also influenced by the velocity
factor entered during the input of the precision point data.
Modification of the step size constants based on the
velocity factor resulted in greater or fewer intermediate


Figure 5-14: Calculation of Roll Parameter (V is coplanar with k and S6)


5.4 Workspace Considerations
Once a position and orientation is selected by the
user, either manually or with the mouse, a reverse kinematic
analysis is performed to determine whether the desired
precision point is in the workspace of the robot. If it is
not in the workspace, the operator is so notified and is
asked to reenter the point.
To assist the user in choosing points that will be in
the workspace of the T3-726 robot, the wrist workspace
boundary is drawn on both the top and side views. One
criterion that must be met in order for the chosen precision
point to be within the workspace of the robot is that the
position of the wrist of the robot (the point of
intersection of the last three joints) must lie within the
wrist workspace volume. This volume is established by
ignoring the last three joints of the robot and moving the
first three joints through all possible positions in order
to establish all reachable points of the wrist. The volume
of all possible wrist locations is directly related to the
mechanism dimensions of the manipulator and to the specific
joint angle limitations of the system. It is important to
recognize however that it is totally independent from the
type or size of tool that is attached to the robot and thus
once determined, the workspace volume will never change.
Shown in Figure 5-6 is a drawing of the wrist workspace
volume. Figure 5-7 is a photograph of how the workspace
volume is displayed for the operator in order to assist in


the screen, the Y axis of the viewing coordinate system must
correspond to the Z axis of the fixed coordinate system.
This association is accomplished by selecting a zenith point
(point B in Figure 2-6) which is high above the robot. As
shown in the figure, the direction of the X axis is obtained
as the cross product of the vector along line OA with the
vector along the line OB. With the X and Z axes now known,
the Y axis can be determined.
As described, vectors along the X, Y, and Z axes of the
viewing coordinate system are known in terms of the fixed
coordinate system. A 3x3 matrix, V, can be formed such that
the first column of the matrix is made up of the known
direction cosines of the unit vector along the X axis
(measured in terms of the fixed coordinate system).
Similarly, the second and third columns of V are made up of
the direction cosines of unit vectors along the Y and Z
axes. Recognizing that V is an orthogonal rotation matrix,
the transformation from the fixed coordinate system to the
viewing coordinate system is given by

    | xv |          | xf - dxfv |
    | yv |  =  V^T  | yf - dyfv |
    | zv |          | zf - dzfv |

where the vector [dxfv, dyfv, dzfv] represents the coordinates
of the origin of the viewing coordinate system as measured
in the fixed coordinate system.


Figure 3-7: Location of Wrist Point


11. Anderson, C. "Computer Applications in Robotic Design,"
Proceedings of the 1986 ASME Computers in Engineering
Conference, Chicago, July 1986.
12. Smith, R., "Robot Workmate: Interactive Graphic Off Line
Programming," SRI International, Menlo Park, Ca., Feb 1986.
13. Mahajan, R., and Walter, S., "Computer Aided Automation
Planning: Workcell Design and Simulation,"
Robotics Engineering, Vol.8, Number 8, pp. 12-15, August
1986.
14. Newell, M.E., Newell, R.G., and Sancha, T.L., "A New
Approach to the Shaded Picture Problem," Proceedings of the
ACM National Conference, New York, 1972.
15. Newell, M.E., "The Utilization of Procedure Models in
Digital Image Synthesis," PhD. dissertation, University of
Utah, 1974.
16. Schumacker, R.A., Brand, B., Gilliland, M., and Sharp,
W., "Study for Applying Computer-generated Images to Visual
Simulation," U.S. Air Force Human Resources Lab Technical
Report, Sept. 1969.
17. Appel, A., "Some Techniques for Shading Machine
Renderings of Solids," AFIPS 1968 Spring Joint Computer
Conference.
18. Goldstein, R.A., and Nagel, R., "3-D Visual Simulation,"
Simulation, pp. 25-31, January 1971.
19. Catmull, E., "Computer Display of Curved Surfaces,"
Proceedings of the IEEE Conference on Computer Graphics,
Pattern Recognition, and Data Structures, May 1975.
20. Myers, A.J., "An Efficient Visible Surface Program,"
Report to the NSF, Ohio State University Computer Graphics
Research Group, July 1975.
21. Duffy, J., Analysis of Mechanisms and Robotic Manipulators,
John Wiley and Sons, New York, 1980.
22. Pieper, D.L., and Roth, B., "The Kinematics of
Manipulators Under Computer Control," Proceedings 2nd
International Congress on the Theory of Machines and
Mechanisms, Vol. 2, Kupari, Yugoslavia, 1969.
23. Lipkin, H., and Duffy, J., "A Vector Analysis of Robot
Manipulators," Recent Advances in Robotics, John Wiley and
Sons, New York, 1985.


1. For a given position of the robot and of the viewer,
transform all local vertex data to a screen coordinate
system so that it can be properly displayed.
2. Delete all plane segments which are non-visible
(backward facing).
3. Sort all remaining plane segments so that the proper
overlap of surfaces is obtained.
Each of these three tasks will now be discussed in detail.
2.3 Coordinate Transformations
In order to produce a drawing of the robot, certain
input data must be known. First the angle of each of the
six revolute joints of the robot must be selected. Chapter
3 details a method of calculating joint angles so as to
cause the robot to move along some desired trajectory. For
the purposes of this chapter, it will be assumed that the
set of joint angles is known for the picture to be drawn at
this instant.
The second input item which is required is the point to
be looked at and the point to view from. Knowledge of these
points determines from what vantage point the robot will be
drawn. The selection and modification of these items allows
the user to view the image from any desired location.


4.2 Telepresence Concept
It is often necessary in a hazardous environment for an
operator to effectively control the motion of a robot
manipulator which he cannot observe directly. The
manipulator may be either directly guided via use of a
joystick or similar device, or it may be autonomously
controlled in which case it is desirable to preview and
monitor robot motions. The operator must be provided with
an accurate picture of the workspace environment and of the
robot motion, together with other sensory information such
as touch and sound. These inputs will allow the operator to
experience the motion and forces acting upon the distant
robot. In this context, the word telepresence is used to
describe the type of system which permits the operator to
effectively control the motion of the remote robot
manipulator.
The primary sensory feedback that an operator requires
to control robot motion is vision. In many instances,
vision alone will give the operator sufficient information
in order to avoid obstacles and to manipulate objects as
desired. Logically, this vision could be provided by video
cameras which surround the workspace of the manipulator or
which are attached to the manipulator itself.
The ability of the video camera to zoom in and provide
enlarged visualizations of the work area would also assist
in the control of the manipulator. Use of other sensors
such as force feedback devices together with the video
equipment would supplement the system.


viewer was standing at infinity with respect to the robot.
Parallel lines will remain parallel on the screen.
The perspective transformation is accomplished by
projecting points onto a plane (screen). One point is
selected as shown in Figure 2-7 as the center of the
projection. The screen coordinates of any point are
determined by finding the coordinates of the point of
intersection of the plane (screen) with the line between the
point in question and the center of projection. This
transformation can again be accomplished via matrix
multiplication (coupled with any desired translation on the
screen).
For the purposes of this work, the parallel projection
method was used for determining the data in terms of the
screen coordinate system. This choice was made because of
the reduced number of calculations required to perform
subsequent sorting algorithms used for eventual solid color
representations of the robot.
2.4 Backface Polygon Removal
The next task that must be accomplished is the
filtering of the linked list of plane segments such that the
elements which are backward facing are removed. In other
words, at any time approximately one half of the plane
segments will not be visible to the viewer. The list of
visible plane segments changes each instant that the robot
or the viewer changes position.
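Under a parallel projection along the viewing Z axis, the backface test reduces to the sign of one 2x2 determinant per plane segment. A sketch, assuming vertices are listed counterclockwise as seen from outside the body (names illustrative):

```c
/* Illustrative sketch of the backface test: a plane segment faces away
   from the viewer when the Z component of its outward normal is not
   positive.  Vertices a, b, c are three consecutive vertices of the
   segment, already in viewing coordinates. */

typedef struct { double x, y, z; } P3;

/* Z component of (b-a) x (c-a). */
double normal_z(P3 a, P3 b, P3 c) {
    double ux = b.x - a.x, uy = b.y - a.y;
    double vx = c.x - a.x, vy = c.y - a.y;
    return ux*vy - uy*vx;
}

int is_backfacing(P3 a, P3 b, P3 c) {
    return normal_z(a, b, c) <= 0.0;
}
```

Discarding these segments before sorting roughly halves the work of the overlap tests that follow.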


where S0 is simply equal to (P1 − P3) × S and is therefore a
known vector. Crossing both sides of this equation with the
vector S gives

S × (X × S) = S × S0     (4.4)

Expanding the left hand side gives

(S·S)X − (S·X)S = S × S0     (4.5)

Noting that X is perpendicular to S simplifies the above
equation and allows for the solution for the vector X,

X = (S × S0) / (S·S)     (4.6)

If the magnitude of the vector X is greater than the radius
of the sphere, the problem is completed as it is possible
for a point to move from point P1 to point P2 on a straight
line. Similarly, straight line motion is possible if the
magnitude of X is less than the radius of the sphere and
point P4 does not lie between points P1 and P2. The point
P4 will lie between the points P1 and P2 if the following
condition is satisfied:

(P4 − P1)·(P4 − P2) < 0     (4.7)
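Equations (4.4) through (4.7) can be collected into one clearance test. A sketch with illustrative names, taking P3 as the sphere center and treating X as the vector from the center to the closest point P4 on the line:

```c
#include <math.h>

/* Illustrative sketch of the straight-line clearance test: S is the
   direction of the segment P1P2, S0 its moment about the sphere center,
   X = (S x S0)/(S.S) the vector from the center to the closest point P4,
   and the path is clear when |X| exceeds the radius or P4 falls outside
   the segment. */

typedef struct { double x, y, z; } W3;

static W3 w_sub(W3 a, W3 b) { W3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static double w_dot(W3 a, W3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static W3 w_cross(W3 a, W3 b) {
    W3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}

/* Returns 1 when the segment P1P2 clears a sphere of radius r at c. */
int straight_line_clear(W3 p1, W3 p2, W3 c, double r) {
    W3 s  = w_sub(p2, p1);                 /* direction of the line      */
    W3 s0 = w_cross(w_sub(p1, c), s);      /* moment about the center    */
    W3 sx = w_cross(s, s0);
    double ss = w_dot(s, s);
    W3 x = { sx.x/ss, sx.y/ss, sx.z/ss };  /* equation (4.6)             */
    if (w_dot(x, x) > r*r) return 1;       /* closest approach > radius  */
    /* P4 = c + X; inside the segment when (P4-P1).(P4-P2) < 0 (eq 4.7) */
    W3 p4 = { c.x + x.x, c.y + x.y, c.z + x.z };
    return !(w_dot(w_sub(p4, p1), w_sub(p4, p2)) < 0.0);
}
```

Comparing squared magnitudes avoids the square root, in the same spirit as the operation counts discussed for the base test.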


system attached to each body. For this reason, local
coordinate data can be collected and permanently stored for
future use. It should be noted that the coordinate values
of the vertices were obtained from actual measurement and
from scale drawings of the robot. In this way, the computer
graphics simulation is as accurate as possible.
It is apparent from Figure 2-3 that the simulated robot
is made up of a series of primitives, i.e. n sided polygons,
circles, and cylinders. An n sided polygon was defined to
be a plane segment. A circle was defined as a 20 sided
polygon, and cylinders as a set of 10 four sided polygons.
Thus it is possible to define all parts of the simulation in
terms of plane segments.
Each primitive (polygon, circle, cylinder) which makes
up the simulation must have certain data items associated
with it. These data were managed by placing each primitive
into a node of a linked list as shown in Figure 2-4. Each
node of the linked list is a variant structure and contains
specific information such as the type of element, the body
it belongs to, and the number of the vertices which comprise
it. The linked list format, which is a standard feature of
the C programming language, was used because of its
versatility and dynamic memory allocation characteristics.
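A minimal sketch of such a node in C; the field names and the use of a simple tag enum in place of a full variant structure are illustrative, not the dissertation's actual layout.

```c
#include <stdlib.h>

/* Illustrative sketch of a linked-list node for one primitive. */

enum PrimType { POLYGON, CIRCLE, CYLINDER };

struct Primitive {
    enum PrimType     type;      /* type of element                    */
    int               body;      /* which rigid body it belongs to     */
    int               nverts;    /* number of vertices                 */
    int              *verts;     /* indices into the body's vertex set */
    struct Primitive *next;      /* next node in the linked list       */
};

/* Push a new primitive onto the front of the list. */
struct Primitive *prim_push(struct Primitive *head, enum PrimType t,
                            int body, int nverts, int *verts)
{
    struct Primitive *n = malloc(sizeof *n);
    if (!n) return head;         /* allocation failure: list unchanged */
    n->type = t; n->body = body; n->nverts = nverts; n->verts = verts;
    n->next = head;
    return n;
}
```

Because nodes are allocated on demand, the list can grow as primitives are added, which is the dynamic-memory advantage noted above.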
With every element of the simulation now defined in
terms of some local coordinate system, the following three
tasks must now be performed in order to obtain a realistic
color image:


Figure 5-10: Motion Behind Base
Figure 5-11: Intersection of Planar Line Segments


Figure 3-6: Hypothetical Closure when S1 is parallel to S7


Figure 5-3: Collection of Rigid Bodies

Table 2-2: Plane Segment Sorting Cases
This table indicates whether all the vertices of plane
segment 1 are on the origin side (+ side) or the opposite
side (- side) of the infinite plane containing plane segment
2. Similarly, the vertices of plane segment 2 are compared
to the infinite plane containing plane segment 1.

    segment 1    segment 2    result
    +            +            no overlap
    -            -            no overlap
    +            -            1 may overlap 2
    +            +/-          1 may overlap 2
    +/-          -            1 may overlap 2
    -            +            2 may overlap 1
    -            +/-          2 may overlap 1
    +/-          +            2 may overlap 1
    +/-          +/-          overlap may occur
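The classification that feeds this table can be sketched as follows. The sign convention (a vertex counts as '+' when it lies on the same side of the infinite plane as the origin, assuming the plane equation n·p = d is written with d > 0) and all names are assumptions.

```c
/* Illustrative sketch of the vertex classification behind Table 2-2. */

typedef struct { double x, y, z; } Q3;

enum Side { SIDE_PLUS, SIDE_MINUS, SIDE_SPLIT };

enum Side classify(const Q3 *verts, int n, Q3 nrm, double d) {
    int plus = 0, minus = 0;
    for (int i = 0; i < n; i++) {
        double v = nrm.x*verts[i].x + nrm.y*verts[i].y
                 + nrm.z*verts[i].z - d;
        if (v < -1e-9) plus++;           /* same side as the origin */
        else if (v > 1e-9) minus++;      /* opposite side           */
    }
    if (minus == 0) return SIDE_PLUS;
    if (plus == 0)  return SIDE_MINUS;
    return SIDE_SPLIT;
}
```

Running this once per pair of plane segments yields the two column entries, and the table then gives the drawing-order decision.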


Figure 3-1: Spatial Link


X43 = X4c3 − Y4s3
Y43 = c23(X4s3 + Y4c3) − s23Z4 = X4s3 + Y4c3     (3.64)
Z43 = s23(X4s3 + Y4c3) + c23Z4 = Z4

X4 = s45s4
Y4 = −(s34c45 + c34s45c4) = −c45     (3.65)
Z4 = c34c45 − s34s45c4 = −s45c4

At this point the reverse displacement analysis is
complete. For any given location and orientation of the
hand, all displacement angles of the manipulator are known.
If the hand is in the effective workspace of the robot, then
there will be up to four possible configurations. That is,
there will be up to two sets of values for the angles θ1,
θ2, and θ3. For each of these sets there will be two
corresponding sets of values for the angles θ4, θ5, and
θ6.
3.5 Forward Displacement Analysis
For the forward displacement analysis it is assumed
that the values for the angles θ1, θ2, θ3, θ4, θ5, and θ6
are known. It is desired to determine the location and
orientation of the hand, i.e. the coordinates of a reference
point of the tool attached to the manipulator and the


is desired so that the manipulator will start at rest at
point P1, accelerate to some maximum velocity, and finally
come to rest at point P2. This is accomplished by selecting
a local coordinate system as shown in Figure 4-11. As the
manipulator moves from P1 to P2, the displacement along the
X axis will be varied according to a cosine curve in exactly
the same manner as rectilinear motion was accomplished in
the previous chapter. The Y coordinate value is varied as
follows:

if X < |a'-P1|             Y = [1 − cos(Xπ/|a'-P1|)]·KR/2
if |a'-P1| ≤ X ≤ |b'-P1|   Y = KR     (4.14)
if |b'-P1| < X             Y = [1 − cos((|P2-P1|−X)π/(|P2-P1|−|b'-P1|))]·KR/2

Use of equation (4.14) to determine the Y coordinate as
a function of the X displacement offers a significant
advantage. Since the intermediate points to be displayed
are selected based on the X axis displacement, a cosine
distribution as discussed in Chapter III will result. The
manipulator will start at rest, accelerate to a maximum
velocity, and finally slow down and come to rest. The Y
axis displacement simply stretches the original rectilinear
motion so that the path passes through points A and B.
Equation set (4.14) guarantees that the slope of the path,
measured in terms of the local X,Y axes, will always equal
zero at points A and B and in this way the 'bent path' is
smooth and continuous. An example of a computer generated
path is shown in Figure 4-12.


    | xi |          | xj |   | dxji |
    | yi |  =  Aji  | yj | + | dyji |     (2.1)
    | zi |          | zj |   | dzji |

where

          | cj       -sj        0   |
    Aji = | sj cij    cj cij   -sij |     (2.2)
          | sj sij    cj sij    cij |

The vector [dxji, dyji, dzji] represents the
coordinates of the origin of the Cj system as measured in
the Ci coordinate system. Also the terms sj and cj
represent the sine and cosine of θj and the terms sij and
cij represent the sine and cosine of αij. This notation
will be used repeatedly throughout subsequent chapters.
Since all joint angles and twist angles are assumed to
be known, equation (2.1) can be repeatedly used to transform
all vertex data to the first coordinate system, C1. A point
known in terms of the C1 coordinate system, [x1, y1, z1],
can be found in terms of the fixed coordinate system [xf,
yf, zf], as follows:

    | xf |       | x1 |
    | yf |  =  M | y1 |     (2.3)
    | zf |       | z1 |
    | 1  |       | 1  |


24. Paul, R., Robot Manipulators, Mathematics, Programming,
and Control, MIT Press, Cambridge, Ma., 1981.
25. Lipkin, H., "Kinematic Control of a Robotic Manipulator
with a Unilateral Manual Controller," master's thesis,
University of Florida, May 1983.
26. Morton, A., "Design and Implementation of a Joystick
Based Robot Teaching System," PhD dissertation, University
of Missouri at Rolla, 1985.
27. Duffy, J., Harrell, R. Matthew, G., and Staudhammer,
J., "Enhanced Animated Display of Man-Controlled Robot,"
University of Florida, College of Engineering, Final Report
submitted to McDonnell Douglas Astronautics Co., April 1986.
28. Soylu, R., PhD dissertation in preparation, University
of Florida, 1987.
29. Duffy, J. and Sugimoto, K., "Special Configurations of
Industrial Robots," 11th International Symposium on
Industrial Robots, Tokyo, Oct 1981.


Further, from equation (3.10)

(3.19)

and provided that a71 ≠ 0

(3.20)

The remaining angles θ7 and (θ1 − φ1) can again be
calculated using equations (3.14) through (3.17).
Finally, when the axes of S7 and S1 are collinear, the
condition a71 = 0 flags a further singularity. The direction
of the unit vector a71 in the plane normal to the axis of S1
is now arbitrary. In this case it is convenient to impose
the additional constraint that θ7 = 0, making a71 equal to
a67. The remaining angle (θ1 − φ1) can again be calculated
using (3.16) and (3.17).
Equations (3.7) through (3.17) plus the special
analysis developed for S1 parallel to S7 determine all the
necessary parameters of the hypothetical closure link which
is shown in Figure 3-5. In addition, a unique value for the
angle θ7 has been determined. Thus the reverse
displacement solution of the open chain manipulator has been
transformed to the solution of a closed loop mechanism with
a known input angle θ7. Well documented methods for
analyzing the closed loop mechanism can thus be used to


Figure 4-5: Animated Representation of MBA Manipulator
Figure 4-6: Obstacle Locations Determined by Vision System


Figure 3-11: Forward Analysis


for a wide variety of new applications in the robotics
field. This dissertation addresses the use of such computer
graphics hardware in the following two areas:
a) telepresence system development
b) robotic workcell modeling
Telepresence systems deal with the direct
man-controlled and autonomous operation of remote robot
manipulators. During man-controlled operation, the user
controls the manipulator directly by guiding the end
effector via use of a joystick or similar device. The
operator moves the joystick as a "master" and the robot
follows correspondingly as a "slave." The graphics system
aids the operator by providing a real time visualization of
the manipulator and surrounding work area. Critical
information such as approaching a joint limit or exceeding
payload capabilities can be displayed immediately as an aid
to the user. For autonomous operations of a remote
manipulator, the graphics system is used to plan manipulator
tasks. Once a task is specified, the user can preview the
task on the graphics screen in order to verify motions and
functions. Modifications to the previewed task can be made
prior to the execution of the task by the actual
manipulator.
The modeling of robotic workcells is a second
application for the animation system. In a manufacturing
environment it is desirable to plan new manipulator tasks
off line. In this manner the manipulators can continue 'old


At this point, the coordinates of all vertices of the
robot are known in terms of the viewing coordinate system.
All that remains is to transform the data one more time to
the screen coordinate system so that it can be properly
displayed.
2.3.3 Viewing to screen coordinate system transformation
The screen coordinate system is defined such that the
origin of the system is located at the lower left corner of
the terminal screen. The X axis points to the right, the Y
axis points up, and the Z axis points out from the screen.
The scale of the axes is dependent on the type of terminal
being used. All data points must be transformed to this
coordinate system so that they may be properly displayed on
the screen. Two types of projective transformations may be
used to perform the transformation between the coordinate
systems. These projective transformations are perspective
and parallel projections and are shown in Figure 2-7.
A parallel projection is the simplest type of
transformation. The conversion to the screen coordinate
system is simply accomplished by ignoring the Z component of
the data from the viewing coordinate system. In other
words, the X and Y values of points in the viewing
coordinate system are simply plotted on the graphics screen
(accompanied by any desired translation and scaling). The
resulting image is the same as would be obtained if the


The second source of calibration error was introduced
by the method of joint angle measurement. Voltages derived
from potentiometers attached to each joint of the robot were
converted to digital angular values. Incorrect measurement
or conversion would cause the animated robot to differ in
appearance from the actual robot. Again, calibration
programs were written to minimize these errors. Small
fluctuations in the voltage applied to the potentiometers
would require recalibration of the system.
4.6 Conclusions
The animation tasks were successfully completed. The
graphic representation of the MBA manipulator and workspace
were more than sufficient to enable the operator to
effectively control the robot with a joystick. The ability
of the graphic system to signal the operator in real time
when any part of the manipulator was in close proximity to a
spherical obstacle greatly enhanced the operator's
performance and confidence. Lastly, the ability to preview
and modify computer generated motions prior to the robot
executing the task was quite successful and has numerous
applications whenever manipulators must operate in any type
of hazardous environment.


manipulator is at a singularity position. Using the
simplified expressions for the screw coordinates, the
determinant of the 6x6 Jacobian matrix can be expanded by
hand. The number of zero elements in the matrix makes this
a relatively simple task.
After proper expansion and reduction of terms, the
determinant of the Jacobian matrix reduces to the following
expression:

|J| = a23 S44 s45 s56 (s5)(c3)(S44 s2+3 + a23 c2)     (5.8)

It is obvious that the determinant of the Jacobian will
equal zero only if one of the terms in parentheses in
equation (5.8) equals zero.
5.9 Interpretation of Singularity Results
It is interesting to note what actual robot
configuration occurs when each of the factors of equation
(5.8) becomes equal to zero. If c3 equals zero, then θ3
must equal 90 or 270 degrees. This occurs if the vector a23
becomes colinear with the vector S4. This is a singularity
position for the robot because the vectors S2, S4, and the
wrist point are all coplanar. The singularity represents
the transition point from an elbow up to an elbow down
configuration. The T3-776 robot cannot achieve this
singularity configuration due to the limitation on the third
joint (-60 to 60 degrees).


4.4 Method of Operation
Shown in Figure 4-4 is a drawing of the configuration
of the telepresence system. This system configuration
allows for the operation of the manipulator in the two
distinct modes, viz. man-controlled and autonomous. In both
modes the problem to be solved is to identify and pick up a
cube in the workspace and then maneuver the object to a
desired point without colliding with a sphere. For the
purposes of this investigation, it was assumed that all
objects in the workspace of the robot were located on a
flat horizontal surface.
4.4.1 Man-controlled mode
In man-controlled mode the operator must control the
robot via use of a joystick while observing only the
graphics display screen. In order to duplicate the motion
of the actual manipulator, it is necessary to be able to
accurately measure the joint angles of the robot and supply
these values to the Silicon Graphics system in real time.
This was accomplished by reading voltages from
potentiometers which are attached to each joint of the robot
and which provide feedback information to the robot's
control system. Each of the potentiometer signals was
monitored and converted to digital angular values by a PDP
11/23 computer. A communications protocol was established
between this PDP computer and the Silicon Graphics system so
that angular values would be immediately available to the


control the robot. To aid the user, Silma Inc. developed a
task planning and simulation language which was independent
of the type of robot being modeled. Once a task was planned
on the graphics screen, a post processor would translate the
task from the generic planning language to the specific
language for the robot controller. This approach simplifies
operations in a situation where many diverse types of robots
must work together in an assembly operation. The software
was written for the Apollo computer series with an IBM PC/AT
version to be completed in the near future.
AutoSimulations, Inc., of Bountiful, Utah, offers a
software package which runs on the Silicon Graphics IRIS
workstation. This system emphasizes the total factory
integration. The robot workcell is just one component of
the factory. General robot tasks can be modeled at each
robot workcell and cycle times are recorded. Autonomously
guided vehicles (AGVs) are incorporated in the factory plan
together with parts storage devices and material handling
stations. The user is able to model the entire factory
operation and observe the system to identify any bottlenecks
or parts backups. Additional robot workcells can be readily
added or repositioned and AGVs can be reprogrammed in order
to alleviate any system problems. At the present time, the
software package offers an excellent model of the entire
factory; however, less emphasis is placed on the individual
workcell. Detailed manipulator tasks cannot be planned and
communicated to the robot controller. Additional programs


Table 2-1: T3-776 Mechanism Dimensions

    S11 = *          a12 = 0 in.     α12 = 90 deg.
    S22 = 0 in.      a23 = 44.0      α23 = 0
    S33 = 0          a34 = 0         α34 = 90
    S44 = 55.0       a45 = 0         α45 = 61
    S55 = 0          a56 = 0         α56 = 61

    * to be determined when closing the loop


I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.
John Staudhammer
Professor of Electrical Engineering
This dissertation was submitted to the Graduate Faculty of
the College of Engineering and to the Graduate School and
was accepted as partial fulfillment of the requirements for
the degree of Doctor of Philosophy.
May 1987
Dean, Graduate School


The direction cosines of each of the vectors of the
manipulator in terms of the coordinate system described are
shown in Table 5-2.
Substitution of known mechanism dimensions, and
utilization of subsidiary formulas described in [21], result
in the following expressions for the six screw coordinates:

$1 = (s2+3, c2+3, 0 ;  0, 0, -a23 c2)
$2 = (0, 0, 1 ;  a23 s3, a23 c3, 0)
$3 = (0, 0, 1 ;  0, 0, 0)
$4 = (0, -1, 0 ;  0, 0, 0)                                (5.7)
$5 = (s45 s4, -c45, -s45 c4 ;  S44 s45 c4, 0, S44 s45 s4)
$6 = (X54, -Z5, X̄54 ;  -S44 X̄54, 0, S44 X54)

The expressions si and ci represent the sine and cosine of
the angle θi. Similarly, the expressions s2+3 and c2+3
represent the sine and cosine of the angle (θ2 + θ3). The
terms used in $6 are defined as follows:

X̄54 = X5 s4 + Y5 c4
X54 = X5 c4 - Y5 s4
X5 = s56 s5
Y5 = -(s45 c56 + c45 s56 c5)
Z5 = c45 c56 - s45 s56 c5

Singularity positions are identified by calculating the
determinant of the Jacobian matrix for a given set of joint
angles. If the determinant equals zero, then the


Shown in Figure 4-10 are the two known points P1 and P2
which represent the initial and final positions of the end
effector of the robot. The question to be answered is
whether the line segment from P1 to P2 intersects the sphere
of radius R which is centered at P3. The first step in the
solution is to determine the coordinates of point P4 which
represents the point on the infinite line containing P1 and
P2 which is closest to the center of the sphere. If the
distance from P4 to the center of the sphere is greater than
the radius of the sphere, then it can be concluded that the
line segment P1-P2 does not intersect the sphere.
To determine point P4, the vector equation of the
infinite line which contains P1 and P2 is first obtained.
This equation is as follows:

r × S = r1 × S                                            (4.1)

where S is a unit vector along the line and thus equals
(P2 - P1)/|P2 - P1|, and r represents a vector from the origin
to any point on the line. Since point P4 lies on this line,
it must satisfy the equation and thus

(r3 + ρ) × S = r1 × S                                     (4.2)

where r3 is the vector to the sphere center P3 and ρ is the
vector from P3 to P4. Rearranging this equation gives

ρ × S = (r1 - r3) × S                                     (4.3)
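The closest-point test described above can be sketched numerically as follows. This is an illustrative version, not the dissertation's code; points are 3-tuples, and the final on-segment check is an added detail beyond the infinite-line distance test in the text:

```python
import math

def segment_hits_sphere(p1, p2, p3, radius):
    """Find P4, the point on the infinite line through P1 and P2
    nearest the sphere centre P3, and compare its distance from P3
    with the sphere radius."""
    s = [b - a for a, b in zip(p1, p2)]            # direction P1 -> P2
    length = math.sqrt(sum(c * c for c in s))
    s = [c / length for c in s]                    # unit vector S
    w = [b - a for a, b in zip(p1, p3)]            # P1 -> sphere centre
    t = sum(wc * sc for wc, sc in zip(w, s))       # projection onto line
    p4 = [a + t * sc for a, sc in zip(p1, s)]      # closest point P4
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(p4, p3)))
    if dist > radius:
        return False                               # line misses the sphere
    # The infinite line hits the sphere; also require that P4 lie
    # between P1 and P2 (endpoints inside the sphere are not handled).
    return 0.0 <= t <= length
```

For instance, a segment from (0,0,0) to (10,0,0) passes within 0.5 of a unit sphere centred at (5, 0.5, 0) and is flagged, while a sphere centred at (5, 5, 0) is cleared.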


Figure 4-1: Telepresence System


The tool orientation is defined by specifying the
direction cosines of the two unit vectors S6 and a67 as
shown in Figure 5-5. At this point it may appear that nine
quantities have been specified in order to define the six
position and orientation freedoms. However, three
constraints (S6 and a67 are unit vectors and must be
perpendicular) reduce the set of nine values to a set of
only six independent values.
One of the following set of four functions must be
selected for each precision point:
1. Open gripper.
2. Close gripper.
3. Delay.
4. No operation.
This set is indicative of the type of operations that can be
performed by the T3-726 robot. If the delay function is
selected, then the operator must also enter the length of
time in milliseconds that the robot should pause at the
precision point.
A velocity value from one to ten must be selected for
each of the precision points. The value selected is a
measure of how fast the manipulator will be moving at the
middle of the path as the robot moves from the previous
precision point to the present point. A value of ten causes
the manipulator to move at a rate approximately five times
faster than a velocity value of one.


θj = (θ/2.0) t                                            (3.92)

where θ is the net angle of rotation and t is as defined
in (3.91). The orientation vectors at the jth position are
therefore simply determined by multiplying the initial
vectors S6I and a67I by the rotation matrix determined from
(3.92) for the jth position. In this manner, the change of
orientation of the end effector will be performed smoothly.
The change in orientation will begin slowly, accelerate to
some maximum value at the middle of the motion, and finally
slow down and gradually come to rest.
At this point the position and orientation of the
manipulator at each of the intermediate locations along the
rectilinear path are known. The reverse kinematic analysis
described in section 3.4 is applied to each of the
intermediate points to determine the sets of joint angles
which will position the manipulator as required.
3.6.3 Selection of configuration
During the description of the reverse analysis, it was
pointed out that up to four configurations of the Cincinnati
Milacron T3-776 robot can position and orient the end
effector as required. Therefore, for each of the
intermediate points along the rectilinear path, up to four
sets of joint angles have been calculated. The problem now
is to filter the sets of joint angles. It is necessary to


MOTION PLANNING AND CONTROL OF ROBOT MANIPULATORS
VIA APPLICATION OF A COMPUTER GRAPHICS ANIMATED DISPLAY
BY
CARL DAVID CRANE III
A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN
PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA
1987


R11 + R22 + R33 = 1 + 2 cos θ                             (3.82)

Solving for cos θ gives

cos θ = 0.5 (R11 + R22 + R33 - 1)                         (3.83)

The angle of rotation will be defined to be positive about
the vector k such that θ is in the range of 0 to 180
degrees. With this additional constraint, equation (3.83)
will yield a unique result for the angle θ.
Three additional equations may be obtained by
subtracting pairs of off-diagonal elements as follows:

R32 - R23 = 2 kx sin θ
R13 - R31 = 2 ky sin θ                                    (3.84)
R21 - R12 = 2 kz sin θ

The components of k may be obtained from the set (3.84) to
yield

kx = (R32 - R23) / (2 sin θ)
ky = (R13 - R31) / (2 sin θ)                              (3.85)
kz = (R21 - R12) / (2 sin θ)
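The extraction in (3.83)-(3.85) can be sketched as follows. This is a minimal illustrative version (row-list matrices, angle clamped for floating-point safety), not the dissertation's program:

```python
import math

def angle_axis(R):
    """Recover the rotation angle theta (0..pi) and axis k from a 3x3
    rotation matrix R (list of rows), per equations (3.83)-(3.85)."""
    cos_t = 0.5 * (R[0][0] + R[1][1] + R[2][2] - 1.0)
    theta = math.acos(max(-1.0, min(1.0, cos_t)))
    s = 2.0 * math.sin(theta)
    if abs(s) < 1e-9:
        return theta, None        # axis ill defined for a tiny rotation
    k = ((R[2][1] - R[1][2]) / s,
         (R[0][2] - R[2][0]) / s,
         (R[1][0] - R[0][1]) / s)
    return theta, k

# Example: a 90-degree rotation about the Z axis.
Rz = [[0.0, -1.0, 0.0],
      [1.0,  0.0, 0.0],
      [0.0,  0.0, 1.0]]
theta, k = angle_axis(Rz)   # theta = pi/2, k = (0, 0, 1)
```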
When the angle of rotation is small, the axis of
rotation is not well defined since the denominator of set
(3.85) approaches zero. A similar problem with set (3.85)
exists if the angle of rotation approaches 180 degrees,
although the axis of rotation is indeed well defined in this
case. Therefore when the angle of rotation exceeds 90


The required data for each precision point can be
entered in either of two ways. The value of each item can
be typed in manually after a prompt, or the positions and
orientations can be entered by use of the mouse device. The
manual entry of data is straightforward. An advantage of
this method is that data items can be easily read in from a
file to ensure precise repetition of previous paths.
The use of the mouse to enter data, however, offers the
user a visual interpretation of where the robot will move.
The user is given a top and side view of the robot. By
moving the mouse controlled cursor to a position in the top
view part of the screen and pressing one of the mouse
buttons, the user selects the X and Y coordinate values.
Now by moving the cursor to the side view of the robot, the
user can select the Z value of the point. With the position
of the tool point now known, the user repeats the procedure
and selects a point that the tool should point towards. The
selection of this point specifies the direction cosines of
the unit vector S6. At this point a vector perpendicular to
S6 is displayed on the screen. This new vector represents a
potential choice for the orientation vector a67. The a67
vector can be rotated about the S6 vector by pressing either
the up or down arrow keys on the keyboard until the desired
orientation is established. The pressing of any mouse
button confirms the selection.


    | x |       | x' |
    | y | = M   | y' |                                    (3.68)
    | z |       | z' |

where

        | cos φ1   -sin φ1   0 |
    M = | sin φ1    cos φ1   0 |                          (3.69)
        |   0         0      1 |

Combination of (3.67) and (3.68) and the substitution of the
known direction cosines of S6 and a67 in the x* y* z*
coordinate system gives

    | x6 |                         | 0 |
    | y6 | = M A21 A32 A43 A54 A65 | 0 |                  (3.70)
    | z6 |                         | 1 |

    | x67 |                         | 1 |
    | y67 | = M A21 A32 A43 A54 A65 | 0 |                 (3.71)
    | z67 |                         | 0 |

where x6, y6, z6 and x67, y67, z67 are the direction cosines
of S6 and a67 respectively in the fixed coordinate system.
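The matrix chains of (3.70)-(3.71) can be sketched as below. The joint and twist values here are arbitrary placeholders, and the A matrix follows the form given in (3.66); this is an illustration of the chaining, not the dissertation's code:

```python
import math

def A(theta, alpha):
    """Rotation matrix of the form (3.66), built from a joint angle
    theta and a twist angle alpha (both in radians)."""
    c, s = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[c,      -s,      0.0],
            [s * ca,  c * ca, -sa],
            [s * sa,  c * sa,  ca]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Chain five A matrices (A21 .. A65) with placeholder joint angles and
# the tabulated twist angles, then transform the tool directions
# (0,0,1) and (1,0,0) as in (3.70)-(3.71).
joints = [0.2, -0.4, 0.1, 0.6, -0.3]                       # placeholders
twists = [math.radians(a) for a in (90.0, 0.0, 90.0, 61.0, 61.0)]
chain = A(joints[0], twists[0])
for th, al in zip(joints[1:], twists[1:]):
    chain = matmul(chain, A(th, al))
s6 = matvec(chain, [0.0, 0.0, 1.0])    # direction cosines of S6
a67 = matvec(chain, [1.0, 0.0, 0.0])   # direction cosines of a67
```

Since each A matrix is orthonormal, the transformed directions remain unit vectors and remain perpendicular, which is a cheap sanity check on any implementation.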
The last parameter to be determined is the position
coordinates of the point Pi, the point of interest of the
tool attached to the manipulator. This is obtained by use
of the following vector equation:


Fig. 3-12 Displacement Profile 90
Fig. 4- 1 Telepresence System 107
Fig. 4- 2 Nine String Joystick 109
Fig. 4- 3 Scissor Joystick 109
Fig. 4- 4 System Configuration 111
Fig. 4- 5 Animated Representation of MBA Manipulator .114
Fig. 4- 6 Obstacle Locations Deter, by Vision System .114
Fig. 4- 7 Display of Objects in Manipulator Workspace .116
Fig. 4- 8 Warning of Imminent Collision 119
Fig. 4- 9 Operation in Man-Controlled Mode 119
Fig. 4-10 Determination of Intersection 121
Fig. 4-11 Generation of Alternate Path 121
Fig. 4-12 Display of Computer Generated Path 125
Fig. 5- 1 Cincinnati Milacron T3-726 Manipulator . .131
Fig. 5- 2 Animated Representation of T3-726 Robot . .134
Fig. 5- 3 Collection of Rigid Bodies 135
Fig. 5- 4 Data Structure for Precision Points 138
Fig. 5- 5 Skeletal Model of T3-726 Manipulator . .139
Fig. 5- 6 Manipulator Workspace 143
Fig. 5- 7 Top and Side Views of Workspace 143
Fig. 5- 8 Three Roll Wrist 145
Fig. 5- 9 Orientation Limits 146
Fig. 5-10 Motion Behind Base 151
Fig. 5-11 Intersection of Planar Line Segments . .151
Fig. 5-12 Coordinate System for Singularity Analysis .160
Fig. 5-13 Display of Singularity Results 166
Fig. 5-14 Calculation of Roll Parameter 171


cij = Si · Sj                                             (3.1)

sij aij = Si × Sj                                         (3.2)

(3.3)

(3.4)

Determinant notation is used to denote the scalar triple
product.
3.3. Mechanism Dimensions of the T3-776 Manipulator
Shown in Figure 3-3 is a sketch of the T3-776 robot.
The robot is described by the manufacturer as consisting of
a three roll wrist connected to ground by an elbow,
shoulder, and base revolute joint. Shown in Figure 3-4 is a
skeletal drawing of the manipulator. The three roll wrist
is modeled by a series of three revolute joints whose axes
of rotation all intersect at a point. The elbow, shoulder,
and base joints are each modeled by a revolute joint such
that the axis of rotation of the shoulder and elbow are
parallel.
In the skeletal model the joint axes are labeled
sequentially with the unit vectors Si (i = 1, 2, ..., 6). The
directions of the common normals between two successive joint
axes Si and Sj are labeled with the unit vectors aij (ij =
12, 23, ..., 67). It must be noted that only the vectors a12
and a23 are shown in Figure 3-4 for simplicity of the diagram.


Many different methods could have been used to resolve
this problem. For example the change in orientation could
have been caused by an opposite rotation about the net
rotation axis or the orientation could have been held
constant until the end of the path in an effort to decouple
the rectilinear motion of the wrist from the orientation
change. The method which was eventually used involved
monitoring the motion to discover the exact joint angles
when the robot left its workspace (due to orientation) and
the exact set of joint angles when the robot reentered its
workspace (as it must since the precision point being moved
to was definitely in the robot workspace). Simple joint
interpolated motion was then performed between these two
positions. Utilization of this method then guaranteed that
a path would be generated which successfully moved the robot
between two given precision points.
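The joint-interpolated bridge between the exit and reentry sets of joint angles can be sketched as below. This is a minimal linear-interpolation illustration, not the dissertation's routine; angle values are placeholders in degrees:

```python
def joint_interpolate(q_exit, q_reenter, steps):
    """Simple joint-interpolated motion between the set of joint
    angles where the path left the workspace (due to orientation)
    and the set where it reentered."""
    path = []
    for j in range(steps + 1):
        f = j / steps                       # fraction of the way along
        path.append([a + f * (b - a) for a, b in zip(q_exit, q_reenter)])
    return path

# Placeholder six-axis joint sets at the exit and reentry positions.
path = joint_interpolate([0, 30, -10, 90, 45, 0],
                         [10, 40, -20, 80, 55, 10], 5)
```

The endpoints of the generated path coincide with the two given sets, so the rectilinear motion resumes exactly where the workspace was reentered.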
5.7 Robot Configurations
As previously stated, the basic geometric form of the
Cincinnati Milacron T3-726 robot is identical to that of the
T3-776 robot. Because of this the reverse kinematic
analysis presented in Chapter III for the T3-776 robot can
be applied again by changing the mechanism dimensions.
In Chapter III it was shown that the total number of
solutions or configurations of the robot for a given
position and orientation was eight. This was reduced to


R_P1 = R_P2 + S66 S6 + a67 a67                            (3.72)

where R_P1 is the vector to the tool point of interest and
R_P2 is the vector to the wrist point P2. Since the
direction cosines of S6 and a67 are known in the fixed
system, the only unknown in (3.72) is the vector R_P2. The
components of this vector in the x'y'z' coordinate system
are simply given by

R_P2 = (a23 c2 + S44 s2+3) i' + (a23 s2 - S44 c2+3) k'    (3.73)

where i' and k' are unit vectors along the x' and z' axes.
This vector may be transformed to the fixed coordinate
system by multiplying it by the rotation matrix M of
equation (3.69). With R_P2 now known in the fixed system,
R_P1 can be determined from (3.72).
The forward displacement analysis is now complete in
that the position and orientation of the hand of the
manipulator are uniquely determined for a given set of joint
displacements. It is important to note that the forward and
reverse solutions are complementary. That is, a set of
joint angles determined by the reverse analysis for a given
position and orientation of the hand must produce this same
position and orientation when used as input for the forward
analysis. This serves as a first verification of results.
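The complementarity check described above can be illustrated with a toy planar two-link arm (link lengths borrowed from a23 and S44); the forward and reverse analyses here are textbook 2R formulas, standing in for the T3-776 analyses, and serve only to show the round-trip verification:

```python
import math

L1, L2 = 44.0, 55.0   # toy link lengths (borrowing a23 and S44)

def forward(t1, t2):
    """Forward analysis of the toy 2R arm: joint angles to hand point."""
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return x, y

def reverse(x, y, elbow=+1):
    """Reverse analysis: hand point to one of the two joint-angle sets."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    t2 = elbow * math.acos(max(-1.0, min(1.0, c2)))
    t1 = math.atan2(y, x) - math.atan2(L2 * math.sin(t2),
                                       L1 + L2 * math.cos(t2))
    return t1, t2

# Round trip: angles from the reverse analysis, fed back through the
# forward analysis, must reproduce the original hand position.
x0, y0 = forward(0.4, -0.7)
t1, t2 = reverse(x0, y0, elbow=-1)
x1, y1 = forward(t1, t2)
```

Running the recovered angles forward reproduces (x0, y0) to machine precision, which is exactly the "first verification of results" the text describes.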


direction cosines of the vectors S6 and a67. This analysis
is included for completeness and will be referenced in
Chapter IV although it is not required for the path
generation problem.
The direction cosines of S6 and a67 in the moving x* y*
z* coordinate system (see Figure 3-11) are simply (0, 0, 1)
and (1, 0, 0) since the x* and z* axes are chosen to point
along the vectors a67 and S6 respectively. These direction
cosines may be expressed in the x'y'z' coordinate system by
five successive applications of the rotation matrix

    |  cj       -sj       0   |
    |  sj cij    cj cij  -sij |                           (3.66)
    |  sj sij    cj sij   cij |

It may be recalled that the x'y'z' coordinate system is
attached to the manipulator such that x' points along a12
and z' along S1. Therefore a vector in the x* y* z* system
can be expressed in the x'y'z' system using the
transformation equation

    | x' |                         | x* |
    | y' | = A21 A32 A43 A54 A65   | y* |                 (3.67)
    | z' |                         | z* |

Further, a vector expressed in the x'y'z' system may be
written in terms of the fixed xyz coordinate system via use
of the following transformation equation


manner the new polygon B' and the original polygon A would
no longer overlap and it would not matter in what order they
were drawn on the screen.
Numerous techniques exist for sorting polygons for
proper display. Algorithms have been developed based on two
primary techniques. The first involves sorting objects into
the correct order for display [14-16], while the second
technique concentrates on individual pixels (ray tracing
[17-18] and z-buffer algorithms [19-20]).
A sorting technique was used in this work for two
reasons: a rapid algorithm was required, and one which did
not require a substantial amount of computer memory. The
algorithm which performs the sort is of necessity of order
n². In general, every plane segment must be compared
against every other plane segment. To shorten this process,
however, a numbering scheme was employed so that, for
example, the sides of a cube would not be compared since it
would be impossible for them to cover each other. Similarly
it is not necessary to compare the five visible sides of a
ten sided cylinder.
The comparison of two plane elements to determine if
one of them overlaps or covers the other is accomplished by
applying a series of tests. The first test is to determine
whether a bounding box placed around the projection of one
of the objects is disjoint from a bounding box placed around
the projection of a second object. If the bounding boxes do
not overlap, then it is not possible for the two objects to
overlap and the comparison is complete.
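The bounding-box test described above can be sketched as follows; this is an illustrative axis-aligned version, not the dissertation's code:

```python
def bounding_box(points):
    """Axis-aligned bounding box of a projected polygon, returned as
    (xmin, ymin, xmax, ymax)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def boxes_disjoint(a, b):
    """First overlap test from the text: if the boxes do not overlap,
    the two plane segments cannot cover each other and the comparison
    is complete."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax1 < bx0 or bx1 < ax0 or ay1 < by0 or by1 < ay0

# A triangle near the origin and a square well away from it: the
# boxes are disjoint, so no further comparison is needed.
tri = bounding_box([(0, 0), (2, 0), (1, 2)])
quad = bounding_box([(3, 3), (5, 3), (5, 5), (3, 5)])
```

When the boxes do overlap, the polygons still may or may not cover each other, so the later, more expensive tests in the series are applied.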


Once an acceptable path is selected, joint angle
information is sent from the Silicon Graphics system to the
PDP 11/23 which controls the robot. The robot controller
adjusts the angles of the manipulator such that the
manipulator performs the same motion as was just previewed
on the graphics screen. A more detailed description of the
communication task and the operation of the MBA manipulator
are contained in reference [27].
4.5 Problems Encountered
It was very important that the animated scene
accurately reflect the actual situation of the robot and
workspace. Two types of system calibrations represented the
greatest problem encountered in accomplishing this task.
The first calibration problem involved the correct
positioning on the graphics screen of obstacles identified
by the vision system. A specialized calibration program was
written to correctly position animated obstacles with
respect to the animated robot. In this procedure, the
actual robot was positioned directly above a cube in the
workspace. The animated obstacles were then translated and
rotated as a group by moving a mouse input device until the
animated robot was also directly above the corresponding
animated cube. The procedure was repeated until the
animated scene was identical to the actual workspace. The
translation and rotation calibration data were then written
directly into the main program.


effector of the robot and places the object at that
position. This 'manual intervention' by the operator could
be replaced by utilizing force feedback information from the
gripper. Basically, if the gripper is closed while it is
near an object and a force is felt in the gripper, then the
graphics system would automatically attach the object in the
animated scene.
The final requirement for the graphics system was to
warn the operator in real time when any part of the
manipulator was close to an obstacle (sphere). The ability
to do this in real time was a significant attribute. This
feature was accomplished by placing a bounding box around
the sphere. The graphics system was then placed in 'select
mode' and the clipping region was set equal to the planes of
the bounding box.
Commands were then given to redraw the robot. When in
'select mode', no drawing command is executed. Rather, any
drawing command which would cause something to be drawn
within the clipping planes results in a flag being placed in
a buffer. Therefore, if the buffer
a flag being placed in a buffer. Therefore, if the buffer
is empty after the robot is redrawn, then no part of the
robot fell within the bounding box surrounding the sphere.
In actual practice, two bounding boxes were used, one
whose size equaled the diameter of the sphere, and a larger
one whose size equaled twice the diameter of the sphere.
When the robot did not intersect either box, the color of
the sphere was white. If the robot intersected the larger
box, but not the smaller one, then the graphics system
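The two-box warning scheme described above can be sketched geometrically as below. This is an illustrative version only: the actual system used the hardware select mode rather than explicit point tests, and the 'yellow'/'red' warning labels are assumptions (the text confirms only that the sphere is drawn white when neither box is intersected):

```python
def proximity_colour(robot_points, centre, diameter):
    """Classify sampled points on the robot against two axis-aligned
    boxes around the sphere: one box the size of the sphere's
    diameter and one twice that size.  Colour names past 'white' are
    assumed, not taken from the text."""
    def inside(p, half_extent):
        return all(abs(c - o) <= half_extent for c, o in zip(p, centre))
    if any(inside(p, diameter / 2.0) for p in robot_points):
        return 'red'      # inside the box whose size equals the diameter
    if any(inside(p, diameter) for p in robot_points):
        return 'yellow'   # inside the box twice the sphere's diameter
    return 'white'        # no intersection with either box
```

For a sphere of diameter 4 at the origin, a robot point at (10, 10, 10) leaves the sphere white, (3, 0, 0) trips only the larger box, and (1, 0, 0) trips the smaller one.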


points being generated. Since the graphics system generates
images at a relatively constant rate, fewer intermediate
points would cause the viewer to observe a faster velocity
as the robot moved. The communication of the velocity data
and problems involved in the control of the actual robot
will be discussed later in this chapter.
As before, a reverse kinematic analysis was performed
at each of the intermediate positions and orientations.
Changes in joint angles were monitored, and abrupt changes
resulted in a recursive call of the program to add extra
points in order to slow down any rapid angular changes
(which typically occurred near singularity points). One
major change was made however. In the method presented in
Chapter III, the program was terminated if an intermediate
position and orientation was outside of the workspace of the
robot. For this case however, a different strategy was
used.
Prior to determining intermediate points between two
successive precision points, it was already determined that
both precision points were in the robot workspace and that
the wrist could move on a straight line between them. It
must be noted however that the orientation change caused by
rotation about the net axis of rotation could result in an
orientation which the robot could not attain. In other
words, the wrist of the robot was guaranteed to lie in its
workspace, but the required orientation may be impossible to
achieve.


As previously stated, the link lengths aij, the offsets
Sjj, and the twist angles αij are constants which are
specific to the geometry of a particular manipulator. The
values of these constants are tabulated below for the T3-776
robot.

    S11 = *          a12 = 0 in.     α12 = 90 deg.
    S22 = 0 in.      a23 = 44.0      α23 = 0
    S33 = 0          a34 = 0         α34 = 90              (3.5)
    S44 = 55.0       a45 = 0         α45 = 61
    S55 = 0          a56 = 0         α56 = 61

    * S11 will be computed subsequently

In addition to the above constant dimensions, S66 and
a67 are selected such that the point at the end of vector
a67 is the point of interest of the tool connected to the
manipulator. For example this point may be the tip of a
welding rod that the manipulator is moving along a path.
Once a particular tool is selected, constant values for S66
and a67 are known.
Furthermore it is noted that the link lengths a34,
a45, and a56 equal zero. However it is still necessary to
specify the direction of the unit vectors a12, a34, a45, and
a56 in order to have an axis about which to measure the
corresponding twist angles. The vector aij must be
perpendicular to the plane defined by the vectors Si and Sj
and as such can have two possible directions. For the


a23 (a23 · a12) + S44 (S4 · a12) = -P2 · a12              (3.33)

a23 (a23 · S1) + S44 (S4 · S1) = -P2 · S1                 (3.34)

The right sides of both the above equations are known.
Expanding the scalar products on the left sides of (3.33)
and (3.34) gives

a23 c2 - S44 cos(θ2 + φ3) = -P2 · a12                     (3.35)

a23 s2 - S44 sin(θ2 + φ3) = -P2 · S1                      (3.36)

Expanding the sine and cosine of (θ2 + φ3) and regrouping
gives

c2 [a23 - S44 cos φ3] + s2 [S44 sin φ3] = -P2 · a12       (3.37)

c2 [-S44 sin φ3] + s2 [a23 - S44 cos φ3] = -P2 · S1       (3.38)

Using Cramer's rule to solve for the sine and cosine of θ2,
and recognizing that

|P2|² = a23² + S44² - 2 a23 S44 cos φ3

gives
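The 2x2 linear system (3.37)-(3.38) can be solved numerically as sketched below. This is an illustrative round-trip check, not the dissertation's derivation; note that the system's determinant equals |P2|², consistent with the law-of-cosines identity above:

```python
import math

def solve_theta2(a23, s44, phi3, rhs_a12, rhs_s1):
    """Solve (3.37)-(3.38) for cos(theta2) and sin(theta2) by
    Cramer's rule.  rhs_a12 and rhs_s1 are the known right-hand
    sides -P2.a12 and -P2.S1."""
    p = a23 - s44 * math.cos(phi3)   # coefficient [a23 - S44 cos phi3]
    q = s44 * math.sin(phi3)         # coefficient [S44 sin phi3]
    det = p * p + q * q              # = |P2|^2 by the law of cosines
    c2 = (rhs_a12 * p - rhs_s1 * q) / det
    s2 = (rhs_s1 * p + rhs_a12 * q) / det
    return math.atan2(s2, c2)

# Round-trip check with an assumed theta2 of 0.3 radians.
a23, s44, phi3 = 44.0, 55.0, 0.5
t2 = 0.3
p = a23 - s44 * math.cos(phi3)
q = s44 * math.sin(phi3)
rhs1 = math.cos(t2) * p + math.sin(t2) * q    # forward (3.37)
rhs2 = -math.cos(t2) * q + math.sin(t2) * p   # forward (3.38)
recovered = solve_theta2(a23, s44, phi3, rhs1, rhs2)
```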


obtain one set of joint angles at each position such that no
abrupt angular changes occur from one point along the path
to the next.
The user must specify what configuration the robot
shall be in at the initial point along the path. Methods of
assisting the operator in this selection process are
presented in Chapter V. At this point it is assumed that
the initial configuration of the robot has been specified.
With the initial configuration of the manipulator
known, each set of joint angles at the next position of the
robot along the path are compared to the initial joint
angles of the robot. For each set of joint angles, the
maximum angular change is recorded and assigned to the
variable gamma. The set with the smallest gamma angle is
identified as the set of joint angles which are closest to
the configuration at the start of the path. Restated for
the general case, each set of joint angles at the j+1
position along the path is compared to the already chosen
set of joint angles at the jth position. The set of joint
angles to be used at the j+1 position is the set with the
smallest gamma angle. Typical data are shown in Table 3-1.
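The selection rule above can be sketched as follows, using the j and j+1 candidate sets from Table 3-1; this is a minimal illustration, not the dissertation's program:

```python
def closest_configuration(previous, candidates):
    """Pick, from the candidate joint-angle sets at the j+1 position,
    the set whose maximum angular change (gamma) from the chosen set
    at the jth position is smallest."""
    def gamma(candidate):
        return max(abs(a - b) for a, b in zip(candidate, previous))
    return min(candidates, key=gamma)

# Chosen set at position j and the two candidate sets at j+1,
# taken from Table 3-1.
prev = [-39.367, 87.588, -7.692, 37.800, -131.780, 2.812]
cands = [[-38.744, 87.810, -7.924, -118.560, 132.830, -151.962],
         [-38.744, 87.810, -7.924, 37.539, -132.830, 4.137]]
best = closest_configuration(prev, cands)
```

On the Table 3-1 data the second candidate wins, since the first one would require an angular jump of over 260 degrees in the fifth joint.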
3.6.4 Special cases
At times, it may be the case that the closest set of
angles at the j+1 position may actually vary considerably
from the previous set of angles at the jth position. This
occurs when the manipulator approaches near a singularity


robot function, and velocity of the robot are also
accurately displayed. In other words, the animated robot
will open and close its gripper, delay, and move with
different velocities as specified by the user when the
precision points were entered. Just as in the work
described in Chapter II, the operator also has complete
freedom to interact with the animation in order to change
viewing position, zoom in, or freeze the motion at any time.
If desired, the operator also has the option of
observing the path had the other initial configuration been
selected. A pop up menu offers the user this choice and a
second selection of this option will display the original
path again. Once the user is satisfied with the displayed
path, the pop up menu is again used and the operator can
elect to send the path data to the actual robot. Once
appropriate information is transmitted, the actual robot
will duplicate the path and programmed functions just as was
displayed on the graphics system.
5.12 Communication with Robot Controller
Communication of path data to the robot controller was
accomplished with the assistance of Mr. Mark Locke and Mr.
David Poole of the U.S. Army Belvoir Research and
Development Center. They developed the appropriate
interface protocol and procedures required to perform this
task.


Table 5-2: Direction Cosines

    S3  = ( 0,        0,          1 )
    a34 = ( 1,        0,          0 )
    S4  = ( 0,       -s34,        c34 )
    a45 = ( c4,      -Ū43,        U43 )
    S5  = ( X4,       Y4,         Z4 )
    a56 = ( W54,     -Ū543,       U543 )
    S6  = ( X54,      Y54,        Z54 )
    a67 = ( W654,    -Ū6543,      U6543 )
    S7  = ( X654,     Y654,       Z654 )
    a71 = ( W7654,   -Ū76543,     U76543 )
    S1  = ( X7654,    Y7654,      Z7654 )
    a12 = ( W17654,  -Ū176543,    U176543 )
    S2  = ( X17654,   Y17654,     Z17654 )
    a23 = ( c3,      -s3,         0 )


2.7 Program Modifications
It was previously noted that an increase in performance
by at least a factor of 100 was required in order to produce
images rapidly enough to result in pleasing animation. A
brief description of the graphics software library which is
included with the IRIS system will precede the discussion of
the specific data structure and software modifications which
were made.
2.7.1 IRIS coordinate transformations
The primary task in drawing images on the screen is the
transformation of coordinate values from local coordinate
systems to the screen coordinate system. The IRIS
workstation accomplishes this by manipulating data in terms
of homogeneous coordinates. Four coordinate values, [x, y,
z, w] are used to define the coordinates of a point. What
is normally thought of as the X coordinate value can be
calculated as x/w. Similarly, values for the Y and Z
coordinates of a point are readily determined. The
advantage of using homogeneous coordinates is that
rotations, translations, and scaling can all be accomplished
by 4x4 matrix multiplication.
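The advantage noted above can be sketched in a few lines; this is a generic homogeneous-coordinate illustration, not the IRIS library itself:

```python
import math

def translate(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotate_z(theta):
    """4x4 homogeneous rotation about the Z axis (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(M, point):
    """Apply a 4x4 transform to [x, y, z, w] and return the ordinary
    coordinates (x/w, y/w, z/w)."""
    x, y, z, w = (sum(M[i][j] * point[j] for j in range(4))
                  for i in range(4))
    return x / w, y / w, z / w

# Rotation and translation combined into a single 4x4 matrix: rotate
# 90 degrees about Z, then translate 5 units along X.
M = matmul(translate(5.0, 0.0, 0.0), rotate_z(math.pi / 2))
pt = transform(M, [1.0, 0.0, 0.0, 1.0])
```

The point (1, 0, 0) rotates to (0, 1, 0) and then translates to (5, 1, 0), all with one matrix-vector product, which is why the hardware can keep a single current transformation matrix.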
The IRIS system constantly keeps track of the current
transformation matrix, M. This matrix represents the
transformation between some local coordinate system and the
screen coordinate system. When any graphics drawing command


can be readily determined since the six joint angles of the
robot are known.
The first step in the calculation of D involves
determining the direction cosines of the vector S6 in terms
of the fixed coordinate system. Appropriate sets of
direction cosines are tabulated in [21] and thus the vector
S6 can be assumed to be known. The cosine of angle D is
simply the Z component of the unit vector S6. The sine of
angle D is quickly calculated as the magnitude of the cross
product of the S6 vector and the k vector. It must be noted
that in this analysis, the sine of angle D will always be
positive and thus D will always lie between 0 (vertical up)
and 180 (vertical down) degrees.
The yaw orientation is defined as the angle, E, between
the fixed X axis and the projection of the S6 vector onto
the XY plane. The projected unit vector S6' is simply
determined by setting the third component of S6 to zero and
then unitizing the resulting vector. The cosine of the
angle E is now equal to the dot product of S6' and the i
vector. The sine of angle E is determined by taking the
cross product of i and S6'. The sine of E will equal the
magnitude of the resulting vector. The sine will then be
set to either a positive or negative value depending on
whether the vector formed during the cross product points
along or opposite to k.
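The D and E calculations above can be sketched as follows. This is an illustrative version assuming S6 is a unit vector; `atan2` condenses the separate sine/cosine steps and sign test described in the text:

```python
import math

def pitch_yaw(s6):
    """Pitch angle D (from vertical, 0..180 degrees) and yaw angle E
    (from the fixed X axis) of the tool vector S6 = (x, y, z)."""
    x, y, z = s6
    # cos D is the Z component; sin D = |S6 x k| = hypot(x, y) >= 0,
    # so D always lies between 0 and 180 degrees.
    d = math.degrees(math.atan2(math.hypot(x, y), z))
    if math.hypot(x, y) < 1e-9:
        return d, None    # yaw undefined when D is near 0 or 180
    # E is the signed angle of the projection of S6 onto the XY plane.
    e = math.degrees(math.atan2(y, x))
    return d, e

d, e = pitch_yaw((1.0, 1.0, 0.0))   # horizontal tool: D = 90, E = 45
```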
A problem arises in the calculation of E if the value
of D is near 0 or 180. In other words, if the tool is


S7 = a67 × S6                                             (3.6)

With S7 now known, application of (3.1) gives the following
expression for c71:

c71 = S7 · S1                                             (3.7)

A value of c71 = ±1 immediately flags a singularity which
will be discussed subsequently. The unit vector a71 is now
defined by

a71 = (S7 × S1) / |S7 × S1|                               (3.8)

and thus by application of (3.2),

s71 = |S7 × S1|                                           (3.9)

Utilizing the vector loop equation (see Figure 3-5),

R_P1 + S77 S7 + a71 a71 + S11 S1 = 0                      (3.10)

results in explicit expressions for the hypothetical link
a71 and hypothetical offsets S77 and S11:

S77 = |R_P1 a71 S1| / s71                                 (3.11)

a71 = |R_P1 S7 S1| / s71                                  (3.12)

S11 = |S7 a71 R_P1| / s71                                 (3.13)


Figure 2-9: Backfacing Polygons Removed


Table 3-1: Sample Angles for j and j+1 Positions

    Position      θ1        θ2        θ3        θ4         θ5         θ6
    j          -39.367    87.588    -7.692     37.800   -131.780      2.812
    j+1        -38.744    87.810    -7.924   -118.560    132.830   -151.962
    j+1        -38.744    87.810    -7.924     37.539   -132.830      4.137


As previously stated, a displacement profile based on
the cosine curve was desired because of the smooth motion
which would result. The exact position of the jth
intermediate point is given by

rj = rI + (dist/2) t Sr                                   (3.90)

where 'dist' is the total distance travelled as defined in
equation (3.74), Sr is a unit vector along the direction of
motion, and t is a dimensionless parameter given by

t = 1.0 - cos((π j)/steps)                                (3.91)

When j = 0, t also equals 0, and the first position is at
the start of the path, rI. When j = 'steps', the final
calculated position will be at the end of the path, rF.
Values of j between 0 and 'steps' will give intermediate
positions based upon a cosine function distribution pattern.
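Equations (3.90)-(3.91) can be sketched as below; this is an illustrative version, not the dissertation's program:

```python
import math

def intermediate_positions(r_initial, r_final, steps):
    """Positions along the straight path using the cosine profile of
    (3.90)-(3.91): the motion starts and ends at rest, with maximum
    speed at the middle of the path."""
    direction = [b - a for a, b in zip(r_initial, r_final)]
    dist = math.sqrt(sum(c * c for c in direction))      # total travel
    s_r = [c / dist for c in direction]                  # unit vector Sr
    points = []
    for j in range(steps + 1):
        t = 1.0 - math.cos(math.pi * j / steps)          # (3.91): 0..2
        points.append([a + (dist / 2.0) * t * sc         # (3.90)
                       for a, sc in zip(r_initial, s_r)])
    return points

pts = intermediate_positions([0.0, 0.0, 0.0], [10.0, 0.0, 0.0], 10)
```

Because t runs from 0 to 2, the factor dist/2 carries the path exactly from rI to rF, and successive points bunch together near both ends, which is what produces the gentle acceleration and deceleration.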
A corresponding orientation for the manipulator must be
determined for each of the intermediate positions. The net
angle and axis of rotation which transforms the end effector
from its initial to final orientation have been previously
determined. The rotation matrix which transforms the
initial orientation vectors, S6I and a67I, to their values
at the jth position is obtained by application of equation
(3.80). The axis vector, k, to be used in this equation
remains constant. The angular value to be used in (3.80) at
the jth position is given by


four for the T3-776 robot due to joint limits.  A further
reduction to a maximum of two possible robot configurations
resulted from applying the specific joint angle limitations
of the T3-726 robot.  The total reduction from eight to two
configurations is due to the limits on θ2 and θ3 which
prevent the robot from reaching overhead (the wrist point
cannot intersect the S1 vector) and the limit on θ3 which
prevents the robot from achieving an 'elbow down'
configuration.  The two alternate configurations which do
exist lie in the wrist of the robot and result from
equation (3.53), which allows for two values of θ5
and subsequently two values for θ4 and θ6.
Once all precision points are entered, the user is
confronted with yet another choice: at the first point of
the path the user must specify which configuration the robot
will start in.  In Chapter III, this choice
was made somewhat haphazardly.  Here, the paths
which result from each of the choices must be
compared based on some criterion in order to assist the
operator in the choice of initial configuration.  The
criterion selected is two-fold: first, how well does each of
the paths avoid any singularity positions, and second,
during cyclic motion, does the robot return to the same
configuration that it started from?  It should be noted that
if the robot does not return to its initial configuration,
simple joint interpolated motion is used to change the
configuration.


3.3 Mechanism Dimensions of the T3-776
Manipulator 55
3.4 Reverse Displacement Analysis 59
3.5 Forward Displacement Analysis 83
3.6 Path Generation 88
3.7 Results and Conclusions 101
IV ROBOTIC TELEPRESENCE 103
4.1 Introduction and Objective 103
4.2 Telepresence Concept 104
4.3 System Components 106
4.4 Method of Operation 110
4.5 Problems Encountered 127
4.6 Conclusions 128
V INTERACTIVE PATH PLANNING AND EVALUATION 129
5.1 Introduction and Objective 129
5.2 Robot Animation 130
5.3 Program Structure 136
5.4 Workspace Considerations 142
5.5 Path Evaluation 147
5.6 Calculation of Intermediate Points 153
5.7 Robot Configurations 155
5.8 Singularity Analysis 157
5.9 Interpretation of Singularity Results 163
5.10 Selection of Configuration 164
5.11 Preview of Motion 165
5.12 Communication with Robot Controller 167


I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.
Joseph Duffy
Professor of Mechanical Engineering
I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.
I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.
I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.
Ralph G. Selfridge
Professor of Computer and Information Sciences


Figure 3-4: Skeletal Model of T3-776 Manipulator


The only disadvantage of this approach is that if the
distance |a - a'| is large with respect to |a'|, then rapid
velocity changes along the Y axis may occur.  In other
words, the end effector must move rapidly along the Y axis
in order to avoid the sphere.  Dynamic considerations may
make it impossible for the actual manipulator to follow such
a path.
Once the joint angles for the manipulator are
determined for each point along the path, the motion of the
robot is displayed on the graphics screen. As the robot
moves, the 'bounding box' technique is again used to notify
the operator if any part of the robot arm comes close to the
sphere. Furthermore, while the robot is moving along the
path, the operator can observe the motion from any viewing
position and can zoom in for a close-up view.  In this
manner, the operator can evaluate the path to determine if
it is acceptable.
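The 'bounding box' warning described above might be sketched as follows; the axis-aligned box representation, the function name, and the warning margin are assumptions for illustration, not the implementation actually used:

```c
/* Proximity warning (a sketch): each link of the robot is enclosed in an
   axis-aligned bounding box [bmin, bmax]; the operator is warned when
   the box comes within 'margin' of the spherical obstacle. */
int near_sphere(const double bmin[3], const double bmax[3],
                const double center[3], double radius, double margin)
{
    double d2 = 0.0, d;
    int i;
    for (i = 0; i < 3; i++) {
        /* distance from the sphere center to the box, per axis */
        d = 0.0;
        if (center[i] < bmin[i]) d = bmin[i] - center[i];
        else if (center[i] > bmax[i]) d = center[i] - bmax[i];
        d2 += d * d;
    }
    /* warn when the squared distance is inside the inflated radius */
    return d2 < (radius + margin) * (radius + margin);
}
```

The per-axis clamping gives the exact distance from the sphere center to the closest point of the box, so the test never produces a false negative for an axis-aligned box.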
If for any reason the operator is dissatisfied with the
computer generated path, he has the option of modifying it
and then watching the animated robot move along this new
path. Two basic modifications can be made. First, the
operator can move points A and B (see Figure 4-11) closer or
farther from the sphere. Second, the points A and B can be
rotated around the sphere. In this way the operator has a
high level of control over the non-rectilinear path. During
operation of the system, paths are modified as desired until
one is found which results in a motion where no part of the
manipulator arm strikes the sphere.


REFERENCES
1. Heginbotham, W.B., Dooner, M., and Kennedy, D.N.,
"Analysis of Industrial Robot Behavior by Computer
Graphics," Third CISM IFToMM Symposium on Robotics, Warsaw,
1975.
2. Heginbotham, W.B., Dooner, M., and Kennedy, D.N.,
"Computer Graphics Simulation of Industrial Robot
Interactions," Third CIRT, Seventh ISIR, Tokyo, 1977.
3. Heginbotham, W.B., Dooner, M., and Case, K., "Rapid
Assessment of Industrial Robots Performance by Interactive
Computer Graphics," Ninth ISIR, Tokyo, 1979.
4. Warnecke, H.J., Schraft, R.D. and Schmidt-Streier, U.,
"Computer Graphics Planning of Industrial Robot
Applications," Third CISM IFToMM Symposium on Robots,
Warsaw, 1975.
5. Schraft, R.D. and Schmidt, U., "A Computer-Aided Method
for the Selection of an Industrial Robot for the Automation
of a Working Place," Third CIRT, Sixth ISIR, Paris, 1976.
6. Liegeois, A., Fournier, A., Aldon, M.J., and Borrel, P.,
"A System for Computer-Aided Design of Robots and
Manipulators," Tenth ISIR, Paris, 1980.
7. Hasegawa, Y., Masaki, I., and Iwasawa, M., "A
Computer-Aided Robot Operation Systems Design (Part I),"
Fifth ISIR, Tokyo, 1975.
8. Derby, S., "Kinematic Elasto-Dynamic Analysis and
Computer Graphics Simulation of General Purpose Robot
Manipulators," PhD dissertation, Rensselaer Polytechnic
Institute, 1981.
9. Leu, M., "Elements of Computer Graphic Robot
Simulation," Proceedings of the 1985 ASME Computers in
Engineering Conference, Boston, August 1985.
10. Ravani, B., "Current Issues in Robotics," presentation
made at the 19th Mechanisms Conference, Columbus, Ohio, Oct
1986.


In order to accomplish this objective, certain
algorithms for robot intelligence which provide rapid
decision making capabilities had to be developed. These
algorithms address the following two areas:
1. Can the robot move along a rectilinear path between
two user specified points?
2. Does the robot pass through or near any singular
positions during the motion?
For this work the graphics software system and
subsequent algorithms were specifically applied to the
Cincinnati Milacron T3-726 industrial robot. As shown in
Figure 5-1 this is a six axis manipulator characterized by
the fact that the second and third joints are parallel while
the final three axes intersect at a point. The mechanism
dimensions of the manipulator as well as the joint angle
limitations are presented in Table 5-1.  The final system
can be readily modified for use with any other robot
manipulator.
5.2 Robot Animation
An obvious component of any such path planning and
evaluation system is a realistic computer graphics
representation of the robot manipulator. This element of
the system will allow the user to observe a realistic
picture of the robot as it performs the specified task. In
this manner the operator can observe any problems or


A new and simplified method was developed for use on
the IRIS workstation. Once all backfacing polygons are
removed, what remains is a collection of objects. It was
desired to compare and sort the objects, not the individual
plane segments which compose the objects. An object is
defined as a collection of plane segments which cannot
overlap each other. An example of an object is the base
plate of the robot. A second example is the large box
shaped object in the base which rests on top of the base
plate.  These examples point out that each of the rigid
bodies shown in Figure 2-3 is composed of a collection of
objects.
Once all objects were defined, a series of rules was
generated which describes how the image of the robot is to
be drawn. An example of such a rule is as follows:
If I am looking from above the robot, then the base
plate must be drawn before the box which rests on
top of it.
The 'if clause' of the above rule will be true if the X axis
of the fixed coordinate system is pointing towards the
viewer. This information is already known since it was
required in the determination of which polygons were
backfacing. Similar rules (again based on previously
calculated facts) make it possible to sort the objects
quickly and correctly. A total number of 12 basic rules


Figure 5-5: Skeletal Model of T3-726 Manipulator


Figure 3-3: Cincinnati Milacron T3-776 Manipulator
(callouts: three roll wrist, front wrist gearbox, wrist and
elbow drive sub-assemblies, shoulder axis, base swivel,
shoulder housing and drive sub-assembly, base housing,
turntable gearbox drive, P.A.U.)


sin φ3 = ±(1 - cos²φ3)^(1/2)                               (3.29)

Thus there are two values of φ3 which can position the
point P2 as specified, and these two possibilities are
referred to as the "elbow up" and "elbow down" orientations.
However, due to the limit on the rotation of φ3, i.e. 45 ≤
φ3 ≤ 150 degrees, only one value of φ3, and thus unique
values for the sine and cosine of φ3, will be possible.
From (3.27) the sine and cosine of θ3 are given by

c3 = cos(φ3 - 90)
   = sin φ3                                                (3.30)

s3 = sin(φ3 - 90)
   = -cos φ3                                               (3.31)

Equations (3.30) and (3.31) will be used subsequently in
expressions for the angles in the wrist of the manipulator.
A solution for the unique value of φ2, and thereby θ2,
which corresponds to each pair of angles φ1 and φ3
is obtained by use of projection.  It is shown in Figure 3-8
that the vector RP2 can be written as

a23 a23 + S44 S44 = RP2                                    (3.32)

Projecting this equation upon a23 and then S44 gives the
following two equations:


Figure 4-4: System Configuration

Utilizing equations (3.16) and (3.17) gives the
following expressions for the sine and cosine of θ1:

cos θ1 = cos(θ1 - φ1) cos φ1 - sin(θ1 - φ1) sin φ1         (3.25)

sin θ1 = sin(θ1 - φ1) cos φ1 + cos(θ1 - φ1) sin φ1         (3.26)

It must be emphasized that θ1 is defined as the relative
angle between the vector a12 and the hypothetical link a71
defined in the previous section (see Figure 3-5).  As such,
θ1 is calculated at this time for subsequent use in the
determination of the angles in the wrist (θ4, θ5, and
θ6).
Before proceeding with the analysis it is important to
note that the angles φ2 and φ3 (see Figure 3-4) are used
in addition to the second and third actuator joint
displacements θ2 and θ3.  These angles are related by the
following equation:

θj = φj - 90 deg.   (j = 2, 3)                             (3.27)

The cosine of angle φ3 is calculated by applying the
cosine law to the planar triangle shown in Figure 3-8.  The
resulting expression is

cos φ3 = (a23² + S44² - |RP2|²) / (2 a23 S44)              (3.28)

Two corresponding values for the sine of φ3 are obtained
from the equation


Figure 4-8: Warning of Imminent Collision
Figure 4-9: Operation in Man-Controlled Mode


Similarly, the transformation matrices for bodies 2 through
6 are given by the following equations:

M2 = Rotx(90) Rotz(-θ2) M1                                 (2.13)

M3 = Translate(a23, 0, 0) Rotz(θ3) M2                      (2.14)

M4 = Rotz(θ4) Rotx(90) Translate(0, -S44, 0) M3            (2.15)

M5 = Rotz(θ5) Rotx(61) M4                                  (2.16)

M6 = Rotz(θ6) Rotx(61) M5                                  (2.17)
At this point, all transformation matrices are known
and the image of the robot can be drawn. It is important to
note that the method described here is virtually identical
to that discussed in section 2.3. The improvement, however,
is that all the matrix multiplications required to transform
the coordinates of some point from a local coordinate system
to the screen coordinate system will be accomplished by
specially designed chips. In this way the multitude of
matrix multiplications can be accomplished in a minimal
amount of time.
2.7.2 IRIS data structure
The data structure of the robot animation program was
also modified in order to take advantage of the unique
capabilities of the IRIS system. As previously noted, the
entire image of the Cincinnati Milacron T3-776 robot can be
formed from a series of n sided polygons. The IRIS graphics


along a straight line. Furthermore, the tool attached to
the end effector should start at rest and accelerate to some
maximum velocity before slowing down and coming to rest at
the final manipulator pose. Due to the desired motion
characteristics, a displacement function based on the cosine
curve was selected. This displacement function is shown in
Figure 3-12.
3.6.1 Determination of number of intermediate points
A first step in the analysis is to determine how many
points should be inserted between the initial and final
poses of the manipulator. Too many points will cause the
motion of the animated manipulator to appear quite slow.
Too few points will result in fast velocities which will
make the animation appear to 'flicker'.
The number of intermediate points was selected based on
two factors, i.e. the total straight line distance travelled
and the total change in orientation of the end effector.
Since the initial and final position of the tool tip are
specified, the total straight line distance is readily
calculated as follows:
dist = |rF - rI|                                           (3.74)
The number of steps based on translation is found by
dividing this distance by some standard step size,


3.2 Notation
The notation used throughout this analysis is that
developed by J. Duffy as presented in reference [21].
Briefly stated, a manipulator is composed of a series of
rigid links.  One such link is shown in Figure 3-1.  In this
figure it is shown that the link connects the two kinematic
pair (joint) axes Si and Sj.  The perpendicular distance
between the pair axes is aij and the unit vector along this
mutual perpendicular is aij.  The twist angle between the
pair axes is labelled αij and is measured in a right handed
sense about the vector aij.
The particular kinematic pair under consideration is
the revolute joint, which is shown in Figure 3-2.  The
perpendicular distance between links, or more specifically
the perpendicular distance between the vectors aij and ajk,
is labelled as the offset distance Sj.  The relative angle
between the two links is shown as θj and is measured in a
right handed sense about the vector Sj.
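In the style of the program's C data structures (Figure 2-4), the link and joint parameters just described might be grouped per joint as follows; the structure and field names are illustrative, not taken from the dissertation's software:

```c
/* One joint's worth of mechanism parameters (a sketch).  For an
   all-revolute manipulator only 'theta' varies during motion; the
   twist angle, offset, and link length are fixed by the mechanism. */
struct link_params
   {double theta ;   /* joint displacement angle (the only unknown) */
    double alpha ;   /* twist angle between the pair axes           */
    double S ;       /* offset distance along the joint axis        */
    double a ;       /* link length (mutual perpendicular)          */
   } ;
```

An array of six such records then captures the full geometry of a six-revolute manipulator such as the T3-776.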
Four types of parameters, i.e. joint angles (θj), twist
angles (αij), offsets (Sj), and link lengths (aij), describe
the geometry of the manipulator.  It is important to note
that for a manipulator comprised of all revolute joints,
only the joint displacement angles are unknown
quantities.  The twist angles, offsets, and link lengths
will be known constant values.  Furthermore, the values for
the sine and cosine of a twist angle αij and an angular
joint displacement θj may be obtained from the equations


Figure 5-8: Three Roll Wrist


all position and orientation data. This information was
then transmitted to the Silicon Graphics system so that each
obstacle could be displayed.
Of utmost importance was the calibration of the vision
system. The relationship between the obstacles and the
robot had to be accurately reproduced so that an operator
could manipulate real world objects while only watching the
graphics screen. A slight calibration error would make it
extremely difficult for the operator to pick up and maneuver
objects correctly. A unique calibration program was written
which reduces this error to a minimum. Figure 4-7 shows the
MBA manipulator together with various objects in the
workspace of the manipulator.
As objects are picked up by the actual manipulator, it
is important that this also be shown by the graphics
representation. Presently, when the operator picks up an
object with the actual robot, he must let the Silicon
Graphics system know of this event by selecting an
appropriate option from a pop-up menu. This selection
causes the Silicon Graphics system to scan the workspace to
determine if an object exists which is within two inches of
the tip of the gripper of the robot. If such an object
exists, the animated image is modified to show the object as
being rigidly attached to the end effector. Similarly, when
the operator drops an object from the gripper, he must again
notify the graphics system of the event. The Silicon
Graphics system calculates the present position of the end


3.6 Path Generation
The reverse analysis of the manipulator will serve as
the primary tool required to cause the manipulator to move
along a specified path. Simply stated, a set of
intermediate positions and orientations of the robot will be
selected along some path between two user specified end
points. A reverse analysis will be performed at each of the
intermediate positions in order to determine the set of
joint angles which will position the manipulator as
required.
For this analysis, it will be assumed that the user has
defined the initial and final pose of the manipulator.
Specifically, this requires that the user has specified the
coordinates of the tool tip and the directions of the unit
vectors S6 and a67.  These nine numbers, six of which are
independent, completely describe the position and
orientation of the manipulator.  Throughout the remainder of
this chapter the initial and final positions and
orientations of the manipulator will be assumed to be known
quantities and will be referred to as rI, S6I, a67I and rF,
S6F, a67F respectively.  Many methods exist for the user to
input these values.  Alternate methods will be discussed in
Chapter V.
Many strategies can be used in order to determine a
series of intermediate positions and orientations of the
manipulator between two user specified poses. For this
analysis, it was desired to cause the tool point to move


from the fixed robot coordinate system to the screen
coordinate system. A translate command can be called to
center the image as desired on the screen, a scale command
will allow for zooming in, and a series of rotate commands
will allow for any desired orientation of the robot base
with respect to the screen. The program is written so that
the parameters to these commands are modified by rolling the
mouse device. In this manner, the user can change the
orientation and scale of the drawing as desired. Since the
images will be drawn in real time, the user has the
capability to interact with the system and alter the viewing
position also in real time.
Once the matrix M represents the fixed coordinate
system, the base of the robot can be drawn. A series of
move and draw commands can be called, using local coordinate
data as input. However, since solid color images are
desired, the order that solid polygons are drawn is of
importance.  Because of this, the matrix M is simply saved
and given the name Mf.  When any part of the base of the
robot is to be drawn, the matrix Mf must be
reinstated as the current transformation matrix M.
The transformation from the matrix Mf to the coordinate
system attached to Body 1 (see Figure 2-3) is a simple task.
The transformation matrix for Body 1, M1, is given by the
following equation:

M1 = Rotz(φ1) Mf                                           (2.12)


limits of rotation for the angles φ1, θ2, and θ3 (see
Figure 3-4) are as follows:

-135 ≤ φ1 ≤ 135 (degrees)
30 ≤ θ2 ≤ 117
-45 ≤ θ3 ≤ 60

3.4.1 Specification of position and orientation
The first step of the analysis is to establish a fixed
coordinate system.  For this analysis a fixed coordinate
system is established as shown in Figure 3-4 such that the
origin is at the intersection of the vectors S1 and S2.  The
Z axis is chosen to be parallel to the S1 vector and the X
axis bisects the allowable range of rotation of the angle
φ1.  Throughout the rest of this analysis, this coordinate
system will be referred to as the fixed coordinate system.
Using this fixed coordinate system it is possible to
specify the location and orientation of the hand by
specifying the vector to the tool tip, RP1, (see Figure 3-5)
and the direction cosines of the vectors S6 and a67.
Although RP1, S6, and a67 have a total of nine components,
the latter two are related by the three conditions

S6 · S6 = 1

a67 · a67 = 1

S6 · a67 = 0


Figure 4-7: Display of Objects in Manipulator Workspace


Figure 5-9: Orientation Limits
(S6 cannot point 'inside' the cone)


Figure 2-10: Plane Segments with Infinite Planes.
a) segment 1 (+), segment 2 (+) ;
b) segment 1 (+), segment 2 (+/-) ;
c) segment 1 (-), segment 2 (+/-) .


CHAPTER V
INTERACTIVE PATH PLANNING AND EVALUATION
5.1 Introduction and Objective
In the previous chapter a method for better controlling
a robot manipulator in man-controlled mode was discussed.
Specifically, the computer graphics system was used to
provide a realistic, real time image of the work area so
that the operator could efficiently maneuver the robot
manipulator.  In this chapter, this concept will be extended and
an aid for path planning and evaluation will be developed.
The primary objective of this effort is the development
of an interactive computer graphics software system which
assists an operator who is planning robot motions. The
system must allow the user to specify where the robot is to
move as well as other factors such as the velocity along a
path and the function of the robot at a certain point.
After the operator enters this data, the graphics system
must display the robot as it performs the task. In this
manner the user can evaluate the motion of the robot prior
to the execution of the task by the actual robot. The
significant advantage of such a system is the fact that
complex paths and tasks can be programmed and evaluated off
line in a manner which is much faster than existing robot
task planning methods.


struct plane_segment
   {int name ;                    /* indicates whether it is a
                                     polygon, cylinder or circle   */
    int number ;                  /* identifies the object that
                                     the plane segment belongs to  */
    int color ;                   /* the color of the plane
                                     segment                       */
    float normal_point[3] ;       /* local coordinates of the
                                     normal point                  */
    union
       {struct circletype cir ;   /* contains circle data          */
        struct polytype pol ;     /* contains polygon data         */
        struct cyltype cyl ;      /* contains cylinder data        */
       } dat ;
    struct plane_segment *next ;  /* pointer to next plane
                                     segment                       */
   } ;

struct circletype
   {int centerpt ;                /* the index of the center
                                     point                         */
    float radius ;
   } ;

struct polytype
   {int sides ;                   /* the number of sides of
                                     the polygon                   */
    float points[][3] ;           /* array of local coordinate
                                     values                        */
   } ;

struct cyltype
   {int endpts[2] ;               /* the index of the cylinder
                                     endpoints                     */
    float radius ;
   } ;

Figure 2-4: Graphics Data Structure


ŝ = J w                                                    (5.4)

where ŝ is a 6x1 screw which represents the net motion of
the end effector and w is a 6x1 vector comprised of the six
angular velocity terms.
Another way of interpreting equation (5.4) is by
stating that the net screw motion of the end effector of the
manipulator is equal to the linear combination of the six
infinitesimal rotations about each of the six joints of the
robot.  This is written as

ŝ = w1 ŝ1 + w2 ŝ2 + ... + w6 ŝ6                            (5.5)

where ŝ1 is a 6x1 vector which represents the Plucker line
coordinates (screw coordinates) of the first joint, etc.
Comparison of equations (5.4) and (5.5) leads to the
conclusion that the 6x6 Jacobian matrix is simply comprised
of the six screws ŝ1, ŝ2, ..., ŝ6.
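A sketch of this construction in C; the function name and the argument convention (a unit direction S and any point p on the joint axis) are assumptions consistent with the L, M, N, P, Q, R description that follows:

```c
/* Screw (Plucker line) coordinates of a revolute joint axis, as used
   in assembling the Jacobian of equation (5.4): (L,M,N) is the unit
   direction S of the axis and (P,Q,R) = p x S is its moment about the
   origin, where p is any point on the axis. */
void joint_screw(const double S[3], const double p[3], double screw[6])
{
    screw[0] = S[0];                   /* L */
    screw[1] = S[1];                   /* M */
    screw[2] = S[2];                   /* N */
    screw[3] = p[1]*S[2] - p[2]*S[1];  /* P */
    screw[4] = p[2]*S[0] - p[0]*S[2];  /* Q */
    screw[5] = p[0]*S[1] - p[1]*S[0];  /* R */
}
```

Stacking the six screws ŝ1, ..., ŝ6 as columns then yields the 6x6 Jacobian directly; note that P, Q, R depend on the chosen origin, while L, M, N do not.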
As presented in reference [29], the screw coordinates
of each revolute joint of the robot are comprised of the six
scalar quantities L, M, N, P, Q, R. The quantities L, M, N
represent the direction cosines of a unit vector which
points along the direction of the joint axis while P, Q, R
represent the respective moments of this unit vector with
respect to the origin of some coordinate system and thus are
origin dependent terms. If the six joint angles of the


BIOGRAPHICAL SKETCH
Carl D. Crane III was born in Troy, New York, on 14
April 1956. After graduating from Canajoharie High School
he began his college studies at Rensselaer Polytechnic
Institute. He received his Bachelor of Science degree in
1978 and his Master of Engineering degree in 1979 in the
field of mechanical engineering. During the years 1979
through 1984 he served as an officer in the U.S. Army Corps
of Engineers. In 1984 he continued his graduate studies at
the University of Florida.


Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy
MOTION PLANNING AND CONTROL OF ROBOT MANIPULATORS
VIA APPLICATION OF A COMPUTER GRAPHICS ANIMATED DISPLAY
By
CARL D. CRANE III
MAY 1987
Chairman: Dr. Joseph Duffy
Major Department: Mechanical Engineering
It is often necessary in a hazardous environment for an
operator to effectively control the motion of a robot
manipulator which cannot be observed directly. The
manipulator may be either directly guided via use of a
joystick or similar device, or it may be autonomously
controlled in which case it is desirable to preview and
monitor robot motions. A computer graphics based system has
been developed which provides an operator with an improved
method of planning, evaluating, and directly controlling
robot motions.
During the direct control of a remote manipulator with
a joystick device, the operator requires considerable
sensory information in order to perform complex tasks.
Visual feedback which shows the manipulator and surrounding
workspace is clearly most important. A graphics program


The angle φ1 is defined as the angle between the fixed
X axis and the vector a12, measured as a positive twist about
the vector S1.  Application of equations (3.3) and (3.4)
gives the following expressions for the sine and cosine of
φ1:

cos φ1 = i · a12                                           (3.22)

sin φ1 = (i × a12) · S1                                    (3.23)

The only unknown in these equations is the vector a12.
Since the vectors RP2, a12, and S1 all lie in the same
plane, it must be the case that a12 is either parallel or
antiparallel to the projection of RP2 on the XY plane.
Thus the vector a12 is given by

a12 = ± [(RP2 · i) i + (RP2 · j) j] / |(RP2 · i) i + (RP2 · j) j|     (3.24)

Substitution of the two possible values of a12 into (3.22)
and (3.23) will result in two possible distinct values for
φ1, and it can be shown that these two values will differ by
180 degrees.  It is apparent that one of the calculated
values of φ1 may not fall in the allowable range of
rotation of -135 to +135 degrees.  If this occurs then there
is an immediate reduction from a maximum of four to a
maximum of two possible configurations of the robot which
will position the hand as specified.


5.5 Path Evaluation
Once two precision points are entered, both of which
are in the workspace of the robot, it must be the case that
the T3-726 robot can move between them. The simple example
of joint interpolated motion proves this. However, just
because two precision points lie in the workspace, it is not
guaranteed that the robot can perform some controlled path
motion between them.
As detailed in Chapter III, motion of the robot along
some specified path can be accomplished by generating a
series of intermediate positions and orientations along the
desired path. The example of rectilinear motion of the tool
tip was discussed in detail. In the introduction to this
chapter it was stated that one objective of this effort was
to establish an algorithm which could rapidly determine
whether motion between two precision points was possible.
This algorithm must be rapid and must not simply generate a
series of intermediate points and check whether each is in
the workspace. Such a method suffers from problems with
choice of step size and computer accuracy.
The approach to be taken here is to determine if the
desired path intersects any part of the boundary of the
reachable workspace.  The first selection to be made is the
type of path.  Movement along a straight line, circular arc,
or along any polynomial curve should be possible.  Motion
along a straight line was considered, but it introduced one
new problem.  The reachable workspace of the robot was now a


C. Anderson [11] modeled a workcell consisting of the
ESAB MA C2000 arc welding robot. Displacement, velocity,
acceleration, force, and torque data were utilized in the
model as calculated by the Automatic Dynamic Analysis of
Mechanical Systems (ADAMS) software package.  Rapid wire
frame animations were obtained on Evans and Sutherland
vector graphics terminals. Solid color representations with
shading were also generated; however, real time animation of
these images was not possible.
A further off line animation system, entitled WORKMATE,
has been developed at the Stanford Research Institute by S.
Smith [12]. A goal of this effort is to implement a graphic
based simulation program through which an inexperienced user
can plan and debug robot workcell programs off line. The
program is run on a Silicon Graphics IRIS workstation which
has the capability of rapidly rendering solid color images
with shading. A significant feature of WORKMATE is that
collisions between objects in the workspace can be
identified for the user in real time. This feature avoids
the need for the user to perform the tedious visual
inspection of the robot motion in order to verify that no
collision occurs along the path.
Several companies have recently entered the robotic
simulation market. Silma, Inc., of Los Altos, California,
was formed in 1983 to develop software which would model
robotic workcells. This group recognized the problem that
each robot manufacturer uses a different robot language to


Scale(Sx, Sy, Sz) =

| Sx   0    0   0 |
| 0    Sy   0   0 |
| 0    0    Sz  0 |
| 0    0    0   1 |                                        (2.8)

Rotx(θ) =

| 1    0      0     0 |
| 0    cosθ   sinθ  0 |
| 0   -sinθ   cosθ  0 |
| 0    0      0     1 |                                    (2.9)

Roty(θ) =

| cosθ  0  -sinθ  0 |
| 0     1   0     0 |
| sinθ  0   cosθ  0 |
| 0     0   0     1 |                                      (2.10)

Rotz(θ) =

| cosθ   sinθ  0  0 |
| -sinθ  cosθ  0  0 |
| 0      0     1  0 |
| 0      0     0  1 |                                      (2.11)
With these three basic transformations, it is an easy
matter to cause the matrix M to represent the transformation


The removal of the backward facing polygons is a quick
and simple task. As indicated in the data structure shown
in Figure 2-4, the coordinates of a normal point are
specified for each plane segment. A vector normal to the
surface (and outward pointing) can be formed by subtracting
the coordinates of the origin of the local coordinate system
from the coordinates of the normal point. Just as all the
vertices were transformed from the local coordinate system
to the screen coordinate system, the normal points are also
transformed. Comparison of the Z coordinate of the normal
point with that of the origin of the local coordinate system
(both now in the screen coordinate system) determines
whether the particular plane segment is visible.
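Once both points are in screen coordinates the comparison amounts to a one-line test; a sketch, assuming the viewer looks along the screen Z axis with larger Z nearer the viewer:

```c
/* Backface test: the plane segment faces the viewer when its (outward)
   normal point ends up nearer the viewer than the origin of the local
   coordinate system, i.e. has the larger screen-Z coordinate. */
int is_frontfacing(const double origin_screen[3],
                   const double normal_point_screen[3])
{
    return normal_point_screen[2] > origin_screen[2];
}
```

Segments failing the test are simply never drawn, which is what produces the simplification visible between Figures 2-8 and 2-9.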
Application of this method results in a greatly
simplified image. Shown in Figure 2-8 is an edge drawing of
the T3-776 manipulator. Figure 2-9 shows the same drawing
with backfacing polygons removed.
2.5 Sorting of Plane Segments
A characteristic of raster type graphics displays is
that whatever is drawn last will appear to be on top. For
example if two polygons, A and B, exist and polygon A is
closer to the viewer and overlaps polygon B, then polygon B
must be drawn on the screen prior to polygon A. The only
other alternative would be to redefine polygon B so that the
regions overlapped by polygon A were subtracted. In this


the selection of precision points. One fact that must be
emphasized to a new user is that only the wrist must lie
within the wrist workspace region. The tool tip may be
chosen outside of this region as long as the orientation of
the tool is such that the wrist point falls within its
allowable region.
Previously it was stated that in order for the T3-726
robot to be within its workspace, the wrist point must lie
within the wrist workspace region. This is a necessary but
not sufficient condition. In other words it is possible for
the wrist of the T3-726 robot to be within its workspace and
yet the orientation of the tool is such that the tool cannot
be properly oriented. This problem is due to the geometry
of the wrist itself. Shown in Figure 5-8 is a skeletal
drawing of the wrist of the manipulator. Of concern are the
relative directions of the unit vectors S4 and S6. It is
apparent from the figure that it is physically impossible
for S4 and S6 to be anti-parallel, due to the fact that the
twist angles α45 and α56 each equal 61 degrees. Because of
these twist angles, there is a group of orientations of S6
with respect to S4 that cannot be attained. This group of
impossible orientations is represented by a cone as shown in
Figure 5-9. Mathematically it can be stated that a given
position and orientation will be outside of the allowable
workspace if the dot product of the two unit vectors S4 and
S6 is less than the cosine of 122 degrees.
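This condition reduces to a single dot-product test per precision point. A sketch, where the arguments stand for the unit vectors S4 and S6 expressed in any common coordinate system (the names are illustrative):

```python
import math

def wrist_orientation_reachable(s4, s6):
    # The desired relative orientation of S6 with respect to S4 is
    # attainable only if the angle between the two unit vectors is
    # less than 122 degrees, i.e. dot(S4, S6) >= cos(122 deg); the
    # 61-degree twist angles at joints 4-5 and 5-6 cannot fold the
    # wrist any further than that.
    dot = sum(a * b for a, b in zip(s4, s6))
    return dot >= math.cos(math.radians(122.0))
```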


It is interesting to note that both the denominator and
numerator of (3.54) and (3.55) vanish simultaneously when

    y6 = 0  and  x6 c2+3 + z6 s2+3 = 0          (3.56)

This constitutes what will be defined as an algebraic
indeterminacy for θ4. It can be shown that these
relationships are only satisfied simultaneously when S6 is
colinear with S4, or in other words when θ5 = ±180 degrees.
Thus a value of c5 = -1 calculated from (3.53) will act as a
flag to signal when equations (3.54) and (3.55) may not be
used to determine θ4.

It is readily apparent that when S6 and S4 are
colinear, a degree of freedom of the manipulator is lost in
that it is possible to spin the vector S5 about the S4-S6
line without changing the position and orientation of the
hand. Thus the choice of θ4 is arbitrary. This problem
can be overcome by setting θ4 to its last previously
calculated value prior to the robot moving into the
indeterminate position.

The only remaining angle to be determined in the wrist
is the corresponding value for θ6. Utilizing the unified
notation [21], the following subsidiary expressions may be
written for a spherical heptagon:


Table 5-1: Mechanism Dimensions and
Joint Limitations of T3-726 Manipulator

    S11 = *           a12 = 0 cm       α12 = 90 deg.
    S22 = 0 cm        a23 = 51.0 cm    α23 = 0
    S33 = 0           a34 = 0          α34 = 90
    S44 = 51.0 cm     a45 = 0          α45 = 61
    S55 = 0           a56 = 0          α56 = 61

    * to be determined when closing the loop

    -142.5 < θ1 < 142.5 deg.
       0.0 < θ2 <  90.0
     -60.0 < θ3 <  60.0


2.3.1 Local to fixed coordinate transformation
As shown in Figure 2-3, the representation of the robot
manipulator is made up of a series of rigid bodies. The
coordinates of each vertex are known in terms of the
coordinate system attached to each of the bodies. The first
task to be completed is the determination of the coordinates
of every vertex in terms of the fixed coordinate system
attached to the base of the robot.
Since it is assumed that the six joint angles of the
robot are known, the transformation of local point data to
the fixed reference system is a straightforward task. The
local coordinate systems shown in Figure 2-3 are named .
through Cg. These local coordinate systems were carefully
selected so as to simplify the transformation of data to the
fixed reference system.
Shown in Figure 2-5 is a skeletal model of the T3-776
manipulator. The vectors along each of the joint axes are
labeled j3^ through £g and the vectors perpendicular to each
successive pair of joint axes are labeled a^ through a^
(not all are shown in the figure). The variable joint
displacements 0_ through Q, (9.) are measured as the
b 3
angles between the vectors a., and a., in a right handed
l j 3 k
sense about the vector £[ ^. The first angular displacement,
^, is measured as the angle between the X axis of the
fixed reference system and the vector a^* As Previously
stated, it is assumed that the joint displacements ^
through 9g are known values.
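The overall procedure can be sketched as follows, assuming each body's 4x4 matrix maps its local system into that of the preceding body; the matrices themselves would be formed from the known joint angles as in equations (2.1) through (2.4). The function names and the row-vector convention are illustrative, not taken from the original C program.

```python
def mat_mul(a, b):
    # 4x4 matrix product a * b
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def local_to_fixed(link_transforms, vertices_per_link):
    """Accumulate the chain of link transforms and apply them.

    link_transforms[i] maps coordinate system C(i+1) into its
    predecessor.  With row vectors, the innermost (most distal)
    matrix must be applied first, so each new link matrix is
    prepended to the running product.  A sketch of section 2.3.1,
    not the exact T3-776 matrices.
    """
    fixed = []
    m = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    for t, verts in zip(link_transforms, vertices_per_link):
        m = mat_mul(t, m)
        for x, y, z in verts:
            v = [x, y, z, 1.0]
            fixed.append(tuple(sum(v[k] * m[k][j] for k in range(4))
                               for j in range(3)))
    return fixed
```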


The direction cosines of the vector S6 which are given
quantities in the fixed system can now be written in the
moving x'y'z' system by application of (3.43) and,

    x6' =  x6 cos φ1 + y6 sin φ1
    y6' = -x6 sin φ1 + y6 cos φ1                 (3.44)
    z6' =  z6

where x6, y6, z6 and x6', y6', z6' are respectively the
direction cosines of S6 in the fixed and moving systems.

The direction cosines of S6 in the second moving
coordinate system, x6'', y6'', z6'', are related to the
direction cosines of S6 in the first moving coordinate
system by three successive applications of the rotation
matrix

    |  cj    sj cij   sj sij |
    | -sj    cj cij   cj sij |                   (3.45)
    |  0    -sij      cij    |

which yields

    | x6'' |                 | x6' |
    | y6'' |  =  A34 A23 A12 | y6' |             (3.46)
    | z6'' |                 | z6' |
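Equation (3.46) amounts to three successive applications of the matrix of (3.45). A sketch in Python (the angle and twist values used in the example are arbitrary, not the T3-776 constants):

```python
import math

def a_matrix(theta_j, alpha_ij):
    # Rotation matrix of equation (3.45)
    cj, sj = math.cos(theta_j), math.sin(theta_j)
    cij, sij = math.cos(alpha_ij), math.sin(alpha_ij)
    return [[cj, sj * cij, sj * sij],
            [-sj, cj * cij, cj * sij],
            [0.0, -sij, cij]]

def apply_chain(matrices, v):
    # Apply in list order, so [A12, A23, A34] realizes A34 A23 A12 v
    for m in matrices:
        v = [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]
    return v
```

Each factor of (3.45) is orthonormal, so the chain preserves the length of the direction-cosine vector, which provides a simple consistency check.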


CHAPTER II
DEVELOPMENT OF ANIMATED DISPLAYS OF
INDUSTRIAL ROBOTS
2.1 Introduction and Objective
In the previous chapter, improvements in computer
hardware were discussed which have particular application to
the problem of real time animation of solid color objects.
The goal of this chapter is to detail the development
(hardware and software) of real time, interactive computer
graphics animations of industrial robots.
Any computer graphics simulation must possess two
characteristics. First it must be realistic. The image on
the screen must contain enough detail so as to give the user
an accurate and obvious image. Second, the images must be
generated rapidly enough so that smooth animation will
result. Picture update rates of 10 frames per second have
provided satisfactory animation on a 30 Hz video monitor
(each picture is shown three times before changing).
Throughout this chapter, all applications will be aimed
at the development of a solid color representation of a
Cincinnati Milacron T3-776 industrial robot. The first part
of this chapter will focus on the development of appropriate
data structures, viewing transformations, and hidden surface
removal methods. The specific programming techniques


as required. This reduction from four sets of values to a
maximum of two possible sets is significant in that it
reduces the computational time involved in the reverse
position analysis of the T3-776 robot.
3.4.4 Analysis of wrist
The remaining task of the Reverse Displacement Analysis
is to determine the values of the three angles in the wrist
which correspond to each set of values of the angles φ1,
θ2, and θ3. Complete rotation is possible about each of
the three axes S4, S5, and S6 so that the result will not be
affected by joint limitations as in the previous section.
Figure 3-9 shows a more detailed view of the three roll
wrist. It is important here to reemphasize how θ4, θ5,
and θ6 are measured. Each of the angles θj (j=4,5,6) is
measured by a right handed rotation of the link aij into the
link ajk about the vector Sj.
Two moving coordinate systems are attached to the robot
as shown in Figure 3-10 such that x' always points along a12
and z' points along S1, and analogously x'' always points
along a34 and z'' along S4. The relationship between the
fixed XYZ system and the moving x'y'z' system is given by
the following equations:

    x' =  x cos φ1 + y sin φ1
    y' = -x sin φ1 + y cos φ1                    (3.43)
    z' =  z


5.13 Roll, Pitch, Yaw Calculations 168
5.14 Results and Conclusions 172
VI DISCUSSION AND CONCLUSIONS 174
REFERENCES 177
BIOGRAPHICAL SKETCH 180


magnitude of the maximum angular change. This approach was
utilized when the manipulator moved from a region of four
possible configurations to a region of only two.
3.7 Results and Conclusions
The path generation method discussed in this chapter is
based on a closed form solution of the inverse kinematics of
the manipulator. The advantage of this approach is that for
any given position and orientation of the manipulator, all
possible configurations are found. The only disadvantage is
in speed. The closed form solution requires more
calculations than iterative techniques. Furthermore,
knowledge of all configurations requires that time be spent
on filtering the sets of joint angles at some location in
order to find the set which is closest to the joint angles
at the previous location.
Typical path calculation time on the Silicon Graphics
system was 10 to 15 seconds. The same program on a VAX
11/750 computer took approximately 2 seconds. The addition
of a floating point accelerator in the Silicon Graphics
system would reduce its calculation time by a factor of 4 to
approximately 2.5 to 3.5 seconds.
The purpose of the path generation program was to
generate a sequence of joint angles for the manipulator
which would produce rectilinear motion of the end effector
between two user specified positions and orientations. Once
the angles were calculated, they would serve as input to the
graphics program described in Chapter II so that the motion


robot are known, it is a straightforward task to generate
the coordinates of the six screws of the robot.
It was previously stated that the robot would be in a
singularity position if the set of screws which represent
the joints of the robot are linearly dependent. This
dependency can be determined by monitoring the determinant
of the Jacobian matrix. If the determinant of the Jacobian
matrix equals zero, then the robot is in a singularity
configuration.
The value of each coefficient of the Jacobian matrix
is dependent on the selection of the coordinate system to be
used. The greatest simplification of the Jacobian resulted
when the coordinate system was attached to the manipulator
such that the origin was at the intersection of the vectors
S3 and S4 as shown in Figure 5-12. The X axis lies along
a34 and the Z axis along S4. Use of this coordinate system
resulted in the following expressions for the six screws of
the system:

    $1 = (S1 ; -a23 a23 x S1)
    $2 = (S2 ; -a23 a23 x S2)
    $3 = (S3 ; 0, 0, 0)
    $4 = (S4 ; 0, 0, 0)
    $5 = (S5 ; S44 S4 x S5)
    $6 = (S6 ; S44 S4 x S6)
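The dependency test then amounts to evaluating the determinant of the 6x6 matrix whose columns are the six screws. A numerical sketch (the screws used in the example are illustrative unit screws, not those of the T3-726):

```python
def det(m):
    # Determinant by Gaussian elimination with partial pivoting
    n = len(m)
    a = [row[:] for row in m]
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[p][i]) < 1e-12:
            return 0.0
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

def is_singular(screws, tol=1e-9):
    # Columns of the Jacobian J are the six screw coordinates of the
    # joints; the robot is in a singularity when det(J) vanishes.
    J = [[screws[j][i] for j in range(6)] for i in range(6)]
    return abs(det(J)) < tol
```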


Clearly, the comparison task is lengthy and time
consuming. The case of two objects whose bounding boxes are
not disjoint and yet do not actually overlap takes
considerable time. In addition, the equation of the
infinite plane for each plane segment had to be calculated
for each image to be drawn based on the position of the
robot and of the viewer. On the average, for a particular
drawing of the T3-776 robot there are 85 plane segments to
compare and sort. Due to this large number, the execution
time of this algorithm on a VAX 11/750 computer is
approximately 10 seconds per drawing.
Clearly, the sorting of plane segments in software will
not allow images to be generated rapidly enough to provide
proper animation. An improvement by at least a factor of
100 is necessary in order to reach the minimum animation
rate of 10 frames per second.
A second drawback of the algorithm is that it will fail
if there exists a cyclic overlap of plane segments. For
example, if segment A
overlaps B which overlaps C which in turn overlaps segment
A, then the algorithm as written will fall into a recursive
trap. This problem can be corrected in software, but the
additional calculations will only serve to further increase
the computation time.
A solution to the problem was found via application of
special purpose computer graphics hardware. The animation
program was modified to run on a Silicon Graphics IRIS model
2400 workstation. Proper modifications of the database and


which operates on a Silicon Graphics IRIS workstation has
been developed which provides this visual imagery. The
graphics system is capable of generating a solid color
representation of the manipulator at refresh rates in excess
of 10 Hz. This rapid image generation rate is important in
that it allows the user to zoom in, change the vantage
point, or translate the image in real time. Each image of
the manipulator is formed from joint angle datum that is
supplied continuously to the graphics system. In addition,
obstacle location datum is communicated to the graphics
system so that the surrounding workspace can be accurately
displayed.
A unique obstacle collision warning feature has also
been incorporated into the system. Obstacles are monitored
to determine whether any part of the manipulator comes close
to or strikes the object. The collision warning algorithm
utilizes custom graphics hardware in order to change the
color of the obstacle and produce an audible sound as a
warning if any part of the manipulator approaches closer
than some established criterion. The obstacle warning
calculations are performed continuously and in real time.
The graphics system which has been developed has
advanced man-machine interaction in that improved operator
efficiency and confidence have resulted. Continued
technological developments and system integration will
result in much more advanced interface systems in the
future.




degrees, a different method will be used to determine the
vector k.
Equating the diagonal elements of the matrix R with
equation (3.80) gives

    R11 = kx^2 vers θ + cos θ
    R22 = ky^2 vers θ + cos θ                    (3.86)
    R33 = kz^2 vers θ + cos θ

Solving for the components of k gives

    kx = ± [(R11 - cos θ)/(1 - cos θ)]^0.5
    ky = ± [(R22 - cos θ)/(1 - cos θ)]^0.5       (3.87)
    kz = ± [(R33 - cos θ)/(1 - cos θ)]^0.5

The largest component of k can be determined from (3.87).
For this largest component, the sign of the radical can be
determined from equation (3.84). Although corresponding
values for the smaller two components of k could also be
obtained from the set (3.87), more accurate results will be
obtained from an alternate approach.
Summing pairs of off diagonal elements from (3.80)
gives

    R21 + R12 = 2 kx ky vers θ
    R32 + R23 = 2 ky kz vers θ                   (3.88)
    R31 + R13 = 2 kz kx vers θ

With these equations, the solution method consists of
obtaining the value of the largest component of k from
(3.87) with the appropriate sign of the radical being
obtained from (3.84). The values of the remaining two
components of k can now be obtained by solving (3.88). This
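The solution method just described can be sketched as follows, assuming the sign of the largest component has already been resolved through equation (3.84) and that θ is not near zero (so vers θ is nonzero); the function name is illustrative.

```python
import math

def rotation_axis(R, cos_t):
    # Recover the unit axis k from a rotation matrix R and cos(theta),
    # equations (3.86)-(3.88); vers(theta) = 1 - cos(theta).
    vers = 1.0 - cos_t
    # Magnitudes of the components of k from the diagonal, eq. (3.87)
    mags = [math.sqrt(max((R[i][i] - cos_t) / vers, 0.0)) for i in range(3)]
    i = mags.index(max(mags))          # index of the largest component
    k = [0.0, 0.0, 0.0]
    k[i] = mags[i]                     # sign assumed resolved via (3.84)
    # Off-diagonal sums, eq. (3.88): R_pq + R_qp = 2 k_p k_q vers(theta)
    for j in range(3):
        if j != i:
            k[j] = (R[i][j] + R[j][i]) / (2.0 * k[i] * vers)
    return k
```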


Equating the right sides of (3.47) and (3.49) and
rearranging gives the following three equations:

    x5 = c4(x6 c2+3 + z6 s2+3) + s4(-y6)         (3.50)
    y5 = s4(x6 c2+3 + z6 s2+3) + c4(y6)          (3.51)
    z5 = x6 s2+3 - z6 c2+3                       (3.52)

Equation (3.52) is significant in that it contains c5 as its
only unknown. Substituting for z5 from set (3.49) and
solving for c5 gives

    c5 = (c45 c56 - x6 s2+3 + z6 c2+3)/(s45 s56)     (3.53)

Equation (3.53) gives two possible values for θ5 and these
two values determine two configurations of the wrist for a
specified end effector position.
The next task is to find the corresponding value for
θ4 for each value of θ5. This is readily accomplished by
utilizing Cramer's rule to solve for s4 and c4 in equations
(3.50) and (3.51). The resulting equations are as follows:

    c4 = [x5(x6 c2+3 + z6 s2+3) + y5 y6] /
         [(x6 c2+3 + z6 s2+3)^2 + y6^2]          (3.54)

    s4 = [y5(x6 c2+3 + z6 s2+3) - x5 y6] /
         [(x6 c2+3 + z6 s2+3)^2 + y6^2]          (3.55)


Chapter 4: Robotic Telepresence. A telepresence system
was developed to allow a user to control a remote robot
manipulator in two distinct modes, viz. man-controlled and
autonomous. This chapter details the use of the graphics
system as an aid to the user. Visual feedback of the work
area is provided together with real time warning of an
imminent collision between the robot and an object in the
workspace.
Chapter 5: Interactive Path Planning and Evaluation.
An interactive computer graphics software system was
developed which assists an operator who is planning robot
motions. The user must specify the path of the manipulator
together with velocity and function information. Once a
task is previewed on the graphics screen, the path datum is
communicated to the actual robot controller. Specific
application to the Cincinnati Milacron T3-726 manipulator is
described in detail.
1.2 Review of Previous Efforts
Early work dealing with the computer graphics animation
of industrial robots occurred in the 1970's. Indicative of
these efforts were reports published from England [1-3],
West Germany [4-5], France [6], and Japan [7]. Common to
this work was the use of computer graphic storage tube
terminals. Hardware limitations resulted in slow animation
rates, with bright flashes occurring as the screen was
cleared for each image.


Figure 5-6: Manipulator Workspace
Figure 5-7: Top and Side Views of Workspace


control. The combination of the graphics system with such a
sensor integrated control system is a topic for much further
research.


5.8 Singularity Analysis
A singularity position of a robot manipulator is
defined as a particular position of the robot such that the
set of screws which represent the joints of the robot are
not linearly independent. In other words, the robot is in a
position such that typically at least one degree of freedom
is lost.
At such singularity positions the normal reverse
kinematic analysis fails. For example it is possible for
the 4th and 6th joints of the T3-726 robot to become
colinear. When this occurs, the value of θ4 may be
arbitrarily selected and a corresponding value of θ6
calculated. Typically when the manipulator moves near a
singularity position, rapid changes in the joint angles
within the wrist of the robot will occur. For this reason
it is desirable to select a path which avoids any
singularity positions as much as possible.
The identification of singularities for the T3-726 type
of robot is a relatively easy task. This analysis was
performed by R. Soylu [28] and is presented here for
completeness.
In reference [29] it is shown that the relationship
between the angular velocities of each of the six joints of
the T3-726 manipulator and the resulting screw that the end
effector of the robot will move along is represented by the
6x6 Jacobian matrix, J. This statement is expressed by the
equation


If equation (4.7) does not hold true, the infinite line does
indeed intersect the sphere, but the line segment from P1 to
P2 lies on one side of the obstacle.
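The complete test, whether the segment from P1 to P2 clears the spherical obstacle, can be sketched with a standard closest-point construction consistent with the discussion of equation (4.7); this is an illustration, not the original implementation.

```python
import math

def segment_clears_sphere(p1, p2, center, radius):
    # Find the point on segment P1P2 closest to the sphere center;
    # the segment misses the obstacle when that point is farther
    # from the center than the sphere radius R.
    d = [b - a for a, b in zip(p1, p2)]
    cp = [c - a for a, c in zip(p1, center)]
    dd = sum(x * x for x in d)
    t = 0.0 if dd == 0.0 else max(0.0, min(1.0,
        sum(x * y for x, y in zip(cp, d)) / dd))
    closest = [a + t * x for a, x in zip(p1, d)]
    dist = math.sqrt(sum((c - q) ** 2 for c, q in zip(center, closest)))
    return dist > radius
```

The clamping of t to [0, 1] is what distinguishes the segment test from the infinite-line test: a line may pierce the sphere while the segment itself lies entirely to one side, exactly the case noted above.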
If it is determined that straight line motion is not
possible, then an alternate path must be calculated. As
shown in Figure 4-11, two new points A and B are determined.
These points lie in the same plane as points P1, P2, P3, and
P4. Points A and B are found by first determining points
A*, B*, A', and B'. These additional points are given by
the expressions

    a* = p3 - R S                                (4.8)
    b* = p3 + R S                                (4.9)
    a' = p4 - R S                                (4.10)
    b' = p4 + R S                                (4.11)

The points A and B are then given by

    a = a* + K R X / |X|                         (4.12)
    b = b* + K R X / |X|                         (4.13)
where K is a constant which governs how far the path is to
be stretched from the sphere.
With points A and B now known, straight line motion
will be performed between these points while a smooth
non-rectilinear path is generated between points PI and A
and points B and P2. A continuous smooth velocity profile


Utilizing the results of (3.8) and equations (3.3) and (3.4)
gives the following expressions for the sine and cosine of
θ7:

    c7 = a67 · a71                               (3.14)
    s7 = | S7 a67 a71 |                          (3.15)

In addition, by projection the expressions for the sine and
cosine of (θ1 - φ1) are

    cos(θ1 - φ1) = a71 · i                       (3.16)
    sin(θ1 - φ1) = | S1 a71 i |                  (3.17)

where i is the unit vector in the direction of the X axis.
It was mentioned earlier that a singular condition is
flagged when c7 = ±1 (and therefore s7 = 0). This occurs when
the vectors S7 and S1 are either parallel or antiparallel
and there are thus an infinity of possible links a71 which
are perpendicular to both S7 and S1. However the constraint
S77 = 0 can be imposed to obtain a unique result as shown in
Figure 3-6. Forming the inner product of (3.10) with S1
yields,

    S11 = -Rp1 · S1                              (3.18)


assist the user in evaluating and comparing the two
alternatives.
In a manner similar to that discussed in Chapter III, a
linked list of joint angles was generated for each of the
two possible starting configurations. Each element of each
of the two linked lists represents the joint angles for the
robot at a particular location along the path. For each of
the linked lists, a plot of the value of the sine of θ5 for
each of the elements of the list was displayed for the user.
The user could then select the starting configuration which
resulted in the motion which best avoided the condition of
sin θ5 equal to zero. An example of the type of
singularity plot provided to the user is shown in Figure
5-13.
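This comparison can also be automated as a sketch: score each configuration's linked list by how far sin θ5 stays from zero along the path, and prefer the larger margin. The actual system left the choice to the user; the function names here are illustrative.

```python
import math

def singularity_margin(theta5_path):
    # Worst-case |sin(theta5)| along a path; zero means the wrist
    # passes exactly through the singular condition.
    return min(abs(math.sin(t)) for t in theta5_path)

def pick_configuration(paths):
    # paths: one list of theta5 values (radians) per starting
    # configuration; prefer the path staying farthest from sin = 0.
    return max(range(len(paths)), key=lambda i: singularity_margin(paths[i]))
```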
5.11 Preview of Motion
Once the starting configuration is selected, the
resulting motion of the robot can be displayed on the
graphics screen. Pictures of the robot are generated by
traversing the appropriate linked list corresponding to the
selected initial configuration. For each element of the
linked list, a picture of the robot was drawn with joint
angles set as specified in the element of the list.
In this method, the user is able to observe the
manipulator's motion prior to any movement by the actual
manipulator. In addition to the robot motion, the path,


Figure 3-5: Hypothetical Closure Link


production' during the planning phase. Assembly line down
time is minimized as the new tasks can be quickly
communicated to the manipulator. The graphics system offers
a convenient and rapid method of planning workcell layouts
and manipulator tasks. The ability to interact with the
system allows the user to reposition objects within the
workspace to verify that all important points can be reached
by the robot. Cycle times can be calculated and compared in
order to improve productivity.
Following a review of previous work dealing with the
animation of industrial robots, subsequent chapters will
detail the development and application of an interactive
computer graphics animation system. A brief description of
each chapter is as follows:
Chapter 2: Development of Animated Displays of
Industrial Robots. This chapter describes the development
of the interactive animation program. The database
development is detailed for the particular case of modeling
the Cincinnati Milacron T3-776 industrial robot. Graphics
techniques are described with emphasis on the removal of
backfacing polygons and the sorting of solid objects.
Chapter 3: Path Generation. A method of generating
sets of joint angles which will move a manipulator along a
user specified path is described. Specific issues deal with
motion near singularity positions and the selection of the
robot configuration at each point along the path.


Figure 2-5: Skeletal Model of T3-776 Manipulator


More recently, a program named GRASP was developed by
S. Derby [8]. The program was written in FORTRAN on a Prime
Computer with an Imlac Dynagraphics graphics terminal.
Vector images (wire frames) were generated as raster
technology had not yet developed to be able to produce rapid
images. This program allowed an experienced user to design
and simulate a new robot, or modify existing robot
geometries. Robot motions were calculated and displayed
based on closed form kinematic solutions for certain robot
geometries. A generic iterative technique was used for arms
having a general geometry.
The animation programming of M.C. Leu [9] is indicative
of the current work in the field. A hardware configuration
consisting of two DEC VAX computers, vector graphics
terminals, and raster graphics terminals was utilized to
produce wire frame and solid color images. The program
allows for off line programming of new or existing robot
designs. In addition, a swept volume method was utilized to
detect collisions of the robot arm and any object in the
workspace.
Further improvements in the simulation of robotic
workcells have been made by B. Ravani [10]. Animations have
been developed on an Evans and Sutherland graphics terminal
which can rapidly produce color vector images. Significant
improvements in database development and user interaction
with the computer have made this a versatile simulation
program.


beeped once and the color of the sphere was changed to
yellow. If the robot entered the smaller bounding box, then
the graphics system beeped continuously and the color of the
sphere was changed to red. Figure 4-8 shows this obstacle
warning feature while Figure 4-9 shows the telepresence
system as it is being operated in man-controlled mode. The
ability of the graphics system to warn the operator of
potential collisions significantly improved operator
performance and confidence.
4.4.2 Autonomous mode
For autonomous operation of the MBA manipulator, it was
desired to specify two positions and orientations and have
the manipulator move autonomously between them. Typically
the first position was taken as the current location of the
robot. In this instance it can be assumed that the position
of the sphere is known in relation to the robot (supplied by
vision system). A geometrical approach based on vector
manipulations was used to solve this problem.
The tasks required to be performed are as follows:
(a) Determine whether straight line motion between two
specified points is possible without colliding with a
spherical obstacle.
(b) If not, generate a collision free end effector path.
(c) Observe the animated robot move along the path.
(d) Modify the path if desired.
(e) Command actual MBA robot to perform previewed motion.


Figure 5-13: Display of Singularity Results


awkward interaction by the user. Future trends in graphics
hardware development, however, will make such applications
practical and feasible. Improvements in graphics
performance by a factor of 1.5 are anticipated within the
next year. Inventive new approaches to the workcell
modeling task will be forthcoming.
Planning tasks for two cooperating manipulators is an
additional task which has not been addressed. An example of
a task for two robots would be passing a work piece back and
forth. The graphics system must allow the user to plan some
actions for the first robot and then some for the second.
The database must be developed to store each robot's task
data plus communication items between the robots.
It was previously mentioned that certain points planned
on a graphics screen and communicated to the robot must be
'touched up'. This is because certain items in the
workspace of the robot are not positioned exactly as
displayed by the graphics system. This problem is magnified
for the case of two cooperating robots. Slight inaccuracies
in position data can result in excessive forces being placed
on the work piece being held by the two robots. This
problem cannot be readily 'touched up' by modifying certain
points. Rather force information must be monitored and each
robot's position modified in real time in order to reduce
the net non-gravitational force and torque to zero. The
graphics system then serves as a global planner, but a
robust control system must still perform the immediate robot


where

    M = | cos φ1   -sin φ1   0 |
        | sin φ1    cos φ1   0 |                 (2.4)
        |   0         0      1 |
Proper use of equations (2.1) and (2.3) will result in the
determination of all vertex data for the robot in terms of
the fixed coordinate system attached to the robot base.
2.3.2 Fixed to viewing coordinate transformation
Assuming that the point to look at and the point to
view from are known in terms of the fixed coordinate system,
all vertices of the robot are now determined in terms of a
viewing coordinate system. The use of this coordinate
system will greatly simplify the eventual projection of the
robot onto the screen.
As shown in Figure 2-6, the origin of the viewing
coordinate system is the point to view from. The Z axis of
the coordinate system is the vector from the point being
looked at to the point being viewed from. With the Z axis
known, the XY plane is defined.
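The construction of the viewing axes can be sketched as follows; choosing the world Z axis as "up" is an assumption consistent with the remarks that follow, and the function names are illustrative.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def viewing_axes(look_from, look_at, world_up=(0.0, 0.0, 1.0)):
    # Z axis points from the look-at point toward the viewer
    z = normalize([f - a for f, a in zip(look_from, look_at)])
    x = normalize(cross(world_up, z))   # assumes view direction not vertical
    y = cross(z, x)                     # completes the right-handed triad
    return x, y, z
```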
The exact directions of the X and Y axes are immaterial
at this point. Typically, however, the Y axis of the
viewing coordinate system will point "up." In other words,
for the robot to be drawn with the base along the bottom of


2.7.3 Backface polygon removal on the IRIS
In section 2.4 a method of backface polygon removal was
discussed. A normal point was selected such that the vector
from the origin of the local coordinate system to the normal
point represented a vector normal to the particular polygon
in question. Transformation of the origin point and the
normal point to the screen coordinate system would determine
if the polygon was facing the viewer.
This method was again used on the IRIS workstation with
slight modification. From observing Figure 2-3, it is
apparent that most of the polygons which form each of the
rigid bodies have one of the coordinate vectors as their
normal vector. Therefore, associated with each polygon is
the character string 'x', 'y', 'z', or 'other'. In this
manner, not every polygon will have to have its normal
vector calculated. Allowing three normal vector
calculations for each of the coordinate axes of each rigid
body (21 total), plus the normal calculations of the 'other'
cases (50 total), the normal vector calculations have been
reduced from a previous total of 237 to the new total of 71.
Knowing the transformation matrix for each of the
rigid bodies, the transformation of the normal points could
be carried out in software via matrix multiplication. This
process, however, would be too time consuming and would
greatly slow down the animation rate. An alternative method
was found whereby the Geometry Engine chips of the IRIS
workstation could be used to perform the matrix
multiplication in hardware.


One problem that had to be resolved however was the
fact that it was not possible to simply transmit joint
angles to the robot controller. Rather, the robot required
the X, Y, Z position of the tool together with orientation
data defined as roll, pitch, and yaw. When given the
position and orientation data, the robot controller would
then calculate its own joint angles and position the robot
accordingly.
The lack of ability to communicate joint angles posed a
serious problem in that the user no longer had control over
the configuration that the robot would be in. In other
words, the robot would move to the correct position and
orientation as desired, but it would select the initial
configuration based on some predefined criterion. Typically
the robot would move to the configuration that was 'closest'
to the home position configuration.
A second problem involved the transformation from joint
angle information to the correct roll, pitch, and yaw
orientation data. This transformation introduced several
new algebraic indeterminacies which only serve to complicate
the process.
5.13 Roll, Pitch, Yaw Calculations
For the T3-726 manipulator, the pitch orientation datum
is defined as the angle between the fixed vertical Z axis
and the S6 vector. The sine and cosine of this angle, D,


graphics system upon request. Comparisons of cycle times
indicated that the graphics system was never waiting on the
PDP computer to process data. In other words, potentiometer
voltages could be converted to digital values at a rate
faster than the Silicon Graphics system required. A more
detailed discussion of the communication protocol can be
found in reference [27].
With knowledge of the joint angles of the actual robot,
a C language computer program was written on the Silicon
Graphics system which provides the operator with the visual
information required to control the robot. At the onset of
this project, it was desired that this animation have the
following characteristics:
1. Show the obstacles (spheres and cubes) that were in
the workspace of the actual robot.
2. Be able to select any viewing position from which to
observe the manipulator and workspace.
3. Be capable of zooming in or out in order to improve
the resolution of the image.
4. Cause the animated manipulator to 'attach' or 'drop' a
cube in the gripper when the actual robot picked up or
dropped a cube.
5. Warn the operator in real time when any part of the
end effector approached within a specified distance of an
obstacle (sphere).
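The warning logic of item 5 can be sketched with two nested thresholds about each spherical obstacle, corresponding to the beep-once and beep-continuously regions of the display; the threshold distances here are illustrative, not those used in the original C program.

```python
import math

def warning_level(effector, center, radius, near=10.0, critical=2.0):
    # Distance from the end effector to the sphere surface decides the
    # warning: 0 = clear, 1 = near (beep once, obstacle drawn yellow),
    # 2 = critical (continuous beep, obstacle drawn red).
    gap = math.sqrt(sum((e - c) ** 2
                        for e, c in zip(effector, center))) - radius
    if gap <= critical:
        return 2
    if gap <= near:
        return 1
    return 0
```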


TABLE OF CONTENTS
Page
ACKNOWLEDGEMENTS ii
LIST OF TABLES vi
LIST OF FIGURES vii
ABSTRACT ix
CHAPTERS
I INTRODUCTION 1
1.1 Background 1
1.2 Review of Previous Efforts 4
II DEVELOPMENT OF ANIMATED DISPLAYS OF
INDUSTRIAL ROBOTS 11
2.1 Introduction and Objective 11
2.2 Database Development 12
2.3 Coordinate Transformations 18
2.4 Backface Polygon Removal 29
2.5 Sorting of Plane Segments 30
2.6 Description of Hardware System 38
2.7 Program Modifications 40
2.8 Results and Conclusions 49
III PATH GENERATION 51
3.1 Introduction and Objective 51
3.2 Notation 52
iii




62
so that the three vectors (Rp, S6, a67) represent the 9-3=6
independent parameters necessary to locate a rigid body in
space.
3.4.2 Closing the loop
Once the position and orientation of the hand is
specified, the manipulator is connected to ground by a
hypothetical link. The problem of determining the sets of
joint displacements to position and orient the hand is thus
transformed to the problem of analyzing an equivalent
spatial mechanism with mobility equal to one. The concept
of closing the loop is not new. Pieper and Roth [22] were
the first to point out the implicit correspondence between
manipulators and spatial mechanisms using homogeneous
transfer matrices. The method of closing the loop which is
presented here was published by Duffy and Lipkin in
reference [23].
It is now necessary to determine the five constraint
parameters S77, θ7, a71, α71, and γ1 that complete the loop
together with the input angle of the spatial mechanism, θ1.
The first step of the completion algorithm is to establish a
direction for the unit vector S7. This vector will act as
the axis of rotation of the hypothetical revolute joint
which serves to close the loop. The direction of S7 is
arbitrary so long as it lies in the plane which is
perpendicular to a67. For this analysis S7 is selected such
that α67 equals 90 degrees and thus the direction cosines of
S7 may be obtained from the equation
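The equation itself falls past the page break of the scan. The following reconstruction is an assumption rather than the dissertation's own typesetting: rotating S6 about the unit vector a67 through the twist α67 (with S6 perpendicular to a67 by construction) gives the direction cosines of S7, and the choice α67 = 90 degrees collapses the rotation to a cross product.

```latex
% Reconstructed equation (assumption): rotation of S_6 about a_67 by
% alpha_67; for alpha_67 = 90 deg and S_6 perpendicular to a_67 this
% reduces to a cross product.
\mathbf{S}_7 = \mathbf{S}_6\cos\alpha_{67}
             + \left(\mathbf{a}_{67}\times\mathbf{S}_6\right)\sin\alpha_{67}
             = \mathbf{a}_{67}\times\mathbf{S}_6
```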


47
The IRIS workstation is placed in "feedback mode."
When in feedback mode, graphics commands are not drawn on
the screen, but rather data items are stored in a buffer.
The command 'xfpt' accepts the local coordinates of a point
as input. The homogeneous coordinates [x, y, z, w] of the
point in terms of the screen coordinate system are stored as
output in the buffer. The Z value of the normal point (z/w)
is compared with the Z value of the origin point of the
local system after both points have been transformed to the
screen coordinate system by the Geometry Engine.
Comparison of these Z values determines whether the normal
vector is pointing towards the viewer and thereby determines
if a particular polygon is visible. It should be noted that
when a parallel projection is used, as it is in this
example, the homogeneous coordinate 'w' will always
equal 1 and the division is therefore not necessary.
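The comparison just described can be sketched in C. This is a minimal sketch rather than the program's actual code: the IRIS GL feedback calls that produce the homogeneous screen coordinates are assumed to have run already, and the convention that a larger screen Z lies closer to the viewer is an assumption tied to the particular viewing transformation.

```c
/* Backface test: compare the screen-space depth of a polygon's normal
 * point against that of its local origin.  Both points are assumed to
 * have been transformed to homogeneous screen coordinates [x, y, z, w]
 * (e.g. read back from the feedback buffer via 'xfpt'). */
struct screen_pt { float x, y, z, w; };

/* Returns 1 when the polygon faces the viewer, i.e. its normal point
 * lies closer to the viewer than its origin (larger z/w assumed closer,
 * an assumption of this sketch).  Under a parallel projection w == 1
 * and the divisions could be dropped, as the text notes. */
int polygon_is_visible(struct screen_pt origin, struct screen_pt normal)
{
    float z_origin = origin.z / origin.w;
    float z_normal = normal.z / normal.w;
    return z_normal > z_origin;
}
```

A polygon whose transformed normal point lands behind its own origin would be culled before any sorting takes place.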
2.7.4 Modified Sorting of Plane Segments
After backfacing polygons are removed, the remaining
plane segments must be drawn on the screen in proper order
so that polygons closer to the viewer are drawn last.
Section 2.5 detailed a method for accomplishing this
sorting. A lengthy series of tests was made to compare
every pair of plane segments. Although the sorting
algorithm produced correct results, the computational time
was unacceptable.


49
were required to produce accurate images of the robot.
These 12 rules form the basis of a forward chaining
production system. It must be re-emphasized that the
correct ordering can be accomplished in a negligible amount
of time because all data required by the 'if clause' of each
rule were calculated previously.
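The flavor of such a production system can be sketched in C. The facts and rules below are hypothetical (the dissertation's 12 rules are not reproduced in this text); the sketch only illustrates that every 'if clause' reads flags computed during the earlier geometry pass, so firing a rule costs a handful of comparisons and no new geometry.

```c
/* Facts about one pair of plane segments, all precomputed elsewhere
 * during the geometry pass.  The field names are invented for
 * illustration. */
struct seg_pair_facts {
    int screen_extents_overlap;   /* do their 2-D screen extents overlap?   */
    int a_behind_plane_of_b;      /* is A entirely behind B's infinite plane? */
    int b_behind_plane_of_a;      /* is B entirely behind A's infinite plane? */
};

/* Forward-chaining ordering: -1 means draw segment A first, +1 means
 * draw B first, 0 means no rule fired and later rules would be tried. */
int apply_ordering_rules(const struct seg_pair_facts *f)
{
    if (!f->screen_extents_overlap)
        return 0;                 /* no overlap: drawing order is irrelevant */
    if (f->a_behind_plane_of_b)
        return -1;                /* A cannot obscure B, so A is drawn first */
    if (f->b_behind_plane_of_a)
        return 1;                 /* B cannot obscure A, so B is drawn first */
    return 0;                     /* fall through to the remaining rules */
}
```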
2.8 Results and Conclusions
The resulting representation of the Cincinnati Milacron
T3-776 robot is shown in Figures 2-11 and 2-12. Pictures
are generated at a rate of 10 frames per second which
results in satisfactory animation. As previously stated,
the user is capable of interacting with the system in real
time to alter the viewing position, freeze the motion, or to
zoom in.
Many applications exist for such a graphics system.
Two particular applications, the control of teleoperated
robots and the off line planning of robot tasks, will be
presented in Chapters IV and V. Additional applications in
the areas of designing workstations, and the evaluation of
new robot designs (based on factors such as workspace
envelope, dexterity capability, and interference checking)
make such a graphics system a valuable tool.


MOTION PLANNING AND CONTROL OF ROBOT MANIPULATORS
VIA APPLICATION OF A COMPUTER GRAPHICS ANIMATED DISPLAY
BY
CARL DAVID CRANE III
A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN
PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA
1987

ACKNOWLEDGEMENTS
The author wishes to thank his wife, Sherry, and his
children Theodore, Elisabeth, and Stephanie for their
patience and support. Also, special thanks go to his
committee chairman, Professor Joseph Duffy, for his
encouragement and guidance. He is also grateful to the
members of his graduate committee who all showed great
interest and provided considerable insight.
This work was funded by the U.S. Army Belvoir Research
and Development Center, McDonnell Douglas Astronautics
Company, and Honeywell Corporation.

TABLE OF CONTENTS
Page
ACKNOWLEDGEMENTS ii
LIST OF TABLES vi
LIST OF FIGURES vii
ABSTRACT ix
CHAPTERS
I INTRODUCTION 1
1.1 Background 1
1.2 Review of Previous Efforts 4
II DEVELOPMENT OF ANIMATED DISPLAYS OF
INDUSTRIAL ROBOTS 11
2.1 Introduction and Objective 11
2.2 Database Development 12
2.3 Coordinate Transformations 18
2.4 Backface Polygon Removal 29
2.5 Sorting of Plane Segments 30
2.6 Description of Hardware System 38
2.7 Program Modifications 40
2.8 Results and Conclusions 49
III PATH GENERATION 51
3.1 Introduction and Objective 51
3.2 Notation 52
iii

3.3 Mechanism Dimensions of the T3-776
Manipulator 55
3.4 Reverse Displacement Analysis 59
3.5 Forward Displacement Analysis 83
3.6 Path Generation 88
3.7 Results and Conclusions 101
IV ROBOTIC TELEPRESENCE 103
4.1 Introduction and Objective 103
4.2 Telepresence Concept 104
4.3 System Components 106
4.4 Method of Operation 110
4.5 Problems Encountered 127
4.6 Conclusions 128
V INTERACTIVE PATH PLANNING AND EVALUATION 129
5.1 Introduction and Objective 129
5.2 Robot Animation 130
5.3 Program Structure 136
5.4 Workspace Considerations 142
5.5 Path Evaluation 147
5.6 Calculation of Intermediate Points 153
5.7 Robot Configurations 155
5.8 Singularity Analysis 157
5.9 Interpretation of Singularity Results 163
5.10 Selection of Configuration 164
5.11 Preview of Motion 165
5.12 Communication with Robot Controller 167
iv

5.13 Roll, Pitch, Yaw Calculations 168
5.14 Results and Conclusions 172
VI DISCUSSION AND CONCLUSIONS 174
REFERENCES 177
BIOGRAPHICAL SKETCH 180
v

LIST OF TABLES
Page
Table 2-1 T3-776 Mechanism Dimensions 22
Table 2-2 Plane Segment Sorting Cases 35
Table 3-1 Sample Angles for j and j+1 Positions 99
Table 5-1 T3-726 Mechanism Dimensions 132
Table 5-2 Direction Cosines 162
vi

LIST OF FIGURES
Page
Fig. 2- 1 Cincinnati Milacron T3-776 Manipulator ... 13
Fig. 2- 2 Conceptual Sketch 14
Fig. 2- 3 Collection of Rigid Bodies 15
Fig. 2- 4 Graphics Data Structure 17
Fig. 2- 5 Skeletal Model of T3-776 Manipulator .... 20
Fig. 2- 6 Transformation to Viewing Coord. System ... 25
Fig. 2- 7 Parallel and Perspective Transformation ... 28
Fig. 2- 8 Wire Frame Model of T3-776 Manipulator ... 31
Fig. 2- 9 Backfacing Polygons Removed 32
Fig. 2-10 Plane Segments with Infinite Planes ..... 36
Fig. 2-11 Animated Representation of T3-776 Manipulator 50
Fig. 2-12 Animated Representation of T3-776 Manipulator 50
Fig. 3- 1 Spatial Link 53
Fig. 3- 2 Revolute Pair 54
Fig. 3- 3 Cincinnati Milacron T3-776 Manipulator ... 56
Fig. 3- 4 Skeletal Model of T3-776 Manipulator .... 57
Fig. 3- 5 Hypothetical Closure Link 61
Fig. 3- 6 Hypothetical Closure when S7 || S6 65
Fig. 3- 7 Location of Wrist Point 68
Fig. 3- 8 Determination of 2nd and 3rd Joint Angles . 71
Fig. 3- 9 Three Roll Wrist 76
Fig. 3-10 Moving and Fixed Coordinate Systems 77
Fig. 3-11 Forward Analysis 85
vii

Fig. 3-12 Displacement Profile 90
Fig. 4- 1 Telepresence System 107
Fig. 4- 2 Nine String Joystick 109
Fig. 4- 3 Scissor Joystick 109
Fig. 4- 4 System Configuration 111
Fig. 4- 5 Animated Representation of MBA Manipulator .114
Fig. 4- 6 Obstacle Locations Deter, by Vision System .114
Fig. 4- 7 Display of Objects in Manipulator Workspace .116
Fig. 4- 8 Warning of Imminent Collision 119
Fig. 4- 9 Operation in Man-Controlled Mode 119
Fig. 4-10 Determination of Intersection 121
Fig. 4-11 Generation of Alternate Path 121
Fig. 4-12 Display of Computer Generated Path 125
Fig. 5- 1 Cincinnati Milacron T3-726 Manipulator . .131
Fig. 5- 2 Animated Representation of T3-726 Robot . .134
Fig. 5- 3 Collection of Rigid Bodies 135
Fig. 5- 4 Data Structure for Precision Points 138
Fig. 5- 5 Skeletal Model of T3-726 Manipulator . .139
Fig. 5- 6 Manipulator Workspace 143
Fig. 5- 7 Top and Side Views of Workspace 143
Fig. 5- 8 Three Roll Wrist 145
Fig. 5- 9 Orientation Limits 146
Fig. 5-10 Motion Behind Base 151
Fig. 5-11 Intersection of Planar Line Segments . .151
Fig. 5-12 Coordinate System for Singularity Analysis .160
Fig. 5-13 Display of Singularity Results 166
Fig. 5-14 Calculation of Roll Parameter 171
viii

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy
MOTION PLANNING AND CONTROL OF ROBOT MANIPULATORS
VIA APPLICATION OF A COMPUTER GRAPHICS ANIMATED DISPLAY
By
CARL D. CRANE III
MAY 1987
Chairman: Dr. Joseph Duffy
Major Department: Mechanical Engineering
It is often necessary in a hazardous environment for an
operator to effectively control the motion of a robot
manipulator which cannot be observed directly. The
manipulator may be either directly guided via use of a
joystick or similar device, or it may be autonomously
controlled in which case it is desirable to preview and
monitor robot motions. A computer graphics based system has
been developed which provides an operator with an improved
method of planning, evaluating, and directly controlling
robot motions.
During the direct control of a remote manipulator with
a joystick device, the operator requires considerable
sensory information in order to perform complex tasks.
Visual feedback which shows the manipulator and surrounding
workspace is clearly most important. A graphics program
ix

which operates on a Silicon Graphics IRIS workstation has
been developed which provides this visual imagery. The
graphics system is capable of generating a solid color
representation of the manipulator at refresh rates in excess
of 10 Hz. This rapid image generation rate is important in
that it allows the user to zoom in, change the vantage
point, or translate the image in real time. Each image of
the manipulator is formed from joint angle data that are
supplied continuously to the graphics system. In addition,
obstacle location data are communicated to the graphics
system so that the surrounding workspace can be accurately
displayed.
A unique obstacle collision warning feature has also
been incorporated into the system. Obstacles are monitored
to determine whether any part of the manipulator comes close
to or strikes the object. The collision warning algorithm
utilizes custom graphics hardware in order to change the
color of the obstacle and produce an audible sound as a
warning if any part of the manipulator approaches closer
than some established criterion. The obstacle warning
calculations are performed continuously and in real time.
The graphics system which has been developed has
advanced man-machine interaction in that improved operator
efficiency and confidence have resulted. Continued
technological developments and system integration will
result in much more advanced interface systems in the
future.
x

CHAPTER I
INTRODUCTION
1.1 Background
There have been significant advances in the broad range
of technologies associated with robot manipulators in such
areas as kinematics and dynamics, control, vision, pattern
recognition, obstacle avoidance, and artificial
intelligence. A major objective is to apply these
technologies to improve the precision of operation and the
control of manipulators performing various tasks.
Just as significant an advance has been made recently
in the field of computer graphics hardware. Application of
VLSI technology has resulted in a dramatic increase in
graphics performance of up to 100 times faster than
conventional hardware. Low cost workstations ($20-50K) have
been developed which can generate real time raster images
which are formed by the illumination of discrete picture
elements. Although raster generated images may have less
picture resolution than images produced by vector refresh
devices, they do allow for the generation of solid color
images with shading and hidden surface removal. Application
of these and other computer graphics techniques have
resulted in improved image generation and realism and allow
1

2
for a wide variety of new applications in the robotics
field. This dissertation addresses the use of such computer
graphics hardware in the following two areas:
a) telepresence system development
b) robotic workcell modeling
Telepresence systems deal with the direct
man-controlled and autonomous operation of remote robot
manipulators. During man-controlled operation, the user
controls the manipulator directly by guiding the end
effector via use of a joystick or similar device. The
operator moves the joystick as a "master" and the robot
follows correspondingly as a "slave." The graphics system
aids the operator by providing a real time visualization of
the manipulator and surrounding work area. Critical
information such as approaching a joint limit or exceeding
payload capabilities can be displayed immediately as an aid
to the user. For autonomous operations of a remote
manipulator, the graphics system is used to plan manipulator
tasks. Once a task is specified, the user can preview the
task on the graphics screen in order to verify motions and
functions. Modifications to the previewed task can be made
prior to the execution of the task by the actual
manipulator.
The modeling of robotic workcells is a second
application for the animation system. In a manufacturing
environment it is desirable to plan new manipulator tasks
off line. In this manner the manipulators can continue 'old

3
production' during the planning phase. Assembly line down
time is minimized as the new tasks can be quickly
communicated to the manipulator. The graphics system offers
a convenient and rapid method of planning workcell layouts
and manipulator tasks. The ability to interact with the
system allows the user to reposition objects within the
workspace to verify that all important points can be reached
by the robot. Cycle times can be calculated and compared in
order to improve productivity.
Following a review of previous work dealing with the
animation of industrial robots, subsequent chapters will
detail the development and application of an interactive
computer graphics animation system. A brief description of
each chapter is as follows:
Chapter 2: Development of Animated Displays of
Industrial Robots. This chapter describes the development
of the interactive animation program. The database
development is detailed for the particular case of modeling
the Cincinnati Milacron T3-776 industrial robot. Graphics
techniques are described with emphasis on the removal of
backfacing polygons and the sorting of solid objects.
Chapter 3: Path Generation. A method of generating
sets of joint angles which will move a manipulator along a
user specified path is described. Specific issues deal with
motion near singularity positions and the selection of the
robot configuration at each point along the path.

4
Chapter 4: Robotic Telepresence. A telepresence system
was developed to allow a user to control a remote robot
manipulator in two distinct modes, viz. man-controlled and
autonomous. This chapter details the use of the graphics
system as an aid to the user. Visual feedback of the work
area is provided together with real time warning of an
imminent collision between the robot and an object in the
workspace.
Chapter 5: Interactive Path Planning and Evaluation.
An interactive computer graphics software system was
developed which assists an operator who is planning robot
motions. The user must specify the path of the manipulator
together with velocity and function information. Once a
task is previewed on the graphics screen, the path data are
communicated to the actual robot controller. Specific
application to the Cincinnati Milacron T3-726 manipulator is
described in detail.
1.2 Review of Previous Efforts
Early work dealing with the computer graphics animation
of industrial robots occurred in the 1970's. Indicative of
these efforts were reports published from England [1-3],
West Germany [4-5], France [6], and Japan [7]. Common to
this work was the use of computer graphic storage tube
terminals. Hardware limitations resulted in slow animation
rates with bright flashes occurring as the screen was cleared
for each image.

5
More recently, a program named GRASP was developed by
S. Derby [8]. The program was written in FORTRAN on a Prime
Computer with an Imlac Dynagraphics graphics terminal.
Vector images (wire frames) were generated as raster
technology had not yet developed to be able to produce rapid
images. This program allowed an experienced user to design
and simulate a new robot, or modify existing robot
geometries. Robot motions were calculated and displayed
based on closed form kinematic solutions for certain robot
geometries. A generic iterative technique was used for arms
having a general geometry.
The animation programming of M.C. Leu [9] is indicative
of the current work in the field. A hardware configuration
consisting of two DEC VAX computers, vector graphics
terminals, and raster graphics terminals was utilized to
produce wire frame and solid color images. The program
allows for off line programming of new or existing robot
designs. In addition, a swept volume method was utilized to
detect collisions of the robot arm and any object in the
workspace.
Further improvements in the simulation of robotic
workcells have been made by B. Ravani [10]. Animations have
been developed on an Evans and Sutherland graphics terminal
which can rapidly produce color vector images. Significant
improvements in database development and user interaction
with the computer have made this a versatile simulation
program.

6
C. Anderson [11] modeled a workcell consisting of the
ESAB MA C2000 arc welding robot. Displacement, velocity,
acceleration, force, and torque data were utilized in the
model as calculated by the Automatic Dynamic Analysis of
Mechanical Systems (ADAMS) software package. Rapid wire
frame animations were obtained on Evans and Sutherland
vector graphics terminals. Solid color representations with
shading were also generated; however, real time animation of
these images was not possible.
A further off line animation system, entitled WORKMATE,
has been developed at the Stanford Research Institute by S.
Smith [12]. A goal of this effort is to implement a graphic
based simulation program through which an inexperienced user
can plan and debug robot workcell programs off line. The
program is run on a Silicon Graphics IRIS workstation which
has the capability of rapidly rendering solid color images
with shading. A significant feature of WORKMATE is that
collisions between objects in the workspace can be
identified for the user in real time. This feature avoids
the need for the user to perform the tedious visual
inspection of the robot motion in order to verify that no
collision occurs along the path.
Several companies have recently entered the robotic
simulation market. Silma, Inc., of Los Altos, California,
was formed in 1983 to develop software which would model
robotic workcells. This group recognized the problem that
each robot manufacturer uses a different robot language to

7
control the robot. To aid the user, Silma Inc. developed a
task planning and simulation language which was independent
of the type of robot being modeled. Once a task was planned
on the graphics screen, a post processor would translate the
task from the generic planning language to the specific
language for the robot controller. This approach simplifies
operations in a situation where many diverse types of robots
must work together in an assembly operation. The software
was written for the Apollo computer series with an IBM PC/AT
version to be completed in the near future.
AutoSimulations, Inc., of Bountiful, Utah, offers a
software package which runs on the Silicon Graphics IRIS
workstation. This system emphasizes the total factory
integration. The robot workcell is just one component of
the factory. General robot tasks can be modeled at each
robot workcell and cycle times are recorded. Autonomously
guided vehicles (AGVs) are incorporated in the factory plan
together with parts storage devices and material handling
stations. The user is able to model the entire factory
operation and observe the system to identify any bottlenecks
or parts backups. Additional robot workcells can be readily
added or repositioned and AGVs can be reprogrammed in order
to alleviate any system problems. At the present time, the
software package offers an excellent model of the entire
factory; however, less emphasis is placed on the individual
workcell. Detailed manipulator tasks cannot be planned and
communicated to the robot controller. Additional programs

8
will be integrated with the software package in order to
address these issues.
Intergraph Corporation of Huntsville, Alabama, offers a
hardware and software solution to the workcell modeling
problem. The unique feature of the Intergraph hardware is
the use of two 19 inch color raster monitors in the
workstation. This feature greatly enhances man-machine
interface which is the primary purpose of the graphics
system. Intergraph offers a complete line of CAD/CAM
software in addition to the robot workcell modeling
software. Applications of the system to workcell planning
have been performed at the NASA Marshall Flight Center,
Huntsville, Alabama.
Simulation software has also been developed by the
McDonnell Douglas Manufacturing Industry Systems Company,
St. Louis, Missouri. Ninety-four robots have been modeled
to date. McDonnell Douglas has acquired considerable
experience in planning workcell operations for automobile
assembly lines. For example, a robot may be assigned
fifteen specific welds on a car body. The user must decide
where the car body should stop along the assembly line with
respect to the manipulator in order to accomplish these
welds. The simulation software allows the user to attach
the robot to one of the weld points and then move the auto
body with the robot still attached. The user is notified if
the weld point goes outside the workspace of the
manipulator. By repeating the process, the operator can

9
determine the precise position of the car body with respect
to the manipulator so that all weld points are in reach of
the manipulator. The software system generates path data
which are communicated directly to the robot controller
together with cycle time data which are accurate to within
5%.
McDonnell Douglas has learned from experience that
tasks that are taught to the robot in this way must be
"touched up". For example, the car body may not stop
precisely at the position that was determined during the
simulation process. The manual touch up of certain points
along the manipulator path can be accomplished with use of
the teach pendant.
Typically an average of fifteen to twenty minutes is
required for this manual touch up operation. A second method
of path upgrade can be
accomplished by attaching a mechanical probe to the robot.
This probe measures the actual position of an object with
respect to the robot and then updates positional commands as
necessary. Early application of this technique required the
user to replace the tool of the robot with the mechanical
probe. This was often a time consuming and labor intensive
task. New measuring probes have been applied by McDonnell
Douglas to remove this problem.
A final example of robot workcell simulation software
is that developed by Deneb Robotics Inc., Troy, Michigan
[13]. This software runs on the Silicon Graphics IRIS
workstation. The interactive nature of the program allows

10
the user to rapidly build new robot geometries, modify
actuator limits, or reposition objects within the workspace.
Detailed solid color images with satisfactory animation
rates are obtainable. In addition, the user can be warned of
collisions or near misses between parts of the robot and
objects in the workspace, although the animation rate slows
down as a function of the number of collision checks being
made. The strength of the Deneb software is its rapid
animation of solid shaded images together with its ease of
use for the operator. As such, it is one of the more
advanced manipulator simulation software packages.

CHAPTER II
DEVELOPMENT OF ANIMATED DISPLAYS OF
INDUSTRIAL ROBOTS
2.1 Introduction and Objective
In the previous chapter, improvements in computer
hardware were discussed which have particular application to
the problem of real time animation of solid color objects.
The goal of this chapter is to detail the development
(hardware and software) of real time, interactive computer
graphics animations of industrial robots.
Any computer graphics simulation must possess two
characteristics. First it must be realistic. The image on
the screen must contain enough detail so as to give the user
an accurate and obvious image. Second, the images must be
generated rapidly enough so that smooth animation will
result. Picture update rates of 10 frames per second have
provided satisfactory animation on a 30 Hz video monitor
(each picture is shown three times before changing).
Throughout this chapter, all applications will be aimed
at the development of a solid color representation of a
Cincinnati Milacron T3-776 industrial robot. The first part
of this chapter will focus on the development of appropriate
data structures, viewing transformations, and hidden surface
removal methods. The specific programming techniques
utilized and demonstrated on a VAX 11-750 computer will be
discussed in detail. The second part of this chapter is
concerned with the modification of the initial work in order
to apply it to a Silicon Graphics IRIS model 2400
workstation. The modifications take full advantage of the
built in hardware capabilities of the IRIS workstation and
result in significantly improved performance.
2.2 Database Development
The first step in the generation of a computer graphics
simulation of a robot manipulator is the drawing of an
artist's concept of what the simulation should look like.
Shown in Figure 2-1 is a drawing of the T3-776 robot and in
Figure 2-2 is a sketch of the desired graphics simulation.
Enough detail is provided for a realistic representation of
the robot. Also shown in Figure 2-2 is a coordinate system
attached to the base of the robot such that the origin is
located at the intersection of the first two joints. The Z
axis is vertical and the X axis bisects the front of the
base. This coordinate system is named the 'fixed coordinate
system' and will be referred to repeatedly.
The robot manipulator is made up of a collection of
rigid bodies as shown in Figure 2-3. Also shown in the
figure is a local coordinate system attached to each body.
The coordinate values of each vertex point of the
manipulator are invariant with respect to the coordinate

13
Figure 2-1: Cincinnati Milacron T3-776 Manipulator


Figure 2-2: Conceptual Sketch

Figure 2-3: Collection of Rigid Bodies

16
system attached to each body. For this reason, local
coordinate data can be collected and permanently stored for
future use. It should be noted that the coordinate values
of the vertices were obtained from actual measurement and
from scale drawings of the robot. In this way, the computer
graphics simulation is as accurate as possible.
It is apparent from Figure 2-3 that the simulated robot
is made up of a series of primitives, i.e. n sided polygons,
circles, and cylinders. An n sided polygon was defined to
be a plane segment. A circle was defined as a 20 sided
polygon, and cylinders as a set of 10 four sided polygons.
Thus it is possible to define all parts of the simulation in
terms of plane segments.
Each primitive (polygon, circle, cylinder) which makes
up the simulation must have certain data items associated
with it. The data were managed by placing each primitive
into a node of a linked list as shown in Figure 2-4. Each
node of the linked list is a variant structure and contains
specific information such as the type of element, the body
it belongs to, and the number of the vertices which comprise
it. The linked list format, which is a standard feature of
the C programming language, was used because of its
versatility and dynamic memory allocation characteristics.
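The dynamic allocation noted above can be sketched as follows; prepend_segment is a hypothetical helper, and the structure is abbreviated from Figure 2-4 (the union of primitive data is omitted), so this is illustrative rather than the dissertation's code.

```c
#include <stdlib.h>

/* Abbreviated node: the union of circle/polygon/cylinder data shown in
 * Figure 2-4 is omitted to keep the sketch short. */
struct plane_segment {
    int name, number, color;
    struct plane_segment *next;
};

/* Allocate a node dynamically and prepend it to the list in O(1).
 * Returns the new head of the list (or the old head if malloc fails). */
struct plane_segment *prepend_segment(struct plane_segment *head,
                                      int name, int number, int color)
{
    struct plane_segment *node = malloc(sizeof *node);
    if (node == NULL)
        return head;              /* allocation failed: list unchanged */
    node->name = name;
    node->number = number;
    node->color = color;
    node->next = head;            /* new node becomes the list head */
    return node;
}
```

Because nodes are allocated one at a time, the number of primitives never has to be fixed in advance, which is the versatility the text refers to.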
With every element of the simulation now defined in
terms of some local coordinate system, the following three
tasks must now be performed in order to obtain a realistic
color image:

17
struct plane_segment
{
    int name ;                    /* indicates whether it is a polygon,
                                     cylinder or circle */
    int number ;                  /* identifies the object that the
                                     plane segment belongs to */
    int color ;                   /* the color of the plane segment */
    float normal_point[3] ;       /* local coordinates of the
                                     normal point */
    union
    {
        struct circletype cir ;   /* contains circle data */
        struct polytype pol ;     /* contains polygon data */
        struct cyltype cyl ;      /* contains cylinder data */
    } dat ;
    struct plane_segment *next ;  /* pointer to next plane segment */
} ;

struct circletype
{
    int centerpt ;                /* the index of the center point */
    float radius ;
} ;

struct polytype
{
    int sides ;                   /* the number of sides of the polygon */
    float points[][3] ;           /* array of local coordinate values */
} ;

struct cyltype
{
    int endpts[2] ;               /* the index of the cylinder endpoints */
    float radius ;
} ;

Figure 2-4: Graphics Data Structure

18
1. For a given position of the robot and of the viewer,
transform all local vertex data to a screen coordinate
system so that it can be properly displayed.
2. Delete all plane segments which are non-visible
(backward facing).
3. Sort all remaining plane segments so that the proper
overlap of surfaces is obtained.
Each of these three tasks will now be discussed in detail.
2.3 Coordinate Transformations
In order to produce a drawing of the robot, certain
input data must be known. First the angle of each of the
six revolute joints of the robot must be selected. Chapter
3 details a method of calculating joint angles so as to
cause the robot to move along some desired trajectory. For
the purposes of this chapter, it will be assumed that the
set of joint angles is known for the picture to be drawn at
this instant.
The second input item which is required is the point to
be looked at and the point to view from. Knowledge of these
points determines from what vantage point the robot will be
drawn. The selection and modification of these items allows
the user to view the image from any desired location.

19
2.3.1 Local to fixed coordinate transformation
As shown in Figure 2-3, the representation of the robot
manipulator is made up of a series of rigid bodies. The
coordinates of each vertex are known in terms of the
coordinate system attached to each of the bodies. The first
task to be completed is the determination of the coordinates
of every vertex in terms of the fixed coordinate system
attached to the base of the robot.
Since it is assumed that the six joint angles of the
robot are known, the transformation of local point data to
the fixed reference system is a straightforward task. The
local coordinate systems shown in Figure 2-3 are named C1
through C6. These local coordinate systems were carefully
selected so as to simplify the transformation of data to the
fixed reference system.
Shown in Figure 2-5 is a skeletal model of the T3-776
manipulator. The vectors along each of the joint axes are
labeled S1 through S6 and the vectors perpendicular to each
successive pair of joint axes are labeled a12 through a67
(not all are shown in the figure). The variable joint
displacements θ2 through θ6 are measured as the
angles between the vectors aij and ajk in a right handed
sense about the vector Sj. The first angular displacement,
θ1, is measured as the angle between the X axis of the
fixed reference system and the vector a12. As previously
stated, it is assumed that the joint displacements θ1
through θ6 are known values.

Figure 2-5: Skeletal Model of T3-776 Manipulator

Twist angles are defined as the relative angle between
two successive joint axes, measured about their mutual
perpendicular. For example, the twist angle α12 is
measured as the angle between the vectors S1 and S2 as seen
in a right handed sense along the vector a12. In general,
all twist angles will be constant for an industrial robot
under the assumption of rigid body motion.

Two additional parameters, link lengths and offset
lengths, are shown in the skeletal model of the manipulator.
A link length, aij, is defined as the perpendicular distance
between the pair axes Si and Sj. All link lengths are known
constant values for a manipulator. An offset length, Sjj,
is defined as the perpendicular distance between the two
vectors aij and ajk. For revolute joints, offsets are
constant values. Shown in Table 2-1 are the specific
constant link length, offset, and twist angle values for the
Cincinnati Milacron T3-776 robot manipulator.
A systematic selection of each local coordinate system
was made based on the skeletal model of the manipulator.
The Cj coordinate system was established such that the Z
axis was aligned with the vector Sj and the X axis was along
ajk. With this definition for each coordinate system, the
coordinates of a point known in the Cj system can be found
in the Ci system by applying the following matrix equation:

Table 2-1: T3-776 Mechanism Dimensions

    S11 = *          a12 = 0 in.     α12 = 90 deg.
    S22 = 0 in.      a23 = 44.0      α23 = 0
    S33 = 0          a34 = 0         α34 = 90
    S44 = 55.0       a45 = 0         α45 = 61
    S55 = 0          a56 = 0         α56 = 61

    * to be determined when closing the loop

    | xi |          | xj |   | dxji |
    | yi |  =  Aji  | yj | + | dyji |                (2.1)
    | zi |          | zj |   | dzji |

where

           |  cj       -sj        0    |
    Aji =  |  sj cij    cj cij   -sij  |             (2.2)
           |  sj sij    cj sij    cij  |
The vector [dxji, dyji, dzji] represents the
coordinates of the origin of the Cj system as measured in
the Ci coordinate system. Also the terms sj and cj
represent the sine and cosine of θj and the terms sij and
cij represent the sine and cosine of αij. This notation
will be used repeatedly throughout subsequent chapters.
Since all joint angles and twist angles are assumed to
be known, equation (2.1) can be repeatedly used to transform
all vertex data to the first coordinate system, C1. A point
known in terms of the C1 coordinate system, [x1, y1, z1],
can be found in terms of the fixed coordinate system [xf,
yf, zf], as follows:

    | xf |         | x1 |
    | yf |  =  M   | y1 |                            (2.3)
    | zf |         | z1 |

where

        | cos φ1   -sin φ1   0 |
    M = | sin φ1    cos φ1   0 |                     (2.4)
        | 0         0        1 |
Proper use of equations (2.1) and (2.3) will result in the
determination of all vertex data for the robot in terms of
the fixed coordinate system attached to the robot base.
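The local-to-parent transformation of equations (2.1) and (2.2) can be sketched as follows (a minimal Python illustration; the function names are assumptions, not the dissertation's actual implementation):

```python
import math

def a_ji(theta_j, alpha_ij):
    """Rotation matrix A_ji of equation (2.2), built from the sine and
    cosine of the joint angle theta_j and twist angle alpha_ij (radians)."""
    sj, cj = math.sin(theta_j), math.cos(theta_j)
    sij, cij = math.sin(alpha_ij), math.cos(alpha_ij)
    return [[cj,        -sj,        0.0],
            [sj * cij,   cj * cij, -sij],
            [sj * sij,   cj * sij,  cij]]

def to_parent(A, d, p):
    """Equation (2.1): coordinates of a point p, known in the C_j system,
    expressed in the C_i system; d is the origin of C_j measured in C_i."""
    return [sum(A[r][c] * p[c] for c in range(3)) + d[r] for r in range(3)]
```

Applying `to_parent` once per link, from the hand inward to the base, transforms every vertex to the first coordinate system, as the text describes.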
2.3.2 Fixed to viewing coordinate transformation
Assuming that the point to look at and the point to
view from are known in terms of the fixed coordinate system,
all vertices of the robot are now determined in terms of a
viewing coordinate system. The use of this coordinate
system will greatly simplify the eventual projection of the
robot onto the screen.
As shown in Figure 2-6, the origin of the viewing
coordinate system is the point to view from. The Z axis of
the coordinate system is the vector from the point being
looked at to the point being viewed from. With the Z axis
known, the XY plane is defined.
The exact directions of the X and Y axes are immaterial
at this point. Typically, however, the Y axis of the
viewing coordinate system will point "up." In other words,
for the robot to be drawn with the base along the bottom of

Figure 2-6: Transformation to Viewing Coordinate System

the screen, the Y axis of the viewing coordinate system must
correspond to the Z axis of the fixed coordinate system.
This association is accomplished by selecting a zenith point
(point B in Figure 2-6) which is high above the robot. As
shown in the figure, the direction of the X axis is obtained
as the cross product of the vector along line OA with the
vector along the line OB. With the X and Z axes now known,
the Y axis can be determined.
As described, vectors along the X, Y, and Z axes of the
viewing coordinate system are known in terms of the fixed
coordinate system. A 3x3 matrix, V, can be formed such that
the first column of the matrix is made up of the known
direction cosines of the unit vector along the X axis
(measured in terms of the fixed coordinate system).
Similarly, the second and third columns of V are made up of
the direction cosines of unit vectors along the Y and Z
axes. Recognizing that V is an orthogonal rotation matrix,
the transformation from the fixed coordinate system to the
viewing coordinate system is given by
    | xv |           | xf - dxfv |
    | yv |  =  V^T   | yf - dyfv |                   (2.5)
    | zv |           | zf - dzfv |

where the vector [dxfv, dyfv, dzfv] represents the coordinates
of the origin of the viewing coordinate system as measured
in the fixed coordinate system.

At this point, the coordinates of all vertices of the
robot are known in terms of the viewing coordinate system.
All that remains is to transform the data one more time to
the screen coordinate system so that it can be properly
displayed.
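The construction of the viewing axes described above can be sketched as follows (a minimal Python illustration; the helper names are assumptions, not the dissertation's code):

```python
import math

def viewing_basis(look_from, look_at, zenith):
    """Viewing coordinate axes of section 2.3.2 (sketch).  The zenith
    point B, high above the robot, keeps the image upright.  Returns the
    unit X, Y, Z axes of the viewing system in fixed coordinates."""
    def sub(p, q): return [p[i] - q[i] for i in range(3)]
    def cross(u, v): return [u[1]*v[2] - u[2]*v[1],
                             u[2]*v[0] - u[0]*v[2],
                             u[0]*v[1] - u[1]*v[0]]
    def unit(u):
        n = math.sqrt(sum(c * c for c in u))
        return [c / n for c in u]
    z = unit(sub(look_from, look_at))   # Z: from the point looked at toward the viewer
    x = unit(cross(sub(look_at, look_from), sub(zenith, look_from)))  # OA x OB
    y = cross(z, x)                     # Y completes the right-handed set
    return x, y, z
```

The three returned axes form the columns of the matrix V used in the transformation to viewing coordinates.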
2.3.3 Viewing to screen coordinate system transformation
The screen coordinate system is defined such that the
origin of the system is located at the lower left corner of
the terminal screen. The X axis points to the right, the Y
axis points up, and the Z axis points out from the screen.
The scale of the axes is dependent on the type of terminal
being used. All data points must be transformed to this
coordinate system so that they may be properly displayed on
the screen. Two types of projective transformations may be
used to perform the transformation between the coordinate
systems. These projective transformations are perspective
and parallel projections and are shown in Figure 2-7.
A parallel projection is the simplest type of
transformation. The conversion to the screen coordinate
system is simply accomplished by ignoring the Z component of
the data from the viewing coordinate system. In other
words, the X and Y values of points in the viewing
coordinate system are simply plotted on the graphics screen
(accompanied by any desired translation and scaling). The
resulting image is the same as would be obtained if the

Figure 2-7: Parallel and Perspective Transformation.
a) Parallel Projection ;
b) Perspective Projection.

viewer was standing at infinity with respect to the robot.
Parallel lines will remain parallel on the screen.
The perspective transformation is accomplished by
projecting points onto a plane (screen). One point is
selected as shown in Figure 2-7 as the center of the
projection. The screen coordinates of any point are
determined by finding the coordinates of the point of
intersection of the plane (screen) with the line between the
point in question and the center of projection. This
transformation can again be accomplished via matrix
multiplication (coupled with any desired translation on the
screen).
For the purposes of this work, the parallel projection
method was used for determining the data in terms of the
screen coordinate system. This choice was made because of
the reduced number of calculations required to perform
subsequent sorting algorithms used for eventual solid color
representations of the robot.
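The two projections can be sketched as follows (Python; the perspective setup, with the center of projection at the viewing origin and the screen plane at z = -d, is an assumed arrangement consistent with the description, not taken verbatim from the text):

```python
def parallel_project(p):
    """Parallel projection of section 2.3.3: the Z component of a point
    in viewing coordinates is simply ignored."""
    return (p[0], p[1])

def perspective_project(p, d):
    """Perspective projection sketch.  Assumed setup: center of projection
    at the viewing origin, screen plane at z = -d, objects at z < 0; the
    ray from the center through p is scaled onto the screen plane."""
    t = -d / p[2]
    return (t * p[0], t * p[1])
```

Note that the parallel projection leaves screen size independent of depth, while the perspective projection shrinks more distant points, matching the viewer-at-infinity remark above.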
2.4 Backface Polygon Removal
The next task that must be accomplished is the
filtering of the linked list of plane segments such that the
elements which are backward facing are removed. In other
words, at any time approximately one half of the plane
segments will not be visible to the viewer. The list of
visible plane segments changes each instant that the robot
or the viewer changes position.
The removal of the backward facing polygons is a quick
and simple task. As indicated in the data structure shown
in Figure 2-4, the coordinates of a normal point are
specified for each plane segment. A vector normal to the
surface (and outward pointing) can be formed by subtracting
the coordinates of the origin of the local coordinate system
from the coordinates of the normal point. Just as all the
vertices were transformed from the local coordinate system
to the screen coordinate system, the normal points are also
transformed. Comparison of the Z coordinate of the normal
point with that of the origin of the local coordinate system
(both now in the screen coordinate system) determines
whether the particular plane segment is visible.
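The visibility test described above can be sketched as follows (a minimal Python illustration under the stated screen-system conventions):

```python
def is_backfacing(origin_scr, normal_pt_scr):
    """Backface test of section 2.4 (sketch).  Both the local origin and
    the normal point are assumed already transformed to the screen
    system, whose Z axis points out of the screen toward the viewer; a
    face whose outward normal does not point toward the viewer is culled."""
    return normal_pt_scr[2] <= origin_scr[2]
```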
Application of this method results in a greatly
simplified image. Shown in Figure 2-8 is an edge drawing of
the T3-776 manipulator. Figure 2-9 shows the same drawing
with backfacing polygons removed.
2.5 Sorting of Plane Segments
A characteristic of raster type graphics displays is
that whatever is drawn last will appear to be on top. For
example if two polygons, A and B, exist and polygon A is
closer to the viewer and overlaps polygon B, then polygon B
must be drawn on the screen prior to polygon A. The only
other alternative would be to redefine polygon B so that the
regions overlapped by polygon A were subtracted. In this

Figure 2-8: Wire Frame Model of T3-776 Manipulator

Figure 2-9: Backfacing Polygons Removed

manner the new polygon B' and the original polygon A would
no longer overlap and it would not matter in what order they
were drawn on the screen.
Numerous techniques exist for sorting polygons for
proper display. Algorithms have been developed based on two
primary techniques. The first involves sorting objects into
the correct order for display [14-16], while the second
technique concentrates on individual pixels (ray tracing
[17-18] and z-buffer algorithms [19-20]).

A sorting technique was used in this work for two
reasons, i.e. a rapid algorithm was required which did not
require a substantial amount of computer memory. The
algorithm which performs the sort is of necessity of order
n^2. In general, every plane segment must be compared
against every other plane segment. To shorten this process,
however, a numbering scheme was employed so that, for
example, the sides of a cube would not be compared since it
would be impossible for them to cover each other. Similarly
it is not necessary to compare the five visible sides of a
ten sided cylinder.
The comparison of two plane elements to determine if
one of them overlaps or covers the other is accomplished by
applying a series of tests. The first test is to determine
whether a bounding box placed around the projection of one
of the objects is disjoint from a bounding box placed around
the projection of a second object. If the bounding boxes do
not overlap, then it is not possible for the two objects to
overlap and the comparison is complete.
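The bounding box test can be sketched as follows (a minimal Python illustration; the function names are assumptions):

```python
def bbox(points):
    """Axis-aligned bounding box (xmin, ymin, xmax, ymax) of a projected
    plane segment."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def boxes_disjoint(a, b):
    """First test of section 2.5: disjoint bounding boxes mean the two
    segments cannot overlap on the screen, so the comparison ends here."""
    return a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1]
```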

If the two bounding boxes are not disjoint then all
the points of one object are substituted into the equation
of the infinite plane that contains the second object. If
the resulting value of the equation for all points is
greater than zero (assuming that the viewer's side of the
infinite plane is positive), then the first object may cover
the second object. Similarly, the points of the second
object are substituted into the equation of the infinite
plane containing the first object. Again whether the sign
of the equation is greater or less than zero determines
whether one object may overlap the other. Shown in Table
2-2 are all the possible combinations of signs that may
occur. Figure 2-10 shows a representative sample of the
types of overlap conditions that can occur for two plane
segments.
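The sign classification used by this test can be sketched as follows (Python; the treatment of points lying exactly on the plane is an assumption of this sketch):

```python
def classify(vertices, plane):
    """Substitute a segment's vertices into the other segment's infinite
    plane a*x + b*y + c*z + d = 0 (viewer's side positive) and return the
    Table 2-2 label '+', '-', or '+/-'.  Points lying exactly on the
    plane are lumped with '-' in this sketch."""
    a, b, c, d = plane
    signs = {a * x + b * y + c * z + d > 0 for (x, y, z) in vertices}
    if signs == {True}:
        return '+'
    if signs == {False}:
        return '-'
    return '+/-'
```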
If it is concluded from the previous test that no
overlap can occur, then the comparison is complete. However
if an overlap may occur, then the projections of the two
objects onto the screen are checked to determine if they do
indeed overlap. This is done by determining whether any
lines of either of the two projections cross. If any of the
lines do cross, then the plane segments do overlap. If none
of the lines cross, then it may be the case that one of the
projections lies completely inside the other. One point of
each of the projected plane segments is checked to determine
whether it is inside the other projected polygon.
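The line-crossing test can be sketched as follows (Python; a standard orientation-sign test, ignoring touching endpoints and collinear edge cases for brevity):

```python
def segs_cross(p1, p2, q1, q2):
    """Do the 2-D segments p1-p2 and q1-q2 cross?  Used in section 2.5 to
    decide whether two projected polygons actually overlap on the screen."""
    def side(a, b, c):
        # Sign of the cross product (b - a) x (c - a): which side of line
        # a-b the point c lies on.
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (side(p1, p2, q1) * side(p1, p2, q2) < 0 and
            side(q1, q2, p1) * side(q1, q2, p2) < 0)
```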

Table 2-2: Plane Segment Sorting Cases

This table indicates whether all the vertices of plane
segment 1 are on the origin side (+ side) or the opposite
side (- side) of the infinite plane containing plane segment
2. Similarly, the vertices of plane segment 2 are compared
to the infinite plane containing plane segment 1.

    segment 1    segment 2    result
       +            +         no overlap
       -            -         no overlap
       +            -         1 may overlap 2
       +           +/-        1 may overlap 2
      +/-           -         1 may overlap 2
       -            +         2 may overlap 1
       -           +/-        2 may overlap 1
      +/-           +         2 may overlap 1
      +/-          +/-        overlap may occur

Figure 2-10: Plane Segments with Infinite Planes.
a) segment 1 (+), segment 2 (+) ;
b) segment 1 (+), segment 2 (+/-) ;
c) segment 1 (-), segment 2 (+/-) .

Clearly, the comparison task is lengthy and time
consuming. The case of two objects whose bounding boxes are
not disjoint and yet do not actually overlap takes
considerable time. In addition, the equation of the
infinite plane for each plane segment had to be calculated
for each image to be drawn based on the position of the
robot and of the viewer. On the average, for a particular
drawing of the T3-776 robot there are 85 plane segments to
compare and sort. Due to this large number, the execution
time of this algorithm on a VAX 11/750 computer is
approximately 10 seconds per drawing.
Clearly, the sorting of plane segments in software will
not allow images to be generated rapidly enough to provide
proper animation. An improvement by at least a factor of
100 is necessary in order to reach the minimum animation
rate of 10 frames per second. A second drawback of the
algorithm is that it will fail if there exists a cyclic
overlap of plane segments. For example, if segment A
overlaps B which overlaps C which in turn overlaps segment
A, then the algorithm as written will fall into a recursive
trap. This problem can be corrected in software, but the
additional calculations will only serve to further increase
the computation time.
A solution to the problem was found via application of
special purpose computer graphics hardware. The animation
program was modified to run on a Silicon Graphics IRIS model
2400 workstation. Proper modifications of the database and

sorting method to take advantage of the hardware
improvements resulted in the rapid generation of full color,
solid images at a rate of over 10 frames per second. The
hardware system and software modifications will be detailed
in the following sections of this chapter.
2.6 Description of Hardware System
The Silicon Graphics IRIS model 2400 workstation is a
68010 based 3-D system designed to function as a stand-alone
graphics computer. It is capable of generating three
dimensional, solid color images in real time without the
need for a main frame computer.
The unique component of the IRIS is a custom VLSI chip
called the Geometry Engine. A pipeline of twelve Geometry
Engines accepts points, vectors, and polygons in user
defined coordinate systems and transforms them to the screen
coordinate system at a rate of 69,000 3-D floating point
coordinates per second.
The display of the IRIS system is a 19 inch monitor
with a screen resolution of 1024 pixels on each of 768
horizontal lines. The monitor is refreshed at a rate of 60
Hz and provides flicker free images. The image memory
consists of eight 1024 x 1024 bit planes (expandable to 32
bit planes). An eight bit plane system will allow for 2^8
(256) colors to be displayed simultaneously on the screen.

Animation is obtained by setting the IRIS system into
double buffer mode. In this mode, half of the bit planes
are used for screen display and half for image generation.
In other words, while the user is observing an image
(contained on the front 4 bit planes), the next image is
being drawn on the back 4 bit planes. When the image is
complete, the front and back sets of bit planes are swapped
and the user sees the new picture. The complexity of the
image to be drawn governs the speed at which the bit planes
are swapped. Experience has shown that the swapping should
occur at a rate no slower than 8 Hz in order to result in
satisfactory animation.
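The double buffering scheme can be sketched in miniature as follows (Python with stand-in draw and swap steps; these are not the IRIS library's actual calls):

```python
# Double-buffering sketch: draw into the back buffer while the front
# buffer is displayed, then exchange the two, as section 2.6 describes.
front = [[0] * 4 for _ in range(4)]   # bit planes currently displayed
back = [[0] * 4 for _ in range(4)]    # bit planes being drawn off-screen

def draw_frame(buf, t):
    """Stand-in for rendering the robot at time step t."""
    for row in buf:
        for i in range(len(row)):
            row[i] = t

frames_shown = []
for t in range(3):
    draw_frame(back, t)          # render the next image off-screen
    front, back = back, front    # swap: the user now sees the new picture
    frames_shown.append(front[0][0])
```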
The one drawback of double buffer mode is that there
are only half as many bit planes available for generating an
image. The reduced number of bit planes further limits the
number of colors that may be displayed on the screen at
once. An IRIS system with only 8 bit planes, such as the
system at the University of Florida, can only display 2^4
(16) colors on the screen at once while in double buffer
mode. It should be noted, however, that a fully equipped
system with 32 bit planes can display 2^16 (65,536) colors
simultaneously in double buffer mode. This capability
should far exceed user requirements in almost all instances.

2.7 Program Modifications
It was previously noted that an increase in performance
by at least a factor of 100 was required in order to produce
images rapidly enough to result in pleasing animation. A
brief description of the graphics software library which is
included with the IRIS system will precede the discussion of
the specific data structure and software modifications which
were made.
2.7.1 IRIS coordinate transformations
The primary task in drawing images on the screen is the
transformation of coordinate values from local coordinate
systems to the screen coordinate system. The IRIS
workstation accomplishes this by manipulating data in terms
of homogeneous coordinates. Four coordinate values, [x, y,
z, w] are used to define the coordinates of a point. What
is normally thought of as the X coordinate value can be
calculated as x/w. Similarly, values for the Y and Z
coordinates of a point are readily determined. The
advantage of using homogeneous coordinates is that
rotations, translations, and scaling can all be accomplished
by 4x4 matrix multiplication.
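The homogeneous transformation can be sketched as follows (Python; the row-vector convention and the bottom-row placement of the translation terms are assumptions of this sketch, consistent with the row form of equation (2.6)):

```python
def xform(p, M):
    """Row homogeneous point [x, y, z, w] times a 4x4 matrix M; the
    Cartesian X value is then x/w (w stays 1 under a parallel projection)."""
    return [sum(p[k] * M[k][j] for k in range(4)) for j in range(4)]

def translate(tx, ty, tz):
    """Translation matrix in the assumed row-vector convention: the
    translation terms occupy the bottom row."""
    return [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [tx, ty, tz, 1]]
```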
The IRIS system constantly keeps track of the current
transformation matrix, M. This matrix represents the
transformation between some local coordinate system and the
screen coordinate system. When any graphics drawing command

is given, as for example 'pnt(50, 20, 40)' which draws a
point at the local position (50, 20, 40), the coordinate
datum is multiplied by the matrix M in order to determine
the screen coordinate values. The transformation is
represented by the following equation:
[x,y,z,w] = [x',y',z',w'] M (2.6)
The basic problem then is to make the matrix M represent the
transformation between the coordinate system attached to
each of the rigid bodies of the robot. For example, when M
represents the transformation between the fixed coordinate
system attached to the base of the robot and the screen
coordinate system, the base of the robot can be drawn in
terms of a series of move and draw commands, all of which
will use local coordinate data as input.
When the IRIS system is initialized, the matrix M is
set equal to the identity matrix. Three basic commands,
translate, rotate, and scale, are called to modify M.
Calling one of the three basic commands causes the current
transformation matrix, M, to be pre-multiplied by one of the
following matrices:
Translate(Tx, Ty, Tz) =

    |  1    0    0    0 |
    |  0    1    0    0 |                            (2.7)
    |  0    0    1    0 |
    |  Tx   Ty   Tz   1 |

Scale(Sx, Sy, Sz) =

    |  Sx   0    0    0 |
    |  0    Sy   0    0 |                            (2.8)
    |  0    0    Sz   0 |
    |  0    0    0    1 |

Rotx(θ) =

    |  1     0       0      0 |
    |  0     cos θ   sin θ  0 |                      (2.9)
    |  0    -sin θ   cos θ  0 |
    |  0     0       0      1 |

Roty(θ) =

    |  cos θ   0   -sin θ   0 |
    |  0       1    0       0 |                      (2.10)
    |  sin θ   0    cos θ   0 |
    |  0       0    0       1 |

Rotz(θ) =

    |  cos θ   sin θ   0   0 |
    | -sin θ   cos θ   0   0 |                       (2.11)
    |  0       0       1   0 |
    |  0       0       0   1 |
With these three basic transformations, it is an easy
matter to cause the matrix M to represent the transformation

from the fixed robot coordinate system to the screen
coordinate system. A translate command can be called to
center the image as desired on the screen, a scale command
will allow for zooming in, and a series of rotate commands
will allow for any desired orientation of the robot base
with respect to the screen. The program is written so that
the parameters to these commands are modified by rolling the
mouse device. In this manner, the user can change the
orientation and scale of the drawing as desired. Since the
images will be drawn in real time, the user has the
capability to interact with the system and alter the viewing
position also in real time.
Once the matrix M represents the fixed coordinate
system, the base of the robot can be drawn. A series of
move and draw commands can be called, using local coordinate
data as input. However, since solid color images are
desired, the order that solid polygons are drawn is of
importance. Because of this, the matrix M is simply saved
and given the name Mf. When any part of the base of the
robot is to be drawn, however, the matrix Mf must be
reinstated as the current transformation matrix M.

The transformation from the matrix Mf to the coordinate
system attached to Body 1 (see Figure 2-3) is a simple task.
The transformation matrix for Body 1, M1, is given by the
following equation:

    M1 = Rotz(φ1) Mf                                 (2.12)

Similarly, the transformation matrices for bodies 2 through
6 are given by the following equations:
    M2 = Rotx(90) Rotz(-θ2) M1                       (2.13)
    M3 = Translate(a23, 0, 0) Rotz(θ3) M2            (2.14)
    M4 = Rotz(θ4) Rotx(90) Translate(0, -S44, 0) M3  (2.15)
    M5 = Rotz(θ5) Rotx(61) M4                        (2.16)
    M6 = Rotz(θ6) Rotx(61) M5                        (2.17)
At this point, all transformation matrices are known
and the image of the robot can be drawn. It is important to
note that the method described here is virtually identical
to that discussed in section 2.3. The improvement, however,
is that all the matrix multiplications required to transform
the coordinates of some point from a local coordinate system
to the screen coordinate system will be accomplished by
specially designed chips. In this way the multitude of
matrix multiplications can be accomplished in a minimal
amount of time.
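The chaining of transformation matrices can be sketched numerically as follows (Python; the joint values are illustrative, Mf is taken as the identity, and the row-vector convention is an assumption of this sketch):

```python
import math

def rotz(deg):
    """Rotation about Z, row-vector convention, angle in degrees."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, s, 0, 0], [-s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def rotx(deg):
    """Rotation about X, row-vector convention, angle in degrees."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[1, 0, 0, 0], [0, c, s, 0], [0, -s, c, 0], [0, 0, 0, 1]]

def translate(tx, ty, tz):
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [tx, ty, tz, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Transformation chain for the first three bodies, following the pattern
# of equations (2.12)-(2.14); phi1, th2, th3 are illustrative values.
Mf = [[float(i == j) for j in range(4)] for i in range(4)]
phi1, th2, th3, a23 = 30.0, 45.0, -20.0, 44.0
M1 = matmul(rotz(phi1), Mf)
M2 = matmul(rotx(90), matmul(rotz(-th2), M1))
M3 = matmul(translate(a23, 0, 0), matmul(rotz(th3), M2))
```

On the IRIS itself this composition is carried out in hardware by the Geometry Engine pipeline rather than in software loops as here.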
2.7.2 IRIS data structure
The data structure of the robot animation program was
also modified in order to take advantage of the unique
capabilities of the IRIS system. As previously noted, the
entire image of the Cincinnati Milacron T3-776 robot can be
formed from a series of n sided polygons. The IRIS graphics

command which draws a solid polygon in the currently defined
color is as follows:
polf (n, parray) (2.18)
where n is an integer which represents the number of sides
of the polygon and parray is an array of size nx3 which
contains the local coordinates of the vertices of the
polygon.
Since all polygons are to be defined in terms of their
local coordinates, all polygons were defined once at the
beginning of the program. For example, there exist 126 four
sided polygons in the representation of the T3-776 robot.
Therefore the variable 'p4array' was declared to be of size
[126][4][3]. Each four sided polygon was given a number
(name) and the local X,Y,Z coordinates of each of the four
points were stored in the array.
An obvious disadvantage of this scheme is that point
data will be duplicated, thereby requiring more computer
memory. For example, a cube is defined by eight points and
six polygons. In the method used, each point will appear in
the variable 'p4array' three times, i.e. as a member of each
of three sides of the cube. The advantage of this method,
however, is that of speed. The datum for a particular
polygon is pre-formatted for immediate use in the 'polf'
command. No additional data manipulation is required.
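The duplicated-vertex layout can be sketched as follows (Python, using a unit cube in place of the robot's polygons; the face-index table is an assumption of this sketch):

```python
# A unit cube stored as six 4-sided polygons, each carrying its own
# copies of the shared corner points, so that a polygon's vertex array
# can be handed directly to a fill routine.  Corner index = 4x + 2y + z.
corners = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
faces = [  # indices into corners, one 4-tuple per cube face
    (0, 1, 3, 2), (4, 5, 7, 6), (0, 1, 5, 4),
    (2, 3, 7, 6), (0, 2, 6, 4), (1, 3, 7, 5),
]
# p4array[i] is the pre-formatted 4x3 vertex list for polygon i
p4array = [[list(corners[k]) for k in face] for face in faces]
```

The memory-for-speed trade is visible directly: 24 stored vertex records for only 8 distinct corner points.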

2.7.3 Backface polygon removal on the IRIS
In section 2.4 a method of backface polygon removal was
discussed. A normal point was selected such that the vector
from the origin of the local coordinate system to the normal
point represented a vector normal to the particular polygon
in question. Transformation of the origin point and the
normal point to the screen coordinate system would determine
if the polygon was facing the viewer.
This method was again used on the IRIS workstation with
slight modification. From observing Figure 2-3, it is
apparent that most of the polygons which form each of the
rigid bodies have one of the coordinate vectors as their
normal vector. Therefore, associated with each polygon is
the character string 'x', 'y', 'z', or 'other'. In this
manner, not every polygon will have to have its normal
vector calculated. Allowing three normal vector
calculations for each of the coordinate axes of each rigid
body (21 total), plus the normal calculations of the 'other'
cases (50 total), the normal vector calculations have been
reduced from a previous total of 237 to the new total of 71.
Knowing the transformation matrix, Mi, for each of the
rigid bodies, the transformation of the normal points could
be carried out in software via matrix multiplication. This
process, however, would be too time consuming and would
greatly slow down the animation rate. An alternative method
was found whereby the Geometry Engine chips of the IRIS
workstation could be used to perform the matrix
multiplication in hardware.

The IRIS workstation is placed in "feedback mode."
When in feedback mode, graphics commands are not drawn on
the screen, but rather data items are stored in a buffer.
The command 'xfpt' accepts the local coordinates of a point
as input. The homogeneous coordinates [x, y, z, w] of the
point in terms of the screen coordinate system are stored as
output in the buffer. The Z value of the normal point (z/w)
is compared with the Z value of the origin point of the
local system after both points have been transformed to the
screen coordinate system by the Geometry Engine.
Comparisons of these Z values determines whether the normal
vector is pointing towards the viewer and thereby determines
if a particular polygon is visible. It should be noted that
when a parallel projection is used, as it is in this
example, that the homogeneous coordinate 'w' will always
equal 1 and that the division is therefore not necessary.
2.7.4 Modified sorting of plane segments
After backfacing polygons are removed, the remaining
plane segments must be drawn on the screen in proper order
so that polygons closer to the viewer are drawn last.
Section 2.5 detailed a method for accomplishing this
sorting. A lengthy series of tests were made to compare
every pair of plane segments. Although the sorting
algorithm produced correct results, the computational time
was unacceptable.

A new and simplified method was developed for use on
the IRIS workstation. Once all backfacing polygons are
removed, what remains is a collection of objects. It was
desired to compare and sort the objects, not the individual
plane segments which compose the objects. An object is
defined as a collection of plane segments which cannot
overlap each other. An example of an object is the base
plate of the robot. A second example is the large box
shaped object in the base which rests on top of the base
plate. These examples point out that each of the rigid
bodies shown in Figure 2-3 are composed of a collection of
objects.
Once all objects were defined, a series of rules was
generated which describes how the image of the robot is to
be drawn. An example of such a rule is as follows:
If I am looking from above the robot, then the base
plate must be drawn before the box which rests on
top of it.
The 'if clause' of the above rule will be true if the X axis
of the fixed coordinate system is pointing towards the
viewer. This information is already known since it was
required in the determination of which polygons were
backfacing. Similar rules (again based on previously
calculated facts) make it possible to sort the objects
quickly and correctly. A total number of 12 basic rules

were required to produce accurate images of the robot.
These 12 rules form the basis of a forward chaining
production system. It must be re-emphasized that the
correct ordering can be accomplished in a negligible amount
of time because all data required by the 'if clause' of each
rule were calculated previously.
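The rule-driven ordering can be sketched as follows (Python; the two rules shown are hypothetical stand-ins, since the dissertation's 12 rules are not listed here):

```python
# Forward-chaining sketch: facts computed earlier (during backface
# removal) drive rules whose 'if clauses' fix the drawing order of
# whole objects.  The rule set below is illustrative only.
def draw_order(facts):
    rules = [
        # (condition on known facts, object drawn earlier, object drawn later)
        (lambda f: f["looking_from_above"], "base_plate", "base_box"),
        (lambda f: not f["looking_from_above"], "base_box", "base_plate"),
    ]
    order = []
    for cond, first, second in rules:
        if cond(facts):
            for obj in (first, second):
                if obj not in order:
                    order.append(obj)
    return order
```

Because every condition reads only pre-computed facts, the ordering pass itself costs a negligible amount of time, as the text notes.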
2.8 Results and Conclusions
The resulting representation of the Cincinnati Milacron
T3-776 robot is shown in Figures 2-11 and 2-12. Pictures
are generated at a rate of 10 frames per second which
results in satisfactory animation. As previously stated,
the user is capable of interacting with the system in real
time to alter the viewing position, freeze the motion, or to
zoom in.
Many applications exist for such a graphics system.
Two particular applications, the control of teleoperated
robots and the off line planning of robot tasks, will be
presented in Chapters IV and V. Additional applications in
the areas of designing workstations, and the evaluation of
new robot designs (based on factors such as workspace
envelope, dexterity capability, and interference checking)
make such a graphics system a valuable tool.

Figure 2-11: Animated Representation of T3-776 Manipulator
Figure 2-12: Animated Representation of T3-776 Manipulator

CHAPTER III
PATH GENERATION
3.1 Introduction and Objective
This chapter is concerned with the calculation of a
series of joint angles for a robot manipulator which will
cause the manipulator to move along a user specified path.
These calculations will serve as input to the robot
animation program described in the previous chapter. In
this manner, the user will be able to observe and evaluate
the robot motion on the graphics screen prior to any
movement of the actual robot manipulator. As with the
previous chapter, the specific application to the Cincinnati
Milacron T3-776 manipulator will be presented in detail.
The first problem to be considered will be the reverse
kinematic analysis of the robot manipulator. This analysis
determines the necessary joint displacements required to
position and orient the end effector of the robot as
desired. The problem of path generation is then reduced to
the careful selection of robot positions and orientations
along some desired path. A reverse kinematic analysis is
then performed for each of these locations.

3.2 Notation
The notation used throughout this analysis is that
developed by J. Duffy as presented in reference [21].
Briefly stated, a manipulator is composed of a series of
rigid links. One such link is shown in Figure 3-1. In this
figure it is shown that the link connects the two kinematic
pair (joint) axes Si and Sj. The perpendicular distance
between the pair axes is aij and the vector along this
mutual perpendicular is aij. The twist angle between the
pair axes is labelled αij and is measured in a right handed
sense about the vector aij.
The particular kinematic pair under consideration is
the revolute joint which is shown in Figure 3-2. The
perpendicular distance between links, or more specifically
the perpendicular distance between the vectors aij and ajk,
is labelled as the offset distance Sjj. The relative angle
between two links is shown as θj and is measured in a right
handed sense about the vector Sj.

Four types of parameters, i.e. joint angles (θj), twist
angles (αij), offsets (Sjj) and link lengths (aij), describe
the geometry of the manipulator. It is important to note
that for a manipulator comprised of all revolute joints,
only the joint displacement angles are unknown
quantities. The twist angles, offsets, and link lengths
will be known constant values. Furthermore, the values for
the sine and cosine of a twist angle αij and an angular
joint displacement θj may be obtained from the equations

Figure 3-1: Spatial Link

Figure 3-2: Revolute Pair

    cij = Si · Sj                                    (3.1)

    sij = | Si  Sj  aij |                            (3.2)

    cj = aij · ajk                                   (3.3)

    sj = | aij  ajk  Sj |                            (3.4)
Determinant notation is used to denote the scalar triple
product.
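The sine and cosine relations for twist and joint angles can be sketched numerically as follows (Python; the dot- and triple-product forms follow Duffy's notation and are stated here as an assumption of this sketch):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triple(u, v, w):
    """Scalar triple product u . (v x w), i.e. the determinant with
    rows u, v, w."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

def twist_sin_cos(Si, Sj, aij):
    """Sine and cosine of the twist angle alpha_ij from the unit joint
    vectors and their mutual perpendicular (an assumed form of the
    relations referenced above)."""
    return triple(Si, Sj, aij), dot(Si, Sj)
```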
3.3 Mechanism Dimensions of the T3-776 Manipulator
Shown in Figure 3-3 is a sketch of the T3-776 robot.
The robot is described by the manufacturer as consisting of
a three roll wrist connected to ground by an elbow,
shoulder, and base revolute joint. Shown in Figure 3-4 is a
skeletal drawing of the manipulator. The three roll wrist
is modeled by a series of three revolute joints whose axes
of rotation all intersect at a point. The elbow, shoulder,
and base joints are each modeled by a revolute joint such
that the axis of rotation of the shoulder and elbow are
parallel.
In the skeletal model the joint axes are labeled
sequentially with the unit vectors S^ (i = 1,2..6). The
directions of the common normal between two successive joint
axes £S^ and are labeled with the unit vectors (ij =
12,23,..67). It must be noted that only the vectors a^ and
23 are shown in Figure 3-4 for simplicity of the diagram.

Figure 3-3: Cincinnati Milacron T3-776 Manipulator

Figure 3-4: Skeletal Model of T3-776 Manipulator

As previously stated, the link lengths aij, the offsets
Sjj, and the twist angles αij are constants which are
specific to the geometry of a particular manipulator. The
values of these constants are tabulated below for the T3-776
robot.

    S11 = *          a12 = 0 in.       α12 = 90 deg.
    S22 = 0 in.      a23 = 44.0 in.    α23 = 0 deg.
    S33 = 0 in.      a34 = 0 in.       α34 = 90 deg.
    S44 = 55.0 in.   a45 = 0 in.       α45 = 61 deg.
    S55 = 0 in.      a56 = 0 in.       α56 = 61 deg.
                                                         (3.5)
    * S11 will be computed subsequently
In addition to the above constant dimensions, S66 and
a67 are selected such that the point at the end of the vector
a67 is the point of interest of the tool connected to the
manipulator. For example this point may be the tip of a
welding rod that the manipulator is moving along a path.
Once a particular tool is selected, constant values for S66
and a67 are known.

Furthermore it is noted that the link lengths a34,
a45, and a56 equal zero. However it is still necessary to
specify the direction of the unit vectors a12, a34, a45, and
a56 in order to have an axis about which to measure the
corresponding twist angles. The vector aij must be
perpendicular to the plane defined by the vectors Si and Sj,
and as such can have two possible directions. For the
vectors a12, a34, a45, and a56 this direction is arbitrarily
selected as the direction parallel to the vector Si × Sj. The
values for the corresponding twist angles α12, α34, α45, and
α56 listed in (3.5) were determined based upon this
convention.
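For later reference, the constants of set (3.5) can be collected in one place. The sketch below simply restates the tabulated values (S11 is omitted because it is computed during the loop closure of section 3.4.2); the dictionary name and key spellings are assumptions:

```python
# T3-776 constant mechanism parameters from set (3.5).
# Link lengths and offsets in inches, twist angles in degrees.
T3_776 = {
    "a12": 0.0,  "alpha12": 90.0, "S22": 0.0,
    "a23": 44.0, "alpha23": 0.0,  "S33": 0.0,
    "a34": 0.0,  "alpha34": 90.0, "S44": 55.0,
    "a45": 0.0,  "alpha45": 61.0, "S55": 0.0,
    "a56": 0.0,  "alpha56": 61.0,
}
```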
3.4. Reverse Displacement Analysis
For the reverse displacement analysis the position and
orientation of the hand of the manipulator are specified.
It is desired to determine the necessary values for the
relative displacements of the joints that will position the
hand as desired, i.e. to determine sets of values for the six
quantities φ1, θ2, θ3, θ4, θ5, and θ6. The analysis
is complicated by the fact that there is most often more
than one set of displacements which will place the hand in
the desired position. An advantage of this reverse
displacement analysis is that all displacement sets will be
determined, as opposed to an iteration method which would
find only one set of joint displacements.

It turns out that for the T3-776 robot there are a
maximum of four possible sets of angular displacements which
will position and orient the hand as specified. The unique
geometry of the robot, that is S2 parallel to S3 and
S4, S5, S6 intersecting at a point, allows for eight possible
sets. However the limits of rotation of the first three
joints reduce the solution to a maximum of four. The
limits of rotation for the angles φ1, θ2, and θ3 (see
Figure 3-4) are as follows:

    -135 ≤ φ1 ≤ 135   (degrees)
      30 ≤ θ2 ≤ 117
     -45 ≤ θ3 ≤ 60
3.4.1 Specification of position and orientation
The first step of the analysis is to establish a fixed
coordinate system. For this analysis a fixed coordinate
system is established as shown in Figure 3-4 such that the
origin is at the intersection of the vectors S1 and S2. The
Z axis is chosen to be parallel to the S1 vector and the X
axis bisects the allowable range of rotation of the angle
φ1. Throughout the rest of this analysis, this coordinate
system will be referred to as the fixed coordinate system.

Using this fixed coordinate system it is possible to
specify the location and orientation of the hand by
specifying the vector to the tool tip, RP1, (see Figure 3-5)
and the direction cosines of the vectors S6 and a67.
Although RP1, S6, and a67 have a total of nine components,
the latter two are related by the three conditions

    S6 · S6 = 1
    a67 · a67 = 1
    S6 · a67 = 0

Figure 3-5: Hypothetical Closure Link

so that the three vectors (RP1, S6, a67) represent the 9−3 = 6
independent parameters necessary to locate a rigid body in
space.
3.4.2 Closing the loop
Once the position and orientation of the hand is
specified, the manipulator is connected to ground by a
hypothetical link. The problem of determining the sets of
joint displacements to position and orient the hand is thus
transformed to the problem of analyzing an equivalent
spatial mechanism with mobility equal to one. The concept
of closing the loop is not new. Pieper and Roth [22] were
the first to point out the implicit correspondence between
manipulators and spatial mechanisms using homogeneous
transfer matrices. The method of closing the loop which is
presented here was published by Duffy and Lipkin in
reference [23].

It is now necessary to determine the five constraint
parameters S77, a71, S11, α71, and θ7 that complete
the loop, together with the input angle of the spatial
mechanism, θ1. The first step of the completion algorithm
is to establish a direction for the unit vector S7. This
vector will act as the axis of rotation of the hypothetical
revolute joint which serves to close the loop. The
direction of S7 is arbitrary so long as it lies in the plane
which is perpendicular to a67. For this analysis S7 is
selected such that α67 equals 90 degrees and thus the
direction cosines of S7 may be obtained from the equation

    S7 = a67 × S6                                        (3.6)

With S7 now known, application of (3.1) gives the following
expression for c71:

    c71 = S7 · S1                                        (3.7)

A value of c71 = ±1 immediately flags a singularity which will
be discussed subsequently. The unit vector a71 is now
defined by

    a71 = (S7 × S1) / |S7 × S1|                          (3.8)

and thus by application of (3.2),

    s71 = | S7  S1  a71 |                                (3.9)

Utilizing the vector loop equation (see Figure 3-5),

    RP1 + S77 S7 + a71 a71 + S11 S1 = 0                  (3.10)

results in explicit expressions for the hypothetical link
length a71 and the hypothetical offsets S77 and S11:

    S77 = -| RP1  S1  a71 | / s71                        (3.11)
    a71 = | S7  RP1  S1 | / s71                          (3.12)
    S11 = | S7  a71  RP1 | / s71                         (3.13)

Utilizing the results of (3.8) and equations (3.3) and (3.4)
gives the following expressions for the sine and cosine of
θ7:

    c7 = a67 · a71                                       (3.14)
    s7 = | a67  a71  S7 |                                (3.15)

In addition, by projection the expressions for the sine and
cosine of (θ1 − φ1) are

    cos(θ1 − φ1) = a71 · i                               (3.16)
    sin(θ1 − φ1) = | a71  i  S1 |                        (3.17)

where i is the unit vector in the direction of the X axis.
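The closure computation of equations (3.6) through (3.13) can be sketched as follows, assuming S1 = (0, 0, 1) in the fixed frame and vectors stored as 3-tuples (the helper and function names are assumptions, not from the text):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def triple(u, v, w):
    # Scalar triple product | u v w |
    return dot(u, cross(v, w))

def close_the_loop(R_P1, S6, a67, S1=(0.0, 0.0, 1.0)):
    S7 = cross(a67, S6)                    # (3.6), since alpha67 = 90 deg
    c71 = dot(S7, S1)                      # (3.7)
    w = cross(S7, S1)
    n = math.sqrt(dot(w, w))
    if n < 1e-9:                           # c71 = +/-1: singular case
        raise ValueError("S7 parallel to S1; impose S77 = 0 instead")
    a71 = tuple(x / n for x in w)          # (3.8)
    s71 = triple(S7, S1, a71)              # (3.9)
    S77 = -triple(R_P1, S1, a71) / s71     # (3.11)
    len_a71 = triple(S7, R_P1, S1) / s71   # (3.12)
    S11 = triple(S7, a71, R_P1) / s71      # (3.13)
    return S7, a71, S77, len_a71, S11
```

Substituting the results back into the loop equation (3.10) should return the zero vector, which provides a quick numerical check of the closure.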
It was mentioned earlier that a singular condition is
flagged when c71 = ±1 (and therefore s71 = 0). This occurs
when the vectors S7 and S1 are either parallel or antiparallel
and there are thus an infinity of possible links a71 which
are perpendicular to both S7 and S1. However the constraint
S77 = 0 can be imposed to obtain a unique result as shown in
Figure 3-6. Forming the inner product of (3.10) with S1
yields

    S11 = -RP1 · S1                                      (3.18)

Figure 3-6: Hypothetical Closure when S1 || S7

Further, from equation (3.10),

    a71 a71 = -(RP1 + S11 S1)                            (3.19)

and provided that a71 ≠ 0,

    a71 = -(RP1 + S11 S1) / a71                          (3.20)
The remaining angles θ7 and (θ1 − φ1) can again be
calculated using equations (3.14) through (3.17).

Finally, when the axes of S1 and S7 are collinear, the
condition a71 = 0 flags a further singularity. The direction
of the unit vector a71 in the plane normal to the axis of S1
is now arbitrary. In this case it is convenient to impose
the additional constraint that θ7 = 0, making a71 equal to
a67. The remaining angle (θ1 − φ1) can again be calculated
using (3.16) and (3.17).

Equations (3.7) through (3.17) plus the special
analysis developed for S1 parallel to S7 determine all the
necessary parameters of the hypothetical closure link which
is shown in Figure 3-5. In addition, a unique value for the
angle (θ1 − φ1) has been determined. Thus the reverse
displacement solution of the open chain manipulator has been
transformed to the solution of a closed loop mechanism with
a known input angle θ1. Well documented methods for
analyzing the closed loop mechanism can thus be used to
determine all possible sets of joint displacements which can
position the hand as specified.
3.4.3 Determination of φ1, θ2, and θ3
At this point the next step of a standard reverse
position analysis would be to analyze the closed loop
mechanism formed by the addition of the hypothetical closure
link to the open chain manipulator. However due to the
relatively simple geometry of the T3-776 robot, a shorter
and more direct approach will be taken.

It should be noted from Figure 3-4 that since the
direction cosines of the vectors S6 and a67 are known in the
fixed coordinate system together with the length of offset
S66 and link a67, the coordinates of point P2, the
center of the three roll wrist, are readily known. The
vector RP2 (see Figure 3-7) from the origin of the fixed
coordinate system to point P2 is given by

    RP2 = RP1 - S66 S6 - a67 a67                         (3.21)

It is also shown in Figure 3-7 that the vectors RP2, a12,
a23, S4, and S1 all lie in the same plane. This is due to
the unique geometry of this robot whereby the vectors S2 and
S3 are parallel. Because of this, simple planar
relationships can be utilized to determine the three
relative displacement angles φ1, θ2, and θ3.

Figure 3-7: Location of Wrist Point

The angle φ1 is defined as the angle between the fixed
X axis and the vector a12, measured as a positive twist about
the vector S1. Application of equations (3.3) and (3.4)
gives the following expressions for the sine and cosine of
φ1:

    cos φ1 = i · a12                                     (3.22)
    sin φ1 = | i  a12  S1 |                              (3.23)

The only unknown in these equations is the vector a12.
Since the vectors RP2, a12, and S1 all lie in the same
plane, it must be the case that a12 is either parallel or
antiparallel to the projection of RP2 on the XY plane. Thus
the vector a12 is given by

    a12 = ± [RP2 - (RP2 · S1) S1] / |RP2 - (RP2 · S1) S1|        (3.24)

Substitution of the two possible values of a12 into (3.22)
and (3.23) will result in two possible distinct values for
φ1, and it can be shown that these two values will differ by
180 degrees. It is apparent that one of the calculated
values of φ1 may not fall in the allowable range of
rotation of -135 to +135 degrees. If this occurs then there
is an immediate reduction from a maximum of four to a
maximum of two possible configurations of the robot which
will position the hand as specified.
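The wrist-point computation (3.21) and the two candidate values of φ1 implied by (3.22)-(3.24) can be sketched as below; this is a minimal illustration (function names are assumptions), using `atan2` to combine the sine and cosine expressions:

```python
import math

def wrist_point(R_P1, S6, a67, S66, a67_len):
    # (3.21): R_P2 = R_P1 - S66*S6 - a67*a67
    return tuple(p - S66 * s - a67_len * a
                 for p, s, a in zip(R_P1, S6, a67))

def phi1_candidates(R_P2):
    # (3.24): a12 is parallel or antiparallel to the projection of
    # R_P2 on the XY plane, so the two values of phi1 differ by 180 deg.
    phi = math.atan2(R_P2[1], R_P2[0])
    return phi, phi + math.pi
```

Each candidate must still be tested against the -135 to +135 degree limit on φ1 described above.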

Utilizing equations (3.16) and (3.17) gives the
following expressions for the sine and cosine of θ1:

    cos θ1 = cos(θ1 − φ1) cos φ1 − sin(θ1 − φ1) sin φ1   (3.25)
    sin θ1 = sin(θ1 − φ1) cos φ1 + cos(θ1 − φ1) sin φ1   (3.26)

It must be emphasized that θ1 is defined as the relative
angle between the vector a12 and the hypothetical link a71
defined in the previous section (see Figure 3-5). As such,
θ1 is calculated at this time for subsequent use in the
determination of the angles in the wrist (θ4, θ5, and
θ6).
Before proceeding with the analysis it is important to
note that the angles φ2 and φ3 (see Figure 3-4) are used
in addition to the second and third actuator joint
displacements θ2 and θ3. These angles are related by the
following equation:

    θj = φj - 90 deg.   (j = 2,3)                        (3.27)

The cosine of angle φ3 is calculated by applying the
cosine law to the planar triangle shown in Figure 3-8. The
resulting expression is

    cos φ3 = (a23^2 + S44^2 - |RP2|^2) / (2 a23 S44)     (3.28)

Two corresponding values for the sine of φ3 are obtained
from the equation

Figure 3-8: Determination of 2nd and 3rd Joint Angles

    sin φ3 = ±(1 - cos^2 φ3)^(1/2)                       (3.29)
Thus there are two values of φ3 which can position the
point P2 as specified, and these two possibilities are
referred to as the "elbow up" and "elbow down" orientations.
However due to the limit on the rotation of φ3, i.e. 45 ≤
φ3 ≤ 150 degrees, only one value of φ3 and thus unique
values for the sine and cosine of φ3 will be possible.
From (3.27) the sine and cosine of θ3 are given by

    c3 = cos(φ3 - 90) = sin φ3                           (3.30)
    s3 = sin(φ3 - 90) = -cos φ3                          (3.31)

Equations (3.30) and (3.31) will be used subsequently in
expressions for the angles in the wrist of the manipulator.

A solution for the unique value of θ2 and thereby φ2,
which corresponds to each pair of angles φ1 and φ3,
is obtained by use of projection. It is shown in Figure 3-8
that the vector RP2 can be written as

    a23 a23 + S44 S4 = RP2                               (3.32)

Projecting this equation upon a12 and then S1 gives the
following two equations:

    a23 a23 · a12 + S44 S4 · a12 = RP2 · a12             (3.33)
    a23 a23 · S1 + S44 S4 · S1 = RP2 · S1                (3.34)

The right sides of both the above equations are known.
Expanding the scalar products on the left sides of (3.33)
and (3.34) gives

    a23 c2 - S44 cos(θ2 + φ3) = RP2 · a12                (3.35)
    a23 s2 - S44 sin(θ2 + φ3) = RP2 · S1                 (3.36)

Expanding the sine and cosine of (θ2 + φ3) and regrouping
gives

    c2 [a23 - S44 cos φ3] + s2 [S44 sin φ3] = RP2 · a12  (3.37)
    c2 [-S44 sin φ3] + s2 [a23 - S44 cos φ3] = RP2 · S1  (3.38)

Using Cramer's rule to solve for the sine and cosine of θ2
and recognizing that

    |RP2|^2 = a23^2 + S44^2 - 2 a23 S44 cos φ3

gives

    c2 = [(a23 - S44 cos φ3)(RP2 · a12) -
          (S44 sin φ3)(RP2 · S1)] / |RP2|^2              (3.39)

    s2 = [(a23 - S44 cos φ3)(RP2 · S1) +
          (S44 sin φ3)(RP2 · a12)] / |RP2|^2             (3.40)

Equations (3.39) and (3.40) result in a unique value for θ2
corresponding to each pair of calculated values of φ1 and
φ3. From (3.27) the sine and cosine of φ2 can be written
as

    cos φ2 = cos(θ2 + 90) = -s2                          (3.41)
    sin φ2 = sin(θ2 + 90) = c2                           (3.42)
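The planar elbow solution of equations (3.28) through (3.40) can be sketched as below, where r1 = RP2 · a12 and r2 = RP2 · S1 are the two planar components of the wrist vector. This is a sketch only: it takes the positive root of (3.29), uses the T3-776 values of a23 and S44 as defaults, and the function name is an assumption:

```python
import math

def elbow_angles(r1, r2, a23=44.0, S44=55.0):
    d2 = r1 * r1 + r2 * r2                                # |R_P2|^2
    c3 = (a23 ** 2 + S44 ** 2 - d2) / (2.0 * a23 * S44)   # (3.28)
    s3 = math.sqrt(1.0 - c3 * c3)                         # (3.29), + root
    A = a23 - S44 * c3
    B = S44 * s3
    c2 = (A * r1 - B * r2) / d2                           # (3.39)
    s2 = (A * r2 + B * r1) / d2                           # (3.40)
    return math.atan2(s3, c3), math.atan2(s2, c2)         # phi3, theta2
```

The remaining angles φ2 and θ3 then follow from (3.27) and (3.30)-(3.31), (3.41)-(3.42).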
As before, each calculated value of θ2 must be checked to
see if it is in the allowable range of rotation. If it is
not, then the maximum number of possible configurations of
the robot which can position the hand as specified is
further reduced.

At this point up to two sets of the three displacement
angles φ1, θ2, and θ3 are known which position the point
P2 as required. However if there were no joint angle
limitations at joints 1, 2, and 3, there would be four
possible sets of displacements which would position point P2
as required. This reduction from four sets of values to a
maximum of two possible sets is significant in that it
reduces the computational time involved in the reverse
position analysis of the T3-776 robot.
3.4.4 Analysis of wrist
The remaining task of the reverse displacement analysis
is to determine the values of the three angles in the wrist
which correspond to each set of values of the angles φ1,
θ2, and θ3. Complete rotation is possible about each of
the three axes S4, S5, and S6 so that the result will not be
affected by joint limitations as in the previous section.

Figure 3-9 shows a more detailed view of the three roll
wrist. It is important here to reemphasize how θ4, θ5,
and θ6 are measured. Each of the angles θj (j = 4,5,6) is
measured by a right handed rotation of the link aij into the
link ajk about the vector Sj.

Two moving coordinate systems are attached to the robot
as shown in Figure 3-10 such that x' always points along a12
and z' points along S1, and analogously x'' always points
along a45 and z'' along S4. The relationship between the
fixed XYZ system and the moving x'y'z' system is given by
the following equations:

    x' = x cos φ1 + y sin φ1
    y' = -x sin φ1 + y cos φ1                            (3.43)
    z' = z


Figure 3-10: Moving and Fixed Coordinate Systems

The direction cosines of the vector S6, which are given
quantities in the fixed system, can now be written in the
moving x'y'z' system by application of (3.43):

    x6' = x6 cos φ1 + y6 sin φ1
    y6' = -x6 sin φ1 + y6 cos φ1                         (3.44)
    z6' = z6

where x6, y6, z6 and x6', y6', z6' are respectively the
direction cosines of S6 in the fixed and moving systems.

The direction cosines of S6 in the second moving
coordinate system, x6'', y6'', z6'', are related to the direction
cosines of S6 in the first moving coordinate system by three
successive applications of the rotation matrix

    [  cj       sj cij    sj sij ]
    [ -sj       cj cij    cj sij ]                       (3.45)
    [  0        -sij      cij    ]

which yields

    [ x6'' ]                 [ x6' ]
    [ y6'' ] = A34 A23 A12   [ y6' ]                     (3.46)
    [ z6'' ]                 [ z6' ]

Substituting the values for α12, α23, and α34 from set (3.5)
into (3.46) gives

    [ x6'' ]   [  c4 c2+3   -s4    c4 s2+3 ] [ x6' ]
    [ y6'' ] = [ -s4 c2+3   -c4   -s4 s2+3 ] [ y6' ]     (3.47)
    [ z6'' ]   [  s2+3       0    -c2+3    ] [ z6' ]

where s2+3 and c2+3 represent the sine and cosine of
(θ2 + θ3). As already stated, the abbreviations sj and cj
in (3.45) denote the sine and cosine of θj, which measures
the relative angular displacement between successive links
aij and ajk. At this point all parameters on the right side
of equation (3.47) are known with the exception of the sine
and cosine of θ4.
Alternate expressions for the direction of S6 in the
second moving coordinate system may be obtained by simple
projection. These relationships are as follows:

    [ x6'' ]   [ x5 ]
    [ y6'' ] = [ y5 ]                                    (3.48)
    [ z6'' ]   [ z5 ]

where

    x5 = s56 s5
    y5 = -(s45 c56 + c45 s56 c5)                         (3.49)
    z5 = c45 c56 - s45 s56 c5

Equating the right sides of (3.47) and (3.48) and rearranging gives
the following three equations:

    x5 = c4 (x6' c2+3 + z6' s2+3) + s4 (-y6')            (3.50)
    y5 = -s4 (x6' c2+3 + z6' s2+3) - c4 y6'              (3.51)
    z5 = x6' s2+3 - z6' c2+3                             (3.52)

Equation (3.52) is significant in that it contains c5 as its
only unknown. Substituting for z5 from set (3.49) and
solving for c5 gives

    c5 = (c45 c56 - x6' s2+3 + z6' c2+3) / (s45 s56)     (3.53)

Equation (3.53) gives two possible values for θ5 and these
two values determine two configurations of the wrist for a
specified end effector position.

The next task is to find the corresponding value for
θ4 for each value of θ5. This is readily accomplished by
utilizing Cramer's rule to solve for s4 and c4 in equations
(3.50) and (3.51). The resulting equations are as follows:

    c4 = [x5 (x6' c2+3 + z6' s2+3) - y5 y6'] /
         [(x6' c2+3 + z6' s2+3)^2 + y6'^2]               (3.54)

    s4 = -[y5 (x6' c2+3 + z6' s2+3) + x5 y6'] /
         [(x6' c2+3 + z6' s2+3)^2 + y6'^2]               (3.55)

It is interesting to note that both the denominator and
numerator of (3.54) and (3.55) vanish simultaneously when

    y6' = 0   and   x6' c2+3 + z6' s2+3 = 0              (3.56)

This constitutes what will be defined as an algebraic
indeterminacy for θ4. It can be shown that these
relationships are only satisfied simultaneously when S6 is
collinear with S4, or in other words when θ5 = ±180 degrees.
Thus a value of c5 = -1 calculated from (3.53) will act as a
flag to signal when equations (3.54) and (3.55) may not be
used to determine θ4.

It is readily apparent that when S6 and S4 are
collinear, a degree of freedom of the manipulator is lost in
that it is possible to spin the vector S5 about the S4-S6
line without changing the position and orientation of the
hand. Thus the choice of θ4 is arbitrary. This problem
can be overcome by setting θ4 to its last previously
calculated value prior to the robot moving into the
indeterminate position.
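A sketch of the wrist solution (3.53)-(3.55) for one of the two roots of θ5, including the c5 = -1 fallback just described. Here `x6p`, `y6p`, `z6p` are the components of S6 in the primed system and `s2p3`, `c2p3` the sine and cosine of (θ2 + θ3); function and argument names are assumptions:

```python
import math

# alpha45 = alpha56 = 61 deg for the T3-776 three roll wrist (set 3.5).
A45 = math.radians(61.0)
A56 = math.radians(61.0)

def wrist45(x6p, y6p, z6p, s2p3, c2p3, theta4_prev=0.0):
    s45, c45 = math.sin(A45), math.cos(A45)
    s56, c56 = math.sin(A56), math.cos(A56)
    c5 = (c45 * c56 - x6p * s2p3 + z6p * c2p3) / (s45 * s56)   # (3.53)
    c5 = max(-1.0, min(1.0, c5))
    s5 = math.sqrt(1.0 - c5 * c5)      # one of the two wrist roots
    x5 = s56 * s5                      # set (3.49)
    y5 = -(s45 * c56 + c45 * s56 * c5)
    P = x6p * c2p3 + z6p * s2p3
    den = P * P + y6p * y6p
    if den < 1e-12:                    # (3.56): theta4 indeterminate
        return math.acos(c5), theta4_prev
    c4 = (x5 * P - y5 * y6p) / den     # (3.54)
    s4 = -(y5 * P + x5 * y6p) / den    # (3.55)
    return math.acos(c5), math.atan2(s4, c4)
```

Taking -s5 instead of +s5 gives the second wrist configuration mentioned in the text.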
The only remaining angle to be determined in the wrist
is the corresponding value for θ6. Utilizing the unified
notation [21], the following subsidiary expressions may be
written for a spherical heptagon:

    Z6 = Z4321                                           (3.57)
    X6 = X43217                                          (3.58)

Expanding the left sides of the above two equations and
solving for the cosine and sine of θ6 respectively gives

    c6 = (c56 c67 - Z4321) / (s56 s67)                   (3.59)
    s6 = X43217 / s56                                    (3.60)

The right hand sides of equations (3.59) and (3.60) can be
expanded in terms of known quantities by use of the
following sets of equations:
    X43217 = X4321 c7 - Y4321 s7                         (3.61)

    X4321 = X432 c1 - Y432 s1
    Y4321 = c71 (X432 s1 + Y432 c1) - s71 Z432           (3.62)
    Z4321 = s71 (X432 s1 + Y432 c1) + c71 Z432

    X432 = X43 c2 - Y43 s2
    Y432 = c12 (X43 s2 + Y43 c2) - s12 Z43 = -Z43        (3.63)
    Z432 = s12 (X43 s2 + Y43 c2) + c12 Z43 = X43 s2 + Y43 c2

    X43 = X4 c3 - Y4 s3
    Y43 = c23 (X4 s3 + Y4 c3) - s23 Z4 = X4 s3 + Y4 c3   (3.64)
    Z43 = s23 (X4 s3 + Y4 c3) + c23 Z4 = Z4

    X4 = s45 s4
    Y4 = -(s34 c45 + c34 s45 c4) = -c45                  (3.65)
    Z4 = c34 c45 - s34 s45 c4 = -s45 c4

where the simplified right-hand expressions follow from the
constant twist angles of set (3.5).
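The recursion sets (3.62)-(3.65) share a common pattern: one joint rotation followed by one twist transfer. A sketch of the chain in its general (unsimplified) form, with twist angles passed in radians; function names are assumptions:

```python
import math

def step(X, Y, Z, theta, alpha):
    # Generic recursion of sets (3.62)-(3.64)
    c, s = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    u = X * s + Y * c
    return X * c - Y * s, ca * u - sa * Z, sa * u + ca * Z

def XYZ_4(theta4, alpha34, alpha45):
    # (3.65): starting values of the chain
    s34, c34 = math.sin(alpha34), math.cos(alpha34)
    s45, c45 = math.sin(alpha45), math.cos(alpha45)
    X4 = s45 * math.sin(theta4)
    Y4 = -(s34 * c45 + c34 * s45 * math.cos(theta4))
    Z4 = c34 * c45 - s34 * s45 * math.cos(theta4)
    return X4, Y4, Z4

def chain_4321(t4, t3, t2, t1, alpha34, alpha45, alpha23, alpha12, alpha71):
    X, Y, Z = XYZ_4(t4, alpha34, alpha45)
    X, Y, Z = step(X, Y, Z, t3, alpha23)   # (3.64)
    X, Y, Z = step(X, Y, Z, t2, alpha12)   # (3.63)
    X, Y, Z = step(X, Y, Z, t1, alpha71)   # (3.62)
    return X, Y, Z
```

Since (X, Y, Z) are direction cosines of a unit vector, each step must preserve X^2 + Y^2 + Z^2 = 1, which is a convenient numerical check on an implementation.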
At this point the reverse displacement analysis is
complete. For any given location and orientation of the
hand, all displacement angles of the manipulator are known.
If the hand is in the effective workspace of the robot, then
there will be up to four possible configurations. That is,
there will be up to two sets of values for the angles φ1,
θ2, and θ3. For each of these sets there will be two
corresponding sets of values for the angles θ4, θ5, and
θ6.
3.5. Forward Displacement Analysis
For the forward displacement analysis it is assumed
that the values for the angles φ1, θ2, θ3, θ4, θ5, and θ6
are known. It is desired to determine the location and
orientation of the hand, i.e. the coordinates of a reference
point of the tool attached to the manipulator and the
direction cosines of the vectors S6 and a67. This analysis
is included for completeness and will be referenced in
Chapter IV although it is not required for the path
generation problem.
The direction cosines of S6 and a67 in the moving x*y*z*
coordinate system (see Figure 3-11) are simply (0,0,1)
and (1,0,0), since the x* and z* axes are chosen to point
along the vectors a67 and S6 respectively. These direction
cosines may be expressed in the x'y'z' coordinate system by
five successive applications of the rotation matrix

    [  cj        -sj        0    ]
    [  sj cij     cj cij   -sij  ]                       (3.66)
    [  sj sij     cj sij    cij  ]
It may be recalled that the x'y'z' coordinate system is
attached to the manipulator such that x' points along a12
and z' along S1. Therefore a vector in the x*y*z* system
can be expressed in the x'y'z' system using the
transformation equation

    [ x' ]                        [ x* ]
    [ y' ] = A21 A32 A43 A54 A65  [ y* ]                 (3.67)
    [ z' ]                        [ z* ]

Further, a vector expressed in the x'y'z' system may be
written in terms of the fixed XYZ coordinate system via use
of the following transformation equation

Figure 3-11: Forward Analysis

    [ x ]       [ x' ]
    [ y ] =  M  [ y' ]                                   (3.68)
    [ z ]       [ z' ]

where

         [ cos φ1   -sin φ1   0 ]
    M =  [ sin φ1    cos φ1   0 ]                        (3.69)
         [ 0         0        1 ]

Combination of (3.67) and (3.68) and the substitution of the
known direction cosines of S6 and a67 in the x*y*z*
coordinate system gives

    [ x6 ]                           [ 0 ]
    [ y6 ] = M A21 A32 A43 A54 A65   [ 0 ]               (3.70)
    [ z6 ]                           [ 1 ]

    [ x67 ]                          [ 1 ]
    [ y67 ] = M A21 A32 A43 A54 A65  [ 0 ]               (3.71)
    [ z67 ]                          [ 0 ]

where x6, y6, z6 and x67, y67, z67 are the direction cosines of
S6 and a67 respectively in the fixed coordinate system.
The last parameter to be determined is the position
coordinates of the point P1, the point of interest of the
tool attached to the manipulator. This is obtained by use
of the following vector equation:

    RP1 = RP2 + S66 S6 + a67 a67                         (3.72)

where RP1 is the vector to the tool point of interest and
RP2 is the vector to the wrist point P2. Since the
direction cosines of S6 and a67 are known in the fixed
system, the only unknown in (3.72) is the vector RP2. The
components of this vector in the x'y'z' coordinate system
are simply given by

    RP2 = (a23 c2 + S44 s2+3) i' + (a23 s2 - S44 c2+3) k'        (3.73)

where i' and k' are unit vectors along the x' and z' axes.
This vector may be transformed to the fixed coordinate
system by multiplying it by the rotation matrix M of
equation (3.69). With RP2 now known in the fixed system,
RP1 can be determined from (3.72).
The forward displacement analysis is now complete in
that the position and orientation of the hand of the
manipulator are uniquely determined for a given set of joint
displacements. It is important to note that the forward and
reverse solutions are complementary. That is, a set of
joint angles determined by the reverse analysis for a given
position and orientation of the hand must produce this same
position and orientation when used as input for the forward
analysis. This serves as a first verification of results.
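The forward orientation computation (3.66)-(3.71) can be sketched as a product of rotation matrices. A quick structural check is that the resulting direction cosines satisfy the three conditions of section 3.4.1 (unit S6 and a67, S6 · a67 = 0); the matrix helpers are assumptions:

```python
import math

def A(theta, alpha):
    # Rotation matrix (3.66)
    c, s = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[c, -s, 0.0],
            [s * ca, c * ca, -sa],
            [s * sa, c * sa, ca]]

def M(phi1):
    # Rotation matrix (3.69)
    c, s = math.cos(phi1), math.sin(phi1)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(P, v):
    return [sum(P[i][j] * v[j] for j in range(3)) for i in range(3)]

def forward_orientation(phi1, thetas, alphas):
    # (3.70)-(3.71): S6 and a67 in the fixed frame; thetas are
    # (theta2..theta6), alphas are (alpha12..alpha56) in radians.
    T = M(phi1)
    for th, al in zip(thetas, alphas):      # builds M A21 A32 A43 A54 A65
        T = matmul(T, A(th, al))
    return matvec(T, [0.0, 0.0, 1.0]), matvec(T, [1.0, 0.0, 0.0])
```

Running the reverse solution first and feeding its angles through this routine is exactly the round-trip verification suggested in the paragraph above.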

3.6 Path Generation
The reverse analysis of the manipulator will serve as
the primary tool required to cause the manipulator to move
along a specified path. Simply stated, a set of
intermediate positions and orientations of the robot will be
selected along some path between two user specified end
points. A reverse analysis will be performed at each of the
intermediate positions in order to determine the set of
joint angles which will position the manipulator as
required.
For this analysis, it will be assumed that the user has
defined the initial and final pose of the manipulator.
Specifically, this requires that the user has specified the
coordinates of the tool tip and the directions of the unit
vectors S6 and a67. These nine numbers, six of which are
independent, completely describe the position and
orientation of the manipulator. Throughout the remainder of
this chapter the initial and final positions and
orientations of the manipulator will be assumed to be known
quantities and will be referred to as rI, S6I, a67I and rF,
S6F, a67F respectively. Many methods exist for the user to
input these values. Alternate methods will be discussed in
Chapter V.
Many strategies can be used in order to determine a
series of intermediate positions and orientations of the
manipulator between two user specified poses. For this
analysis, it was desired to cause the tool point to move

along a straight line. Furthermore, the tool attached to
the end effector should start at rest and accelerate to some
maximum velocity before slowing down and coming to rest at
the final manipulator pose. Due to the desired motion
characteristics, a displacement function based on the cosine
curve was selected. This displacement function is shown in
Figure 3-12.
3.6.1 Determination of number of intermediate points
A first step in the analysis is to determine how many
points should be inserted between the initial and final
poses of the manipulator. Too many points will cause the
motion of the animated manipulator to appear quite slow.
Too few points will result in fast velocities which will
make the animation appear to 'flicker'.
The number of intermediate points was selected based on
two factors, i.e. the total straight line distance travelled
and the total change in orientation of the end effector.
Since the initial and final position of the tool tip are
specified, the total straight line distance is readily
calculated as follows:

    dist = |rF - rI|                                     (3.74)

Figure 3-12: Displacement Profile

The number of steps based on translation is found by
dividing this distance by some standard step size,

    steps_trans = dist / (step size)                     (3.75)

For the Cincinnati Milacron T3-776 manipulator, a step size
of 1.9 inches produced satisfactory results. For example,
if the tool tip must move a total distance of 50 inches,
then the number of steps based on this translation is equal
to 50/1.9, or 27 steps.
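The translation-step computation of (3.74)-(3.75) is a one-liner; rounding up to the next whole step reproduces the 50-inch example (a sketch; the function name is an assumption):

```python
import math

def translation_steps(r_i, r_f, step_size=1.9):
    # (3.74)-(3.75): straight-line distance over the standard step size
    dist = math.sqrt(sum((b - a) ** 2 for a, b in zip(r_i, r_f)))
    return math.ceil(dist / step_size)
```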
The number of steps based on the change in orientation
of the tool was also calculated. It is possible to
determine the axis and net angle of rotation about this axis
that will cause the tool of the manipulator to change
orientation as desired (see reference [24]).
The first step of this procedure is to determine the
3x3 rotation matrix, R, which aligns the final orientation
of the end effector with the initial orientation. Two
additional matrices, CI and CF, are first defined as
follows:

    CI = [ S6I   a67I   (S6I × a67I) ]
                                                         (3.76)
    CF = [ S6F   a67F   (S6F × a67F) ]

The elements of matrix R can now be determined as follows:

    Rj1 = | CFj  CI2  CI3 | / |CI|
    Rj2 = | CI1  CFj  CI3 | / |CI|                       (3.77)
    Rj3 = | CI1  CI2  CFj | / |CI|

where CI1 represents the first column of the matrix CI and
CFj represents the jth column of the matrix CF, as j varies
from 1 to 3. The matrix R can be readily verified since the
following equations must hold true:

    S6F = R S6I                                          (3.78)
    a67F = R a67I                                        (3.79)

It is shown in reference [24] that a rotation of angle θ
about an arbitrary axis vector k can be represented by the
relationship
    Rot(k,θ) =

    [ kx kx versθ + cosθ      ky kx versθ - kz sinθ    kz kx versθ + ky sinθ ]
    [ kx ky versθ + kz sinθ   ky ky versθ + cosθ       kz ky versθ - kx sinθ ]
    [ kx kz versθ - ky sinθ   ky kz versθ + kx sinθ    kz kz versθ + cosθ    ]
                                                         (3.80)
where versθ = (1 - cosθ). The problem now is that of
equating the matrix R with equation (3.80) and solving for
kx, ky, kz, the axis of rotation, and for θ, which is the
angle of rotation.

Summing the diagonal elements of the matrix R and of
equation (3.80) yields

    R11 + R22 + R33 = (kx^2 + ky^2 + kz^2) versθ + 3cosθ         (3.81)

Substituting for versθ yields
    R11 + R22 + R33 = 1 + 2cosθ                          (3.82)

Solving for cosθ gives

    cosθ = 0.5 (R11 + R22 + R33 - 1)                     (3.83)

The angle of rotation will be defined to be positive about
the vector k such that θ is in the range of 0 to 180
degrees. With this additional constraint, equation (3.83)
will yield a unique result for the angle θ.
Three additional equations may be obtained by
subtracting pairs of off diagonal elements as follows:

    R32 - R23 = 2 kx sinθ
    R13 - R31 = 2 ky sinθ                                (3.84)
    R21 - R12 = 2 kz sinθ

The components of k may be obtained from the set (3.84) to
yield

    kx = (R32 - R23) / 2sinθ
    ky = (R13 - R31) / 2sinθ                             (3.85)
    kz = (R21 - R12) / 2sinθ
When the angle of rotation θ is small, the axis of
rotation is not well defined since the denominator of set
(3.85) approaches zero. A similar problem with set (3.85)
exists if the angle of rotation approaches 180 degrees,
although the axis of rotation is indeed well defined in this
case. Therefore when the angle of rotation exceeds 90
degrees, a different method will be used to determine the
vector k.
Equating the diagonal elements of the matrix R with
equation (3.80) gives

    R11 = kx^2 versθ + cosθ
    R22 = ky^2 versθ + cosθ                              (3.86)
    R33 = kz^2 versθ + cosθ

Solving for the components of k gives

    kx = ± [(R11 - cosθ) / (1 - cosθ)]^0.5
    ky = ± [(R22 - cosθ) / (1 - cosθ)]^0.5               (3.87)
    kz = ± [(R33 - cosθ) / (1 - cosθ)]^0.5

The largest component of k can be determined from (3.87).
For this largest component, the sign of the radical can be
determined from equation (3.84). Although corresponding
values for the smaller two components of k could also be
obtained from the set (3.87), more accurate results will be
obtained from an alternate approach.
Summing pairs of off diagonal elements from (3.80)
gives

    R21 + R12 = 2 kx ky versθ
    R32 + R23 = 2 ky kz versθ                            (3.88)
    R31 + R13 = 2 kz kx versθ

With these equations, the solution method consists of
obtaining the value of the largest component of k from
(3.87), with the appropriate sign of the radical being
obtained from (3.84). The values of the remaining two
components of k can now be obtained by solving (3.88). This
method will yield accurate results when θ is greater than
90 degrees. When θ is less than 90 degrees, equations
(3.85) will yield accurate results for k except when θ
approaches 0.
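Both branches of the axis-angle extraction, (3.83)-(3.88), can be sketched together with the rotation matrix of (3.80); `rot` rebuilds R from (k, θ) so that the extraction can be checked by round trip (function names are assumptions):

```python
import math

def rot(k, th):
    # Rot(k, theta) of equation (3.80); v = vers(theta) = 1 - cos(theta)
    kx, ky, kz = k
    c, s = math.cos(th), math.sin(th)
    v = 1.0 - c
    return [[kx*kx*v + c,    ky*kx*v - kz*s, kz*kx*v + ky*s],
            [kx*ky*v + kz*s, ky*ky*v + c,    kz*ky*v - kx*s],
            [kx*kz*v - ky*s, ky*kz*v + kx*s, kz*kz*v + c]]

def axis_angle(R):
    c = 0.5 * (R[0][0] + R[1][1] + R[2][2] - 1.0)        # (3.83)
    c = max(-1.0, min(1.0, c))
    th = math.acos(c)                                     # 0..180 deg
    diffs = (R[2][1] - R[1][2], R[0][2] - R[2][0], R[1][0] - R[0][1])
    if th < 1e-8:
        return 0.0, (1.0, 0.0, 0.0)     # axis ill-defined; pick any
    if th <= math.pi / 2.0:
        s2 = 2.0 * math.sin(th)
        return th, tuple(d / s2 for d in diffs)           # (3.85)
    v = 1.0 - c
    comps = [(R[i][i] - c) / v for i in range(3)]         # (3.87): k_i^2
    i = comps.index(max(comps))
    ki = math.sqrt(max(0.0, comps[i]))
    if diffs[i] < 0.0:                                    # sign via (3.84)
        ki = -ki
    k = [0.0, 0.0, 0.0]
    k[i] = ki
    sums = {(0, 1): R[1][0] + R[0][1],                    # (3.88)
            (1, 2): R[2][1] + R[1][2],
            (0, 2): R[2][0] + R[0][2]}
    for j in range(3):
        if j != i:
            k[j] = sums[(min(i, j), max(i, j))] / (2.0 * v * ki)
    return th, tuple(k)
```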
The number of intermediate points due to rotation is
based on the magnitude of the net angle of rotation. The
number of steps is given by

    steps_rot = θ / (rotation size)                      (3.89)

Satisfactory results were found to occur when the rotation
size was chosen to be equal to 5 degrees.
The actual number of intermediate points inserted
between the user specified initial and final manipulator
poses was chosen as the larger number of steps based on
translation and on rotation (equations (3.75) and (3.89)).
This strategy ensured smooth animation of the manipulator
for all cases. In particular, the special case where the
tool tip position remained unchanged, but the orientation of
the tool changed considerably was handled with little
difficulty. For this case, the number of steps based on
translation was 0, but the number of steps due to the change
in orientation was significant.
3.6.2 Determination of intermediate positions and
orientations
The number of intermediate positions and orientations
of the manipulator are now known. The problem now is how to
distribute the intermediate positions in order to produce
the desired motion characteristics.

As previously stated, a displacement profile based on
the cosine curve was desired because of the smooth motion
which would result. The exact position of the jth
intermediate point is given by

    rj = rI + (dist/2) t Sr                              (3.90)

where 'dist' is the total distance travelled as defined in
equation (3.74), Sr is a unit vector along the direction of
motion, and t is a dimensionless parameter given by

    t = 1.0 - cos((π j)/steps)                           (3.91)

When j = 0, t also equals 0, and the first position is at
the start of the path, rI. When j = 'steps', the final
calculated position will be at the end of the path, rF.
Values of j between 0 and 'steps' will give intermediate
positions based upon a cosine function distribution pattern.
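The profile of (3.90)-(3.91) can be sketched as follows (a minimal illustration; the function name is an assumption):

```python
import math

def intermediate_point(r_i, r_f, j, steps):
    # (3.90)-(3.91): jth point of the cosine displacement profile
    d = [b - a for a, b in zip(r_i, r_f)]
    dist = math.sqrt(sum(x * x for x in d))
    if dist == 0.0:
        return tuple(r_i)
    S_r = [x / dist for x in d]                  # unit direction of motion
    t = 1.0 - math.cos(math.pi * j / steps)      # (3.91), t runs 0 -> 2
    return tuple(a + (dist / 2.0) * t * u for a, u in zip(r_i, S_r))
```

Because t sweeps from 0 to 2 as j goes from 0 to 'steps', the point starts at rI, passes the midpoint at j = steps/2, and comes to rest at rF with the slow-fast-slow spacing of Figure 3-12.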
A corresponding orientation for the manipulator must be
determined for each of the intermediate positions. The net
angle and axis of rotation which transforms the end effector
from its initial to final orientation have been previously
determined. The rotation matrix which transforms the
initial orientation vectors, S_I and a_I, to their values
at the jth position is obtained by application of equation
(3.80). The axis vector, k, to be used in this equation
remains constant. The angular value to be used in (3.80) at
the jth position is given by

θ_j = (θ/2.0) t    (3.92)
where θ is the net angle of rotation and t is as defined
in (3.91). The orientation vectors at the jth position are
therefore simply determined by multiplying the initial
vectors S_I and a_I by the rotation matrix determined from
(3.92) for the jth position. In this manner, the change of
orientation of the end effector will be performed smoothly.
The change in orientation will begin slowly, accelerate to
some maximum value at the middle of the motion, and finally
slow down and gradually come to rest.
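This can be sketched directly, assuming equation (3.80) is the standard axis-angle rotation, here applied in its expanded (Rodrigues) vector form rather than as an explicit matrix:

```python
import math

def rotate_about_axis(v, k, angle):
    """Rotate v about the unit axis k by 'angle' radians (Rodrigues
    form, assumed equivalent to the rotation matrix of (3.80))."""
    c, s = math.cos(angle), math.sin(angle)
    kdotv = sum(ki * vi for ki, vi in zip(k, v))
    kxv = [k[1]*v[2] - k[2]*v[1],          # cross product k x v
           k[2]*v[0] - k[0]*v[2],
           k[0]*v[1] - k[1]*v[0]]
    return [vi*c + kxvi*s + ki*kdotv*(1.0 - c)
            for vi, kxvi, ki in zip(v, kxv, k)]

def angle_at_step(theta_net, j, steps):
    """theta_j = (theta/2) * t with t = 1 - cos(pi*j/steps)   (3.92)."""
    return (theta_net / 2.0) * (1.0 - math.cos(math.pi * j / steps))
```

At j = 0 the angle is zero, and at j = steps the full net rotation has been applied, so the orientation change starts and ends at rest as described.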
At this point the position and orientation of the
manipulator at each of the intermediate locations along the
rectilinear path are known. The reverse kinematic analysis
described in section 3.4 is applied to each of the
intermediate points to determine the sets of joint angles
which will position the manipulator as required.
3.6.3 Selection of configuration
During the description of the reverse analysis, it was
pointed out that up to four configurations of the Cincinnati
Milacron T3-776 robot can position and orient the end
effector as required. Therefore, for each of the
intermediate points along the rectilinear path, up to four
sets of joint angles have been calculated. The problem now
is to filter the sets of joint angles. It is necessary to

obtain one set of joint angles at each position such that no
abrupt angular changes occur from one point along the path
to the next.
The user must specify what configuration the robot
shall be in at the initial point along the path. Methods of
assisting the operator in this selection process are
presented in Chapter V. At this point it is assumed that
the initial configuration of the robot has been specified.
With the initial configuration of the manipulator
known, each set of joint angles at the next position of the
robot along the path are compared to the initial joint
angles of the robot. For each set of joint angles, the
maximum angular change is recorded and assigned to the
variable gamma. The set with the smallest gamma angle is
identified as the set of joint angles which are closest to
the configuration at the start of the path. Restated for
the general case, each set of joint angles at the j+1
position along the path is compared to the already chosen
set of joint angles at the jth position. The set of joint
angles to be used at the j+1 position is the set with the
smallest gamma angle. Typical data are shown in Table 3-1.
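The filtering step can be sketched in a few lines; the sample values used below are the ones listed in Table 3-1:

```python
def closest_configuration(prev_angles, candidate_sets):
    """Return the candidate joint-angle set whose largest single-joint
    change from prev_angles (the 'gamma' value) is smallest."""
    def gamma(candidate):
        return max(abs(c - p) for c, p in zip(candidate, prev_angles))
    return min(candidate_sets, key=gamma)
```

Applied to the Table 3-1 rows, the second j+1 set is selected, since its gamma value (about 1.3 degrees) is far smaller than that of the first set (over 260 degrees).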
3.6.4 Special cases
At times, it may be the case that the closest set of
angles at the j+1 position may actually vary considerably
from the previous set of angles at the jth position. This
occurs when the manipulator approaches near a singularity

Table 3-1
Sample Angles for j and j+1 Positions

Position      θ1        θ2       θ3         θ4         θ5         θ6
j          -39.367   87.588   -7.692     37.800   -131.780      2.812
j+1        -38.744   87.810   -7.924   -118.560    132.830   -151.962
j+1        -38.744   87.810   -7.924     37.539   -132.830      4.137

position (discussed in Chapter V) or when the manipulator
moves from a region where four configurations are possible
to a region where only two configurations exist.
Such cases are identified by establishing a maximum
allowable angular change for any joint angle as the
manipulator moves from the jth to the j+1 position.
Satisfactory results were obtained when this maximum
allowable change was set at 8 degrees.
If it is found that any joint angle changes by more
than 8 degrees from the jth to the closest j+1 position, the
program first tries to install more intermediate points.
This is done by a recursive call to the motion program
itself, with the jth position representing the initial
position and orientation and the j+1 position representing
the final position and orientation of the manipulator.
Since the translational distance between the jth and j+1
position is usually quite small, a minimum of two
intermediate points are inserted, regardless of the number
of steps calculated due to translation and orientation
change. If the rapid change in joint angles was due to a
singularity condition, then this method will produce
satisfactory results and effectively slow down the
manipulator.
If the recursive call was unsuccessful in solving the
problem, then simple joint interpolated motion is executed
between the jth and j+1 positions. The number of
intermediate interpolated positions is based on the

magnitude of the maximum angular change. This approach was
utilized when the manipulator moved from a region of four
possible configurations to a region of only two.
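The joint-interpolated fallback can be sketched as below. Basing the number of interpolated steps on the largest joint change divided by the 8-degree limit is an assumption about the exact rule; the text states only that the count depends on the magnitude of the maximum angular change.

```python
import math

def joint_interpolate(angles_j, angles_j1, max_change_deg=8.0):
    """Linearly interpolate between two joint-angle sets so that no
    single joint changes by more than max_change_deg per step."""
    biggest = max(abs(b - a) for a, b in zip(angles_j, angles_j1))
    steps = max(1, int(math.ceil(biggest / max_change_deg)))
    return [[a + (b - a) * i / steps for a, b in zip(angles_j, angles_j1)]
            for i in range(steps + 1)]
```

Each joint then sweeps directly to its target value, which is what allows the manipulator to pass smoothly through the region where the rectilinear solution changes configuration.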
3.7 Results and Conclusions
The path generation method discussed in this chapter is
based on a closed form solution of the inverse kinematics of
the manipulator. The advantage of this approach is that for
any given position and orientation of the manipulator, all
possible configurations are found. The only disadvantage is
in speed. The closed form solution requires more
calculations than iterative techniques. Furthermore,
knowledge of all configurations requires that time be spent
on filtering the sets of joint angles at some location in
order to find the set which is closest to the joint angles
at the previous location.
Typical path calculation time on the Silicon Graphics
system was 10 to 15 seconds. The same program on a VAX
11/750 computer took approximately 2 seconds. The addition
of a floating point accelerator in the Silicon Graphics
system would reduce its calculation time by a factor of 4 to
approximately 2.5 to 3.5 seconds.
The purpose of the path generation program was to
generate a sequence of joint angles for the manipulator
which would produce rectilinear motion of the end effector
between two user specified positions and orientations. Once
the angles were calculated, they would serve as input to the
graphics program described in Chapter II so that the motion

could be observed by the user. Joint angles can be
calculated off line, prior to the animation sequence. An
off line calculation time of two or three seconds is not
significant and for this reason the closed form solution
method was chosen. Further work in path planning and
obstacle avoidance benefits considerably from the knowledge
of all possible configurations of the robot at any given
position and orientation. For this additional reason, the
closed form solution approach was recommended.

CHAPTER IV
ROBOTIC TELEPRESENCE
4.1 Introduction and Objective
This chapter deals with a particular application of the
robot animation and path planning techniques discussed in
the preceding two chapters. The interactive computer
graphics program developed in Chapter II was enhanced to
allow an operator to more readily control the motions of a
remote robot manipulator in two distinct modes, viz.
man-controlled and autonomous. In man-controlled mode, the
robot is guided by a joystick or similar device. As the
robot moves, actual joint angle information is measured and
supplied to the graphics system which accurately duplicates
the robot motion. Obstacles are placed in the actual and
animated workspace and the operator is warned of imminent
collisions by sight and sound via the graphics system. In
autonomous mode, a collision free path between specified
points is obtained by previewing robot motions on the
graphics system. Once a satisfactory path is selected, the
path characteristics are transmitted to the actual robot and
the motion is executed. A detailed explanation of
man-controlled and autonomous operation will follow a brief
description of the objectives of a telepresence system.

4.2 Telepresence Concept
It is often necessary in a hazardous environment for an
operator to effectively control the motion of a robot
manipulator which he cannot observe directly. The
manipulator may be either directly guided via use of a
joystick or similar device, or it may be autonomously
controlled in which case it is desirable to preview and
monitor robot motions. The operator must be provided with
an accurate picture of the workspace environment and of the
robot motion, together with other sensory information such
as touch and sound. These inputs will allow the operator to
experience the motion and forces acting upon the distant
robot. In this context, the word telepresence is used to
describe the type of system which permits the operator to
effectively control the motion of the remote robot
manipulator.
The primary sensory feedback that an operator requires
to control robot motion is vision. In many instances,
vision alone will give the operator sufficient information
in order to avoid obstacles and to manipulate objects as
desired. Logically, this vision could be provided by video
cameras which surround the workspace of the manipulator or
which are attached to the manipulator itself.
The ability of the video camera to zoom in and provide
enlarged visualizations of the work area would also assist
in the control of the manipulator. Use of other sensors
such as force feedback devices together with the video
equipment would supplement the system.

It should be noted that the use of video cameras to
provide vision feedback does have certain disadvantages.
Primarily, more than one camera must be used to provide
sufficient viewing positions for the operator. Each of
these cameras will most likely have two degrees of freedom
in addition to the zooming capability thereby giving the
operator numerous additional parameters to control.
Secondly, environmental conditions may cause the video image
to be blurred or poor lighting and contrast may make the
image confusing and unclear. Because of these and other
limitations, a system has been developed which replaces the
video feedback with a realistic computer graphics
representation of the robot and the workspace.
The use of a computer graphics system offers three
distinct advantages. First, images are clear and sharp.
The use of solid color representations of the robot and work
area with hidden surface removal provides a clear image of
the scene. Colors can be selected to provide obvious
contrast and clarity for the operator. Secondly, the
computer graphics system readily allows for viewing of the
image from any desired vantage point. This ability removes
the requirement for a multitude of video cameras and allows
the operator to focus his attention on only one monitor
screen. Lastly, the computer graphics system can provide
additional feedback to the operator. For example, when the
manipulator is moved close to an obstacle, an audio signal
can be given and the color of the obstacle can be changed.

This additional feedback to the operator can significantly
improve performance.
The remainder of this chapter will describe a computer
graphics based telepresence system which has been developed
at the University of Florida with the cooperation of
numerous individuals. Emphasis will be placed on the
graphics system, and on the autonomous planning of potential
collision free paths.
4.3 System Components
Figure 4-1 illustrates the telepresence system as it
exists in the laboratory. The system serves as a test bed
to verify the feasibility of the concept and consists of
four distinct technologies and components as follows:
4.3.1. Robot manipulator
The manipulator chosen for the telepresence system was
an MBAssociates (MBA) robotic manipulator. The MBA
manipulator is a hydraulic six degree of freedom device.
The second and third joint axes are parallel and the final
three axes intersect at a common point. The robot is
controlled by a PDP 11/23 computer which regulates hydraulic
fluid flow so as to minimize the error between the desired
and actual joint angles. The MBA robot was selected for
this study because of its simple geometry and versatile
control system.

Figure 4-1: Telepresence System

4.3.2. Six degree of freedom universal joystick
Two six degree of freedom joysticks were utilized in
the telepresence system. These are shown in Figures 4-2 and
4-3. The gripper of each joystick can be translated and
twisted in any desired manner thereby giving the operator
six degrees of freedom. The controlling computer calculates
the position of the joystick and then determines the desired
position and orientation of the end effector of the
manipulator. Detailed descriptions of these joystick
devices are presented in references [25] and [26].
4.3.3. Obstacle detection vision system
A vision system was used to identify obstacles and then
transmit the position and orientation of each to the
computer graphics system. For this study, cubes and spheres
were the only items placed in the workspace of the robot.
The objects were all of known size and were painted white
and placed on a black background. Further information
dealing with the vision system can be found in reference
[27] .
4.3.4. Computer graphics system
The Silicon Graphics IRIS 2400 computer graphics
workstation was selected to provide the interactive
animation capability required for the system. As previously
discussed, the ability of this workstation to rapidly draw
shaded polygons was essential in order to allow for real
time interaction with the animated display.

Figure 4-2: Nine String Joystick
Figure 4-3: Scissor Joystick

4.4 Method of Operation
Shown in Figure 4-4 is a drawing of the configuration
of the telepresence system. This system configuration
allows for the operation of the manipulator in the two
distinct modes, viz. man-controlled and autonomous. In both
modes the problem to be solved is to identify and pick up a
cube in the workspace and then maneuver the object to a
desired
point without colliding
with a
sphere.
For
the
purposes
of this investigation,
it was
assumed
that
all
objects
in the workspace of the
robot were
i located
on a
flat
horizontal surface.
4.4.1 Man-controlled mode
In man-controlled mode the operator must control the
robot via use of a joystick while observing only the
graphics display screen. In order to duplicate the motion
of the actual manipulator, it is necessary to be able to
accurately measure the joint angles of the robot and supply
these values to the Silicon Graphics system in real time.
This was accomplished by reading voltages from
potentiometers which are attached to each joint of the robot
and which provide feedback information to the robot's
control system. Each of the potentiometer signals was
monitored and converted to digital angular values by a PDP
11/23 computer. A communications protocol was established
between this PDP computer and the Silicon Graphics system so
that angular values would be immediately available to the

Figure 4-4: System Configuration

graphics system upon request. Comparisons of cycle times
indicated that the graphics system was never waiting on the
PDP computer to process data. In other words, potentiometer
voltages could be converted to digital values at a rate
faster than the Silicon Graphics system required. A more
detailed discussion of the communication protocol can be
found in reference [27].
With knowledge of the joint angles of the actual robot,
a C language computer program was written on the Silicon
Graphics system which provides the operator with the visual
information required to control the robot. At the onset of
this project, it was desired that this animation have the
following characteristics:
1. Show the obstacles (spheres and cubes) that were in
the workspace of the actual robot.
2. Be able to select any viewing position from which to
observe the manipulator and workspace.
3. Be capable of zooming in or out in order to improve
the resolution of the image.
4. Cause the animated manipulator to 'attach' or 'drop' a
cube in the gripper when the actual robot picked up or
dropped a cube.
5. Warn the operator in real time when any part of the
end effector approached within a specified distance of an
obstacle (sphere).

The animation requirements were realized by writing
efficient computer code which maximizes the graphics
productivity of the Silicon Graphics system. Briefly, a
subroutine was written which accepts the six joint angles of
the robot as parameters. Given these six angles, a
representation of the robot is produced. This subroutine is
repeatedly called until a halt flag is received by the
program. After a set of joint angles is received, certain
parameters are modified depending on signals received from a
rolling mouse device. The modification of these parameters
gives the operator the ability to change viewing positions
and to zoom in as desired in real time. Such a capability
was shown to be of great importance when operating the robot
while looking only at the computer graphics representation.
Figure 4-5 shows the graphics representation of the MBA
manipulator.
Of equal importance in the effective operation of the
robot, was the ability to accurately display the obstacles
(spheres and cubes) in the robot workspace. This is
accomplished by activating the vision system which consists
of a camera located directly above the robot and an image
processor. The vision system establishes a threshold such
that the objects of interest appear as white objects on a
black background as shown in Figure 4-6. A spiral search
pattern is initiated to locate all white objects in the
field of view of the camera. Analysis of scan line data
identifies each object as a sphere or a cube and determines

Figure 4-5: Animated Representation of MBA Manipulator
Figure 4-6: Obstacle Locations Determined by Vision System

all position and orientation data. This information was
then transmitted to the Silicon Graphics system so that each
obstacle could be displayed.
Of utmost importance was the calibration of the vision
system. The relationship between the obstacles and the
robot had to be accurately reproduced so that an operator
could manipulate real world objects while only watching the
graphics screen. A slight calibration error would make it
extremely difficult for the operator to pick up and maneuver
objects correctly. A unique calibration program was written
which reduces this error to a minimum. Figure 4-7 shows the
MBA manipulator together with various objects in the
workspace of the manipulator.
As objects are picked up by the actual manipulator, it
is important that this also be shown by the graphics
representation. Presently, when the operator picks up an
object with the actual robot, he must let the Silicon
Graphics system know of this event by selecting an
appropriate option from a pop-up menu. This selection
causes the Silicon Graphics system to scan the workspace to
determine if an object exists which is within two inches of
the tip of the gripper of the robot. If such an object
exists, the animated image is modified to show the object as
being rigidly attached to the end effector. Similarly, when
the operator drops an object from the gripper, he must again
notify the graphics system of the event. The Silicon
Graphics system calculates the present position of the end

Figure 4-7: Display of Objects in Manipulator Workspace

effector of the robot and places the object at that
position. This 'manual intervention' by the operator could
be replaced by utilizing force feedback information from the
gripper. Basically, when the gripper is closed when it is
near an object and a force is felt in the gripper, then the
graphics system would automatically attach the object in the
animated scene.
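The pick-up check can be sketched as a simple nearest-object scan; the names and tuple representation are illustrative, not from the original program:

```python
def find_graspable(gripper_tip, object_positions, reach=2.0):
    """Return the index of the first object within 'reach' inches of
    the gripper tip, or None if nothing is close enough to attach."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    for i, pos in enumerate(object_positions):
        if dist(gripper_tip, pos) <= reach:
            return i
    return None
```

When the menu option is selected, a check of this form decides whether an animated object should be rigidly attached to the end effector.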
The final requirement for the graphics system was to
warn the operator in real time when any part of the
manipulator was close to an obstacle (sphere). The ability
to do this in real time was a significant attribute. This
feature was accomplished by placing a bounding box around
the sphere. The graphics system was then placed in 'select
mode' and the clipping region was set equal to the planes of
the bounding box.
Commands were then given to redraw the robot. When in
'select mode', no drawing command is executed. Rather, any
drawing command which would cause something to be drawn
within the clipping planes results in
a flag being placed in a buffer. Therefore, if the buffer
is empty after the robot is redrawn, then no part of the
robot fell within the bounding box surrounding the sphere.
In actual practice, two bounding boxes were used, one
whose size equaled the diameter of the sphere, and a larger
one whose size equaled twice the diameter of the sphere.
When the robot did not intersect either box, the color of
the sphere was white. If the robot intersected the larger
box, but not the smaller one, then the graphics system

beeped once and the color of the sphere was changed to
yellow. If the robot entered the smaller bounding box, then
the graphics system beeped continuously and the color of the
sphere was changed to red. Figure 4-8 shows this obstacle
warning feature while Figure 4-9 shows the telepresence
system as it is being operated in man-controlled mode. The
ability of the graphics system to warn the operator of
potential collisions significantly impacted on operator
performance and confidence.
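The two-level warning logic itself is hardware-specific (the IRIS 'select mode' performs the intersection test in the drawing pipeline), but its effect can be sketched geometrically. Testing sample points on the robot against two axis-aligned cubes is an approximation of the clipping-plane test, not the original implementation:

```python
def proximity_warning(robot_points, sphere_center, diameter):
    """'red' if any point lies inside a cube whose side equals the
    sphere diameter, 'yellow' if only inside the cube of twice that
    size, otherwise 'white'."""
    def inside(p, half_side):
        return all(abs(a - c) <= half_side for a, c in zip(p, sphere_center))
    if any(inside(p, diameter / 2.0) for p in robot_points):
        return "red"          # continuous beep in the original system
    if any(inside(p, diameter) for p in robot_points):
        return "yellow"       # single beep
    return "white"
```

The returned color (with its associated beep) is exactly the three-state feedback described above.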
4.4.2 Autonomous mode
For autonomous operation of the MBA manipulator, it was
desired to specify two positions and orientations and have
the manipulator move autonomously between them. Typically
the first position was taken as the current location of the
robot. In this instance it can be assumed that the position
of the sphere is known in relation to the robot (supplied by
vision system). A geometrical approach based on vector
manipulations was used to solve this problem.
The tasks required to be performed are as follows:
(a) Determine whether straight line motion between two
specified points is possible without colliding with a
spherical obstacle.
(b) If not, generate a collision free end effector path.
(c) Observe the animated robot move along the path.
(d) Modify the path if desired.
(e) Command actual MBA robot to perform previewed motion.

Figure 4-8: Warning of Imminent Collision
Figure 4-9: Operation in Man-Controlled Mode

Shown in Figure 4-10 are the two known points P1 and P2
which represent the initial and final positions of the end
effector of the robot. The question to be answered is
whether the line segment from P1 to P2 intersects the sphere
of radius R which is centered at P3. The first step in the
solution is to determine the coordinates of point P4 which
represents the point on the infinite line containing P1 and
P2 which is closest to the center of the sphere. If the
distance from P4 to the center of the sphere is greater than
the radius of the sphere, then it can be concluded that the
line segment P1-P2 does not intersect the sphere.
To determine point P4, the vector equation of the
infinite line which contains P1 and P2 is first obtained.
This equation is as follows:
r x S = r1 x S    (4.1)
where S is a unit vector along the line and thus equals
(r2 - r1)/|r2 - r1|, and r represents a vector from the
origin to any point on the line. Since point P4 lies on
this line, it must satisfy the equation, and thus
(r3 + X) x S = r1 x S    (4.2)
Rearranging this equation gives
X x S = S0    (4.3)

Figure 4-10: Determination of Intersection
Figure 4-11: Generation of Alternate Path

where S0 is simply equal to (r1 - r3) x S and is therefore a
known vector. Crossing both sides of this equation with the
vector S gives
S x (X x S) = S x S0    (4.4)
Expanding the left hand side gives
(S·S)X - (S·X)S = S x S0    (4.5)
Noting that X is perpendicular to S simplifies the above
equation and allows for the solution for the vector X,
X = (S x S0) / (S·S)    (4.6)
If the magnitude of the vector X is greater than the radius
of the sphere, the problem is completed as it is possible
for a point to move from point P1 to point P2 on a straight
line. Similarly, straight line motion is possible if the
magnitude of X is less than the radius of the sphere and
point P4 does not lie between points P1 and P2. The point
P4 will lie between the points P1 and P2 if the following
condition is satisfied:
(r4 - r1)·(r4 - r2) < 0    (4.7)

If equation (4.7) does not hold true, the infinite line does
indeed intersect the sphere, but the line segment from P1 to
P2 lies on one side of the obstacle.
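Equations (4.1) through (4.7) can be collected into a single test; a sketch with points as coordinate tuples:

```python
def segment_hits_sphere(p1, p2, p3, R):
    """True if the segment P1-P2 intersects the sphere of radius R
    centered at P3, following equations (4.1)-(4.7)."""
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0]]
    d = sub(p2, p1)
    S = [x / dot(d, d) ** 0.5 for x in d]   # unit vector along the line
    S0 = cross(sub(p1, p3), S)              # S0 = (r1 - r3) x S     (4.3)
    X = cross(S, S0)                        # X = (S x S0)/(S.S), S.S = 1  (4.6)
    if dot(X, X) ** 0.5 > R:
        return False                        # line clears the sphere
    p4 = [a + b for a, b in zip(p3, X)]     # foot of the perpendicular
    # P4 lies between P1 and P2 iff (r4 - r1).(r4 - r2) < 0        (4.7)
    return dot(sub(p4, p1), sub(p4, p2)) < 0
```

If this returns False, rectilinear motion of the tool tip is possible; otherwise the alternate path of the next paragraphs is constructed.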
If it is determined that straight line motion is not
possible, then an alternate path must be calculated. As
shown in Figure 4-11, two new points A and B are determined.
These points lie in the same plane as points P1, P2, P3, and
P4. Points A and B are found by first determining points
A*, B*, A', and B'. These additional points are given by
the expressions
a* = r3 - RS    (4.8)
b* = r3 + RS    (4.9)
a' = r4 - RS    (4.10)
b' = r4 + RS    (4.11)
The points A and B are then given by
a = a' + KRX / |X|    (4.12)
b = b' + KRX / |X|    (4.13)
where K is a constant which governs how far the path is to
be stretched from the sphere.
With points A and B now known, straight line motion
will be performed between these points while a smooth
non-rectilinear path is generated between points PI and A
and points B and P2. A continuous smooth velocity profile

is desired so that the manipulator will start at rest at
point Pi, accelerate to some maximum velocity, and finally
come to rest at point P2. This is accomplished by selecting
a local coordinate system as shown in Figure 4-11. As the
manipulator moves from PI to P2, the displacement along the
X axis will be varied according to a cosine curve in exactly
the same manner as rectilinear motion was accomplished in
the previous chapter. The Y coordinate value is varied as
follows:
if X < |a' - r1|              Y = [1 - cos(X*PI/|a' - r1|)] KR/2
if |a' - r1| < X < |b' - r1|  Y = KR                            (4.14)
if X > |b' - r1|              Y = [1 - cos((D - X)*PI/(D - |b' - r1|))] KR/2
where D = |r2 - r1| is the total displacement along the X
axis. Use of equation (4.14) to determine the Y coordinate as
a function of the X displacement offers a significant
advantage. Since the intermediate points to be displayed
are selected based on the X axis displacement, a cosine
distribution as discussed in Chapter III will result. The
manipulator will start at rest, accelerate to a maximum
velocity, and finally slow down and come to rest. The Y
axis displacement simply stretches the original rectilinear
motion so that the path passes through points A and B.
Equation set (4.14) guarantees that the slope of the path,
measured in terms of the local X,Y axes, will always equal
zero at points A and B and in this way the 'bent path' is
smooth and continuous. An example of a computer generated
path is shown in Figure 4-12.
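Equation set (4.14) can be sketched as a single function of the displacement X. The ramp-down branch after B is written here by symmetry with the ramp-up branch before A, an assumption consistent with the zero-slope requirement at both points:

```python
import math

def bend_offset(x, xa, xb, total, K, R):
    """Perpendicular offset Y of the bent path versus displacement X
    along P1-P2, per equation set (4.14): cosine ramp up to KR before
    A, flat at KR between A and B, cosine ramp back to zero after B."""
    KR = K * R
    if x < xa:                                   # approach segment P1 to A
        return (1.0 - math.cos(x * math.pi / xa)) * KR / 2.0
    if x <= xb:                                  # straight segment A to B
        return KR
    return (1.0 - math.cos((total - x) * math.pi / (total - xb))) * KR / 2.0
```

Here xa and xb stand for |a' - r1| and |b' - r1|, and total for the full displacement D; the offset is zero at both ends and flat at KR over the middle segment.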

Figure 4-12: Display of Computer Generated Path

The only disadvantage of this approach is that if the
distance |a - a'| is large with respect to |a' - r1|, then rapid
velocity changes along the Y axis may occur. In other
words, the end effector must move rapidly along the Y axis
in order to avoid the sphere. Dynamic considerations may
make it impossible for the actual manipulator to follow such
a path.
Once the joint angles for the manipulator are
determined for each point along the path, the motion of the
robot is displayed on the graphics screen. As the robot
moves, the 'bounding box' technique is again used to notify
the operator if any part of the robot arm comes close to the
sphere. Furthermore, while the robot is moving along the
path, the operator can observe the motion from any viewing
position and can zoom in for a close up view. In this
manner, the operator can evaluate the path to determine if
it is acceptable.
If for any reason the operator is dissatisfied with the
computer generated path, he has the option of modifying it
and then watching the animated robot move along this new
path. Two basic modifications can be made. First, the
operator can move points A and B (see Figure 4-11) closer or
farther from the sphere. Second, the points A and B can be
rotated around the sphere. In this way the operator has a
high level of control over the non-rectilinear path. During
operation of the system, paths are modified as desired until
one is found which results in a motion where no part of the
manipulator arm strikes the sphere.

Once an acceptable path is selected, joint angle
information is sent from the Silicon Graphics system to the
PDP 11/23 which controls the robot. The robot controller
adjusts the angles of the manipulator such that the
manipulator performs the same motion as was just previewed
on the graphics screen. A more detailed description of the
communication task and the operation of the MBA manipulator
are contained in reference [27].
4.5 Problems Encountered
It was very important that the animated scene
accurately reflect the actual situation of the robot and
workspace. Two types of system calibrations represented the
greatest problem encountered in accomplishing this task.
The first calibration problem involved the correct
positioning on the graphics screen of obstacles identified
by the vision system. A specialized calibration program was
written to correctly position animated obstacles with
respect to the animated robot. In this procedure, the
actual robot was positioned directly above a cube in the
workspace. The animated obstacles were then translated and
rotated as a group by moving a mouse input device until the
animated robot was also directly above the corresponding
animated cube. The procedure was repeated until the
animated scene was identical to the actual workspace. The
translation and rotation calibration data were then written
directly into the main program.

The second source of calibration error was introduced
by the method of joint angle measurement. Voltages derived
from potentiometers attached to each joint of the robot were
converted to digital angular values. Incorrect measurement
or conversion would cause the animated robot to differ in
appearance from the actual robot. Again, calibration
programs were written to minimize these errors. Small
fluctuations in the voltage applied to the potentiometers
would require recalibration of the system.
4.6 Conclusions
The animation tasks were successfully completed. The
graphic representation of the MBA manipulator and workspace
were more than sufficient to enable the operator to
effectively control the robot with a joystick. The ability
of the graphic system to signal the operator in real time
when any part of the manipulator was in close proximity to a
spherical obstacle greatly enhanced the operator's
performance and confidence. Lastly, the ability to preview
and modify computer generated motions prior to the robot
executing the task was quite successful and has numerous
applications whenever manipulators must operate in any type
of hazardous environment.

CHAPTER V
INTERACTIVE PATH PLANNING AND EVALUATION
5.1 Introduction and Objective
In the previous chapter a method for better controlling
a robot manipulator in man-controlled mode was discussed.
Specifically, the computer graphics system was used to
provide a realistic, real time image of the work area so
that the operator could efficiently maneuver the robot
manipulator. In this chapter, this concept will be extended and
an aid for path planning and evaluation will be developed.
The primary objective of this effort is the development
of an interactive computer graphics software system which
assists an operator who is planning robot motions. The
system must allow the user to specify where the robot is to
move as well as other factors such as the velocity along a
path and the function of the robot at a certain point.
After the operator enters these data, the graphics system
must display the robot as it performs the task. In this
manner the user can evaluate the motion of the robot prior
to the execution of the task by the actual robot. The
significant advantage of such a system is the fact that
complex paths and tasks can be programmed and evaluated off
line in a manner which is much faster than existing robot
task planning methods.

In order to accomplish this objective, certain
algorithms for robot intelligence which provide rapid
decision making capabilities had to be developed. These
algorithms address the following two areas:
1. Can the robot move along a rectilinear path between
two user specified points?
2. Does the robot pass through or near any singular
positions during the motion?
For this work the graphics software system and
subsequent algorithms were specifically applied to the
Cincinnati Milacron T3-726 industrial robot. As shown in
Figure 5-1 this is a six axis manipulator characterized by
the fact that the second and third joints are parallel while
the final three axes intersect at a point. The mechanism
dimensions of the manipulator as well as the joint angle
limitations are presented in Table 5-1. The final system
can readily be modified for any other robot manipulator with
little difficulty.
5.2 Robot Animation
An obvious component of any such path planning and
evaluation system is a realistic computer graphics
representation of the robot manipulator. This element of
the system will allow the user to observe a realistic
picture of the robot as it performs the specified task. In
this manner the operator can observe any problems or

Figure 5-1: Cincinnati Milacron T3-726 Manipulator

Table 5-1: Mechanism Dimensions and
Joint Limitations of T3-726 Manipulator
S11 = *         a12 = 0 cm      α12 = 90 deg
S22 = 0 cm      a23 = 51.0 cm   α23 = 0 deg
S33 = 0 cm      a34 = 0 cm      α34 = 90 deg
S44 = 51.0 cm   a45 = 0 cm      α45 = 61 deg
S55 = 0 cm      a56 = 0 cm      α56 = 61 deg

* to be determined when closing the loop

-142.5 < θ1 < 142.5 deg.
   0.0 < θ2 <  90.0
 -60.0 < θ3 <  60.0

discrepancies and thereby make any corrections or
modifications prior to any motion by the actual manipulator.
As stated earlier, any such representation must possess
two characteristics. First, the resulting picture must be
an accurate recreation of the actual manipulator. Second,
the images of the robot must be drawn rapidly enough in
order to allow for smooth animation and user interaction.
This rapid drawing or creation of the image is critical if
the user is to have the ability to change viewing positions
and zoom in to observe details of the motion in real time.
Shown in Figure 5-2 is a photograph of the final
result. The graphics techniques described in Chapter II
were used to create the image from a series of seven rigid
bodies as shown in Figure 5-3. The data base for this robot
consisted of 46 four sided polygons and 11 ten sided
cylinders for a total of 156 polygons. This is compared to
202 total polygons for the Cincinnati Milacron T3-776 robot
animation detailed in Chapter II. The generation of sorting
rules for the solid color image was far more complex for the
T3-726 robot due to the existence of the two horizontal
motor cylinders on the second and third rigid bodies.
The resulting program was capable of generating images
at a rate of 8.5 Hz on a Silicon Graphics IRIS model 2400
workstation. This rate allowed for satisfactory user
interaction and display. An improved refresh rate of 12 Hz
was obtained when the program was run on an IRIS model 3030
system. The improvement in performance was noticeable and

Figure 5-2: Animated Representation of T3-726 Manipulator

Figure 5-3: Collection of Rigid Bodies

resulted in a much more aesthetic image, but the model 2400
results were quite acceptable.
5.3 Program Structure
In order to plan a task for a manipulator to perform,
the operator must specify certain information. First the
user must state what exact point of the end effector is to
be controlled. Typically this point will be an obvious part
of the tool such as the tip of a welding rod or the point
between the two fingers of a gripper. This point is defined
by specifying two parameters, tool length and tool offset.
Tool length is defined as the distance from the tool
mounting plate to the point of interest. Tool offset is
defined as the perpendicular distance from the point of
interest to the line extending out from the center of the
tool mounting plate. Specification of these two items
allows for the precise path control of any point in any tool
attached to the manipulator.
The next information that the operator must enter is
the number of precision points that the robot will move
through and whether the robot should perform cyclic motion.
A precision point is defined as a user specified position
and orientation that the robot must move through. Cyclic
motion occurs by moving the manipulator from the last
precision point back to the first precision point so that
the entire sequence can be replayed. For each of the

precision points, the following information must be entered
and recorded:
1. Position of the tool point.
2. Orientation of the tool.
3. Function to be performed at this point.
4. Velocity to be used when the robot moves between the
previous point and this point.
Many alternate methods exist for the storage of this
information. A dynamic memory allocation method utilizing
double linked lists was used in order to allow for the
selection of any number of points without the allocation of
any unnecessary memory space. The linked list was
established via use of the data structure shown in Figure
5-4. Stored in each structure is the position, orientation,
function, and velocity data together with the memory address
of the next and previous position of the robot.
The exact meaning of each of the four items listed
above will now be explained. Shown in Figure 5-5 is a
skeletal drawing of the T3-726 robot. In the figure, a
fixed coordinate system attached to the manipulator base is
shown. The origin of the coordinate system is located at
the intersection of the axes of the first two joints. The
direction of each of the XYZ axes are as shown. In terms of
this coordinate system, the position of the tool point is
simply the XYZ coordinate values of where the tool is to be
located.

struct data_points
{
    float r[3] ;                 /* vector to tool tip */
    float v6[3] ;                /* dir. cosines of S6 vector */
    float v67[3] ;               /* dir. cosines of a67 vector */
    float wrist[3] ;             /* vector to center of wrist */
    int vel_factor ;             /* velocity factor for previous
                                    section of path */
    int function ;               /* the operation to be performed
                                    at this point */
    int fun_parameter ;          /* a parameter associated with
                                    the function */
    struct data_points *next ;   /* a pointer to the next
                                    precision point */
    struct data_points *prev ;   /* a pointer to the previous
                                    precision point */
} ;
Figure 5-4: Data Structure for Precision Points
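The data structure of Figure 5-4 can be exercised with a small append routine. The following C sketch reproduces the struct and adds a hypothetical helper, append_point, which is not part of the original program, to show how the double linked list grows as precision points are entered:

```c
#include <stdlib.h>
#include <string.h>

struct data_points {
    float r[3];                 /* vector to tool tip */
    float v6[3];                /* dir. cosines of S6 vector */
    float v67[3];               /* dir. cosines of a67 vector */
    float wrist[3];             /* vector to center of wrist (computed later) */
    int vel_factor;             /* velocity factor for previous path section */
    int function;               /* operation to be performed at this point */
    int fun_parameter;          /* parameter associated with the function */
    struct data_points *next;
    struct data_points *prev;
};

/* Append a new precision point after 'tail'; returns the new tail.
   (Illustrative helper; the wrist field is left to be filled in by the
   reverse kinematic analysis, as in equation (5.1).) */
struct data_points *append_point(struct data_points *tail,
                                 const float r[3], const float v6[3],
                                 const float v67[3], int vel_factor)
{
    struct data_points *p = malloc(sizeof *p);
    memcpy(p->r, r, sizeof p->r);
    memcpy(p->v6, v6, sizeof p->v6);
    memcpy(p->v67, v67, sizeof p->v67);
    p->vel_factor = vel_factor;
    p->function = 0;            /* default: no operation */
    p->fun_parameter = 0;
    p->next = NULL;
    p->prev = tail;
    if (tail) tail->next = p;
    return p;
}
```

Because each node carries both `next` and `prev` pointers, the path can be traversed forward for playback and backward for editing without any auxiliary index.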

Figure 5-5: Skeletal Model of T3-726 Manipulator
The tool orientation is defined by specifying the
direction cosines of the two unit vectors S6 and a67 as
shown in Figure 5-5. At this point it may appear that nine
quantities have been specified in order to define the six
position and orientation freedoms. However, three
constraints (S6 and a67 are unit vectors and must be
perpendicular) reduce the set of nine values to a set of
only six independent values.
One of the following set of four functions must be
selected for each precision point:
1. Open gripper.
2. Close gripper.
3. Delay.
4. No operation.
This set is indicative of the type of operations that can be
performed by the T3-726 robot. If the delay function is
selected, then the operator must also enter the length of
time in milliseconds that the robot should pause at the
precision point.
A velocity value from one to ten must be selected for
each of the precision points. The value selected is a
measure of how fast the manipulator will be moving at the
middle of the path as the robot moves from the previous
precision point to the present point. A value of ten causes
the manipulator to move at a rate approximately five times
faster than a velocity value of one.

The required data for each precision point can be
entered in either of two ways. The value of each item can
be typed in manually after a prompt or the positions and
orientations can be entered by use of the mouse device. The
manual entry of data is straightforward. An advantage of
this method is that data items can be easily read in from a
file to ensure precise repetition of previous paths.
The use of the mouse however to enter data offers the
user a visual interpretation of where the robot will move.
The user is given a top and side view of the robot. By
moving the mouse controlled cursor to a position in the top
view part of the screen and pressing one of the mouse
buttons, the user selects the X and Y coordinate values.
Now by moving the cursor to the side view of the robot, the
user can select the Z value of the point. With the position
of the tool point now known, the user repeats the procedure
and selects a point that the tool should point towards. The
selection of this point specifies the direction cosines of
the unit vector S6. At this point a vector perpendicular to
S6 is displayed on the screen. This new vector represents a
potential choice for the orientation vector a67. The a67
vector can be rotated about the S6 vector by pressing either
the up or down arrow keys on the keyboard until the desired
orientation is established. The pressing of any mouse
button confirms the selection.

5.4 Workspace Considerations
Once a position and orientation is selected by the
user, either manually or with the mouse, a reverse kinematic
analysis is performed to determine whether the desired
precision point is in the workspace of the robot. If it is
not in the workspace, the operator is so notified and is
asked to reenter the point.
To assist the user in choosing points that will be in
the workspace of the T3-726 robot, the wrist workspace
boundary is drawn on both the top and side views. One
criterion that must be met in order for the chosen precision
point to be within the workspace of the robot is that the
position of the wrist of the robot (the point of
intersection of the last three joints) must lie within the
wrist workspace volume. This volume is established by
ignoring the last three joints of the robot and moving the
first three joints through all possible positions in order
to establish all reachable points of the wrist. The volume
of all possible wrist locations is directly related to the
mechanism dimensions of the manipulator and to the specific
joint angle limitations of the system. It is important to
recognize however that it is totally independent from the
type or size of tool that is attached to the robot and thus
once determined, the workspace volume will never change.
Shown in Figure 5-6 is a drawing of the wrist workspace
volume. Figure 5-7 is a photograph of how the workspace
volume is displayed for the operator in order to assist in

Figure 5-6: Manipulator Workspace
Figure 5-7: Top and Side Views of Workspace

the selection of precision points. One fact that must be
emphasized to a new user is that only the wrist must lie
within the wrist workspace region. The tool tip may be
chosen outside of this region as long as the orientation of
the tool is such that the wrist point falls within its
allowable region.
Previously it was stated that in order for the T3-726
robot to be within its workspace, the wrist point must lie
within the wrist workspace region. This is a necessary but
not sufficient condition. In other words it is possible for
the wrist of the T3-726 robot to be within its workspace and
yet the orientation of the tool is such that the tool cannot
be properly oriented. This problem is due to the geometry
of the wrist itself. Shown in Figure 5-8 is a skeletal
drawing of the wrist of the manipulator. Of concern are the
relative directions of the unit vectors S4 and S6. It is
apparent from the figure that it is physically impossible
for S4 and S6 to be anti-parallel, due to the fact that the
twist angles α45 and α56 equal 61 degrees. Because of these
twist angles, there is a group of orientations of S6 with
respect to S4 that cannot be attained. This group of
impossible orientations is represented by a cone as shown in
Figure 5-9. Mathematically it can be stated that a given
position and orientation will be outside of the allowable
workspace if the dot product of the two unit vectors S4 and
S6 is less than the cosine of 122 degrees.
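The orientation test just described reduces to a single dot product. A minimal sketch, assuming both inputs are already unit vectors (the 122 degree limit follows from the two 61 degree twist angles):

```c
#include <math.h>

/* Return 1 if the wrist can attain the requested orientation, i.e. the
   angle between the unit vectors S4 and S6 does not exceed 122 degrees.
   A dot product below cos(122 deg) (about -0.53) means S6 points
   'inside' the cone of impossible orientations of Figure 5-9. */
int wrist_orientation_ok(const double s4[3], const double s6[3])
{
    const double cos_limit = cos(122.0 * acos(-1.0) / 180.0);
    double dot = s4[0]*s6[0] + s4[1]*s6[1] + s4[2]*s6[2];
    return dot >= cos_limit;
}
```

This check costs three multiplications and two additions per precision point, so it can be run on every intermediate point without affecting the animation rate.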

Figure 5-8: Three Roll Wrist
S6 cannot point 'inside' cone
Figure 5-9: Orientation Limits

5.5 Path Evaluation
Once two precision points are entered, both of which
are in the workspace of the robot, it must be the case that
the T3-726 robot can move between them. The simple example
of joint interpolated motion proves this. However, just
because two precision points lie in the workspace, it is not
guaranteed that the robot can perform some controlled path
motion between them.
As detailed in Chapter III, motion of the robot along
some specified path can be accomplished by generating a
series of intermediate positions and orientations along the
desired path. The example of rectilinear motion of the tool
tip was discussed in detail. In the introduction to this
chapter it was stated that one objective of this effort was
to establish an algorithm which could rapidly determine
whether motion between two precision points was possible.
This algorithm must be rapid and must not simply generate a
series of intermediate points and check whether each is in
the workspace. Such a method suffers from problems with
choice of step size and computer accuracy.
The approach to be taken here is to determine if the
desired path intersects any part of the boundary of the
reachable workspace. The first selection to be made is the
type of path. Movement along a straight line, circular arc,
or along any polynomial curve should be possible. Motion
along a straight line was considered, but it introduced one
new problem. The reachable workspace of the robot was now a
function of the tool dimensions. This added parameter might
warrant the consideration of a multitude of special cases as
the shape of the workspace varied as to the type of tool.
For the above reason, the wrist of the robot was
constrained to move along a straight line. The primary
benefit of this specification is that the wrist workspace is
independent of the type of tool and thus never changes.
Since the wrist point will now move along a rectilinear
path, the tip of the tool will follow a parametric equation
based on the movement of the wrist and the change in
orientation between successive precision points. It should
be noted that if the orientation of the tool at two
precision points is the same, then the tool tip will move
along a straight line at constant orientation.
At each precision point the position of the tip of the
tool as well as the direction cosines of the two orientation
vectors S6 and a67 are known in terms of the fixed
coordinate system attached to the base of the robot. Since
the size of the tool is also known, it is a simple matter to
calculate the position of the wrist point from the equation
w = r - S66 S6 - a67 a67        (5.1)

where w is the vector to the wrist point, r is the location
of the tool tip, S66 is the distance from the tool tip to
the wrist along the unit vector S6, and a67 is the tool
offset along the unit vector a67.
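Equation (5.1) translates directly into code. The sketch below assumes S6 and a67 are unit vectors expressed in the fixed coordinate system and that the equation subtracts the tool length and offset from the tool tip, as reconstructed above; the function name is illustrative:

```c
/* Equation (5.1): w = r - S66*S6 - a67*a67v.
   Computes the wrist point w from the tool tip position r, the tool
   length S66 measured along the unit vector S6, and the tool offset
   a67 measured along the unit vector a67v. */
void wrist_from_tool(const double r[3],
                     double S66, const double s6[3],
                     double a67, const double a67v[3],
                     double w[3])
{
    for (int i = 0; i < 3; i++)
        w[i] = r[i] - S66 * s6[i] - a67 * a67v[i];
}
```

Because the wrist workspace never changes with the tool, all subsequent path checks operate on w rather than on r.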

Path evaluation is now accomplished by determining
whether the straight line between two successive wrist
points intersects any part of the wrist workspace boundary
(see Figure 5-6). It is apparent from the figure that the
wrist workspace boundary is formed from two inner spheres
and two outer spheres with a section cut from behind the
robot due to the limitation on the first joint. Since both
wrist points must lie in the workspace, it is not necessary
to check to see if the line segment of interest intersects
the outer boundaries. What remains is to determine whether
the line segment attempts to move the robot behind its base
(in conflict with the limit on joint 1) or whether the line
segment intersects either of the 'sphere segments' which
form the inner wrist workspace boundary.
Again from Figure 5-6, it can be concluded that if the
line segment between successive wrist points intersects any
part of either of the complete inner sphere boundaries, then
the desired motion is not possible. Thus the problem is
reduced to determining whether the robot attempts to move
behind its base or whether the line segment between wrist
points intersects either of two inner spheres.
In Chapter IV a method for determining whether a line
segment intersects a sphere was presented. This method was
used here without modification to determine whether straight
line motion of the wrist point was possible with respect to
the two inner spherical voids.
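The Chapter IV routine is not reproduced here, but a standard closest-point test of a segment against a sphere accomplishes the same check. A sketch, with names of my own choosing:

```c
/* Return 1 if the segment from p1 to p2 passes through the sphere of
   radius rad centered at c. The closest point on the segment to the
   center is found by clamping the projection parameter t to [0, 1];
   this stands in for the Chapter IV method, which is not shown here. */
int segment_hits_sphere(const double p1[3], const double p2[3],
                        const double c[3], double rad)
{
    double d[3], f[3], dd = 0.0, fd = 0.0, ff = 0.0, t;
    for (int i = 0; i < 3; i++) {
        d[i] = p2[i] - p1[i];      /* segment direction */
        f[i] = c[i] - p1[i];       /* sphere center relative to p1 */
        dd += d[i]*d[i];
        fd += f[i]*d[i];
        ff += f[i]*f[i];
    }
    t = (dd > 0.0) ? fd / dd : 0.0;          /* parameter of closest point */
    if (t < 0.0) t = 0.0;                    /* clamp to the segment */
    if (t > 1.0) t = 1.0;
    double dist2 = ff - 2.0*t*fd + t*t*dd;   /* squared distance to center */
    return dist2 < rad*rad;
}
```

Working with squared distances avoids a square root, which matters when the test is applied to both inner spheres for every proposed path.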

The only case that must be analyzed then is whether
straight line motion of the wrist results in the robot
moving behind its base, which is not allowable due to the
rotational limit of the first joint. Shown in Figure 5-10
is a top view of the robot workspace. Two points P1 and P2
which represent the desired initial and final positions of
the wrist are shown. Many alternate methods could be used
to determine whether the straight line between P1 and P2
intersects the non-allowable region. One goal however is to
evaluate the path using a minimal number of mathematical
operators in order to optimize computational time. Because
of this, a method which reduces the problem to a planar case
was used.
As shown in Figure 5-10, the desired initial and final
wrist points have been projected onto the XY plane, giving
the points P1' and P2'. It can be assumed that neither of
these points lies in the non-allowable region since both
initial points must be in the workspace of the robot. If
line segment P1'P2' intersects the line segment OE, then
straight line motion is not possible. Otherwise, the
desired motion can occur.
Shown in Figure 5-11 are two coplanar line segments A
and B with end points A1, A2, B1, and B2. Since the end
points are known, unit vectors SA and SB, which point along
the direction of the line segments, are easily calculated.
In this planar case, the equation of the infinite lines
through line segments A and B are given by

Figure 5-10: Motion Behind Base
Figure 5-11: Intersection of Planar Line Segments

SA × (rA - A1) = 0        (5.2)

SB × (rB - B1) = 0        (5.3)

where rA and rB are vectors from the origin to a point on
the respective infinite line. It must be noted that
equations (5.2) and (5.3) may be multiplied by -1 in order
for the scalar products

SA × (-A1)

SB × (-B1)

to be greater than zero. At this moment points A1 and A2
will be substituted into the equation for the infinite line
through segment B. If one of the points A1 or A2 satisfies
equation (5.3), then that point lies on the infinite line
through segment B. If the results from equation (5.3) are
both greater than zero or both less than zero, then it can
be concluded that both points A1 and A2 lie on the same side
of the infinite line through segment B with respect to the
origin. In either of these cases, it is impossible for
segment A to intersect segment B and the 'no intersection'
result is concluded.

If points A1 and A2 lie on either side of the infinite
line through segment B, then the points B1 and B2 are
substituted into equation (5.2) to determine whether they
lie on opposite sides of the infinite line through segment
A. If this is the case, then it can be concluded that line
segments A and B do indeed intersect.
For the case under consideration, point B1 is the
origin and thus the total number of calculations is further
reduced. Application of this algorithm to this special case
results in only 7 additions and 10 multiplications required
to determine whether the robot moves behind its base.
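The side-of-line test of equations (5.2) and (5.3) can be written compactly with 2D cross products. The sketch below treats the general two-segment case; for the behind-the-base check, segment B would be OE with B1 at the origin, which is the special case that allows the operation count to drop to the 7 additions and 10 multiplications quoted above:

```c
/* Side-of-line test from Section 5.5: segments A (a1->a2) and
   B (b1->b2) intersect only if a1 and a2 lie on opposite sides of the
   infinite line through B, AND b1 and b2 lie on opposite sides of the
   infinite line through A. side() is the 2D cross product of the
   segment direction with (p - start); its sign gives the side. */
static double side(const double s[2], const double e[2], const double p[2])
{
    return (e[0] - s[0]) * (p[1] - s[1]) - (e[1] - s[1]) * (p[0] - s[0]);
}

int segments_intersect(const double a1[2], const double a2[2],
                       const double b1[2], const double b2[2])
{
    double d1 = side(b1, b2, a1), d2 = side(b1, b2, a2);
    double d3 = side(a1, a2, b1), d4 = side(a1, a2, b2);
    return (d1 * d2 < 0.0) && (d3 * d4 < 0.0);
}
```

The endpoint-on-line cases (a product exactly zero) are reported as 'no intersection' here, matching the conservative treatment in the text, where both wrist points are already known to lie in the allowable region.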
5.6 Calculation of Intermediate Points
Once a series of precision points is entered such that
the wrist of the robot can move along a straight line
between them, a series of intermediate positions and
orientations between these points must be determined. The
method utilized was very similar to that discussed in
Chapter III. Intermediate displacement steps were taken
based on a cosine displacement function. Similarly,
intermediate orientation changes were made based on a cosine
rotation function about the net axis of rotation.
In the previous discussion, the step size taken between
successive intermediate points was based on the total
distance travelled divided by a specified constant and on
the net rotation angle again divided by some constant. In
this case the step size was also influenced by the velocity
factor entered during the input of the precision point data.
Modification of the step size constants based on the
velocity factor resulted in greater or fewer intermediate

points being generated. Since the graphics system generates
images at a relatively constant rate, fewer intermediate
points would cause the viewer to observe a faster velocity
as the robot moved. The communication of the velocity data
and problems involved in the control of the actual robot
will be discussed later in this chapter.
As before, a reverse kinematic analysis was performed
at each of the intermediate positions and orientations.
Changes in joint angles were monitored and abrupt changes
resulted in a recursive call of the program to add extra
points in order to slow down any rapid angular changes
(which typically occurred near singularity points). One
major change was made however. In the method presented in
Chapter III, the program was terminated if an intermediate
position and orientation was outside of the workspace of the
robot. For this case however, a different strategy was
used.
Prior to determining intermediate points between two
successive precision points, it was already determined that
both precision points were in the robot workspace and that
the wrist could move on a straight line between them. It
must be noted however that the orientation change caused by
rotation about the net axis of rotation could result in an
orientation which the robot could not attain. In other
words, the wrist of the robot was guaranteed to lie in its
workspace, but the required orientation may be impossible to
achieve.

Many different methods could have been used to resolve
this problem. For example the change in orientation could
have been caused by an opposite rotation about the net
rotation axis or the orientation could have been held
constant until the end of the path in an effort to decouple
the rectilinear motion of the wrist from the orientation
change. The method which was eventually used involved
monitoring the motion to discover the exact joint angles
when the robot left its workspace (due to orientation) and
the exact set of joint angles when the robot reentered its
workspace (as it must since the precision point being moved
to was definitely in the robot workspace). Simple joint
interpolated motion was then performed between these two
positions. Utilization of this method then guaranteed that
a path would be generated which successfully moved the robot
between two given precision points.
5.7 Robot Configurations
As previously stated, the basic geometric form of the
Cincinnati Milacron T3-726 robot is identical to that of the
T3-776 robot. Because of this the reverse kinematic
analysis presented in Chapter III for the T3-776 robot can
be applied again by changing the mechanism dimensions.
In Chapter III it was shown that the total number of
solutions or configurations of the robot for a given
position and orientation was eight. This was reduced to

four for the T3-776 robot due to joint limits. A further
reduction to a maximum of two possible robot configurations
resulted from applying the specific joint angle limitations
of the T3-726 robot. The total reduction from eight to two
configurations is due to the limits on θ2 and θ3 which
prevent the robot from reaching overhead (the wrist point
cannot intersect the S1 vector) and the limit on θ3 which
prevents the robot from achieving an 'elbow down'
configuration. The two alternate configurations which do
exist lie in the wrist of the robot and are due to the
result of equation (3.53) which allows for two values of θ5
and subsequently two values for θ4 and θ6.
Once all precision points are entered, the user is
confronted with yet another choice. At the first point of
the path the user must specify which of the configurations
that the robot will start in. In Chapter III, this choice
was made somewhat haphazardly. In this case, the paths
which result when each of the choices is made must be
compared based on some criterion in order to assist the
operator in the choice of initial configuration. The
criterion selected is two-fold: first, how well does each of
the paths avoid any singularity positions, and second,
during cyclic motion, does the robot return to the same
configuration that it started from? It should be noted that
if the robot does not return to its initial configuration,
then simple joint interpolated motion is used to change the
configuration.

5.8 Singularity Analysis
A singularity position of a robot manipulator is
defined as a particular position of the robot such that the
set of screws which represent the joints of the robot are
not linearly independent. In other words, the robot is in a
position such that typically at least one degree of freedom
is lost.
At such singularity positions the normal reverse
kinematic analysis fails. For example it is possible for
the 4th and 6th joints of the T3-726 robot to become
colinear. When this
occurs, the value of
4
may be
arbitrarily
selected
and
a corresponding
value
Of e6
calculated.
Typically
when
the manipulator
moves
near a
singularity
position,
rapid
changes in the
joint
angles
within the wrist of the robot will occur. For this reason
it is desirable to select a path which avoids any
singularity positions as much as possible.
The identification of singularities for the T3-726 type
of robot is a relatively easy task. This analysis was
performed by R. Soylu [28] and is presented here for
completeness.
In reference [29] it is shown that the relationship
between the angular velocities of each of the six joints of
the T3-726 manipulator and the resulting screw that the end
effector of the robot will move along is represented by the
6x6 Jacobian matrix, J. This statement is expressed by the
equation

ŝ = J ω        (5.4)

where ŝ is a 6x1 screw which represents the net motion of
the end effector and ω is a 6x1 vector comprised of the six
angular velocity terms.

Another way of interpreting equation (5.4) is by
stating that the net screw motion of the end effector of the
manipulator is equal to the linear combination of the six
infinitesimal rotations about each of the six joints of the
robot. This is written as

ŝ = ω1 $1 + ω2 $2 + ... + ω6 $6        (5.5)

where $1 is a 6x1 vector which represents the Plucker line
coordinates (screw coordinates) of the first joint, etc.
Comparison of equations (5.4) and (5.5) leads to the
conclusion that the 6x6 Jacobian matrix is simply comprised
of the six screws $1, $2, ..., $6.
As presented in reference [29], the screw coordinates
of each revolute joint of the robot are comprised of the six
scalar quantities L, M, N, P, Q, R. The quantities L, M, N
represent the direction cosines of a unit vector which
points along the direction of the joint axis while P, Q, R
represent the respective moments of this unit vector with
respect to the origin of some coordinate system and thus are
origin dependent terms. If the six joint angles of the

robot are known, it is a straightforward task to generate
the coordinates of the six screws of the robot.
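Generating the (L, M, N; P, Q, R) coordinates of a revolute joint's screw is a single cross product once the unit axis direction and a point on the axis are known. A minimal sketch:

```c
/* Plucker screw coordinates (L,M,N ; P,Q,R) of a revolute joint whose
   unit axis direction is s and whose axis passes through the point p.
   (L,M,N) are the direction cosines of the axis and (P,Q,R) = p x s
   are its moments about the origin (origin-dependent terms). */
void joint_screw(const double s[3], const double p[3], double screw[6])
{
    screw[0] = s[0];
    screw[1] = s[1];
    screw[2] = s[2];
    screw[3] = p[1]*s[2] - p[2]*s[1];
    screw[4] = p[2]*s[0] - p[0]*s[2];
    screw[5] = p[0]*s[1] - p[1]*s[0];
}
```

An axis through the origin produces zero moments, which is exactly why placing the origin at the intersection of two joint axes (as done below) zeroes out two columns of the Jacobian.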
It was previously stated that the robot would be in a
singularity position if the set of screws which represent
the joints of the robot are linearly dependent. This
dependency can be determined by monitoring the determinant
of the Jacobian matrix. If the determinant of the Jacobian
matrix equals zero, then the robot is in a singularity
configuration.
The value of each coefficient of the Jacobian matrix
is dependent on the selection of the coordinate system to be
used. The greatest simplification of the Jacobian resulted
when the coordinate system was attached to the manipulator
such that the origin was at the intersection of the vectors
S3 and S4 as shown in Figure 5-12. The X axis lies along
a34 and the Z axis along S3. Use of this coordinate system
resulted in the following expressions for the six screws of
the system:
$1 = ( S1 ; -a23 a23 × S1 )
$2 = ( S2 ; -a23 a23 × S2 )
$3 = ( S3 ; 0, 0, 0 )
$4 = ( S4 ; 0, 0, 0 )
$5 = ( S5 ; S44 S4 × S5 )
$6 = ( S6 ; S44 S4 × S6 )        (5.6)

Figure 5-12: Coordinate System for Singularity Analysis

The direction cosines of each of the vectors of the
manipulator in terms of the coordinate system described are
shown in Table 5-2.
Substitution of known mechanism dimensions, and
utilization of subsidiary formulas described in [21] result
in the following expressions for the six screw coordinates:
$1 = ( s2+3, c2+3, 0 ; 0, 0, -a23 c2 )
$2 = ( 0, 0, 1 ; a23 s3, a23 c3, 0 )
$3 = ( 0, 0, 1 ; 0, 0, 0 )
$4 = ( 0, -1, 0 ; 0, 0, 0 )                                (5.7)
$5 = ( s45 s4, -c45, -s45 c4 ; S44 s45 c4, 0, S44 s45 s4 )
$6 = ( X̄54, -Z5, X54 ; -S44 X54, 0, S44 X̄54 )

The expressions si and ci represent the sine and
cosine of the angle θi. Similarly, the expressions s2+3
and c2+3 represent the sine and cosine of the angle
(θ2 + θ3). The terms used in $6 are defined as follows:

X̄54 = X5 c4 - Y5 s4
X54 = X5 s4 + Y5 c4
X5 = s56 s5
Y5 = -(s45 c56 + c45 s56 c5)
Z5 = c45 c56 - s45 s56 c5

Table 5-2: Direction Cosines

S3 = ( 0, 0, 1 )                  a34 = ( 1, 0, 0 )
S4 = ( 0, -s34, c34 )             a45 = ( c4, -U*43, U43 )
S5 = ( X4, Y4, Z4 )               a56 = ( W54, -U*543, U543 )
S6 = ( X54, Y54, Z54 )            a67 = ( W654, -U*6543, U6543 )
S7 = ( X654, Y654, Z654 )         a71 = ( W7654, -U*76543, U76543 )
S1 = ( X7654, Y7654, Z7654 )      a12 = ( W17654, -U*176543, U176543 )
S2 = ( X17654, Y17654, Z17654 )   a23 = ( c3, -s3, 0 )

Singularity positions are identified by calculating the
determinant of the Jacobian matrix for a given set of joint
angles. If the determinant equals zero, then the
manipulator is at a singularity position. Using the
simplified expressions for the screw coordinates, the
determinant of the 6x6 Jacobian matrix can be expanded by
hand. The number of zero elements in the matrix makes this
a relatively simple task.
After proper expansion and reduction of terms, the
determinant of the Jacobian matrix reduces to the following
expression:
|J| = a23 S44 s45 s56 (s5)(c3)(S44 s2+3 + a23 c2)        (5.8)
It is obvious that the determinant of the Jacobian will
equal zero only if one of the terms in parentheses in
equation (5.8) equals zero.
5.9 Interpretation of Singularity Results
It is interesting to note what actual robot
configuration occurs when each of the factors of equation
(5.8) becomes equal to zero. If c3 equals zero, then θ3
must equal 90 or 270 degrees. This occurs if the vector a23
becomes colinear with the vector S4. This is a singularity
position for the robot because the vectors S2, S3, and the
wrist point are all coplanar. The singularity represents
the transition point from an elbow up to an elbow down
configuration. The T3-726 robot cannot achieve this
singularity configuration due to the limitation on the third
joint (-60 to 60 degrees).

If the factor (S44 s2+3 + a23 c2) equals zero, then the
wrist point of the robot intersects the vector S1. This
represents a singularity because at this instant, four of
the screws of the system would intersect at a point. For
the T3-726 robot the lengths S44 and a23 are equal so that
the factor can be simplified to (s2+3 + c2). Again, due to
joint angle limitations, the T3-726 robot cannot achieve
this singularity configuration.
The last factor (s5) indicates that a singularity will
exist if θ5 equals 0 or 180 degrees. In either of these
cases, the vectors S4, S5, and S6 are coplanar and thus the
screws $4, $5, and $6 are dependent. The case when θ5
equals 180 degrees also results in the vectors S4 and S6
being colinear and thus the screws $4 and $6 are identical.
The T3-726 robot is capable of achieving this type of
singularity and this case must be monitored and avoided if
possible. It is important to note however that if
artificial joint limits were placed on θ5 in order to avoid
the singularity position, then there would be a unique
solution to the reverse kinematic analysis rather than the
two solutions that actually exist.
5.10 Selection of Configuration
It was previously mentioned that the user must select
the starting configuration of the manipulator at the first
precision point. The singularity analysis was used to

165
assist the user in evaluating and comparing the two
alternatives.
In a manner similar to that discussed in Chapter III, a
linked list of joint angles was generated for each of the
two possible starting configurations. Each element of each
of the two linked lists represents the joint angles for the
robot at a particular location along the path. For each of
the linked lists, a plot of the value of the sine of θ5 for
each of the elements of the list was displayed for the user.
The user could then select the starting configuration which
resulted in the motion which best avoided the condition of
sine θ5 equal to zero. An example of the type of
singularity plot provided to the user is shown in Figure
5-13.
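The comparison the plot supports can also be stated numerically. The dissertation leaves the final choice to the user via the display of Figure 5-13; the sketch below merely automates the same criterion, assuming each candidate path is represented as a Python sequence of six-element joint-angle tuples rather than the linked list used in the actual program.

```python
import math

def worst_sin_theta5(joint_angle_list):
    """Smallest |sin(theta5)| encountered along a candidate path.

    joint_angle_list: sequence of 6-tuples (theta1..theta6), one per
    path location, mirroring the linked list described in the text.
    """
    return min(abs(math.sin(angles[4])) for angles in joint_angle_list)

def select_configuration(list_a, list_b):
    """Return 0 or 1: which candidate stays farther from sin(theta5) = 0."""
    return 0 if worst_sin_theta5(list_a) >= worst_sin_theta5(list_b) else 1
```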
5.11 Preview of Motion
Once the starting configuration is selected, the
resulting motion of the robot can be displayed on the
graphics screen. Pictures of the robot are generated by
traversing the appropriate linked list corresponding to the
selected initial configuration. For each element of the
linked list, a picture of the robot is drawn with the joint
angles set as specified in that element of the list.
In this method, the user is able to observe the
manipulator's motion prior to any movement by the actual
manipulator. In addition to the robot motion, the path,

166
Figure 5-13: Display of Singularity Results

167
robot function, and velocity of the robot are also
accurately displayed. In other words, the animated robot
will open and close its gripper, delay, and move with
different velocities as specified by the user when the
precision points were entered.
Just as in the work described in Chapter II, the
operator also has complete freedom to interact with the
animation in order to change
viewing position, zoom in, or freeze the motion at any time.
If desired, the operator also has the option of
observing the path had the other initial configuration been
selected. A pop-up menu offers the user this choice and a
second selection of this option will display the original
path again. Once the user is satisfied with the displayed
path, the pop-up menu is again used and the operator can
elect to send the path data to the actual robot. Once
appropriate information is transmitted, the actual robot
will duplicate the path and programmed functions just as was
displayed on the graphics system.
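The preview loop described in this section can be sketched as a traversal of the selected linked list. The node type and the caller-supplied draw_robot routine below are illustrative stand-ins, not the dissertation's actual data structures; the rendering itself is the Chapter II machinery.

```python
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class PathNode:
    joint_angles: tuple                   # (theta1..theta6) at this path location
    gripper_open: bool                    # programmed robot function
    delay_s: float                        # programmed pause at this point
    next: Optional["PathNode"] = None     # next element of the linked list

def preview_motion(head: PathNode, draw_robot) -> List[tuple]:
    """Walk the list, drawing the robot once per element.

    draw_robot(joint_angles, gripper_open) is assumed to redraw the
    animated robot in the given configuration.  Returns the sequence of
    joint-angle sets in traversal order, for inspection.
    """
    drawn = []
    node = head
    while node is not None:
        draw_robot(node.joint_angles, node.gripper_open)
        drawn.append(node.joint_angles)
        node = node.next
    return drawn
```

Switching to the alternate starting configuration amounts to handing this loop the head of the other linked list.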
5.12 Communication with Robot Controller
Communication of path data to the robot controller was
accomplished with the assistance of Mr. Mark Locke and Mr.
David Poole of the U.S. Army Belvoir Research and
Development Center. They developed the appropriate
interface protocol and procedures required to perform this
task.

168
One problem that had to be resolved, however, was that
it was not possible simply to transmit joint angles to the
robot controller. Rather, the robot required
the X, Y, Z position of the tool together with orientation
data defined as roll, pitch, and yaw. When given the
position and orientation data, the robot controller would
then calculate its own joint angles and position the robot
accordingly.
The inability to communicate joint angles posed a
serious problem in that the user no longer had control over
the configuration the robot would assume. In other
words, the robot would move to the correct position and
orientation as desired, but it would select the initial
configuration based on some predefined criterion. Typically
the robot would move to the configuration that was 'closest'
to the home position configuration.
A second problem involved the transformation from joint
angle information to the correct roll, pitch, and yaw
orientation data. This transformation introduced several
new algebraic indeterminacies which served to complicate
the process.
5.13 Roll, Pitch, Yaw Calculations
For the T3-726 manipulator, the pitch orientation datum
is defined as the angle between the fixed vertical Z axis
and the S6 vector. The sine and cosine of this angle, D,

169
can be readily determined since the six joint angles of the
robot are known.
The first step in the calculation of D involves
determining the direction cosines of the vector S6 in terms
of the fixed coordinate system. Appropriate sets of
direction cosines are tabulated in [21] and thus the vector
S6 can be assumed to be known. The cosine of angle D is
simply the Z component of the unit vector S6. The sine of
angle D is quickly calculated as the magnitude of the cross
product of the S6 vector and the k vector. It must be noted
that in this analysis, the sine of angle D will always be
positive and thus D will always lie between 0 (vertical up)
and 180 (vertical down) degrees.
The yaw orientation is defined as the angle, E, between
the fixed X axis and the projection of the S6 vector onto
the XY plane. The projected unit vector is simply
determined by setting the third component of S6 to zero and
then unitizing the resulting vector. The cosine of the
angle E is now equal to the dot product of the projected
vector and the i vector. The sine of angle E is determined
by taking the cross product of i and the projected vector.
The sine of E will equal the magnitude of the resulting
vector. The sine will then be set to either a positive or
negative value depending on whether the vector formed during
the cross product points along or opposite to k.
A problem arises in the calculation of E if the value
of D is near 0 or 180. In other words, if the tool is

170
pointed straight up (or down), then there is no projection
of S6 on the XY plane. In this case, the previous value of
E is sent to the robot controller. Communication of actual
joint angles rather than roll, pitch, and yaw data would
have avoided this problem.
The last orientation datum, roll, is somewhat more
difficult to calculate. Shown in Figure 5-14 is the angle
R, defined as the angle between the vectors a67 and
V. The vector V is defined so as to be coplanar with the
vectors k and S6 and also perpendicular to S6. V can be
determined by the following equation:
V = k - (S6·k)S6    (5.9)
With the vector V now known, the sine and cosine of angle R
can be readily determined as before via application of cross
and dot products.
As with the calculation of the angle E, an algebraic
indeterminacy results if the tool is pointed either straight
up or straight down. In either of these cases, the vector V
will be undefined. For this case however an alternate
solution method can be found in which equation (5.9) is
replaced such that the vector V is found by successive
rotations of the i vector by the angles E and D about the
respective axes Z and Y.
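The pitch, yaw, and roll computations of Section 5.13 can be collected into one routine. The sketch below takes the unit vectors S6 and a67 expressed in fixed coordinates; the sign convention chosen for the roll angle (measured about S6) and the vertical-tool fallback by rotation of i are assumptions of this sketch, not details fixed by the text.

```python
import math

def _unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def pitch_yaw_roll(s6, a67, prev_yaw=0.0, eps=1e-9):
    """Pitch D, yaw E, roll R (radians) from the unit vectors S6 and a67."""
    s6, a67 = _unit(s6), _unit(a67)
    # Pitch: cos D is the Z component of S6; sin D = |S6 x k| >= 0,
    # so D always lies between 0 (vertical up) and pi (vertical down).
    cos_d = s6[2]
    sin_d = math.hypot(s6[0], s6[1])
    pitch = math.atan2(sin_d, cos_d)
    # Yaw: angle from the X axis to the projection of S6 on the XY plane.
    # When the tool is vertical the projection vanishes, so reuse the
    # previous value, as the text prescribes.
    yaw = prev_yaw if sin_d < eps else math.atan2(s6[1], s6[0])
    # Roll: angle between a67 and V = k - (S6.k)S6  (equation 5.9).
    k = (0.0, 0.0, 1.0)
    if sin_d < eps:
        # Vertical tool: V from rotating i by E about Z, then by D about Y
        # (the alternate construction mentioned in the text).
        ce, se = math.cos(yaw), math.sin(yaw)
        v = (ce * cos_d, se, -ce * sin_d)
    else:
        v = _unit(tuple(k[i] - cos_d * s6[i] for i in range(3)))
    # Assumed sign convention: roll measured about S6 via cross and dot.
    roll = math.atan2(_dot(_cross(v, a67), s6), _dot(v, a67))
    return pitch, yaw, roll
```

For a horizontal tool along X with a67 along Z, this yields D = 90 degrees and zero yaw and roll, matching the geometric definitions above.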

171
V is coplanar with k and S6
Figure 5-14: Calculation of Roll Parameter

172
5.14 Results and Conclusions
The resulting program met the goals of providing a user
with an interactive method of planning and evaluating robot
motions. Developments made in other chapters were combined
with new advancements to provide a unique method of
inputting path data, evaluating whether straight line
motions are possible, and evaluating paths based on their
avoidance of singularity configurations.
Although the analysis was aimed particularly at the
Cincinnati Milacron T3-726 robot, straightforward
modifications could be made to apply the results to any type
of industrial robot. Additions to the program, such as the
ability to move specific precision points further away from
singularities and the inclusion of other types of
non-rectilinear motions, will make the system more effective.
The system proved to be much more efficient for
planning robot motions than 'teach pendant' and other
methods. One requirement that exists, however, is the
ability to identify and display objects which are in the
workspace of the robot. If the location of these objects
were determined, autonomous tasks could be planned and
executed which manipulate the objects as desired. An object
detection system, such as that described in Chapter IV,
would make this system a task planner rather than just a
motion planner.
Lastly, the resulting work must be applied to two or
more robots cooperating in parallel. Task specification

173
coupled with a high level task language would make human
interaction with robots, and thus man-machine productivity,
a much more efficient process.

CHAPTER VI
DISCUSSION AND CONCLUSIONS
Advanced computer graphics hardware has been utilized
to generate real time, interactive, solid color images of
industrial robots. This feature offers the user clear
images which assist in direct control (telepresence) and
workcell modeling tasks.
It is apparent that the primary purpose of the graphics
system is to help the user to command the robot manipulator.
The ability to display an abundance of data in a readily
acceptable format significantly improves this man-machine
interaction. Future work must emphasize this factor and new
and better techniques of data display must be incorporated.
For example, during a workcell modeling task, the user may
be trying to position a vision system camera and a light
source. Shadows may impact the performance of the vision
system and must be displayed. Also, it may be helpful to
display objects as being transparent so that hidden items
may be identified and the camera repositioned accordingly.
As previously stated, the additional realism which is
added to the display will slow down image refresh rates.
Slow image generation will result in poor animation and
174

175
awkward interaction by the user. Future trends in graphics
hardware development, however, will make such applications
practical and feasible. Improvements in graphics
performance by a factor of 1.5 are anticipated within the
next year. Inventive new approaches to the workcell
modeling task will be forthcoming.
Planning tasks for two cooperating manipulators is an
additional problem which has not been addressed. An example of
a task for two robots would be passing a work piece back and
forth. The graphics system must allow the user to plan some
actions for the first robot and then some for the second.
The database must be developed to store each robot's task
data plus communication items between the robots.
It was previously mentioned that certain points planned
on a graphics screen and communicated to the robot must be
'touched up'. This is because certain items in the
workspace of the robot are not positioned exactly as
displayed by the graphics system. This problem is magnified
for the case of two cooperating robots. Slight inaccuracies
in position data can result in excessive forces being placed
on the work piece being held by the two robots. This
problem cannot be readily 'touched up' by modifying certain
points. Rather, force information must be monitored and each
robot's position modified in real time in order to reduce
the net non-gravitational force and torque to zero. The
graphics system then serves as a global planner, but a
robust control system must still perform the immediate robot

176
control. The combination of the graphics system with such a
sensor integrated control system is a topic for much further
research.

REFERENCES
1. Heginbotham, W.B., Dooner, M., and Kennedy, D.N.,
"Analysis of Industrial Robot Behavior by Computer
Graphics," Third CISM IFToMM Symposium on Robotics, Warsaw,
1975.
2. Heginbotham, W.B., Dooner, M., and Kennedy, D.N.,
"Computer Graphics Simulation of Industrial Robot
Interactions," Third CIRT, Seventh ISIR, Tokyo, 1977.
3. Heginbotham, W.B., Dooner, M., and Case, K., "Rapid
Assessment of Industrial Robots Performance by Interactive
Computer Graphics," Ninth ISIR, Tokyo, 1979.
4. Warnecke, H.J., Schraft, R.D. and Schmidt-Streier, U.,
"Computer Graphics Planning of Industrial Robot
Applications," Third CISM IFToMM Symposium on Robots,
Warsaw, 1975.
5. Schraft, R.D., and Schmidt, U., "A Computer-Aided Method
for the Selection of an Industrial Robot for the Automation
of a Working Place," Third CIRT, Sixth ISIR, Paris, 1976.
6. Liegeois, A., Fournier, A., Alden, M.J., and Barrel, P.,
"A System for Computer-Aided Design of Robots and
Manipulators," Tenth ISIR, Paris, 1980.
7. Haseyawa, Y., Masaki, I., and Iwasawa, M., "A
Computer-Aided Robot Operation Systems Design (Part I),"
Fifth ISIR, Tokyo, 1975.
8. Derby, S., "Kinematic Elasto-Dynamic Analysis and
Computer Graphics Simulation of General Purpose Robot
Manipulators," PhD dissertation, Rensselaer Polytechnic
Institute, 1981.
9. Leu, M., "Elements of Computer Graphic Robot
Simulation," Proceedings of the 1985 ASME Computers in
Engineering Conference, Boston, August 1985.
10. Ravani, B., "Current Issues in Robotics," presentation
made at the 19th Mechanisms Conference, Columbus, Ohio, Oct
1986.
177

178
11. Anderson, C. "Computer Applications in Robotic Design,"
Proceedings of the 1986 ASME Computers in Engineering
Conference, Chicago, July 1986.
12. Smith, R., "Robot Workmate: Interactive Graphic Off Line
Programming," SRI International, Menlo Park, Ca., Feb 1986.
13. Mahajan, R., and Walter, S., "Computer Aided Automation
Planning: Workcell Design and Simulation,"
Robotics Engineering, Vol.8, Number 8, pp. 12-15, August
1986.
14. Newell, M.E., Newell, R.G., and Sancha, T.L., "A New
Approach to the Shaded Picture Problem," Proceedings of the
ACM National Conference, New York, 1972.
15. Newell, M.E., "The Utilization of Procedure Models in
Digital Image Synthesis," PhD. dissertation, University of
Utah, 1974.
16. Schumacker, R.A., Brand, B., Gilliland, M., and Sharp,
W., "Study for Applying Computer-generated Images to Visual
Simulation," U.S. Air Force Human Resources Lab Technical
Report, Sept. 1969.
17. Appel, A., "Some Techniques for Shading Machine
Renderings of Solids," AFIPS 1968 Spring Joint Computer
Conference.
18. Goldstein, R.A., and Nagel, R., "3-D Visual Simulation,"
Simulation, pp. 25-31, January 1971.
19. Catmull, E., "Computer Display of Curved Surfaces,"
Proceedings of the IEEE Conference on Computer Graphics,
Pattern Recognition, and Data Structures, May 1975.
20. Myers, A.J., "An Efficient Visible Surface Program,"
Report to the NSF, Ohio State University Computer Graphics
Research Group, July 1975.
21. Duffy, J., Analysis of Mechanisms and Robotic
Manipulators, John Wiley and Sons, New York, 1980.
22. Pieper, D.L., and Roth, B., "The Kinematics of
Manipulators Under Computer Control," Proceedings 2nd
International Congress on the Theory of Machines and
Mechanisms, Vol. 2, Kupari, Yugoslavia, 1969.
23. Lipkin, H., and Duffy, J., "A Vector Analysis of Robot
Manipulators," Recent Advances in Robotics, John Wiley and
Sons, New York, 1985.

179
24. Paul, R., Robot Manipulators, Mathematics, Programming,
and Control, MIT Press, Cambridge, Ma., 1981.
25. Lipkin, H., "Kinematic Control of a Robotic Manipulator
with a Unilateral Manual Controller," master's thesis,
University of Florida, May 1983.
26. Morton, A., "Design and Implementation of a Joystick
Based Robot Teaching System," PhD dissertation, University
of Missouri at Rolla, 1985.
27. Duffy, J., Harrell, R., Matthew, G., and Staudhammer,
J., "Enhanced Animated Display of Man-Controlled Robot,"
University of Florida, College of Engineering, Final Report
submitted to McDonnell Douglas Astronautics Co., April 1986.
28. Soylu, R., PhD dissertation in preparation, University
of Florida, 1987.
29. Duffy, J. and Sugimoto, K., "Special Configurations of
Industrial Robots," 11th International Symposium on
Industrial Robots, Tokyo, Oct 1981.

BIOGRAPHICAL SKETCH
Carl D. Crane III was born in Troy, New York, on 14
April 1956. After graduating from Canajoharie High School
he began his college studies at Rensselaer Polytechnic
Institute. He received his Bachelor of Science degree in
1978 and his Master of Engineering degree in 1979 in the
field of mechanical engineering. During the years 1979
through 1984 he served as an officer in the U.S. Army Corps
of Engineers. In 1984 he continued his graduate studies at
the University of Florida.
180

I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.
Joseph Duffy, Chairman
Professor of Mechanical Engineering
I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.
I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.
I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.
Ralph G. Selfridge
Professor of Computer and Information Sciences

I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.
John Staudhammer
Professor of Electrical Engineering
This dissertation was submitted to the Graduate Faculty of
the College of Engineering and to the Graduate School and
was accepted as partial fulfillment of the requirements for
the degree of Doctor of Philosophy.
May 1987
Dean, Graduate School

Internet Distribution Consent Agreement
In reference to the following dissertation:
Author: Carl Crane
Title: Motion planning and control of robot manipulators via application of a
computer graphics animated display
Publication Date: 1987
I, Carl D. Crane III, as copyright holder for the aforementioned dissertation, hereby grant
specific and limited archive and distribution rights to the Board of Trustees of the
University of Florida and its agents. I authorize the University of Florida to digitize and
distribute the dissertation described above, for nonprofit, educational purposes via the
Internet or successive technologies.
This is a non-exclusive grant of permissions for specific off-line and on-line uses for an
indefinite term. Off-line uses shall be limited to those specifically allowed by "Fair Use"
as prescribed by the terms of United States copyright legislation (cf. Title 17, U.S. Code)
as well as to the maintenance and preservation of a digital archive copy. Digitization
allows the University of Florida to generate image- and text-based versions as
appropriate and to provide and enhance access using search software.
This grant of permissions prohibits use of the digitized versions for commercial use or
profit.
Signature of Copyright Holder
Carl D. Crane III
Printed or Typed Name of Copyright Holder/Licensee
16 June 2008
Date of Signature
Return this form to: Cathy Martyniak
Preservation Dept.,
University of Florida Libraries
P.O. Box 117007
Gainesville, FL 32611-7007



NJ
U1
Figure 2-6: Transformation to Viewing Coordinate System


134
Figure 5-2: Animated Representation of T3-726 Manipulator


34
If the two bounding boxes are not disjointed then all
the points of one object are substituted into the equation
of the infinite plane that contains the second object. If
the resulting value of the equation for all points is
greater than zero (assuming that the viewer's side of the
infinite plane is positive), then the first object may cover
the second object. Similarly, the points of the second
object are substituted into the equation of the infinite
plane containing the first object. Again whether the sign
of the equation is greater or less than zero determines
whether one object may overlap the other. Shown in Table
2-2 are all the possible combinations of signs that may
occur. Figure 2-10 shows a representative sample of the
types of overlap conditions that can occur for two plane
segments.
If it is concluded from the previous test that no
overlap can occur, then the comparison is complete. However
if an overlap may occur, then the projections of the two
objects onto the screen are checked to determine if they do
indeed overlap. This is done by determining whether any
lines of either of the two projections cross. If any of the
lines do cross, then the plane segments do overlap. If none
of the lines cross, then it may be the case that one of the
projections lies completely inside the other. One point of
each of the projected plane segments is checked to determine
whether it is inside the other projected polygon.


108
4.3.2. Six degree of freedom universal joystick
Two six degree of freedom joysticks were utilized in
the telepresence system. These are shown in Figures 4-2 and
4-3. The gripper of each joystick can be translated and
twisted in any desired manner thereby giving the operator
six degrees of freedom. The controlling computer calculates
the position of the joystick and then determines the desired
position and orientation of the end effector of the
manipulator. Detailed descriptions of these joystick
devices are presented in references [25] and [26].
4.3.3. Obstacle detection vision system
A vision system was used to identify obstacles and then
transmit the position and orientation of each to the
computer graphics system. For this study, cubes and spheres
were the only items placed in the workspace of the robot.
The objects were all of known size and were painted white
and placed on a black background. Further information
dealing with the vision system can be found in reference
[27] .
4.3.4. Computer graphics system
The Silicon Graphics IRIS 2400 computer graphics
workstation was selected to provide the interactive
animation capability required for the system. As previously
discussed, the ability of this workstation to rapidly draw
shaded polygons was essential in order to allow for real
time interaction with the animated display.


150
The only case that must be analyzed then is whether
straight line motion of the wrist results in the robot
moving behind its base, which is not allowable due to the
rotational limit of the first joint. Shown in Figure 5-10
is a top view of the robot workspace. Two points and P2
which represent the desired initial and final positions of
the wrist are shown. Many alternate methods could be used
to determine whether the straight line between P^ and P2
intersects the non-allowable region. One goal however is to
evaluate the path using a minimal number of mathematical
operators in order to optimize computational time. Because
of this, a method which reduces the problem to a planar case
was used.
As shown in Figure 5-10, the desired initial and final
wrist points have been projected onto the XY plane, giving
the points P^ and P2> can be assumed that neither of
these points lies in the non-allowable region since both
initial points must be in the workspace of the robot. If
line segment P^P2 intersects the line segment OE, then
straight line motion is not possible. Otherwise, the
desired motion can occur.
Shown in Figure 5-11 are two coplanar line segments A
and B with end points A^, A2, B^, and B2. Since the end
points are known, unit vectors SA and £B, which point along
the direction of the line segments, are easily calculated.
In this planar case, the equation of the infinite lines
through line segments A and B are given by


s,
-J
Figure 3-10: Moving and Fixed Coordinate Systems


38
sorting method to take advantage of the hardware
improvements resulted in the rapid generation of full color,
solid images at a rate of over 10 frames per second. The
hardware system and software modifications will be detailed
in the following sections of this chapter.
2.6 Description of Hardware System
The Silicon Graphics IRIS model 2400 workstation is a
68010 based 3-D system designed to function as a stand-alone
graphics computer. It is capable of generating three
dimensional, solid color images in real time without the
need for a main frame computer.
The unique component of the IRIS is a custom VLSI chip
called the Geometry Engine. A pipeline of twelve Geometry
Engines accepts points, vectors, and polygons in user
defined coordinate systems and transforms them to the screen
coordinate system at a rate of 69,000 3-D floating point
coordinates per second.
The display of the IRIS system is a 19 inch monitor
with a screen resolution of 1024 pixels on each of 768
horizontal lines. The monitor is refreshed at a rate of 60
Hz and provides flicker free images. The image memory
consists of eight 1024 x 1024 bit planes (expandable to 32
g
bit planes). An eight bit plane system will allow for 2
(256) colors to be displayed simultaneously on the screen.


ACKNOWLEDGEMENTS
The author wishes to thank his wife, Sherry, and his
children Theodore, Elisabeth, and Stephanie for their
patience and support. Also, special thanks go to his
committee chairman, Professor Joseph Duffy, for his
encouragement and guidance. He is also grateful to the
members of his graduate committee who all showed great
interest and provided considerable insight.
This work was funded by the U.S. Army Belvoir Research
as
and Development Center, McDonnell Dougl
Company, and Honeywell Corporation.
Astronautics


LIST OF FIGURES
Page
Fig. 2- 1 Cincinnati Milacron T3-776 Manipulator ... 13
Fig. 2- 2 Conceptual Sketch 14
Fig. 2- 3 Collection of Rigid Bodies 15
Fig. 2- 4 Graphics Data Structure 17
Fig. 2- 5 Skeletal Model of T3-776 Manipulator .... 20
Fig. 2- 6 Transformation to Viewing Coord. System ... 25
Fig. 2- 7 Parallel and Perspective Transformation ... 28
Fig. 2- 8 Wire Frame Model of T3-776 Manipulator ... 31
Fig. 2- 9 Backfacing Polygons Removed 32
Fig. 2-10 Plane Segments with Infinite Planes ..... 36
Fig. 2-11 Animated Representation of T3-776 Manipulator 50
Fig. 2-12 Animated Representation of T3-776 Manipulator 50
Fig. 3- 1 Spatial Link 53
Fig. 3- 2 Revolute Pair 54
Fig. 3- 3 Cincinnati Milacron T3-776 Manipulator ... 56
Fig. 3- 4 Skeletal Model of T3-776 Manipulator .... 57
Fig. 3- 5 Hypothetical Closure Link 61
Fig. 3- 6 Hypothetical Closure when £5^ | | S^ 65
Fig. 3- 7 Location of Wrist Point 68
Fig. 3- 8 Determination of 2nd and 3rd Joint Angles . 71
Fig. 3- 9 Three Roll Wrist 76
Fig. 3-10 Moving and Fixed Coordinate Systems 77
Fig. 3-11 Forward Analysis 85
vii


74
C2 ~ ^(a23"S44COS3) (-P2 -12*
(S44sin<|>3) (Rp2,S1)]/|Rp2|2 (3.39)
S2 = ,^a23S44COS3^-P2 -1* +
(S44sin3) (Rp2*a12)]/|Rp2|2 (3.40)
Equations (3.39) and (3.40) result in a unique value for 2
corresponding to each pair of calculated values of ^ and
3 From (3.27) the sine and cosine of 2 can be written
as
cos 2 = cos (e2+90)
= -s2 (3.41)
sin <|>2 = sin(2+90)
= c 2 (3.42)
As before each calculated value of 2 must be checked to
see if it is in the allowable range of rotation. If it is
not, then the maximum number of possible configurations of
the robot which can position the hand as specified is
further reduced.
At this point up to two sets of the three displacement
angles ^, 2, and 3 are known which position the point
P2 as required. However if there were no joint angle
limitations at joints 1, 2, and 3, there would be four
possible sets of displacements which would position point P2


133
discrepancies and thereby make any corrections or
modifications prior to any motion by the actual manipulator.
As stated earlier, any such representation must possess
two characteristics. First, the resulting picture must be
an accurate recreation of the actual manipulator. Second,
the images of the robot must be drawn rapidly enough in
order to allow for smooth animation and user interaction.
This rapid drawing or creation of the image is critical if
the user is to have the ability to change viewing positions
and zoom in to observe details of the motion in real time.
Shown in Figure 5-2 is a photograph of the final
result. The graphics techniques described in Chapter II
were used to create the image from a series of seven rigid
bodies as shown in Figure 5-3. The data base for this robot
consisted of 46 four sided polygons and 11 ten sided
cylinders for a total of 156 polygons. This is compared to
202 total polygons for the Cincinnati Milacron T3-776 robot
animation detailed in Chapter II. The generation of sorting
rules for the solid color image was far more complex for the
T3-726 robot due to the existence of the two horizontal
motor cylinders on the second and third rigid bodies.
The resulting program was capable of generating images
at a rate of 8.5 Hz on a Silicon Graphics IRIS model 2400
workstation. This rate allowed for satisfactory user
interaction and display. An improved refresh rate of 12 Hz
was obtained when the program was run on an IRIS model 3030
system. The improvement in performance was noticeable and


100
position (discussed in Chapter V) or when the manipulator
moves from a region where four configurations are possible
to a region where only two configurations exist.
Such cases are identified by establishing a maximum
allowable angular change for any joint angle as the
manipulator moves from the jth to the j+1 position.
Satisfactory results were obtained when this maximum
allowable change was set at 8 degrees.
If it is found that any joint angle changes by more
than 8 degrees from the jth to the closest j+1 position, the
program first tries to install more intermediate points.
This is done by a recursive call to the motion program
itself, with the jth position representing the initial
position and orientation and the j+1 position representing
the final position and orientation of the manipulator.
Since the translational distance between the jth and j+1
.position is usually quite small, a minimum of two
intermediate points are inserted, regardless of the number
of steps calculated due to translation and orientation
change. If the rapid change in joint angles was due to a
singularity condition, then this method will produce
satisfactory results and effectively slow down the
manipulator.
If the recursive call was unsuccessful in solving the
problem, then simple joint interpolated motion is executed
between the jth and j+1 positions. The number of
intermediate interpolated positions is based on the


X
Figure 5-12: Coordinate System for Singularity Analysis
160


149
Path evaluation is now accomplished by determining
whether the straight line between two successive wrist
points intersects any part of the wrist workspace boundary
(see Figure 5-6). It is apparent from the figure that the
wrist workspace boundary is formed from two inner spheres
and two outer spheres with a section cut from behind the
robot due to the limitation on the first joint. Since both
wrist points must lie in the workspace, it is not necessary
to check to see if the line segment of interest intersects
the outer boundaries. What remains is to determine whether
the line segment attempts to move the robot behind its base
(in conflict with the limit on joint 1) or whether the line
segment intersects either of the 'sphere segments' which
form the inner wrist workspace boundary.
Again from Figure 5-6, it can be concluded that if the
line segment between successive wrist points intersects any
part of either of the complete inner sphere boundaries, then
the desired motion is not possible. Thus the problem is
reduced to determining whether the robot attempts to move
behind its base or whether the line segment between wrist
points intersects either of two inner spheres.
In Chapter IV a method for determining whether a line
segment intersects a sphere was presented. This method was
used here without modification to determine whether straight
line motion of the wrist point was possible with respect to
the two inner spherical voids.


41
is given, as for example 'pnt(50, 20, 40)' which draws a
point at the local position (50, 20, 40), the coordinate
datum is multiplied by the matrix M in order to determine
the screen coordinate values. The transformation is
represented by the following equation:
[x,y,z,w] = [x',y',z',w'] M (2.6)
The basic problem then is to make the matrix M represent the
transformation between the coordinate system attached to
each of the rigid bodies of the robot. For example, when M
represents the transformation between the fixed coordinate
system attached to the base of the robot and the screen
coordinate system, the base of the robot can be drawn in
terms of a series of move and draw commands, all of which
will use local coordinate data as input.
When the IRIS system is initialized, the matrix M is
set equal to the identity matrix. Three basic commands,
translate, rotate, and scale, are called to modify M.
Calling one of the three basic commands causes the current
transformation matrix, M, to be pre-multiplied by one of the
following matrices:
Translate (Tx, Ty, Tz)

    |  1    0    0    0 |
    |  0    1    0    0 |
    |  0    0    1    0 |
    |  Tx   Ty   Tz   1 |                               (2.7)
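The effect of such a command can be sketched in C. This illustrative routine, not the IRIS system's own code, pre-multiplies the current matrix M by the translate matrix of equation (2.7), using the row-vector convention of equation (2.6).

```c
/* Multiply two 4x4 matrices: out = a * b */
static void matmul4(const double a[4][4], const double b[4][4],
                    double out[4][4])
{
    int i, j, k;
    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++) {
            out[i][j] = 0.0;
            for (k = 0; k < 4; k++)
                out[i][j] += a[i][k] * b[k][j];
        }
}

/* M := Translate(tx,ty,tz) * M, i.e. the translate matrix of (2.7)
 * pre-multiplies the current transformation. */
void translate(double M[4][4], double tx, double ty, double tz)
{
    double t[4][4] = {
        {1, 0, 0, 0},
        {0, 1, 0, 0},
        {0, 0, 1, 0},
        {tx, ty, tz, 1}       /* translation in the bottom row */
    };
    double r[4][4];
    int i, j;
    matmul4(t, M, r);
    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++)
            M[i][j] = r[i][j];
}
```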


determine all possible sets of joint displacements which can
position the hand as specified.
3.4.3 Determination of φ1, θ2, and θ3
At this point the next step of a standard reverse
position analysis would be to analyze the closed loop
mechanism formed by the addition of the hypothetical closure
link to the open chain manipulator. However due to the
relatively simple geometry of the T3-776 robot, a shorter
and more direct approach will be taken.
It should be noted from Figure 3-4 that since the
direction cosines of vectors S6 and a67 are known in the
fixed coordinate system together with the length of offset
S66 and link a67, the coordinates of point P2, the
center of the three roll wrist, are readily known. The
vector R_P2 (see Figure 3-7) from the origin of the fixed
coordinate system to point P2 is given by

    R_P2 = R_P1 - S66 S6 - a67 a67                      (3.21)

It is also shown in Figure 3-7 that the vectors R_P2, a12,
a23, a34, and S1 all lie in the same plane. This is due to
the unique geometry of this robot whereby the vectors S2 and
S3 are parallel. Because of this, simple planar
relationships can be utilized to determine the three
relative displacement angles φ1, θ2, and θ3.


Animation is obtained by setting the IRIS system into
double buffer mode. In this mode, half of the bit planes
are used for screen display and half for image generation.
In other words, while the user is observing an image
(contained on the front 4 bit planes), the next image is
being drawn on the back 4 bit planes. When the image is
complete, the front and back sets of bit planes are swapped
and the user sees the new picture. The complexity of the
image to be drawn governs the speed at which the bit planes
are swapped. Experience has shown that the swapping should
occur at a rate no slower than 8 Hz in order to result in
satisfactory animation.
The one drawback of double buffer mode is that there
are only half as many bit planes available for generating an
image. The reduced number of bit planes further limits the
number of colors that may be displayed on the screen at
once. An IRIS system with only 8 bit planes, such as the
system at the University of Florida, can only display 2^4
(16) colors on the screen at once while in double buffer
mode. It should be noted, however, that a fully equipped
system with 32 bit planes can display 2^16 (65,536) colors
simultaneously in double buffer mode. This capability
should far exceed user requirements in almost all instances.
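The draw-and-swap cycle described above can be mimicked in miniature. In this self-contained C sketch, two small arrays stand in for the front and back bit-plane sets; the real IRIS interface differs, so the names and mechanism here are illustrative only.

```c
#define NPIX 16

static int planes_a[NPIX], planes_b[NPIX];
static int *front = planes_a;   /* what the viewer currently sees  */
static int *back  = planes_b;   /* where the next image is drawn   */

/* draw one frame into the back buffer only */
void draw_frame(int frame)
{
    int i;
    for (i = 0; i < NPIX; i++)
        back[i] = frame;        /* stand-in for move/draw commands */
}

/* exchange front and back buffers, as the IRIS swap does, so the
 * viewer always sees a completed image */
void swap_buffers(void)
{
    int *tmp = front;
    front = back;
    back = tmp;
}

/* expose the visible image for inspection */
int visible_pixel(int i) { return front[i]; }
```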


function of the tool dimensions. This added parameter might
warrant the consideration of a multitude of special cases,
since the shape of the workspace would vary with the type of
tool.
For the above reason, the wrist of the robot was
constrained to move along a straight line. The primary
benefit of this specification is that the wrist workspace is
independent of the type of tool and thus never changes.
Since the wrist point will now move along a rectilinear
path, the tip of the tool will follow a parametric equation
based on the movement of the wrist and the change in
orientation between successive precision points. It should
be noted that if the orientation of the tool at two
precision points is the same, then the tool tip will move
along a straight line at constant orientation.
At each precision point the position of the tip of the
tool as well as the direction cosines of the two orientation
vectors S6 and a67 are known in terms of the fixed
coordinate system attached to the base of the robot. Since
the size of the tool is also known, it is a simple matter to
calculate the position of the wrist point from the equation

    w = r - S66 S6 - a67 a67                            (5.1)

where w is the vector to the wrist point, r is the location
of the tool tip, S66 is the distance from the tool tip to
the wrist along the vector S6, and a67 is the tool offset.
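Equation (5.1) translates directly into code. A short C sketch follows; the argument names are illustrative, with the unit vectors S6 and a67 passed as direction-cosine arrays and the scalar lengths passed separately.

```c
/* Evaluate equation (5.1): w = r - S66*S6 - a67*a67_dir,
 * where r is the tool tip position, S6 and a67_dir are unit vectors,
 * S66 is the tip-to-wrist distance, and a67 is the tool offset. */
void wrist_point(const double r[3],
                 const double S6[3], const double a67_dir[3],
                 double S66, double a67,
                 double w[3])
{
    int i;
    for (i = 0; i < 3; i++)
        w[i] = r[i] - S66 * S6[i] - a67 * a67_dir[i];
}
```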


determine the precise position of the car body with respect
to the manipulator so that all weld points are in reach of
the manipulator. The software system generates path data
which are communicated directly to the robot controller
together with cycle time data which are accurate to within
5%.
McDonnell Douglas has learned from experience that
tasks that are taught to the robot in this way must be
"touched up". For example, the car body may not stop
precisely at the position that was determined during the
simulation process. The manual touch up of certain points
along the manipulator path can be accomplished with use of
the teach pendant.
Typically an average of fifteen to twenty minutes is
required for this manual touch up operation. A second method
of path upgrade can be
accomplished by attaching a mechanical probe to the robot.
This probe measures the actual position of an object with
respect to the robot and then updates positional commands as
necessary. Early application of this technique required the
user to replace the tool of the robot with the mechanical
probe. This was often a time consuming and labor intensive
task. New measuring probes have been applied by McDonnell
Douglas to remove this problem.
A final example of robot workcell simulation software
is that developed by Deneb Robotics Inc., Troy, Michigan
[13]. This software runs on the Silicon Graphics IRIS
workstation. The interactive nature of the program allows


precision points, the following information must be entered
and recorded:
1. Position of the tool point.
2. Orientation of the tool.
3. Function to be performed at this point.
4. Velocity to be used when the robot moves between the
previous point and this point.
Many alternate methods exist for the storage of this
information. A dynamic memory allocation method utilizing
double linked lists was used in order to allow for the
selection of any number of points without the allocation of
any unnecessary memory space. The linked list was
established via use of the data structure shown in Figure
5-4. Stored in each structure is the position, orientation,
function, and velocity data together with the memory address
of the next and previous position of the robot.
The exact meaning of each of the four items listed
above will now be explained. Shown in Figure 5-5 is a
skeletal drawing of the T3-726 robot. In the figure, a
fixed coordinate system attached to the manipulator base is
shown. The origin of the coordinate system is located at
the intersection of the axes of the first two joints. The
direction of each of the XYZ axes are as shown. In terms of
this coordinate system, the position of the tool point is
simply the XYZ coordinate values of where the tool is to be
located.


Figure 2-11: Animated Representation of T3-776 Manipulator
Figure 2-12: Animated Representation of T3-776 Manipulator


CHAPTER IV
ROBOTIC TELEPRESENCE
4.1 Introduction and Objective
This chapter deals with a particular application of the
robot animation and path planning techniques discussed in
the preceding two chapters. The interactive computer
graphics program developed in Chapter II was enhanced to
allow an operator to more readily control the motions of a
remote robot manipulator in two distinct modes, viz.
man-controlled and autonomous. In man-controlled mode, the
robot is guided by a joystick or similar device. As the
robot moves, actual joint angle information is measured and
supplied to the graphics system which accurately duplicates
the robot motion. Obstacles are placed in the actual and
animated workspace and the operator is warned of imminent
collisions by sight and sound via the graphics system. In
autonomous mode, a collision free path between specified
points is obtained by previewing robot motions on the
graphics system. Once a satisfactory path is selected, the
path characteristics are transmitted to the actual robot and
the motion is executed. A detailed explanation of
man-controlled and autonomous operation will follow a brief
description of the objectives of a telepresence system.


coupled with a high level task language would make human
interaction with robots, and thus man-machine productivity,
a much more efficient process.


resulted in a much more aesthetic image, but the model 2400
results were quite acceptable.
5.3 Program Structure
In order to plan a task for a manipulator to perform,
the operator must specify certain information. First the
user must state what exact point of the end effector is to
be controlled. Typically this point will be an obvious part
of the tool such as the tip of a welding rod or the point
between the two fingers of a gripper. This point is defined
by specifying two parameters, tool length and tool offset.
Tool length is defined as the distance from the tool
mounting plate to the point of interest. Tool offset is
defined as the perpendicular distance from the point of
interest to the line extending out from the center of the
tool mounting plate. Specification of these two items
allows for the precise path control of any point on any tool
attached to the manipulator.
The next information that the operator must enter is
the number of precision points that the robot will move
through and whether the robot should perform cyclic motion.
A precision point is defined as a user specified position
and orientation that the robot must move through. Cyclic
motion occurs by moving the manipulator from the last
precision point back to the first precision point so that
the entire sequence can be replayed. For each of the


Figure 4-12: Display of Computer Generated Path


Figure 3-2: Revolute Pair


Figure 3-12: Displacement Profile


Internet Distribution Consent Agreement
In reference to the following dissertation:
Author: Carl Crane
Title: Motion planning and control of robot manipulators via application of a
computer graphics animated display
Publication Date: 1987
I, Carl D. Crane III, as copyright holder for the aforementioned dissertation, hereby grant
specific and limited archive and distribution rights to the Board of Trustees of the
University of Florida and its agents. I authorize the University of Florida to digitize and
distribute the dissertation described above, for nonprofit, educational purposes via the
Internet or successive technologies.
This is a non-exclusive grant of permissions for specific off-line and on-line uses for an
indefinite term. Off-line uses shall be limited to those specifically allowed by "Fair Use"
as proscribed by the terms of United States copyright legislation (cf, Title 17, U.S. Code)
as well as to the maintenance and preservation of a digital archive copy. Digitization
allows the University of Florida to generate image- and txt-based versions as
appropriate and to provide and enhance access dsing.search software.
This grant of permissions prohibits use of the digitized versions for commercial use or
profit.
Signature of Copyright Holder
Carl D. Crane III
Printed or Typed Name of Copyright Holder/Licensee
Personal information blurred
16 June 2008
Date of Signature
Return this form to: Cathy Martyniak
Preservation Dept.,
University of Florida Libraries
P.O. Box 117007
Gainesville, FL 32611-7007


will be integrated with the software package in order to
address these issues.
Intergraph Corporation of Huntsville, Alabama, offers a
hardware and software solution to the workcell modeling
problem. The unique feature of the Intergraph hardware is
the use of two 19 inch color raster monitors in the
workstation. This feature greatly enhances man-machine
interface which is the primary purpose of the graphics
system. Intergraph offers a complete line of CAD/CAM
software in addition to the robot workcell modeling
software. Applications of the system to workcell planning
have been performed at the NASA Marshall Flight Center,
Huntsville, Alabama.
Simulation software has also been developed by the
McDonnell Douglas Manufacturing Industry Systems Company,
St. Louis, Missouri. Ninety-four robots have been modeled
to date. McDonnell Douglas has acquired considerable
experience in planning workcell operations for automobile
assembly lines. For example, a robot may be assigned
fifteen specific welds on a car body. The user must decide
where the car body should stop along the assembly line with
respect to the manipulator in order to accomplish these
welds. The simulation software allows the user to attach
the robot to one of the weld points and then move the auto
body with the robot still attached. The user is notified if
the weld point goes outside the workspace of the
manipulator. By repeating the process, the operator can


Figure 4-2: Nine String Joystick
Figure 4-3: Scissor Joystick


This additional feedback to the operator can significantly
improve performance.
The remainder of this chapter will describe a computer
graphics based telepresence system which has been developed
at the University of Florida with the cooperation of
numerous individuals. Emphasis will be placed on the
graphics system, and on the autonomous planning of potential
collision free paths.
4.3 System Components
Figure 4-1 illustrates the telepresence system as it
exists in the laboratory. The system serves as a test bed
to verify the feasibility of the concept and consists of
four distinct technologies and components as follows:
4.3.1. Robot manipulator
The manipulator chosen for the telepresence system was
an MBAssociates (MBA) robotic manipulator. The MBA
manipulator is a hydraulic six degree of freedom device.
The second and third joint axes are parallel and the final
three axes intersect at a common point. The robot is
controlled by a PDP 11/23 computer which regulates hydraulic
fluid flow so as to minimize the error between the desired
and actual joint angles. The MBA robot was selected for
this study because of its simple geometry and versatile
control system.


    Rj1 = | CFj  CI2  CI3 | / | CI |

    Rj2 = | CI1  CFj  CI3 | / | CI |

    Rj3 = | CI1  CI2  CFj | / | CI |

where CI1 represents the first column of the matrix CI and
CFj represents the jth column of the matrix CF, as j varies
from 1 to 3. The matrix R can be readily verified since the
following equations must hold true:

    S6F = R S6I                                         (3.78)

    a67F = R a67I                                       (3.79)
It is shown in reference [24] that a rotation of angle θ
about an arbitrary axis vector k can be represented by the
relationship

    Rot(k,θ) =

    | kxkxversθ+cosθ     kykxversθ-kzsinθ   kzkxversθ+kysinθ |
    | kxkyversθ+kzsinθ   kykyversθ+cosθ     kzkyversθ-kxsinθ |   (3.80)
    | kxkzversθ-kysinθ   kykzversθ+kxsinθ   kzkzversθ+cosθ   |
where versθ = (1-cosθ). The problem now is that of
equating the matrix R with equation (3.80) and solving for
kx, ky, kz, the axis of rotation, and for θ, which is the
angle of rotation.
Summing the diagonal elements of the matrix R and of
equation (3.80) yields

    R11 + R22 + R33 = (kx² + ky² + kz²)versθ + 3cosθ    (3.81)

Substituting for versθ yields


utilized and demonstrated on a VAX 11-750 computer will be
discussed in detail. The second part of this chapter is
concerned with the modification of the initial work in order
to apply it to a Silicon Graphics IRIS model 2400
workstation. The modifications take full advantage of the
built in hardware capabilities of the IRIS workstation and
result in significantly improved performance.
2.2 Database Development
The first step in the generation of a computer graphics
simulation of a robot manipulator is the drawing of an
artist's concept of what the simulation should look like.
Shown in Figure 2-1 is a drawing of the T3-776 robot and in
Figure 2-2 is a sketch of the desired graphics simulation.
Enough detail is provided for a realistic representation of
the robot. Also shown in Figure 2-2 is a coordinate system
attached to the base of the robot such that the origin is
located at the intersection of the first two joint axes. The Z
axis is vertical and the X axis bisects the front of the
base. This coordinate system is named the 'fixed coordinate
system' and will be referred to repeatedly.
The robot manipulator is made up of a collection of
rigid bodies as shown in Figure 2-3. Also shown in the
figure is a local coordinate system attached to each body.
The coordinate values of each vertex point of the
manipulator are invariant with respect to the coordinate


Figure 3-8: Determination of 2nd and 3rd Joint Angles


the user to rapidly build new robot geometries, modify
actuator limits, or reposition objects within the workspace.
Detailed solid color images with satisfactory animation
rates are obtainable. In addition the user can be warned of
collisions or near misses between parts of the robot and
objects in the workspace although the animation rate slows
down as a function of the number of collision checks being
made. The strength of the Deneb software is its rapid
animation of solid shaded images together with its ease of
use for the operator. As such, it is one of the more
advanced manipulator simulation software packages.


Figure 2-7: Parallel and Perspective Transformation.
a) Parallel Projection;
b) Perspective Projection.


CHAPTER III
PATH GENERATION
3.1 Introduction and Objective
This chapter is concerned with the calculation of a
series of joint angles for a robot manipulator which will
cause the manipulator to move along a user specified path.
These calculations will serve as input to the robot
animation program described in the previous chapter. In
this manner, the user will be able to observe and evaluate
the robot motion on the graphics screen prior to any
movement of the actual robot manipulator. As with the
previous chapter, the specific application to the Cincinnati
Milacron T3-776 manipulator will be presented in detail.
The first problem to be considered will be the reverse
kinematic analysis of the robot manipulator. This analysis
determines the necessary joint displacements required to
position and orient the end effector of the robot as
desired. The problem of path generation is then reduced to
the careful selection of robot positions and orientations
along some desired path. A reverse kinematic analysis is
then performed for each of these locations.


Twist angles are defined as the relative angle between
two successive joint axes, measured about their mutual
perpendicular. For example, the twist angle α12 is
measured as the angle between the vectors S1 and S2 as seen
in a right handed sense along the vector a12. In general,
all twist angles will be constant for an industrial robot
under the assumption of rigid body motion.
Two additional parameters, link lengths and offset
lengths, are shown in the skeletal model of the manipulator.
A link length, aij, is defined as the perpendicular distance
between the pair axes Si and Sj. All link lengths are known
constant values for a manipulator. An offset length, Sjj,
is defined as the perpendicular distance between the two
vectors aij and ajk. For revolute joints, offsets are
constant values. Shown in Table 2-1 are the specific
constant link length, offset, and twist angle values for the
Cincinnati Milacron T3-776 robot manipulator.
A systematic selection of each local coordinate system
was made based on the skeletal model of the manipulator.
The jth coordinate system was established such that the Z
axis was aligned with the vector Sj and the X axis was along
aij. With this definition for each coordinate system, the
coordinates of a point known in the Cj system can be found
in the Ci system by applying the following matrix equation:


5.14 Results and Conclusions
The resulting program met the goals of providing a user
with an interactive method of planning and evaluating robot
motions. Developments made in other chapters were combined
with new advancements to provide a unique method of
inputting path data, evaluating whether straight line
motions are possible, and evaluating paths based on their
avoidance of singularity configurations.
Although the analysis was aimed particularly at the
Cincinnati Milacron T3-726 robot, straightforward
modifications could be made to apply the results to any type
of industrial robot. Additions to the program, such as the
ability to move specific precision points further away from
singularities and the inclusion of other types of
non-rectilinear motions, would make the system more effective.
The system proved itself to be much more efficient for
planning robot motions compared to 'teach pendant' and other
methods. One requirement that exists, however, is the
ability to identify and display objects which are in the
workspace of the robot. If the location of these objects
were determined, autonomous tasks could be planned and
executed which manipulate the objects as desired. An object
detection system such as that described in Chapter IV, would
make this system a task planner rather than just a motion
planner.
Lastly, the resulting work must be applied to two or
more robots cooperating in parallel. Task specification


The equations of the infinite lines through segments A and B
may be written as

    nA · (r - rA) = 0                                   (5.2)

    nB · (r - rB) = 0                                   (5.3)

where nA and nB are normals to the respective lines and rA
and rB are vectors from the origin to a point on
the respective infinite line. It must be noted that
equations (5.2) and (5.3) may be multiplied by -1 in order
for the scalar products

    rA · (-nA)

    rB · (-nB)

to be greater than zero. At this moment points A1 and A2
will be substituted into the equation for the infinite line
through segment B. If one of the points A1 or A2 satisfies
equation (5.3), then that point lies on the infinite line
through segment B. If the results from equation (5.3) are
both greater than zero or both less than zero, then it can
be concluded that both points A1 and A2 lie on the same side
of the infinite line through segment B with respect to the
origin. In either of these cases, it is impossible for
segment A to intersect segment B and the 'no intersection'
result is concluded.
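The side tests just described can be written compactly using an equivalent signed-area form of the line equations. The C sketch below is illustrative rather than the dissertation's code, and collinear overlap handling is omitted for brevity.

```c
/* signed area test: which side of the line through p and q does x lie on */
static double side(const double p[2], const double q[2], const double x[2])
{
    return (q[0] - p[0]) * (x[1] - p[1]) - (q[1] - p[1]) * (x[0] - p[0]);
}

/* Segment A runs from a1 to a2, segment B from b1 to b2.  Intersection
 * is ruled out when both endpoints of one segment lie strictly on the
 * same side of the infinite line through the other. */
int segments_intersect(const double a1[2], const double a2[2],
                       const double b1[2], const double b2[2])
{
    double d1 = side(b1, b2, a1);
    double d2 = side(b1, b2, a2);
    double d3 = side(a1, a2, b1);
    double d4 = side(a1, a2, b2);

    if (d1 * d2 > 0.0 || d3 * d4 > 0.0)
        return 0;                  /* same-side case: no intersection */
    return 1;
}
```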
If points A1 and A2 lie on either side of the infinite
line through segment B, then the points B1 and B2 are
substituted into equation (5.2) to determine whether they
lie on opposite sides of the infinite line through segment


vectors a12, a34, a45, and a56, this direction is arbitrarily
selected as the direction parallel to the vector Si×Sj. The
values for the corresponding twist angles α12, α34, α45, and
α56 listed in (3.5) were determined based upon this
convention.
3.4 Reverse Displacement Analysis
For the reverse displacement analysis the position and
orientation of the hand of the manipulator are specified.
It is desired to determine the necessary values for the
relative displacements of the joints that will position the
hand as desired, i.e. to determine sets of values for the six
quantities φ1, θ2, θ3, θ4, θ5, and θ6. The analysis
is complicated by the fact that there is most often more
than one set of displacements which will place the hand in
the desired position. An advantage of this reverse
displacement analysis is that all displacement sets will be
determined, as opposed to an iteration method which would
find only one set of joint displacements.
It turns out that for the T3-776 robot there are a
maximum of four possible sets of angular displacements which
will position and orient the hand as specified. The unique
geometry of the robot, that is, S2 parallel to S3 and
S4, S5, S6 intersecting at a point, allows for eight possible
sets. However the limits of rotation of the first three
joints reduce the solution to a maximum of four.


Figure 5-1: Cincinnati Milacron T3-726 Manipulator




struct data_points
{
    float r[3] ;                /* vector to tool tip             */
    float v6[3] ;               /* dir. cosines of S6 vector      */
    float v67[3] ;              /* dir. cosines of a67 vector     */
    float wrist[3] ;            /* vector to center of wrist      */
    int vel_factor ;            /* velocity factor for previous
                                   section of path                */
    int function ;              /* the operation to be performed
                                   at this point                  */
    int fun_parameter ;         /* a parameter associated with
                                   the function                   */
    struct data_points *next ;  /* a pointer to the next
                                   precision point                */
    struct data_points *prev ;  /* a pointer to the previous
                                   precision point                */
} ;

Figure 5-4: Data Structure for Precision Points
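Appending a node to such a double linked list is straightforward. The following C sketch follows the fields of Figure 5-4; the append routine itself is hypothetical, not taken from the program.

```c
#include <stdlib.h>

/* struct fields as in Figure 5-4 */
struct data_points {
    float r[3];
    float v6[3];
    float v67[3];
    float wrist[3];
    int vel_factor;
    int function;
    int fun_parameter;
    struct data_points *next;
    struct data_points *prev;
};

/* allocate a zeroed node and link it after *tail; returns the new tail,
 * or NULL if allocation fails */
struct data_points *append_point(struct data_points *tail)
{
    struct data_points *p = calloc(1, sizeof *p);
    if (p == NULL)
        return NULL;
    p->prev = tail;
    p->next = NULL;
    if (tail != NULL)
        tail->next = p;
    return p;
}
```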


LIST OF TABLES
Page
Table 2-1 T3-776 Mechanism Dimensions 22
Table 2-2 Plane Segment Sorting Cases 35
Table 3-1 Sample Angles for j and j+1 Positions 99
Table 5-1 T3-726 Mechanism Dimensions 132
Table 5-2 Direction Cosines 162


could be observed by the user. Joint angles can be
calculated off line, prior to the animation sequence. An
off line calculation time of two or three seconds is not
significant and for this reason the closed form solution
method was chosen. Further work in path planning and
obstacle avoidance benefits considerably from the knowledge
of all possible configurations of the robot at any given
position and orientation. For this additional reason, the
closed form solution approach was recommended.


Substituting the values for α12, α23, and α34 from set (3.5)
into (3.46) gives

    | x6 |   |  c4c2+3   -s4   c4s2+3 | | X6 |
    | y6 | = | -s4c2+3   -c4  -s4s2+3 | | Y6 |          (3.47)
    | z6 |   |  s2+3      0   -c2+3   | | Z6 |

where s2+3 and c2+3 represent the sine and cosine of
(θ2+θ3), and X6, Y6, and Z6 are the known direction cosines
of the vector S6. As already stated, the abbreviations sj
and cj in (3.45) denote the sine and cosine of θj which
measures the relative angular displacement between
successive links aij and ajk. At this point all parameters
on the right side of equation (3.47) are known with the
exception of the sine and cosine of θ4.
Alternate expressions for the direction of S6 in the
second moving coordinate system may be obtained by simple
projection. These relationships are as follows:

    | x6 |   | x5 |
    | y6 | = | y5 |                                     (3.48)
    | z6 |   | z5 |

where

    x5 = s56 s5
    y5 = -(s45c56 + c45s56c5)
    z5 = c45c56 - s45s56c5                              (3.49)