Permanent Link: http://ufdc.ufl.edu/UF00089973/00001
 Material Information
Title: Motion planning and control of robot manipulators via application of a computer graphics animated display
Physical Description: Book
Language: English
Creator: Crane, Carl David
Publisher: Carl David Crane
Publication Date: 1987
 Record Information
Bibliographic ID: UF00089973
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
Resource Identifier: alephbibnum - 000940986
oclc - 16664018








MOTION PLANNING AND CONTROL OF ROBOT MANIPULATORS
VIA APPLICATION OF A COMPUTER GRAPHICS ANIMATED DISPLAY









BY

CARL DAVID CRANE III


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN
PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF DOCTOR OF PHILOSOPHY


UNIVERSITY OF FLORIDA


1987














ACKNOWLEDGEMENTS


The author wishes to thank his wife, Sherry, and his children Theodore, Elisabeth, and Stephanie for their patience and support. Also, special thanks go to his committee chairman, Professor Joseph Duffy, for his encouragement and guidance. He is also grateful to the members of his graduate committee who all showed great interest and provided considerable insight.

This work was funded by the U.S. Army Belvoir Research and Development Center, McDonnell Douglas Astronautics Company, and Honeywell Corporation.

















TABLE OF CONTENTS

                                                               Page

ACKNOWLEDGEMENTS . . . . . . . . . . . . . . . . . . . . . . .  ii

LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . .  vi

LIST OF FIGURES  . . . . . . . . . . . . . . . . . . . . . . . vii

ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . .  ix

CHAPTERS

I    INTRODUCTION  . . . . . . . . . . . . . . . . . . . . . .   1

     1.1  Background . . . . . . . . . . . . . . . . . . . . .   1
     1.2  Review of Previous Efforts . . . . . . . . . . . . .   4

II   DEVELOPMENT OF ANIMATED DISPLAYS OF
     INDUSTRIAL ROBOTS . . . . . . . . . . . . . . . . . . . .  11

     2.1  Introduction and Objective . . . . . . . . . . . . .  11
     2.2  Database Development . . . . . . . . . . . . . . . .  12
     2.3  Coordinate Transformations . . . . . . . . . . . . .  18
     2.4  Backface Polygon Removal . . . . . . . . . . . . . .  29
     2.5  Sorting of Plane Segments  . . . . . . . . . . . . .  30
     2.6  Description of Hardware System . . . . . . . . . . .  38
     2.7  Program Modifications  . . . . . . . . . . . . . . .  40
     2.8  Results and Conclusions  . . . . . . . . . . . . . .  49

III  PATH GENERATION . . . . . . . . . . . . . . . . . . . . .  51

     3.1  Introduction and Objective . . . . . . . . . . . . .  51
     3.2  Notation . . . . . . . . . . . . . . . . . . . . . .  52
     3.3  Mechanism Dimensions of the T3-776 Manipulator . . .  55
     3.4  Reverse Displacement Analysis  . . . . . . . . . . .  59
     3.5  Forward Displacement Analysis  . . . . . . . . . . .  83
     3.6  Path Generation  . . . . . . . . . . . . . . . . . .  88
     3.7  Results and Conclusions  . . . . . . . . . . . . . . 101

IV   ROBOTIC TELEPRESENCE  . . . . . . . . . . . . . . . . . . 103

     4.1  Introduction and Objective . . . . . . . . . . . . . 103
     4.2  Telepresence Concept . . . . . . . . . . . . . . . . 104
     4.3  System Components  . . . . . . . . . . . . . . . . . 106
     4.4  Method of Operation  . . . . . . . . . . . . . . . . 110
     4.5  Problems Encountered . . . . . . . . . . . . . . . . 127
     4.6  Conclusions  . . . . . . . . . . . . . . . . . . . . 128

V    INTERACTIVE PATH PLANNING AND EVALUATION  . . . . . . . . 129

     5.1  Introduction and Objective . . . . . . . . . . . . . 129
     5.2  Robot Animation  . . . . . . . . . . . . . . . . . . 130
     5.3  Program Structure  . . . . . . . . . . . . . . . . . 136
     5.4  Workspace Considerations . . . . . . . . . . . . . . 142
     5.5  Path Evaluation  . . . . . . . . . . . . . . . . . . 147
     5.6  Calculation of Intermediate Points . . . . . . . . . 153
     5.7  Robot Configurations . . . . . . . . . . . . . . . . 155
     5.8  Singularity Analysis . . . . . . . . . . . . . . . . 157
     5.9  Interpretation of Singularity Results  . . . . . . . 163
     5.10 Selection of Configuration . . . . . . . . . . . . . 164
     5.11 Preview of Motion  . . . . . . . . . . . . . . . . . 165
     5.12 Communication with Robot Controller  . . . . . . . . 167
     5.13 Roll, Pitch, Yaw Calculations  . . . . . . . . . . . 168
     5.14 Results and Conclusions  . . . . . . . . . . . . . . 172

VI   DISCUSSION AND CONCLUSIONS  . . . . . . . . . . . . . . . 174

REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . 177

BIOGRAPHICAL SKETCH  . . . . . . . . . . . . . . . . . . . . . 180















LIST OF TABLES

                                                               Page

Table 2-1  T3-776 Mechanism Dimensions . . . . . . . . . . . .  22

Table 2-2  Plane Segment Sorting Cases . . . . . . . . . . . .  35

Table 3-1  Sample Angles for j and j+1 Positions . . . . . . .  99

Table 5-1  T3-726 Mechanism Dimensions . . . . . . . . . . . . 132

Table 5-2  Direction Cosines . . . . . . . . . . . . . . . . . 162














LIST OF FIGURES

                                                               Page

Fig. 2- 1  Cincinnati Milacron T3-776 Manipulator  . . . . . .  13
Fig. 2- 2  Conceptual Sketch . . . . . . . . . . . . . . . . .  14
Fig. 2- 3  Collection of Rigid Bodies  . . . . . . . . . . . .  15
Fig. 2- 4  Graphics Data Structure . . . . . . . . . . . . . .  17
Fig. 2- 5  Skeletal Model of T3-776 Manipulator  . . . . . . .  20
Fig. 2- 6  Transformation to Viewing Coord. System . . . . . .  25
Fig. 2- 7  Parallel and Perspective Transformation . . . . . .  28
Fig. 2- 8  Wire Frame Model of T3-776 Manipulator  . . . . . .  31
Fig. 2- 9  Backfacing Polygons Removed . . . . . . . . . . . .  32
Fig. 2-10  Plane Segments with Infinite Planes . . . . . . . .  36
Fig. 2-11  Animated Representation of T3-776 Manipulator . . .  50
Fig. 2-12  Animated Representation of T3-776 Manipulator . . .  50
Fig. 3- 1  Spatial Link  . . . . . . . . . . . . . . . . . . .  53
Fig. 3- 2  Revolute Pair . . . . . . . . . . . . . . . . . . .  54
Fig. 3- 3  Cincinnati Milacron T3-776 Manipulator  . . . . . .  56
Fig. 3- 4  Skeletal Model of T3-776 Manipulator  . . . . . . .  57
Fig. 3- 5  Hypothetical Closure Link . . . . . . . . . . . . .  61
Fig. 3- 6  Hypothetical Closure when S1 || S7  . . . . . . . .  65
Fig. 3- 7  Location of Wrist Point . . . . . . . . . . . . . .  68
Fig. 3- 8  Determination of 2nd and 3rd Joint Angles . . . . .  71
Fig. 3- 9  Three Roll Wrist  . . . . . . . . . . . . . . . . .  76
Fig. 3-10  Moving and Fixed Coordinate Systems . . . . . . . .  77
Fig. 3-11  Forward Analysis  . . . . . . . . . . . . . . . . .  85
Fig. 3-12  Displacement Profile  . . . . . . . . . . . . . . .  90
Fig. 4- 1  Telepresence System . . . . . . . . . . . . . . . . 107
Fig. 4- 2  Nine String Joystick  . . . . . . . . . . . . . . . 109
Fig. 4- 3  Scissor Joystick  . . . . . . . . . . . . . . . . . 109
Fig. 4- 4  System Configuration  . . . . . . . . . . . . . . . 111
Fig. 4- 5  Animated Representation of MBA Manipulator  . . . . 114
Fig. 4- 6  Obstacle Locations Deter. by Vision System  . . . . 114
Fig. 4- 7  Display of Objects in Manipulator Workspace . . . . 116
Fig. 4- 8  Warning of Imminent Collision . . . . . . . . . . . 119
Fig. 4- 9  Operation in Man-Controlled Mode  . . . . . . . . . 119
Fig. 4-10  Determination of Intersection . . . . . . . . . . . 121
Fig. 4-11  Generation of Alternate Path  . . . . . . . . . . . 121
Fig. 4-12  Display of Computer Generated Path  . . . . . . . . 125
Fig. 5- 1  Cincinnati Milacron T3-726 Manipulator  . . . . . . 131
Fig. 5- 2  Animated Representation of T3-726 Robot . . . . . . 134
Fig. 5- 3  Collection of Rigid Bodies  . . . . . . . . . . . . 135
Fig. 5- 4  Data Structure for Precision Points . . . . . . . . 138
Fig. 5- 5  Skeletal Model of T3-726 Manipulator  . . . . . . . 139
Fig. 5- 6  Manipulator Workspace . . . . . . . . . . . . . . . 143
Fig. 5- 7  Top and Side Views of Workspace . . . . . . . . . . 143
Fig. 5- 8  Three Roll Wrist  . . . . . . . . . . . . . . . . . 145
Fig. 5- 9  Orientation Limits  . . . . . . . . . . . . . . . . 146
Fig. 5-10  Motion Behind Base  . . . . . . . . . . . . . . . . 151
Fig. 5-11  Intersection of Planar Line Segments  . . . . . . . 151
Fig. 5-12  Coordinate System for Singularity Analysis  . . . . 160
Fig. 5-13  Display of Singularity Results  . . . . . . . . . . 166
Fig. 5-14  Calculation of Roll Parameter . . . . . . . . . . . 171














Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy


MOTION PLANNING AND CONTROL OF ROBOT MANIPULATORS
VIA APPLICATION OF A COMPUTER GRAPHICS ANIMATED DISPLAY

By

CARL D. CRANE III

MAY 1987



Chairman: Dr. Joseph Duffy
Major Department: Mechanical Engineering

It is often necessary in a hazardous environment for an operator to effectively control the motion of a robot manipulator which cannot be observed directly. The manipulator may be either directly guided via use of a joystick or similar device, or it may be autonomously controlled, in which case it is desirable to preview and monitor robot motions. A computer graphics based system has been developed which provides an operator with an improved method of planning, evaluating, and directly controlling robot motions.

During the direct control of a remote manipulator with a joystick device, the operator requires considerable sensory information in order to perform complex tasks. Visual feedback which shows the manipulator and surrounding workspace is clearly most important. A graphics program which operates on a Silicon Graphics IRIS workstation has been developed to provide this visual imagery. The graphics system is capable of generating a solid color representation of the manipulator at refresh rates in excess of 10 Hz. This rapid image generation rate is important in that it allows the user to zoom in, change the vantage point, or translate the image in real time. Each image of the manipulator is formed from joint angle data that are supplied continuously to the graphics system. In addition, obstacle location data are communicated to the graphics system so that the surrounding workspace can be accurately displayed.

A unique obstacle collision warning feature has also been incorporated into the system. Obstacles are monitored to determine whether any part of the manipulator comes close to or strikes an object. The collision warning algorithm utilizes custom graphics hardware to change the color of the obstacle and produce an audible warning if any part of the manipulator approaches closer than some established criterion. The obstacle warning calculations are performed continuously and in real time.

The graphics system which has been developed has advanced man-machine interaction in that improved operator efficiency and confidence have resulted. Continued technological developments and system integration will result in much more advanced interface systems in the future.














CHAPTER I
INTRODUCTION




1.1 Background

There have been significant advances in the broad range of technologies associated with robot manipulators, in such areas as kinematics and dynamics, control, vision, pattern recognition, obstacle avoidance, and artificial intelligence. A major objective is to apply these technologies to improve the precision of operation and the control of manipulators performing various tasks.

Just as significant an advance has been made recently in the field of computer graphics hardware. Application of VLSI technology has resulted in a dramatic increase in graphics performance, up to 100 times faster than conventional hardware. Low cost workstations ($20-50K) have been developed which can generate real time raster images formed by the illumination of discrete picture elements. Although raster generated images may have less picture resolution than images produced by vector refresh devices, they do allow for the generation of solid color images with shading and hidden surface removal. Application of these and other computer graphics techniques has resulted in improved image generation and realism and allows








for a wide variety of new applications in the robotics field. This dissertation addresses the use of such computer graphics hardware in the following two areas:

a) telepresence system development
b) robotic workcell modeling

Telepresence systems deal with the direct man-controlled and autonomous operation of remote robot manipulators. During man-controlled operation, the user controls the manipulator directly by guiding the end effector via use of a joystick or similar device. The operator moves the joystick as a "master" and the robot follows correspondingly as a "slave." The graphics system aids the operator by providing a real time visualization of the manipulator and surrounding work area. Critical information such as approaching a joint limit or exceeding payload capabilities can be displayed immediately as an aid to the user. For autonomous operations of a remote manipulator, the graphics system is used to plan manipulator tasks. Once a task is specified, the user can preview the task on the graphics screen in order to verify motions and functions. Modifications to the previewed task can be made prior to the execution of the task by the actual manipulator.

The modeling of robotic workcells is a second application for the animation system. In a manufacturing environment it is desirable to plan new manipulator tasks off line. In this manner the manipulators can continue 'old production' during the planning phase. Assembly line down time is minimized as the new tasks can be quickly communicated to the manipulator. The graphics system offers a convenient and rapid method of planning workcell layouts and manipulator tasks. The ability to interact with the system allows the user to reposition objects within the workspace to verify that all important points can be reached by the robot. Cycle times can be calculated and compared in order to improve productivity.

Following a review of previous work dealing with the animation of industrial robots, subsequent chapters will detail the development and application of an interactive computer graphics animation system. A brief description of each chapter is as follows:

Chapter 2: Development of Animated Displays of Industrial Robots. This chapter describes the development of the interactive animation program. The database development is detailed for the particular case of modeling the Cincinnati Milacron T3-776 industrial robot. Graphics techniques are described with emphasis on the removal of backfacing polygons and the sorting of solid objects.

Chapter 3: Path Generation. A method of generating sets of joint angles which will move a manipulator along a user specified path is described. Specific issues deal with motion near singularity positions and the selection of the robot configuration at each point along the path.








Chapter 4: Robotic Telepresence. A telepresence system was developed to allow a user to control a remote robot manipulator in two distinct modes, viz. man-controlled and autonomous. This chapter details the use of the graphics system as an aid to the user. Visual feedback of the work area is provided together with real time warning of an imminent collision between the robot and an object in the workspace.

Chapter 5: Interactive Path Planning and Evaluation. An interactive computer graphics software system was developed which assists an operator who is planning robot motions. The user must specify the path of the manipulator together with velocity and function information. Once a task is previewed on the graphics screen, the path data are communicated to the actual robot controller. Specific application to the Cincinnati Milacron T3-726 manipulator is described in detail.





1.2 Review of Previous Efforts

Early work dealing with the computer graphics animation of industrial robots occurred in the 1970's. Indicative of these efforts were reports published from England [1-3], West Germany [4-5], France [6], and Japan [7]. Common to this work was the use of computer graphics storage tube terminals. Hardware limitations resulted in slow animation rates, with bright flashes occurring as the screen was cleared for each image.








More recently, a program named GRASP was developed by S. Derby [8]. The program was written in FORTRAN on a Prime computer with an Imlac Dynagraphics graphics terminal. Vector images (wire frames) were generated, as raster technology was not yet able to produce images rapidly. This program allowed an experienced user to design and simulate a new robot, or modify existing robot geometries. Robot motions were calculated and displayed based on closed form kinematic solutions for certain robot geometries. A generic iterative technique was used for arms having a general geometry.

The animation programming of M.C. Leu [9] is indicative of the current work in the field. A hardware configuration consisting of two DEC VAX computers, vector graphics terminals, and raster graphics terminals was utilized to produce wire frame and solid color images. The program allows for off line programming of new or existing robot designs. In addition, a swept volume method was utilized to detect collisions of the robot arm and any object in the workspace.

Further improvements in the simulation of robotic workcells have been made by B. Ravani [10]. Animations have been developed on an Evans and Sutherland graphics terminal which can rapidly produce color vector images. Significant improvements in database development and user interaction with the computer have made this a versatile simulation program.








C. Anderson [11] modeled a workcell consisting of the ESAB MA C2000 arc welding robot. Displacement, velocity, acceleration, force, and torque data were utilized in the model as calculated by the Automatic Dynamic Analysis of Mechanical Systems (ADAMS) software package. Rapid wire frame animations were obtained on Evans and Sutherland vector graphics terminals. Solid color representations with shading were also generated; however, real time animation of these images was not possible.

A further off line animation system, entitled WORKMATE, has been developed at the Stanford Research Institute by S. Smith [12]. A goal of this effort is to implement a graphics based simulation program through which an inexperienced user can plan and debug robot workcell programs off line. The program is run on a Silicon Graphics IRIS workstation, which has the capability of rapidly rendering solid color images with shading. A significant feature of WORKMATE is that collisions between objects in the workspace can be identified for the user in real time. This feature avoids the need for the user to perform the tedious visual inspection of the robot motion in order to verify that no collision occurs along the path.

Several companies have recently entered the robotic simulation market. Silma, Inc., of Los Altos, California, was formed in 1983 to develop software which would model robotic workcells. This group recognized the problem that each robot manufacturer uses a different robot language to control the robot. To aid the user, Silma developed a task planning and simulation language which was independent of the type of robot being modeled. Once a task was planned on the graphics screen, a post processor would translate the task from the generic planning language to the specific language for the robot controller. This approach simplifies operations in a situation where many diverse types of robots must work together in an assembly operation. The software was written for the Apollo computer series, with an IBM PC/AT version to be completed in the near future.

AutoSimulations, Inc., of Bountiful, Utah, offers a software package which runs on the Silicon Graphics IRIS workstation. This system emphasizes total factory integration; the robot workcell is just one component of the factory. General robot tasks can be modeled at each robot workcell and cycle times are recorded. Autonomously guided vehicles (AGVs) are incorporated in the factory plan together with parts storage devices and material handling stations. The user is able to model the entire factory operation and observe the system to identify any bottlenecks or parts backups. Additional robot workcells can be readily added or repositioned, and AGVs can be reprogrammed in order to alleviate any system problems. At the present time, the software package offers an excellent model of the entire factory; however, less emphasis is placed on the individual workcell. Detailed manipulator tasks cannot be planned and communicated to the robot controller. Additional programs will be integrated with the software package in order to address these issues.

Intergraph Corporation of Huntsville, Alabama, offers a hardware and software solution to the workcell modeling problem. The unique feature of the Intergraph hardware is the use of two 19 inch color raster monitors in the workstation. This feature greatly enhances the man-machine interface, which is the primary purpose of the graphics system. Intergraph offers a complete line of CAD/CAM software in addition to the robot workcell modeling software. Applications of the system to workcell planning have been performed at the NASA Marshall Space Flight Center, Huntsville, Alabama.

Simulation software has also been developed by the McDonnell Douglas Manufacturing Industry Systems Company, St. Louis, Missouri. Ninety-four robots have been modeled to date. McDonnell Douglas has acquired considerable experience in planning workcell operations for automobile assembly lines. For example, a robot may be assigned fifteen specific welds on a car body. The user must decide where the car body should stop along the assembly line with respect to the manipulator in order to accomplish these welds. The simulation software allows the user to attach the robot to one of the weld points and then move the auto body with the robot still attached. The user is notified if the weld point goes outside the workspace of the manipulator. By repeating the process, the operator can determine the precise position of the car body with respect to the manipulator so that all weld points are in reach of the manipulator. The software system generates path data which are communicated directly to the robot controller, together with cycle time data which are accurate to within 5%.

McDonnell Douglas has learned from experience that tasks that are taught to the robot in this way must be "touched up". For example, the car body may not stop precisely at the position that was determined during the simulation process. The manual touch up of certain points along the manipulator path can be accomplished with use of the teach pendant. Typically an average of fifteen to twenty minutes is required for this manual touch up operation. A second method of path upgrade can be accomplished by attaching a mechanical probe to the robot. This probe measures the actual position of an object with respect to the robot and then updates positional commands as necessary. Early application of this technique required the user to replace the tool of the robot with the mechanical probe. This was often a time consuming and labor intensive task. New measuring probes have been applied by McDonnell Douglas to remove this problem.

A final example of robot workcell simulation software is that developed by Deneb Robotics Inc., Troy, Michigan [13]. This software runs on the Silicon Graphics IRIS workstation. The interactive nature of the program allows the user to rapidly build new robot geometries, modify actuator limits, or reposition objects within the workspace. Detailed solid color images with satisfactory animation rates are obtainable. In addition, the user can be warned of collisions or near misses between parts of the robot and objects in the workspace, although the animation rate slows down as a function of the number of collision checks being made. The strength of the Deneb software is its rapid animation of solid shaded images together with its ease of use for the operator. As such, it is one of the more advanced manipulator simulation software packages.














CHAPTER II
DEVELOPMENT OF ANIMATED DISPLAYS OF
INDUSTRIAL ROBOTS




2.1 Introduction and Objective

In the previous chapter, improvements in computer hardware were discussed which have particular application to the problem of real time animation of solid color objects. The goal of this chapter is to detail the development (hardware and software) of real time, interactive computer graphics animations of industrial robots.

Any computer graphics simulation must possess two characteristics. First, it must be realistic: the image on the screen must contain enough detail so as to give the user an accurate and unambiguous picture. Second, the images must be generated rapidly enough so that smooth animation will result. Picture update rates of 10 frames per second have provided satisfactory animation on a 30 Hz video monitor (each picture is shown three times before changing).

Throughout this chapter, all applications will be aimed at the development of a solid color representation of a Cincinnati Milacron T3-776 industrial robot. The first part of this chapter will focus on the development of appropriate data structures, viewing transformations, and hidden surface removal methods. The specific programming techniques utilized and demonstrated on a VAX 11-750 computer will be discussed in detail. The second part of this chapter is concerned with the modification of the initial work in order to apply it to a Silicon Graphics IRIS model 2400 workstation. The modifications take full advantage of the built in hardware capabilities of the IRIS workstation and result in significantly improved performance.





2.2 Database Development

The first step in the generation of a computer graphics simulation of a robot manipulator is the drawing of an artist's concept of what the simulation should look like. Shown in Figure 2-1 is a drawing of the T3-776 robot and in Figure 2-2 is a sketch of the desired graphics simulation. Enough detail is provided for a realistic representation of the robot. Also shown in Figure 2-2 is a coordinate system attached to the base of the robot such that the origin is located at the intersection of the first two joints. The Z axis is vertical and the X axis bisects the front of the base. This coordinate system is named the 'fixed coordinate system' and will be referred to repeatedly.

The robot manipulator is made up of a collection of rigid bodies as shown in Figure 2-3. Also shown in the figure is a local coordinate system attached to each body. The coordinate values of each vertex point of the manipulator are invariant with respect to the coordinate system attached to each body. For this reason, local coordinate data can be collected and permanently stored for future use. It should be noted that the coordinate values of the vertices were obtained from actual measurement and from scale drawings of the robot. In this way, the computer graphics simulation is as accurate as possible.


Figure 2-1: Cincinnati Milacron T3-776 Manipulator

Figure 2-2: Conceptual Sketch

Figure 2-3: Collection of Rigid Bodies


It is apparent from Figure 2-3 that the simulated robot is made up of a series of primitives, i.e. n sided polygons, circles, and cylinders. An n sided polygon was defined to be a plane segment. A circle was defined as a 20 sided polygon, and cylinders as a set of 10 four sided polygons. Thus it is possible to define all parts of the simulation in terms of plane segments.

Each primitive (polygon, circle, cylinder) which makes up the simulation must have certain data items associated with it. These data were managed by placing each primitive into a node of a linked list as shown in Figure 2-4. Each node of the linked list is a variant structure and contains specific information such as the type of element, the body it belongs to, and the numbering of the vertices which comprise it. The linked list, which is readily implemented with the structures and pointers of the C programming language, was used because of its versatility and dynamic memory allocation characteristics.

With every element of the simulation now defined in terms of some local coordinate system, the following three tasks must now be performed in order to obtain a realistic color image:










struct circletype
{
    int   centerpt ;            /* the index of the center point        */
    float radius ;
} ;

struct polytype
{
    int   sides ;               /* the number of sides of the polygon   */
    float points[][3] ;         /* array of local coordinate values     */
} ;

struct cyltype
{
    int   endpts[2] ;           /* the index of the cylinder endpoints  */
    float radius ;
} ;

struct plane_segment
{
    int   name ;                /* indicates whether it is a polygon,   */
                                /* cylinder or circle                   */
    int   number ;              /* identifies the object that the plane */
                                /* segment belongs to                   */
    int   color ;               /* the color of the plane segment       */
    float normalpoint[3] ;      /* local coordinates of the normal pt.  */

    union
    {
        struct circletype cir ; /* contains circle data                 */
        struct polytype   pol ; /* contains polygon data                */
        struct cyltype    cyl ; /* contains cylinder data               */
    } dat ;

    struct plane_segment *next ;  /* pointer to next plane segment      */
} ;


Figure 2-4: Graphics Data Structure










1. For a given position of the robot and of the viewer,

transform all local vertex data to a screen coordinate

system so that it can be properly displayed.

2. Delete all plane segments which are non-visible

(backward facing).

3. Sort all remaining plane segments so that the proper

overlap of surfaces is obtained.



Each of these three tasks will now be discussed in detail.





2.3 Coordinate Transformations

In order to produce a drawing of the robot, certain

input data must be known. First the angle of each of the

six revolute joints of the robot must be selected. Chapter

3 details a method of calculating joint angles so as to

cause the robot to move along some desired trajectory. For

the purposes of this chapter, it will be assumed that the

set of joint angles is known for the picture to be drawn at

this instant.

The second input item which is required is the point to

be looked at and the point to view from. Knowledge of these

points determines from what vantage point the robot will be

drawn. The selection and modification of these items allows

the user to view the image from any desired location.










2.3.1 Local to fixed coordinate transformation

As shown in Figure 2-3, the representation of the robot

manipulator is made up of a series of rigid bodies. The

coordinates of each vertex are known in terms of the

coordinate system attached to each of the bodies. The first

task to be completed is the determination of the coordinates

of every vertex in terms of the fixed coordinate system

attached to the base of the robot.

Since it is assumed that the six joint angles of the

robot are known, the transformation of local point data to

the fixed reference system is a straightforward task. The

local coordinate systems shown in Figure 2-3 are named C1

through C6. These local coordinate systems were carefully

selected so as to simplify the transformation of data to the

fixed reference system.

Shown in Figure 2-5 is a skeletal model of the T3-776

manipulator. The vectors along each of the joint axes are

labeled S1 through S6 and the vectors perpendicular to each

successive pair of joint axes are labeled a12 through a67

(not all are shown in the figure). The variable joint

displacements θ2 through θ6 (θj) are measured as the

angles between the vectors aij and ajk in a right handed

sense about the vector Sj. The first angular displacement,

φ1, is measured as the angle between the X axis of the

fixed reference system and the vector a12. As previously

stated, it is assumed that the joint displacements φ1

through θ6 are known values.

















Figure 2-5: Skeletal Model of T3-776 Manipulator








Twist angles are defined as the relative angle between

two successive joint axes, measured about their mutual

perpendicular. For example, the twist angle α12 is

measured as the angle between the vectors S1 and S2 as seen

in a right handed sense along the vector a12. In general,

all twist angles will be constant for an industrial robot

under the assumption of rigid body motion.

Two additional parameters, link lengths and offset

lengths, are shown in the skeletal model of the manipulator.

A link length, aij, is defined as the perpendicular distance

between the pair of axes Si and Sj. All link lengths are known

constant values for a manipulator. An offset length, Sjj,

is defined as the perpendicular distance between the two

vectors aij and ajk. For revolute joints, offsets are

constant values. Shown in Table 2-1 are the specific

constant link length, offset, and twist angle values for the

Cincinnati Milacron T3-776 robot manipulator.

A systematic selection of each local coordinate system

was made based on the skeletal model of the manipulator.

The Ci coordinate system was established such that the Z

axis was aligned with the vector Si and the X axis was along

aij. With this definition for each coordinate system, the

coordinates of a point known in the Cj system can be found

in the Ci system by applying the following matrix equation:













Table 2-1: T3-776 Mechanism Dimensions


S11 = *          a12 = 0 in.     α12 = 90 deg.

S22 = 0 in.      a23 = 44        α23 = 0

S33 = 0          a34 = 0         α34 = 90

S44 = 55         a45 = 0         α45 = 61

S55 = 0          a56 = 0         α56 = 61


* to be determined when closing the loop










    | xi |         | xj |   | dxji |
    | yi | = Aij | yj | + | dyji |                 (2.1)
    | zi |         | zj |   | dzji |


where


          |  cj        -sj        0   |
    Aij = |  sj·cij    cj·cij   -sij  |            (2.2)
          |  sj·sij    cj·sij    cij  |



The vector [dxji, dyji, dzji] represents the

coordinates of the origin of the Cj system as measured in

the Ci coordinate system. Also the terms sj and cj

represent the sine and cosine of θj and the terms sij and

cij represent the sine and cosine of αij. This notation

will be used repeatedly throughout subsequent chapters.

Since all joint angles and twist angles are assumed to

be known, equation (2.1) can be repeatedly used to transform

all vertex data to the first coordinate system, C1. A point

known in terms of the C1 coordinate system, [x1, y1, z1],

can be found in terms of the fixed coordinate system [xf,

yf, zf], as follows:


    | xf |       | x1 |
    | yf | = M | y1 |                              (2.3)
    | zf |       | z1 |










where



        | cos φ1   -sin φ1    0 |
    M = | sin φ1    cos φ1    0 |                  (2.4)
        |    0         0      1 |



Proper use of equations (2.1) and (2.3) will result in the

determination of all vertex data for the robot in terms of

the fixed coordinate system attached to the robot base.



2.3.2 Fixed to viewing coordinate transformation

Assuming that the point to look at and the point to

view from are known in terms of the fixed coordinate system,

all vertices of the robot are now determined in terms of a

viewing coordinate system. The use of this coordinate

system will greatly simplify the eventual projection of the

robot onto the screen.

As shown in Figure 2-6, the origin of the viewing

coordinate system is the point to view from. The Z axis of

the coordinate system is the vector from the point being

looked at to the point being viewed from. With the Z axis

known, the XY plane is defined.

The exact directions of the X and Y axes are immaterial

at this point. Typically, however, the Y axis of the

viewing coordinate system will point "up." In other words,

for the robot to be drawn with the base along the bottom of
























Figure 2-6: Transformation to Viewing Coordinate System








the screen, the Y axis of the viewing coordinate system must

correspond to the Z axis of the fixed coordinate system.

This association is accomplished by selecting a zenith point

(point B in Figure 2-6) which is high above the robot. As

shown in the figure, the direction of the X axis is obtained

as the cross product of the vector along line OA with the

vector along the line OB. With the X and Z axes now known,

the Y axis can be determined.

As described, vectors along the X, Y, and Z axes of the

viewing coordinate system are known in terms of the fixed

coordinate system. A 3x3 matrix, V, can be formed such that

the first column of the matrix is made up of the known

direction cosines of the unit vector along the X axis

(measured in terms of the fixed coordinate system).

Similarly, the second and third columns of V are made up of

the direction cosines of unit vectors along the Y and Z

axes. Recognizing that V is an orthogonal rotation matrix,

the transformation from the fixed coordinate system to the

viewing coordinate system is given by



    | xv |         | xf - dxfv |
    | yv | = V^T | yf - dyfv |                     (2.5)
    | zv |         | zf - dzfv |



where the vector [dxfv,dyfv,dzfv] represents the coordinates

of the origin of the viewing coordinate system as measured

in the fixed coordinate system.








At this point, the coordinates of all vertices of the

robot are known in terms of the viewing coordinate system.

All that remains is to transform the data one more time to

the screen coordinate system so that it can be properly

displayed.





2.3.3 Viewing to screen coordinate system transformation

The screen coordinate system is defined such that the

origin of the system is located at the lower left corner of

the terminal screen. The X axis points to the right, the Y

axis points up, and the Z axis point out from the screen.

The scale of the axes is dependent on the type of terminal

being used. All data points must be transformed to this

coordinate system so that they may be properly displayed on

the screen. Two types of projective transformations may be

used to perform the transformation between the coordinate

systems. These projective transformations are perspective

and parallel projections and are shown in Figure 2-7.

A parallel projection is the simplest type of

transformation. The conversion to the screen coordinate

system is simply accomplished by ignoring the Z component of

the data from the viewing coordinate system. In other

words, the X and Y values of points in the viewing

coordinate system are simply plotted on the graphics screen

(accompanied by any desired translation and scaling). The

resulting image is the same as would be obtained if the









































Figure 2-7: Parallel and Perspective Transformation.
a) Parallel Projection ;
b) Perspective Projection.









viewer was standing at infinity with respect to the robot.

Parallel lines will remain parallel on the screen.

The perspective transformation is accomplished by

projecting points onto a plane (screen). One point is

selected as shown in Figure 2-7 as the center of the

projection. The screen coordinates of any point are

determined by finding the coordinates of the point of

intersection of the plane (screen) with the line between the

point in question and the center of projection. This

transformation can again be accomplished via matrix

multiplication (coupled with any desired translation on the

screen).

For the purposes of this work, the parallel projection

method was used for determining the data in terms of the

screen coordinate system. This choice was made because of

the reduced number of calculations required to perform

subsequent sorting algorithms used for eventual solid color

representations of the robot.





2.4 Backface Polygon Removal

The next task that must be accomplished is the

filtering of the linked list of plane segments such that the

elements which are backward facing are removed. In other

words, at any time approximately one half of the plane

segments will not be visible to the viewer. The list of

visible plane segments changes each instant that the robot

or the viewer changes position.









The removal of the backward facing polygons is a quick

and simple task. As indicated in the data structure shown

in Figure 2-4, the coordinates of a normal point are

specified for each plane segment. A vector normal to the

surface (and outward pointing) can be formed by subtracting

the coordinates of the origin of the local coordinate system

from the coordinates of the normal point. Just as all the

vertices were transformed from the local coordinate system

to the screen coordinate system, the normal points are also

transformed. Comparison of the Z coordinate of the normal

point with that of the origin of the local coordinate system

(both now in the screen coordinate system) determines

whether the particular plane segment is visible.

Application of this method results in a greatly

simplified image. Shown in Figure 2-8 is an edge drawing of

the T3-776 manipulator. Figure 2-9 shows the same drawing

with backfacing polygons removed.





2.5 Sorting of Plane Segments

A characteristic of raster type graphics displays is

that whatever is drawn last will appear to be on top. For

example if two polygons, A and B, exist and polygon A is

closer to the viewer and overlaps polygon B, then polygon B

must be drawn on the screen prior to polygon A. The only

other alternative would be to redefine polygon B so that the

regions overlapped by polygon A were subtracted. In this











































Figure 2-8: Wire Frame Model of T3-776 Manipulator














































Figure 2-9: Backfacing Polygons Removed








manner the new polygon B' and the original polygon A would

no longer overlap and it would not matter in what order they

were drawn on the screen.

Numerous techniques exist for sorting polygons for

proper display. Algorithms have been developed based on two

primary techniques. The first involves sorting objects into

the correct order for display [14 16], while the second

technique concentrates on individual pixels (ray tracing

[17-18] and z-buffer algorithms [19-20]).

A sorting technique was used in this work for two

reasons: a rapid algorithm was required, and one which did not

require a substantial amount of computer memory. The

algorithm which performs the sort is of necessity of order

n². In general, every plane segment must be compared

against every other plane segment. To shorten this process,

however, a numbering scheme was employed so that, for

example, the sides of a cube would not be compared since it

would be impossible for them to cover each other. Similarly

it is not necessary to compare the five visible sides of a

ten sided cylinder.

The comparison of two plane elements to determine if

one of them overlaps or covers the other is accomplished by

applying a series of tests. The first test is to determine

whether a bounding box placed around the projection of one

of the objects is disjoint from a bounding box placed around

the projection of a second object. If the bounding boxes do

not overlap, then it is not possible for the two objects to

overlap and the comparison is complete.









If the two bounding boxes are not disjoint, then all

the points of one object are substituted into the equation

of the infinite plane that contains the second object. If

the resulting value of the equation for all points is

greater than zero (assuming that the viewer's side of the

infinite plane is positive), then the first object may cover

the second object. Similarly, the points of the second

object are substituted into the equation of the infinite

plane containing the first object. Again whether the sign

of the equation is greater or less than zero determines

whether one object may overlap the other. Shown in Table

2-2 are all the possible combinations of signs that may

occur. Figure 2-10 shows a representative sample of the

types of overlap conditions that can occur for two plane

segments.

If it is concluded from the previous test that no

overlap can occur, then the comparison is complete. However

if an overlap may occur, then the projections of the two

objects onto the screen are checked to determine if they do

indeed overlap. This is done by determining whether any

lines of either of the two projections cross. If any of the

lines do cross, then the plane segments do overlap. If none

of the lines cross, then it may be the case that one of the

projections lies completely inside the other. One point of

each of the projected plane segments is checked to determine

whether it is inside the other projected polygon.












Table 2-2: Plane Segment Sorting Cases




This table indicates whether all the vertices of plane
segment 1 are on the origin side (+ side) or the opposite
side (- side) of the infinite plane containing plane segment
2. Similarly, the vertices of plane segment 2 are compared
to the infinite plane containing plane segment 1.


segment 1    segment 2    result

    +            +        no overlap
    -            -        no overlap
    +            -        1 may overlap 2
    +           +/-       1 may overlap 2
   +/-           -        1 may overlap 2
    -            +        2 may overlap 1
    -           +/-       2 may overlap 1
   +/-           +        2 may overlap 1
   +/-          +/-       overlap may occur


















Figure 2-10: Plane Segments with Infinite Planes.
a) segment 1 (+), segment 2 (+)
b) segment 1 (+), segment 2 (+/-) ;
c) segment 1 (-), segment 2 (+/-)









Clearly, the comparison task is lengthy and time

consuming. The case of two objects whose bounding boxes are

not disjoint and yet do not actually overlap takes

considerable time. In addition, the equation of the

infinite plane for each plane segment had to be calculated

for each image to be drawn based on the position of the

robot and of the viewer. On the average, for a particular

drawing of the T3-776 robot there are 85 plane segments to

compare and sort. Due to this large number, the execution

time of this algorithm on a VAX 11/750 computer is

approximately 10 seconds per drawing.

Clearly, the sorting of plane segments in software will

not allow images to be generated rapidly enough to provide

proper animation. An improvement by at least a factor of

100 is necessary in order to reach the minimum animation

rate of 10 frames per second. A second drawback of the

algorithm is that it will fail if there exists a cyclic

overlap of plane segments. For example, if segment A

overlaps B which overlaps C which in turn overlaps segment

A, then the algorithm as written will fall into a recursive

trap. This problem can be corrected in software, but the

additional calculations will only serve to further increase

the computation time.

A solution to the problem was found via application of

special purpose computer graphics hardware. The animation

program was modified to run on a Silicon Graphics IRIS model

2400 workstation. Proper modifications of the database and








sorting method to take advantage of the hardware

improvements resulted in the rapid generation of full color,

solid images at a rate of over 10 frames per second. The

hardware system and software modifications will be detailed

in the following sections of this chapter.





2.6 Description of Hardware System

The Silicon Graphics IRIS model 2400 workstation is a

68010 based 3-D system designed to function as a stand-alone

graphics computer. It is capable of generating three

dimensional, solid color images in real time without the

need for a main frame computer.

The unique component of the IRIS is a custom VLSI chip

called the Geometry Engine. A pipeline of twelve Geometry

Engines accepts points, vectors, and polygons in user

defined coordinate systems and transforms them to the screen

coordinate system at a rate of 69,000 3-D floating point

coordinates per second.

The display of the IRIS system is a 19 inch monitor

with a screen resolution of 1024 pixels on each of 768

horizontal lines. The monitor is refreshed at a rate of 60

Hz and provides flicker free images. The image memory

consists of eight 1024 x 1024 bit planes (expandable to 32

bit planes). An eight bit plane system will allow for 2^8

(256) colors to be displayed simultaneously on the screen.








Animation is obtained by setting the IRIS system into

double buffer mode. In this mode, half of the bit planes

are used for screen display and half for image generation.

In other words, while the user is observing an image

(contained on the front 4 bit planes), the next image is

being drawn on the back 4 bit planes. When the image is

complete, the front and back sets of bit planes are swapped

and the user sees the new picture. The complexity of the

image to be drawn governs the speed at which the bit planes

are swapped. Experience has shown that the swapping should

occur at a rate no slower than 8 Hz in order to result in

satisfactory animation.

The one drawback of double buffer mode is that there

are only half as many bit planes available for generating an

image. The reduced number of bit planes further limits the

number of colors that may be displayed on the screen at

once. An IRIS system with only 8 bit planes, such as the

system at the University of Florida, can only display 2^4

(16) colors on the screen at once while in double buffer

mode. It should be noted, however, that a fully equipped

system with 32 bit planes can display 2^16 (65,536) colors

simultaneously in double buffer mode. This capability

should far exceed user requirements in almost all instances.









2.7 Program Modifications

It was previously noted that an increase in performance

by at least a factor of 100 was required in order to produce

images rapidly enough to result in pleasing animation. A

brief description of the graphics software library which is

included with the IRIS system will precede the discussion of

the specific data structure and software modifications which

were made.





2.7.1 IRIS coordinate transformations

The primary task in drawing images on the screen is the

transformation of coordinate values from local coordinate

systems to the screen coordinate system. The IRIS

workstation accomplishes this by manipulating data in terms

of homogeneous coordinates. Four coordinate values, [x, y,

z, w] are used to define the coordinates of a point. What

is normally thought of as the X coordinate value can be

calculated as x/w. Similarly, values for the Y and Z

coordinates of a point are readily determined. The

advantage of using homogeneous coordinates is that

rotations, translations, and scaling can all be accomplished

by 4x4 matrix multiplication.

The IRIS system constantly keeps track of the current

transformation matrix, M. This matrix represents the

transformation between some local coordinate system and the

screen coordinate system. When any graphics drawing command









is given, as for example 'pnt(50, 20, 40)' which draws a

point at the local position (50, 20, 40), the coordinate

datum is multiplied by the matrix M in order to determine

the screen coordinate values. The transformation is

represented by the following equation:


    [x, y, z, w] = [x', y', z', w'] M              (2.6)


The basic problem then is to make the matrix M represent the

transformation between the coordinate system attached to

each of the rigid bodies of the robot. For example, when M

represents the transformation between the fixed coordinate

system attached to the base of the robot and the screen

coordinate system, the base of the robot can be drawn in

terms of a series of move and draw commands, all of which

will use local coordinate data as input.

When the IRIS system is initialized, the matrix M is

set equal to the identity matrix. Three basic commands,

translate, rotate, and scale, are called to modify M.

Calling one of the three basic commands causes the current

transformation matrix, M, to be pre-multiplied by one of the

following matrices:


Translate (Tx, Ty, Tz) =


    |  1    0    0    0 |
    |  0    1    0    0 |
    |  0    0    1    0 |                          (2.7)
    |  Tx   Ty   Tz   1 |


















Scale (Sx, Sy, Sz) =


    |  Sx   0    0    0 |
    |  0    Sy   0    0 |
    |  0    0    Sz   0 |                          (2.8)
    |  0    0    0    1 |


Rotx(θ) =


    |  1     0       0      0 |
    |  0    cosθ    sinθ    0 |
    |  0   -sinθ    cosθ    0 |                    (2.9)
    |  0     0       0      1 |


Roty(θ) =


    |  cosθ   0   -sinθ    0 |
    |   0     1     0      0 |
    |  sinθ   0    cosθ    0 |                     (2.10)
    |   0     0     0      1 |


Rotz(θ) =


    |  cosθ   sinθ    0    0 |
    | -sinθ   cosθ    0    0 |
    |   0      0      1    0 |                     (2.11)
    |   0      0      0    1 |


With these three basic transformations, it is an easy

matter to cause the matrix M to represent the transformation


from the fixed robot coordinate system to the screen

coordinate system. A translate command can be called to

center the image as desired on the screen, a scale command

will allow for zooming in, and a series of rotate commands

will allow for any desired orientation of the robot base

with respect to the screen. The program is written so that

the parameters to these commands are modified by rolling the

mouse device. In this manner, the user can change the

orientation and scale of the drawing as desired. Since the

images will be drawn in real time, the user has the

capability to interact with the system and alter the viewing

position also in real time.

Once the matrix M represents the fixed coordinate

system, the base of the robot can be drawn. A series of

move and draw commands can be called, using local coordinate

data as input. However, since solid color images are

desired, the order that solid polygons are drawn is of

importance. Because of this, the matrix M is simply saved

and given the name Mf. When any part of the base of the

robot is to be drawn, however, the matrix Mf must be

reinstated as the current transformation matrix M.

The transformation from the matrix Mf to the coordinate

system attached to Body 1 (see Figure 2-3) is a simple task.

The transformation matrix for Body 1, M1, is given by the

following equation:


M1 = Rotz(φ1) Mf                                   (2.12)










Similarly, the transformation matrices for bodies 2 through

6 are given by the following equations:



M2 = Rotx(90) Rotz(-θ2) M1                         (2.13)

M3 = Translate(a23, 0, 0) Rotz(θ3) M2              (2.14)

M4 = Rotz(θ4) Rotx(90) Translate(0, -S44, 0) M3    (2.15)

M5 = Rotz(θ5) Rotx(61) M4                          (2.16)

M6 = Rotz(θ6) Rotx(61) M5                          (2.17)

At this point, all transformation matrices are known

and the image of the robot can be drawn. It is important to

note that the method described here is virtually identical

to that discussed in section 2.3. The improvement, however,

is that all the matrix multiplications required to transform

the coordinates of some point from a local coordinate system

to the screen coordinate system will be accomplished by

specially designed chips. In this way the multitude of

matrix multiplications can be accomplished in a minimal

amount of time.




2.7.2 IRIS data structure

The data structure of the robot animation program was

also modified in order to take advantage of the unique

capabilities of the IRIS system. As previously noted, the

entire image of the Cincinnati Milacron T3-776 robot can be

formed from a series of n sided polygons. The IRIS graphics








command which draws a solid polygon in the currently defined

color is as follows:



polf (n, parray) (2.18)



where n is an integer which represents the number of sides

of the polygon and parray is an array of size nx3 which

contains the local coordinates of the vertices of the

polygon.

Since all polygons are to be defined in terms of their

local coordinates, all polygons were defined once at the

beginning of the program. For example, there exist 126 four

sided polygons in the representation of the T3-776 robot.

Therefore the variable 'p4array' was declared to be of size

[126][4][3]. Each four sided polygon was given a number

(name) and the local X,Y,Z coordinates of each of the four

points were stored in the array.

An obvious disadvantage of this scheme is that point

data will be duplicated, thereby requiring more computer

memory. For example, a cube is defined by eight points and

six polygons. In the method used, each point will appear in

the variable 'p4array' three times, i.e. as a member of each

of three sides of the cube. The advantage of this method,

however, is that of speed. The datum for a particular

polygon is pre-formatted for immediate use in the 'polf'

command. No additional data manipulation is required.









2.7.3 Backface polygon removal on the IRIS

In section 2.4 a method of backface polygon removal was

discussed. A normal point was selected such that the vector

from the origin of the local coordinate system to the normal

point represented a vector normal to the particular polygon

in question. Transformation of the origin point and the

normal point to the screen coordinate system would determine

if the polygon was facing the viewer.

This method was again used on the IRIS workstation with

slight modification. From observing Figure 2-3, it is

apparent that most of the polygons which form each of the

rigid bodies have one of the coordinate vectors as their

normal vector. Therefore, associated with each polygon is

the character string 'x', 'y', 'z', or 'other'. In this

manner, not every polygon will have to have its normal

vector calculated. Allowing three normal vector

calculations for each of the coordinate axes of each rigid

body (21 total), plus the normal calculations of the 'other'

cases (50 total), the normal vector calculations have been

reduced from a previous total of 237 to the new total of 71.

Knowing the transformation matrix, Mi, for each of the

rigid bodies, the transformation of the normal points could

be carried out in software via matrix multiplication. This

process, however, would be too time consuming and would

greatly slow down the animation rate. An alternative method

was found whereby the Geometry Engine chips of the IRIS

workstation could be used to perform the matrix

multiplication in hardware.









The IRIS workstation is placed in "feedback mode."

When in feedback mode, graphics commands are not drawn on

the screen, but rather data items are stored in a buffer.

The command 'xfpt' accepts the local coordinates of a point

as input. The homogeneous coordinates [x, y, z, w] of the

point in terms of the screen coordinate system are stored as

output in the buffer. The Z value of the normal point (z/w)

is compared with the Z value of the origin point of the

local system after both points have been transformed to the

screen coordinate system by the Geometry Engine.

Comparisons of these Z values determines whether the normal

vector is pointing towards the viewer and thereby determines

if a particular polygon is visible. It should be noted that

when a parallel projection is used, as it is in this

example, the homogeneous coordinate 'w' will always

equal 1 and the division is therefore not necessary.





2.7.4 Modified Sorting of Plane Segments

After backfacing polygons are removed, the remaining

plane segments must be drawn on the screen in proper order

so that polygons closer to the viewer are drawn last.

Section 2.5 detailed a method for accomplishing this

sorting. A lengthy series of tests were made to compare

every pair of plane segments. Although the sorting

algorithm produced correct results, the computational time

was unacceptable.









A new and simplified method was developed for use on

the IRIS workstation. Once all backfacing polygons are

removed, what remains is a collection of objects. It was

desired to compare and sort the objects, not the individual

plane segments which compose the objects. An object is

defined as a collection of plane segments which cannot

overlap each other. An example of an object is the base

plate of the robot. A second example is the large box

shaped object in the base which rests on top of the base

plate. These examples point out that each of the rigid

bodies shown in Figure 2-3 are composed of a collection of

objects.

Once all objects were defined, a series of rules was

generated which describes how the image of the robot is to

be drawn. An example of such a rule is as follows:



If I am looking from above the robot, then the base

plate must be drawn before the box which rests on

top of it.



The 'if clause' of the above rule will be true if the X axis

of the fixed coordinate system is pointing towards the

viewer. This information is already known since it was

required in the determination of which polygons were

backfacing. Similar rules (again based on previously

calculated facts) make it possible to sort the objects

quickly and correctly. A total number of 12 basic rules








were required to produce accurate images of the robot.

These 12 rules form the basis of a forward chaining

production system. It must be re-emphasized that the

correct ordering can be accomplished in a negligible amount

of time because all data required by the 'if clause' of each

rule were calculated previously.





2.8 Results and Conclusions

The resulting representation of the Cincinnati Milacron

T3-776 robot is shown in Figures 2-11 and 2-12. Pictures

are generated at a rate of 10 frames per second which

results in satisfactory animation. As previously stated,

the user is capable of interacting with the system in real

time to alter the viewing position, freeze the motion, or to

zoom in.

Many applications exist for such a graphics system.

Two particular applications, the control of teleoperated

robots and the off-line planning of robot tasks, will be

presented in Chapters IV and V. Additional applications in

the areas of designing workstations, and the evaluation of

new robot designs (based on factors such as workspace

envelope, dexterity capability, and interference checking)

make such a graphics system a valuable tool.





























Figure 2-11: Animated Representation of T3-776 Manipulator


Figure 2-12: Animated Representation of T3-776 Manipulator














CHAPTER III
PATH GENERATION




3.1 Introduction and Objective

This chapter is concerned with the calculation of a

series of joint angles for a robot manipulator which will

cause the manipulator to move along a user specified path.

These calculations will serve as input to the robot

animation program described in the previous chapter. In

this manner, the user will be able to observe and evaluate

the robot motion on the graphics screen prior to any

movement of the actual robot manipulator. As with the

previous chapter, the specific application to the Cincinnati

Milacron T3-776 manipulator will be presented in detail.

The first problem to be considered will be the reverse

kinematic analysis of the robot manipulator. This analysis

determines the necessary joint displacements required to

position and orient the end effector of the robot as

desired. The problem of path generation is then reduced to

the careful selection of robot positions and orientations

along some desired path. A reverse kinematic analysis is

then performed for each of these locations.








3.2 Notation

The notation used throughout this analysis is that

developed by J. Duffy as presented in reference [21].

Briefly stated, a manipulator is composed of a series of rigid links. One such link is shown in Figure 3-1. In this figure it is shown that the link connects the two kinematic pair (joint) axes Si and Sj. The perpendicular distance between the pair axes is aij, and the unit vector along this mutual perpendicular is aij. The twist angle between the pair axes is labelled αij and is measured in a right handed sense about the vector aij.

The particular kinematic pair under consideration is the revolute joint, which is shown in Figure 3-2. The perpendicular distance between links, or more specifically the perpendicular distance between the vectors aij and ajk, is labelled as the offset distance Sj. The relative angle between two links is shown as θj and is measured in a right handed sense about the vector Sj.

Four types of parameters, i.e. joint angles (θj), twist angles (αij), offsets (Sj), and link lengths (aij), describe the geometry of the manipulator. It is important to note that for a manipulator comprised of all revolute joints, only the joint displacement angles are unknown quantities. The twist angles, offsets, and link lengths will be known constant values. Furthermore, the values for the sine and cosine of a twist angle αij and an angular joint displacement θj may be obtained from the equations












Figure 3-1: Spatial Link



































Figure 3-2: Revolute Pair





cij = Si · Sj    (3.1)

sij = |Si Sj aij|    (3.2)

cj = aij · ajk    (3.3)

sj = |aij ajk Sj|    (3.4)



Determinant notation is used to denote the scalar triple

product.
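Equations (3.1) and (3.2) can be checked numerically; a minimal sketch, with illustrative axis values (the mutual perpendicular is built from the cross product):

```python
import numpy as np

# Numerical form of (3.1)-(3.2): the cosine of a twist angle is the dot
# product of the joint axes; the sine is the scalar triple product
# |Si Sj aij|, written here as a determinant.  Axis values are examples.

def twist_cos_sin(Si, Sj):
    cij = Si @ Sj                                          # eq. (3.1)
    aij = np.cross(Si, Sj)
    aij = aij / np.linalg.norm(aij)                        # mutual perpendicular
    sij = np.linalg.det(np.column_stack([Si, Sj, aij]))    # eq. (3.2)
    return cij, sij

c, s = twist_cos_sin(np.array([0.0, 0.0, 1.0]),   # Si
                     np.array([0.0, 1.0, 0.0]))   # Sj
# perpendicular axes give a twist angle of 90 degrees: c = 0, s = 1
```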




3.3. Mechanism Dimensions of the T3-776 Manipulator

Shown in Figure 3-3 is a sketch of the T3-776 robot.

The robot is described by the manufacturer as consisting of

a three roll wrist connected to ground by an elbow,

shoulder, and base revolute joint. Shown in Figure 3-4 is a

skeletal drawing of the manipulator. The three roll wrist

is modeled by a series of three revolute joints whose axes

of rotation all intersect at a point. The elbow, shoulder,

and base joints are each modeled by a revolute joint such

that the axis of rotation of the shoulder and elbow are

parallel.

In the skeletal model the joint axes are labeled sequentially with the unit vectors Si (i = 1, 2, ..., 6). The directions of the common normal between two successive joint axes Si and Sj are labeled with the unit vectors aij (ij = 12, 23, ..., 67). It must be noted that only the vectors a12 and a23 are shown in Figure 3-4 for simplicity of the diagram.


























Figure 3-3: Cincinnati Milacron T3-776 Manipulator



















Figure 3-4: Skeletal Model of T3-776 Manipulator









As previously stated, the link lengths aij, the offsets Sjj, and the twist angles αij are constants which are

specific to the geometry of a particular manipulator. The

values of these constants are tabulated below for the T3-776

robot.



S11 : computed subsequently    a12 = 0 in.    α12 = 90 deg.
S22 = 0 in.                    a23 = 44.0     α23 = 0
S33 = 0                        a34 = 0        α34 = 90        (3.5)
S44 = 55.0                     a45 = 0        α45 = 61
S55 = 0                        a56 = 0        α56 = 61



In addition to the above constant dimensions, S66 and

a67 are selected such that the point at the end of vector

a67 is the point of interest of the tool connected to the

manipulator. For example this point may be the tip of a

welding rod that the manipulator is moving along a path.

Once a particular tool is selected, constant values for S66

and a67 are known.

Furthermore it is noted that the link lengths a12, a34, a45, and a56 equal zero. However it is still necessary to specify the direction of the unit vectors a12, a34, a45, and a56 in order to have an axis about which to measure the corresponding twist angles. The vector aij must be perpendicular to the plane defined by the vectors Si and Sj and as such can have two possible directions. For the









vectors a12, a34, a45, and a56 this direction is arbitrarily selected as the direction parallel to the vector Si × Sj. The values for the corresponding twist angles α12, α34, α45, and α56 listed in (3.5) were determined based upon this convention.
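For later computation, the constants of set (3.5) might be collected as follows. This layout and the names are my own; S11, S66, and a67 are deliberately left out because they are not fixed constants of the mechanism.

```python
# The constant dimensions from set (3.5), collected for computation.
# Units: inches for offsets and link lengths, degrees for twist angles.
# S11 is found during loop closure; S66 and a67 depend on the tool.
T3_776 = {
    'S': {2: 0.0, 3: 0.0, 4: 55.0, 5: 0.0},                    # offsets Sjj
    'a': {12: 0.0, 23: 44.0, 34: 0.0, 45: 0.0, 56: 0.0},       # link lengths aij
    'alpha': {12: 90.0, 23: 0.0, 34: 90.0, 45: 61.0, 56: 61.0} # twists (deg)
}
```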





3.4. Reverse Displacement Analysis

For the reverse displacement analysis the position and

orientation of the hand of the manipulator are specified.

It is desired to determine the necessary values for the

relative displacements of the joints that will position the

hand as desired, i.e. to determine sets of values for the six quantities θ1, θ2, θ3, θ4, θ5, and θ6. The analysis is complicated by the fact that there is most often more than one set of displacements which will place the hand in

the desired position. An advantage of this reverse

displacement analysis is that all displacement sets will be

determined as opposed to an iteration method which would

find only one set of joint displacements.

It turns out that for the T3-776 robot there are a

maximum of four possible sets of angular displacements which

will position and orient the hand as specified. The unique

geometry of the robot, that is, S2 parallel to S3 and S4, S5, S6 intersecting at a point, allows for eight possible sets. However the limits of rotation of the first three joints reduce the solution to a maximum of four. The








limits of rotation for the angles φ1, θ2, and θ3 (see Figure 3-4) are as follows:

-135 < φ1 < 135 (degrees)

30 < θ2 < 117

-45 < θ3 < 60



3.4.1 Specification of position and orientation

The first step of the analysis is to establish a fixed

coordinate system. For this analysis a fixed coordinate

system is established as shown in Figure 3-4 such that the

origin is at the intersection of the vectors S1 and S2. The Z axis is chosen to be parallel to the S1 vector and the X axis bisects the allowable range of rotation of the angle φ1. Throughout the rest of this analysis, this coordinate

system will be referred to as the fixed coordinate system.

Using this fixed coordinate system it is possible to

specify the location and orientation of the hand by specifying the vector to the tool tip, RP1 (see Figure 3-5), and the direction cosines of the vectors S6 and a67. Although RP1, S6, and a67 have a total of nine components, the latter two are related by the three conditions,



S6 · S6 = 1

a67 · a67 = 1

S6 · a67 = 0













Figure 3-5: Hypothetical Closure Link











so that the three vectors (RP1, S6, a67) represent the 9 - 3 = 6

independent parameters necessary to locate a rigid body in

space.



3.4.2 Closing the loop

Once the position and orientation of the hand is

specified, the manipulator is connected to ground by a

hypothetical link. The problem of determining the sets of

joint displacements to position and orient the hand is thus

transformed to the problem of analyzing an equivalent

spatial mechanism with mobility equal to one. The concept

of closing the loop is not new. Pieper and Roth [22] were

the first to point out the implicit correspondence between

manipulators and spatial mechanisms using homogeneous

transfer matrices. The method of closing the loop which is

presented here was published by Duffy and Lipkin in

reference [23].

It is now necessary to determine the five constraint parameters S77, a71, S11, α71, and (θ1 - φ1) that complete the loop, together with the input angle of the spatial mechanism, θ7. The first step of the completion algorithm is to establish a direction for the unit vector S7. This vector will act as the axis of rotation of the hypothetical revolute joint which serves to close the loop. The direction of S7 is arbitrary so long as it lies in the plane which is perpendicular to a67. For this analysis S7 is selected such that α67 equals 90 degrees and thus the direction cosines of S7 may be obtained from the equation







S7 = a67 × S6    (3.6)


With S7 now known, application of (3.1) gives the following

expression for c71:


c71 = S7 · S1    (3.7)


A value of c71 = ±1 immediately flags a singularity which will

be discussed subsequently. The unit vector a71 is now

defined by

a71 = (S7 × S1) / |S7 × S1|    (3.8)

and thus by application of (3.2),



s71 = |S7 S1 a71|    (3.9)


Utilizing the vector loop equation (see Figure 3-5),


RP1 + S77 S7 + a71 a71 + S11 S1 = 0    (3.10)


results in explicit expressions for the hypothetical link

a71 and hypothetical offsets S77 and Sli,


S77 = |RP1 a71 S1| / s71    (3.11)

a71 = |S7 RP1 S1| / s71    (3.12)

S11 = |S7 a71 RP1| / s71    (3.13)










Utilizing the results of (3.8) and equations (3.3) and (3.4) gives the following expressions for the sine and cosine of θ7,

c7 = a67 · a71    (3.14)

s7 = |a67 a71 S7|    (3.15)


In addition, by projection the expressions for the sine and cosine of (θ1 - φ1) are

cos(θ1 - φ1) = a71 · i    (3.16)

sin(θ1 - φ1) = |S1 a71 i|    (3.17)

where i is the unit vector in the direction of the X axis.
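A numerical sketch of the closure computation, equations (3.6) through (3.13), for the general non-singular case. The base axis S1 is taken as the fixed Z axis, and the input pose values are illustrative; the scalar triple products are evaluated as determinants.

```python
import numpy as np

# Loop closure, eqs. (3.6)-(3.13), general (non-singular) case.
# S1 is the base axis (fixed Z here); the inputs below are example values.

def close_loop(RP1, S6, a67):
    S1 = np.array([0.0, 0.0, 1.0])
    trip = lambda u, v, w: np.linalg.det(np.column_stack([u, v, w]))
    S7 = np.cross(a67, S6)                       # eq. (3.6)
    c71 = S7 @ S1                                # eq. (3.7); |c71| = 1 is singular
    a71_hat = np.cross(S7, S1)
    a71_hat = a71_hat / np.linalg.norm(a71_hat)  # eq. (3.8)
    s71 = trip(S7, S1, a71_hat)                  # eq. (3.9)
    S77 = trip(RP1, a71_hat, S1) / s71           # eq. (3.11)
    a71 = trip(S7, RP1, S1) / s71                # eq. (3.12)
    S11 = trip(S7, a71_hat, RP1) / s71           # eq. (3.13)
    return S7, a71_hat, S77, a71, S11

# The vector loop (3.10) should close: RP1 + S77*S7 + a71*a71 + S11*S1 = 0
RP1 = np.array([2.0, 3.0, 5.0])
S7, a71_hat, S77, a71, S11 = close_loop(RP1,
                                        S6=np.array([0.0, 0.0, 1.0]),
                                        a67=np.array([1.0, 0.0, 0.0]))
residual = RP1 + S77*S7 + a71*a71_hat + S11*np.array([0.0, 0.0, 1.0])
```

Checking that the residual of (3.10) vanishes is a convenient self-test of the sign conventions in (3.11) through (3.13).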

It was mentioned earlier that a singular condition is flagged when c71 = ±1 (and therefore s71 = 0). This occurs when the vectors S7 and S1 are either parallel or antiparallel, and there are thus an infinity of possible links a71 which are perpendicular to both S7 and S1. However the constraint S77 = 0 can be imposed to obtain a unique result as shown in Figure 3-6. Forming the inner product of (3.10) with S1 yields,


S11 = -RP1 · S1    (3.18)








Figure 3-6: Hypothetical Closure when S1 is Parallel to S7












Further, from equation (3.10),

a71 = |RP1 + S11 S1|    (3.19)

and provided that a71 ≠ 0,

a71 = -(RP1 + S11 S1) / a71    (3.20)


The remaining angles θ7 and (θ1 - φ1) can again be calculated using equations (3.14) through (3.17).

Finally, when the axes of S7 and S1 are collinear, the condition a71 = 0 flags a further singularity. The direction of the unit vector a71 in the plane normal to the axis of S1 is now arbitrary. In this case it is convenient to impose the additional constraint that θ7 = 0, making a71 equal to a67. The remaining angle (θ1 - φ1) can again be calculated using (3.16) and (3.17).

Equations (3.7) through (3.17), plus the special analysis developed for S7 parallel to S1, determine all the necessary parameters of the hypothetical closure link which is shown in Figure 3-5. In addition, a unique value for the angle θ7 has been determined. Thus the reverse displacement solution of the open chain manipulator has been transformed to the solution of a closed loop mechanism with a known input angle θ7. Well documented methods for analyzing the closed loop mechanism can thus be used to

analyzing the closed loop mechanism can thus be used to








determine all possible sets of joint displacements which can

position the hand as specified.





3.4.3 Determination of φ1, θ2, and θ3

At this point the next step of a standard reverse

position analysis would be to analyze the closed loop

mechanism formed by the addition of the hypothetical closure

link to the open chain manipulator. However due to the

relatively simple geometry of the T3-776 robot, a shorter

and more direct approach will be taken.

It should be noted from Figure 3-4 that since the direction cosines of vectors S6 and a67 are known in the fixed coordinate system, together with the length of offset S66 and link a67, the coordinates of point P2, the center of the three roll wrist, are readily known. The vector RP2 (see Figure 3-7) from the origin of the fixed coordinate system to point P2 is given by



RP2 = RP1 - S66 S6 - a67 a67    (3.21)


It is also shown in Figure 3-7 that the vectors RP2, a12, a23, and S4 all lie in the same plane. This is due to the unique geometry of this robot whereby the vectors S2 and S3 are parallel. Because of this, simple planar relationships can be utilized to determine the three relative displacement angles φ1, θ2, and θ3.





















Figure 3-7: Location of Wrist Point








The angle φ1 is defined as the angle between the fixed X axis and the vector a12, measured as a positive twist about the vector S1. Application of equations (3.3) and (3.4) gives the following expressions for the sine and cosine of φ1,


cos φ1 = i · a12    (3.22)

sin φ1 = |i a12 S1|    (3.23)



The only unknown in these equations is the vector a12. Since the vectors RP2, a12, and S1 all lie in the same plane, it must be the case that a12 is either parallel or antiparallel to the projection of RP2 on the XY plane. Thus the vector a12 is given by

a12 = ±[(RP2 · i)i + (RP2 · j)j] / [(RP2 · i)² + (RP2 · j)²]^0.5    (3.24)


Substitution of the two possible values of a12 into (3.22) and (3.23) will result in two possible distinct values for φ1, and it can be shown that these two values will differ by 180 degrees. It is apparent that one of the calculated values of φ1 may not fall in the allowable range of rotation of -135 to +135 degrees. If this occurs then there is an immediate reduction from a maximum of four to a maximum of two possible configurations of the robot which will position the hand as specified.
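The wrist-center and φ1 computation of equations (3.21) through (3.24) might be sketched as follows. The tool dimensions S66 and a67 and the input pose are example values, not the actual tool of the text; note that with S1 along Z, (3.22)-(3.23) reduce to a quadrant-correct arctangent of the XY projection.

```python
import numpy as np

# Sketch of eqs. (3.21)-(3.24): locate the wrist center and find the
# candidate values of phi1.  S66 and a67_len are assumed tool dimensions.

def phi1_candidates(RP1, S6, a67, S66=5.0, a67_len=4.0):
    RP2 = RP1 - S66 * S6 - a67_len * a67          # eq. (3.21), wrist center
    # a12 is +/- the unit projection of RP2 on the XY plane, eq. (3.24)
    proj = np.array([RP2[0], RP2[1], 0.0])
    a12 = proj / np.linalg.norm(proj)
    angles = []
    for a in (a12, -a12):
        phi1 = np.degrees(np.arctan2(a[1], a[0]))  # from (3.22)-(3.23)
        if -135.0 <= phi1 <= 135.0:                # joint-range check
            angles.append(phi1)
    return RP2, angles

RP2, angles = phi1_candidates(np.array([30.0, 30.0, 40.0]),
                              S6=np.array([0.0, 0.0, 1.0]),
                              a67=np.array([1.0, 0.0, 0.0]))
# the two raw candidates differ by 180 degrees; out-of-range ones are dropped
```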








Utilizing equations (3.16) and (3.17) gives the following expressions for the sine and cosine of θ1,

cos θ1 = cos(θ1-φ1)cos φ1 - sin(θ1-φ1)sin φ1    (3.25)

sin θ1 = sin(θ1-φ1)cos φ1 + cos(θ1-φ1)sin φ1    (3.26)

It must be emphasized that θ1 is defined as the relative angle between the vector a12 and the hypothetical link a71 defined in the previous section (see Figure 3-5). As such, θ1 is calculated at this time for subsequent use in the determination of the angles in the wrist (θ4, θ5, and θ6).
Before proceeding with the analysis it is important to note that the angles φ2 and φ3 (see Figure 3-4) are used in addition to the second and third actuator joint displacements θ2 and θ3. These angles are related by the following equation:

φj = θj + 90 deg.  (j = 2,3)    (3.27)

The cosine of angle φ3 is calculated by applying the cosine law to the planar triangle shown in Figure 3-8. The resulting expression is,

cos φ3 = (a23² + S44² - |RP2|²) / (2 a23 S44)    (3.28)


Two corresponding values for the sine of φ3 are obtained from the equation















.34
.-34


SI-4
P2


'I


Figure 3-8: Determination of 2nd and 3rd Joint Angles








sin φ3 = ±(1 - cos²φ3)^0.5    (3.29)


Thus there are two values of φ3 which can position the point P2 as specified, and these two possibilities are referred to as the "elbow up" and "elbow down" orientations. However, due to the limit on the rotation of φ3, i.e. 45 < φ3 < 150 degrees, only one value of φ3, and thus unique values for the sine and cosine of φ3, will be possible.

From (3.27) the sine and cosine of θ3 are given by


c3 = cos(φ3 - 90)
   = sin φ3    (3.30)

s3 = sin(φ3 - 90)
   = -cos φ3    (3.31)


Equations (3.30) and (3.31) will be used subsequently in expressions for the angles in the wrist of the manipulator.

A solution for the unique value of θ2, and thereby φ2, which corresponds to each pair of angles φ1 and φ3, is obtained by use of projection. It is shown in Figure 3-8 that the vector RP2 can be written as

a23 a23 + S44 S4 = RP2    (3.32)


Projecting this equation upon a12 and then S1 gives the following two equations:

a23 a23 · a12 + S44 S4 · a12 = RP2 · a12    (3.33)

a23 a23 · S1 + S44 S4 · S1 = RP2 · S1    (3.34)

The right sides of both the above equations are known. Expanding the scalar products on the left sides of (3.33) and (3.34) gives

a23 c2 - S44 cos(θ2+φ3) = RP2 · a12    (3.35)

a23 s2 - S44 sin(θ2+φ3) = RP2 · S1    (3.36)

Expanding the sine and cosine of (θ2+φ3) and regrouping gives

c2[a23 - S44 cos φ3] + s2[S44 sin φ3] = RP2 · a12    (3.37)

c2[-S44 sin φ3] + s2[a23 - S44 cos φ3] = RP2 · S1    (3.38)

Using Cramer's rule to solve for the sine and cosine of θ2, and recognizing that

|RP2|² = a23² + S44² - 2 a23 S44 cos φ3

gives








c2 = [(a23 - S44 cos φ3)(RP2 · a12) -
      (S44 sin φ3)(RP2 · S1)] / |RP2|²    (3.39)

s2 = [(a23 - S44 cos φ3)(RP2 · S1) +
      (S44 sin φ3)(RP2 · a12)] / |RP2|²    (3.40)


Equations (3.39) and (3.40) result in a unique value for θ2 corresponding to each pair of calculated values of φ1 and φ3. From (3.27) the sine and cosine of φ2 can be written as

cos φ2 = cos(θ2+90)
       = -s2    (3.41)

sin φ2 = sin(θ2+90)
       = c2    (3.42)

As before, each calculated value of φ2 must be checked to see if it is in the allowable range of rotation. If it is not, then the maximum number of possible configurations of the robot which can position the hand as specified is further reduced.

At this point up to two sets of the three displacement angles φ1, θ2, and θ3 are known which position the point P2 as required. However, if there were no joint angle limitations at joints 1, 2, and 3, there would be four possible sets of displacements which would position point P2 as required. This reduction from four sets of values to a maximum of two possible sets is significant in that it reduces the computational time involved in the reverse position analysis of the T3-776 robot.
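Equations (3.28) through (3.40) can be sketched numerically as follows. a23 and S44 are the T3-776 values from set (3.5); the wrist-point projections RP2·a12 and RP2·S1 are example inputs, and only the "elbow up" root of (3.29) is kept. The closing lines rebuild the wrist point from (3.73) as a consistency check.

```python
import numpy as np

# Sketch of eqs. (3.28)-(3.40): elbow angle from the cosine law and a
# unique theta2 from Cramer's rule.  RP2_a12 = RP2.a12, RP2_S1 = RP2.S1.

def theta23(RP2_a12, RP2_S1, a23=44.0, S44=55.0):
    r2 = RP2_a12**2 + RP2_S1**2                     # |RP2|^2 in the a12-S1 plane
    cphi3 = (a23**2 + S44**2 - r2) / (2*a23*S44)    # eq. (3.28)
    sphi3 = np.sqrt(1.0 - cphi3**2)                 # one root of (3.29)
    c3, s3 = sphi3, -cphi3                          # eqs. (3.30)-(3.31)
    # eqs. (3.39)-(3.40)
    c2 = ((a23 - S44*cphi3)*RP2_a12 - (S44*sphi3)*RP2_S1) / r2
    s2 = ((a23 - S44*cphi3)*RP2_S1 + (S44*sphi3)*RP2_a12) / r2
    return (c2, s2), (c3, s3)

# consistency check: rebuild the wrist point components of eq. (3.73)
(c2, s2), (c3, s3) = theta23(40.0, 50.0)
s2p3 = s2*c3 + c2*s3                                # sin(theta2+theta3)
c2p3 = c2*c3 - s2*s3                                # cos(theta2+theta3)
x = 44.0*c2 + 55.0*s2p3                             # should equal RP2.a12
z = 44.0*s2 - 55.0*c2p3                             # should equal RP2.S1
```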



3.4.4 Analysis of wrist

The remaining task of the reverse displacement analysis is to determine the values of the three angles in the wrist which correspond to each set of values of the angles φ1, θ2, and θ3. Complete rotation is possible about each of the three axes S4, S5, and S6, so the result will not be affected by joint limitations as in the previous section.

Figure 3-9 shows a more detailed view of the three roll wrist. It is important here to reemphasize how θ4, θ5, and θ6 are measured. Each of the angles θj (j=4,5,6) is measured by a right handed rotation of the link aij into the link ajk about the vector Sj.
Two moving coordinate systems are attached to the robot as shown in Figure 3-10 such that x' always points along a12 and z' points along S1, and analogously x" always points along a45 and z" along S4. The relationship between the fixed XYZ system and the moving x'y'z' system is given by the following equations:

x' = x cos φ1 + y sin φ1

y' = -x sin φ1 + y cos φ1    (3.43)

z' = z





































Figure 3-9: Three Roll Wrist





















Figure 3-10: Moving and Fixed Coordinate Systems












The direction cosines of the vector S6, which are given quantities in the fixed system, can now be written in the moving x'y'z' system by application of (3.43):

x6' = x6 cos φ1 + y6 sin φ1

y6' = -x6 sin φ1 + y6 cos φ1    (3.44)

z6' = z6

where x6, y6, z6 and x6', y6', z6' are respectively the direction cosines of S6 in the fixed and moving systems.

The direction cosines of S6 in the second moving coordinate system, x6", y6", z6", are related to the direction cosines of S6 in the first moving coordinate system by three successive applications of the rotation matrix


        [  cj      sj cij    sj sij ]
Aij =   [ -sj      cj cij    cj sij ]    (3.45)
        [  0       -sij      cij    ]


which yields

[x6"]                  [x6']
[y6"] = A34 A23 A12    [y6']    (3.46)
[z6"]                  [z6']









Substituting the values for α12, α23, and α34 from set (3.5) into (3.46) gives

[x6"]   [  c4 c2+3    -s4    c4 s2+3 ] [x6']
[y6"] = [ -s4 c2+3    -c4   -s4 s2+3 ] [y6']    (3.47)
[z6"]   [  s2+3        0    -c2+3    ] [z6']

where s2+3 and c2+3 represent the sine and cosine of (θ2+θ3). As already stated, the abbreviations sj and cj in (3.45) denote the sine and cosine of θj, which measures the relative angular displacement between successive links aij and ajk. At this point all parameters on the right side of equation (3.47) are known with the exception of the sine and cosine of θ4.

Alternate expressions for the direction of S6 in the second moving coordinate system may be obtained by simple projection. These relationships are as follows:

[x6"]   [X5]
[y6"] = [Y5]    (3.48)
[z6"]   [Z5]

where

X5 = s56 s5

Y5 = -(s45 c56 + c45 s56 c5)    (3.49)

Z5 = c45 c56 - s45 s56 c5








Equating the right sides of (3.47) and (3.49) and rearranging gives the following three equations:

X5 = c4(x6' c2+3 + z6' s2+3) + s4(-y6')    (3.50)

Y5 = c4(-y6') - s4(x6' c2+3 + z6' s2+3)    (3.51)

Z5 = x6' s2+3 - z6' c2+3    (3.52)

Equation (3.52) is significant in that it contains c5 as its only unknown. Substituting for Z5 from set (3.49) and solving for c5 gives

c5 = (c45 c56 - x6' s2+3 + z6' c2+3) / (s45 s56)    (3.53)

Equation (3.53) gives two possible values for θ5, and these two values determine two configurations of the wrist for a specified end effector position.

The next task is to find the corresponding value of θ4 for each value of θ5. This is readily accomplished by utilizing Cramer's rule to solve for s4 and c4 in equations (3.50) and (3.51). The resulting equations are as follows:

c4 = [X5(x6' c2+3 + z6' s2+3) - Y5 y6'] /
     [(x6' c2+3 + z6' s2+3)² + y6'²]    (3.54)

s4 = -[Y5(x6' c2+3 + z6' s2+3) + X5 y6'] /
     [(x6' c2+3 + z6' s2+3)² + y6'²]    (3.55)








It is interesting to note that both the denominator and numerator of (3.54) and (3.55) vanish simultaneously when

y6' = 0  and  x6' c2+3 + z6' s2+3 = 0    (3.56)

This constitutes what will be defined as an algebraic indeterminacy for θ4. It can be shown that these relationships are only satisfied simultaneously when S6 is colinear with S4, or in other words when θ5 = ±180 degrees. Thus a value of c5 = -1 calculated from (3.53) will act as a flag to signal when equations (3.54) and (3.55) may not be used to determine θ4.

It is readily apparent that when S6 and S4 are colinear, a degree of freedom of the manipulator is lost in that it is possible to spin the vector S5 about the S4, S6 line without changing the position and orientation of the hand. Thus the choice of θ4 is arbitrary. This problem can be overcome by setting θ4 to its last previously calculated value prior to the robot moving into the indeterminate position.
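Equations (3.53) through (3.55), with the c5 = -1 indeterminacy flag, might be sketched as follows. The input direction cosines are illustrative; α45 = α56 = 61 degrees come from set (3.5).

```python
import numpy as np

# Sketch of eqs. (3.53)-(3.55).  Inputs: direction cosines of S6 in the
# primed system (x6p, y6p, z6p) and the sine/cosine of (theta2+theta3).

def wrist_angles(x6p, y6p, z6p, s2p3, c2p3, alpha=np.radians(61.0)):
    s45 = s56 = np.sin(alpha)
    c45 = c56 = np.cos(alpha)
    c5 = (c45*c56 - x6p*s2p3 + z6p*c2p3) / (s45*s56)    # eq. (3.53)
    if np.isclose(c5, -1.0):
        return None             # algebraic indeterminacy: keep previous theta4
    solutions = []
    for s5 in (np.sqrt(1 - c5**2), -np.sqrt(1 - c5**2)):  # two wrist configs
        X5 = s56*s5                                       # set (3.49)
        Y5 = -(s45*c56 + c45*s56*c5)
        A = x6p*c2p3 + z6p*s2p3
        denom = A**2 + y6p**2
        c4 = (X5*A - Y5*y6p) / denom                      # eq. (3.54)
        s4 = -(Y5*A + X5*y6p) / denom                     # eq. (3.55)
        solutions.append((c4, s4, c5, s5))
    return solutions

sols = wrist_angles(0.6, 0.64, -0.48, s2p3=0.6, c2p3=0.8)
```

For any unit input direction, each returned pair (c4, s4) should itself be a unit sine/cosine pair, which is a useful self-test of the signs.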

The only remaining angle to be determined in the wrist is the corresponding value of θ6. Utilizing the unified notation [21], the following subsidiary expressions may be written for a spherical heptagon:








Z6 = Z4321    (3.57)

X6 = X43217    (3.58)

Expanding the left sides of the above two equations and solving for the cosine and sine of θ6 respectively gives

c6 = (c56 c67 - Z4321) / (s56 s67)    (3.59)

s6 = X43217 / s56    (3.60)

The right hand sides of equations (3.59) and (3.60) can be expanded in terms of known quantities by use of the following sets of equations:

X43217 = X4321 c7 - Y4321 s7    (3.61)

X4321 = X432 c1 - Y432 s1
Y4321 = c71(X432 s1 + Y432 c1) - s71 Z432    (3.62)
Z4321 = s71(X432 s1 + Y432 c1) + c71 Z432

X432 = X43 c2 - Y43 s2
Y432 = c12(X43 s2 + Y43 c2) - s12 Z43
     = -Z43    (3.63)
Z432 = s12(X43 s2 + Y43 c2) + c12 Z43
     = X43 s2 + Y43 c2

X43 = X4 c3 - Y4 s3
Y43 = c23(X4 s3 + Y4 c3) - s23 Z4
    = X4 s3 + Y4 c3    (3.64)
Z43 = s23(X4 s3 + Y4 c3) + c23 Z4
    = Z4

X4 = s45 s4
Y4 = -(s34 c45 + c34 s45 c4)
   = -c45    (3.65)
Z4 = c34 c45 - s34 s45 c4
   = -s45 c4
At this point the reverse displacement analysis is complete. For any given location and orientation of the hand, all displacement angles of the manipulator are known. If the hand is in the effective workspace of the robot, then there will be up to four possible configurations. That is, there will be up to two sets of values for the angles φ1, θ2, and θ3, and for each of these sets there will be two corresponding sets of values for the angles θ4, θ5, and θ6.



3.5. Forward Displacement Analysis

For the forward displacement analysis it is assumed

that the values for the angles θ1, θ2, θ3, θ4, θ5, and θ6

are known. It is desired to determine the location and

orientation of the hand, i.e. the coordinates of a reference

point of the tool attached to the manipulator and the









direction cosines of the vectors S6 and a67. This analysis

is included for completeness and will be referenced in

Chapter IV although it is not required for the path

generation problem.

The direction cosines of S6 and a67 in the moving x*y*z* coordinate system (see Figure 3-11) are simply (0,0,1) and (1,0,0), since the x* and z* axes are chosen to point along the vectors a67 and S6 respectively. These direction cosines may be expressed in the x'y'z' coordinate system by five successive applications of the rotation matrix

        [  cj       -sj       0   ]
Aji =   [  sj cij    cj cij  -sij ]    (3.66)
        [  sj sij    cj sij   cij ]


It may be recalled that the x'y'z' coordinate system is attached to the manipulator such that x' points along a12 and z' along S1. Therefore a vector in the x*y*z* system can be expressed in the x'y'z' system using the transformation equation

[x']                        [x*]
[y'] = A21 A32 A43 A54 A65  [y*]    (3.67)
[z']                        [z*]


Further a vector expressed in the x'y'z' system may be

written in terms of the fixed xyz coordinate system via use

of the following transformation equation












Figure 3-11: Forward Analysis












[x]     [x']
[y] = M [y']    (3.68)
[z]     [z']




where



        [ cos φ1   -sin φ1   0 ]
M =     [ sin φ1    cos φ1   0 ]    (3.69)
        [ 0         0        1 ]


Combination of (3.67) and (3.68) and the substitution of the known direction cosines of S6 and a67 in the x*y*z* coordinate system gives

[x6]                          [0]
[y6] = M A21 A32 A43 A54 A65  [0]    (3.70)
[z6]                          [1]

[x67]                          [1]
[y67] = M A21 A32 A43 A54 A65  [0]    (3.71)
[z67]                          [0]

where x6, y6, z6 and x67, y67, z67 are the direction cosines of S6 and a67 respectively in the fixed coordinate system.

The last parameter to be determined is the position coordinates of the point P1, the point of interest of the tool attached to the manipulator. This is obtained by use of the following vector equation:









RP1 = RP2 + S66 S6 + a67 a67    (3.72)


where RP1 is the vector to the tool point of interest and RP2 is the vector to the wrist point P2. Since the direction cosines of S6 and a67 are known in the fixed system, the only unknown in (3.72) is the vector RP2. The components of this vector in the x'y'z' coordinate system are simply given by



RP2 = [a23 c2 + S44 s2+3] i' + [a23 s2 - S44 c2+3] k'    (3.73)


where i' and k' are unit vectors along the x' and z' axes.

This vector may be transformed to the fixed coordinate

system by multiplying it by the rotation matrix M of

equation (3.69). With RP2 now known in the fixed system,

RP1 can be determined from (3.72).

The forward displacement analysis is now complete in

that the position and orientation of the hand of the

manipulator are uniquely determined for a given set of joint

displacements. It is important to note that the forward and

reverse solutions are complementary. That is, a set of

joint angles determined by the reverse analysis for a given

position and orientation of the hand must produce this same

position and orientation when used as input for the forward

analysis. This serves as a first verification of results.
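The position part of the forward computation, equations (3.69), (3.72), and (3.73), can be sketched as follows; the tool dimensions are example values. Running a reverse analysis and then this computation should reproduce the specified tool point, which is the kind of verification described above.

```python
import numpy as np

# Sketch of the forward position computation: build RP2 in the primed
# system via eq. (3.73), rotate to the fixed system with M of eq. (3.69),
# and add the tool terms of eq. (3.72).  S66, a67_len are assumed values.

def forward_position(phi1, th2, th3, S6, a67, S66=5.0, a67_len=4.0,
                     a23=44.0, S44=55.0):
    c2, s2 = np.cos(th2), np.sin(th2)
    s2p3, c2p3 = np.sin(th2 + th3), np.cos(th2 + th3)
    # eq. (3.73): wrist point in the x'y'z' system (i' and k' components)
    RP2_p = np.array([a23*c2 + S44*s2p3, 0.0, a23*s2 - S44*c2p3])
    M = np.array([[np.cos(phi1), -np.sin(phi1), 0.0],   # eq. (3.69)
                  [np.sin(phi1),  np.cos(phi1), 0.0],
                  [0.0,           0.0,          1.0]])
    RP2 = M @ RP2_p                                     # to the fixed system
    return RP2 + S66*S6 + a67_len*a67                   # eq. (3.72)

p = forward_position(0.0, np.pi/2, 0.0,
                     S6=np.array([0.0, 0.0, 1.0]),
                     a67=np.array([1.0, 0.0, 0.0]))
```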









3.6 Path Generation

The reverse analysis of the manipulator will serve as

the primary tool required to cause the manipulator to move

along a specified path. Simply stated, a set of

intermediate positions and orientations of the robot will be

selected along some path between two user specified end

points. A reverse analysis will be performed at each of the

intermediate positions in order to determine the set of

joint angles which will position the manipulator as

required.

For this analysis, it will be assumed that the user has

defined the initial and final pose of the manipulator.

Specifically, this requires that the user has specified the

coordinates of the tool tip and the directions of the unit

vectors S6 and a67. These nine numbers, six of which are

independent, completely describe the position and

orientation of the manipulator. Throughout the remainder of

this chapter the initial and final positions and

orientations of the manipulator will be assumed to be known quantities and will be referred to as rI, S6I, a67I and rF, S6F, a67F respectively. Many methods exist for the user to input these values. Alternate methods will be discussed in Chapter V.

Many strategies can be used in order to determine a

series of intermediate positions and orientations of the

manipulator between two user specified poses. For this

analysis, it was desired to cause the tool point to move








along a straight line. Furthermore, the tool attached to

the end effector should start at rest and accelerate to some

maximum velocity before slowing down and coming to rest at

the final manipulator pose. Due to the desired motion

characteristics, a displacement function based on the cosine

curve was selected. This displacement function is shown in

Figure 3-12.





3.6.1 Determination of number of intermediate points

A first step in the analysis is to determine how many

points should be inserted between the initial and final

poses of the manipulator. Too many points will cause the

motion of the animated manipulator to appear quite slow.

Too few points will result in fast velocities which will

make the animation appear to 'flicker'.

The number of intermediate points was selected based on

two factors, ie. the total straight line distance travelled

and the total change in orientation of the end effector.

Since the initial and final position of the tool tip are

specified, the total straight line distance is readily

calculated as follows:



dist = |rF - rI|    (3.74)



The number of steps based on translation is found by

dividing this distance by some standard step size,








































Figure 3-12: Displacement Profile
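The sampling strategy of this section can be sketched as follows: the tool point moves along the straight line from rI to rF with a cosine displacement profile that is at rest at both ends, as in Figure 3-12. The step size is an assumed value, and only the translation criterion of (3.74) is used here.

```python
import numpy as np

# Sketch of straight-line path sampling with a cosine displacement
# profile (zero velocity at both ends).  The step size is an assumption.

def sample_path(rI, rF, step=2.0):
    dist = np.linalg.norm(rF - rI)              # eq. (3.74)
    n = max(int(np.ceil(dist / step)), 1)       # steps based on translation
    # s(t) rises smoothly from 0 to 1 with zero slope at both ends
    t = np.linspace(0.0, 1.0, n + 1)
    s = 0.5 * (1.0 - np.cos(np.pi * t))
    return rI + np.outer(s, rF - rI)

pts = sample_path(np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0]))
# endpoints are exact; spacing is densest near the ends, i.e. the tool
# accelerates from rest and decelerates back to rest
```

A reverse displacement analysis would then be performed at each sampled point to obtain the joint angle sets fed to the animation program.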



