Pattern mapping in plane motion analysis

Material Information

Title:
Pattern mapping in plane motion analysis
Physical Description:
xii, 104 leaves : ill. ; 28 cm.
Language:
English
Creator:
Fail, R. Wallace, 1954-
Publication Date:
1987
Subjects

Subjects / Keywords:
Strains and stresses   ( lcsh )
Materials -- Testing   ( lcsh )
Motion   ( lcsh )
Genre:
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Thesis:
Thesis (Ph. D.)--University of Florida, 1987.
Bibliography:
Includes bibliographical references (leaves 99-103).
Statement of Responsibility:
by R. Wallace Fail.
General Note:
Typescript.
General Note:
Vita.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001026381
notis - AFA8353
oclc - 18035304
System ID:
AA00003790:00001

Full Text


PATTERN MAPPING IN
PLANE MOTION ANALYSIS

BY

R. WALLACE FAIL

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN
PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

1987

Copyright 1987

by

R. Wallace Fail

To the Glory of God


Digitized by the Internet Archive
in 2010 with funding from
University of Florida, George A. Smathers Libraries with support from Lyrasis and the Sloan Foundation


http://www.archive.org/details/patternmappingin00fail

ACKNOWLEDGMENTS


I would like to express my appreciation to the members of my supervisory committee, Dr. S. S. Ballard, Dr. P. Hajela, Dr. U. H. Kurzweg, Dr. E. K. Walsh, and Dr. C. E. Taylor, chairman.

Support of the graduate program by the former department head, Dr. K. Millsaps, and the current department head, Dr. M. A. Eisenberg, is gratefully acknowledged.

Special thanks go to Mr. Steve McNeil and Mr. Marc Paquette at the University of South Carolina for their help and for extending the use of their laboratory facilities to digitize the images used in this study.

I am indebted to my wife, Robyn, for her unwavering support and love.

TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

ABSTRACT

CHAPTERS

I INTRODUCTION
    I.1 Background
    I.2 Purpose of Present Work
    I.3 Scope of Present Work
    I.4 Survey of Previous Work
        I.4.1 Solid Mechanics
        I.4.2 Image Processing/Pattern Recognition

II THEORETICAL FOUNDATIONS
    II.1 Solid Mechanics
    II.2 Image Processing and Pattern Recognition
        II.2.1 Digital Images
        II.2.2 Preprocessing
        II.2.3 Segmentation
        II.2.4 Image Features
        II.2.5 Feature Extraction
        II.2.6 Known Patterns
        II.2.7 Syntactic Pattern Recognition
        II.2.8 Approximation of Syntactic Pattern Mapping

III IMAGE GENERATION AND ANALYSIS
    III.1 Synthetic Images
    III.2 Image Analysis

IV EXPERIMENTS
    IV.1 Test Specimen
    IV.2 Test Equipment
    IV.3 Experimental Procedure

V ANALYSIS OF RESULTS
    V.1 Numerical
    V.2 Experimental

VI CONCLUSIONS AND RECOMMENDATIONS
    VI.1 Conclusions
    VI.2 Recommendations

APPENDIX BORDER FOLLOWING ALGORITHM

REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

Table

1  Unordered spot coordinates in the undeformed image
2  Unordered spot coordinates in the deformed image
3  Ordered spot coordinates in the undeformed image
4  Ordered spot coordinates in the deformed image
5  Spot distance from origin in undeformed image
6  Spot distance from origin in deformed image
7  Spot distance from Icon in undeformed image
8  Spot distance from Icon in deformed image
9  Input file for synthetic image generator
10 Image analysis options file
11 Displacement of each spot
12 Displacement gradients
13 Lagrangian strain from displacement gradients
14 Strain from Taylor series
15 Deformation gradient
16 Green deformation tensor
17 Right stretch tensor
18 Rotation tensor
19 Lagrangian strain from deformation gradient
20 Summary of strain analysis for synthetic images
21 The effect of SNR on translation and strain calculations
22 The effect of gray level difference on translation and strain calculations
23 The effect of spot radius on translation and strain calculations
24 Images used for rigid-body motion experiments
25 Results of rigid-body rotation experiments
26 Results of rigid-body translation experiments

LIST OF FIGURES

Figure

1  The motion of body B through space
2  The motion of the neighborhood of particle P
3  A typical PC-based image processing system
4  A spot and its digital image
5  A pixel and its eight-neighbors
6  The bi-modal histogram of Figure 4
7  Black and white regions in a bi-modal histogram
8  Frame of spot in Figure 4
9  Image artifacts
10 One-dimension model
11 The discretization of κT and κB
12 Icon rotation measured from the vertical
13 Icon rotation is determined to within a constant by equation (40)
14 Patterns that are easily recognized and analyzed
15 Patterns of Figure 14 with icons
16 A pattern and its context-sensitive language
17 Image addresses and offsets
18 A simple example of motion
19 The deformed image and its reference position
20 Beam fabricated from CR-39
21 Spot pattern on beam
22 Experimental set-up of load frame
23 Set-up for four-point bending
24 Set-up for cantilever beam
25 Rigid-body motion set-up

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

PATTERN MAPPING IN
PLANE MOTION ANALYSIS

BY

R. WALLACE FAIL

May 1987


Chairman: C. E. Taylor
Major Department: Engineering Sciences

A new, highly automated method for measuring plane motion, pattern mapping, has been developed for rigid-body motion and strain analysis. Pattern mapping employed image processing and syntactic pattern recognition principles to recognize a known pattern before and after motion. Using the Lagrangian definition of motion, points in the two images were mapped, and the map was used to determine rigid-body motion and strain. Whole-image analysis was thoroughly demonstrated.

Known patterns were, in this case, rectangular grids of photodots. These patterns were applied to the specimen by contact printing. An icon was included in the pattern, thereby accommodating rigid-body rotations up to one revolution.

Motion functions were selected by the user and easily changed to suit the particular application. Function coefficients were determined by least squares.

Accuracy depended primarily on the signal-to-noise ratio and the gray level difference between "black" and "white." An increase in these variables improved accuracy.

User and computer time requirements were quite modest. Images were processed with little or no user interaction, and output specifications were automatically read from a disk file. Depending on output specifications, computer time (VAX 11/750) for an 11X11 pattern in a 384X374 image was 25 to 38 CPU seconds.

CHAPTER I
INTRODUCTION



I.1 Background

The primary purpose in experimental solid mechanics is to determine stress and motion (strain, rigid-body translation, and rotation) from measurements of physical quantities. For example, stress may be computed from measurements of load and cross-sectional area, and strain is measured by changes in characteristic lengths. Many methods (mechanical, electrical, optical, etc.) are available to analyze stress and motion [1-4].

Optical methods play an important role in experimental mechanics. They have a number of advantages, which include [5,6]

1. Noncontacting measurements do not interfere with material response.
2. Full-field analysis is usually possible.
3. Highly accurate results are usually possible.
4. Response times are as fast as light, thereby accommodating dynamic investigations.
5. Actual structures under live loads can frequently be analyzed.
6. Effects of actual boundary conditions can be studied in detail.

Although optical methods offer many advantages, they do have several undesirable features. For example, some methods are sensitive to small extraneous vibrations and require expensive, delicate equipment. Perhaps the most undesirable feature is the fact that most of these methods do not directly yield their wealth of information about stress or motion in a readily usable form.

The extraction of this information is time-consuming and requires special skills. An expert may spend hours reducing data and, all too often, the amount of data analyzed is limited simply because automated analysis is not available. Furthermore, results can be affected by the investigator's particular techniques.

Digital computers are opening doors to automation, increased accuracy of existing methods, and previously intractable or unknown techniques. Some of the latest innovations employ digital image processing and pattern recognition (IP/PR) technologies, which utilize digital computers and video imaging equipment for direct data acquisition.

The fields of IP/PR are rapidly expanding and are well-established in many areas. In experimental mechanics, their potential for efficiently gathering vast quantities of data consistently and accurately is being quickly developed. These technologies are freeing experimentalists from hours of painstaking labor and providing them with a new avenue to existing numerical methods and analytical tools [7-14].

I.2 Purpose of Present Work

The IP/PR technologies have their price. For example, IP/PR programs that use fast Fourier transforms or correlation (template matching) are frequently considered computationally expensive because they entail large numbers of (relatively slow) floating-point calculations. Interactive programs provide great flexibility, but they need the user's constant attention. Iterative and backtracking (systematically retracing previous steps) methods can require substantial amounts of computer memory for intermediate data storage [15-18].

Significantly reducing these time and memory requirements is certainly a desirable goal. Perhaps this is possible by rethinking the problem and developing a highly automated method which capitalizes on the strengths of the digital computer. Thus, the purpose of this work is to develop such a method, pattern mapping, to obtain data via digital imaging equipment and subsequently determine plane motion.



I.3 Scope of Present Work

Although it was not apparent in the beginning, the concepts developed in this investigation are readily applied to the photodot (grid) method [19]. The scope includes this photodot adaptation.

Actually, this investigation encompasses a variety of scientific disciplines. Perhaps the best approach is to consider the attributes of each and work toward a goal that is obtainable in a reasonable amount of time. Specific objectives included in the scope are to

1. guarantee unique mapping
2. provide subpixel registration of 0.1 pixel
3. provide a wide range of motion measurement
4. provide whole-image analysis
5. reduce computer and user time as much as possible
6. provide highly automated analysis.

(Pixels are discrete elements in a digital image, and whole-image implies results are computed for every photodot.) Special emphasis is on accuracy and increased computational efficiency.

Several laboratory experiments using a CR-39 (plastic) prismatic beam were included in the study. The experiments were translation, rotation, general rigid-body motion, four-point bending, and cantilever beam. All experiments were static cases of motion in a plane normal to the optical axis. A rectangular (11X11) pattern of photodots was contact printed on the lateral side of the beam.

For convenience, motion is defined only in terms of the Lagrangian formulation. Evaluation of motion functions is limited to a linear and a second-order approximation. The least squares method of curve fitting is used exclusively, and the normal equations are solved by matrix inversion using Gauss-Jordan elimination.
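As an illustration of this fitting step, a linear motion function fitted by least squares through the normal equations might look like the following Python sketch (NumPy assumed; the function form and names are assumptions, and the library matrix inverse stands in for the Gauss-Jordan elimination of the original programs):

    import numpy as np

    def fit_motion(X, Y, u):
        # assumed linear motion function u(X,Y) = a0 + a1*X + a2*Y
        A = np.column_stack([np.ones_like(X), X, Y])
        # normal equations: (A^T A) a = A^T u, solved by matrix inversion
        return np.linalg.inv(A.T @ A) @ (A.T @ u)

The same pattern extends to the second-order approximation by appending columns such as X², XY, and Y² to A.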

Synthetic images are used to debug analysis programs and investigate the influence of various factors. Motion in the images includes affine transformations and rigid-body rotation about the image center. Interpolation of between-pixel intensities is bilinear, and system-induced distortions are assumed eliminated during preprocessing. Noise has a zero mean and selectable Gaussian or uniform distributions. Image size is 256X256 pixels, which are unsigned, 16 bits or less. The camera position is assumed fixed.

Segmentation and feature extraction (dividing the image into subregions and obtaining useful information) include neither prefiltering nor histogramming (gray levels) and are based on the random selection of a few pixels. Processing is automatic and direct because known patterns in the image are always completely visible and recognizable. Feature space includes gray levels, areas, centroids, and second moments. The pattern language consists of only one sentence, and it is context-sensitive.



I.4 Survey of Previous Work

I.4.1 Solid Mechanics

Early applications of image processing in experimental mechanics involved the analyses of fringes. The fringes were from photoelasticity, moire, or holographic interferometry. Burger [7] and Chen [8] describe several of these applications in detail.

Another image processing technique applied to mechanics is correlation. Researchers [9,20-24] determine plane displacements (u_i, i=1,2) and their gradients (∂u_i/∂X_j, i,j=1,2) by comparing digitally recorded images of the same random pattern. For example, a pattern of black speckles (e.g., minute specks of black paint) on a white field is recorded as integer gray levels at discrete points (pixels). Two images are used in the analysis: one before and one after motion. By comparing (spatially registering) small subimages from each, displacements and their gradients are determined at selected points in the (original) pattern. Sigler and Haworth [25] combine holography with correlation to determine motion. Thus, the correlation method has the advantages of using coherent or conventional light sources and easily applied patterns.

The grid method is one of the oldest and simplest methods used in the analysis of strain. One simply applies reference marks (lines, spots, scratches, etc.) to the surface of a specimen and records the original distances between marks. Next, the specimen is deformed and the distances are again recorded. Then, changes in the distances are determined and strain is computed [1,2,26-32].

Although simple, the grid method is tedious and traditionally limited to larger strains. Typical strain measurements are greater than 0.001 inches-per-inch; the results depend on the care exercised and the instrumentation [19,26,28]. Even when optical comparators are used for the smaller strains, many man-hours are frequently needed to complete the analysis of large grids.

Examining a specimen through a comparator and determining the position of a grid mark is also a subjective process. Some investigators define a mark's position by its left edge, some by its right edge, and some by a variety of definitions. All these definitions are subject to error caused by rotation, the nonlinear characteristics of the recording medium (if photographically recorded), grid application techniques, lighting, operator fatigue, etc. [19].

Parks [32] suggested computer technology might improve the accuracy of the method and would certainly reduce the labor involved. Previous efforts to fully computerize the grid method have not been reported in the literature.

Sevenhuijesen [33,34] reported some feasibility studies with a Reticon photo-diode array camera. Using an eight-bit digitizer and digitizing the signal from each diode, he obtained a resolution of a few hundred micro inches-per-inch. Images were recorded photographically without the aid of an image processing system.

I.4.2 Image Processing/Pattern Recognition

Several researchers active in IP/PR have been interested in two- and three-dimensional motion analysis. Most efforts have concentrated on the motion of rigid bodies, clouds, the human body, optical flow, etc. Motion parameters were obtained by a variety of iterative processes [35-38].

Huang and Tsai [39-43] proposed a direct method of mapping the three-dimensional motion of a plane using two time-sequential images. Mapping parameters were determined uniquely by solving a set of linear equations. Unfortunately, motion parallel to the optical axis could not be determined, and "small" approximations were made for strain and rotation. Mathematical proofs guaranteeing unique mapping between images were included.

CHAPTER II
THEORETICAL FOUNDATIONS



II.1 Solid Mechanics

The description of motion used here is limited to the Lagrangian description and is consistent with the classical field theories of modern continuum mechanics [44-48]. The Lagrangian description is sufficient to determine the directions of the principal axes of strain and the magnitudes of the principal stretches. It is, therefore, a fully general measure of strain and is capable of describing small or large strains. For convenience, all reference frames are Cartesian in three-dimensional Euclidean space.

In Figure 1, body B consists of a set of particles which assume a continuous progression of configurations in time. The undeformed configuration is chosen as the reference state, where particle P occupies the "material" position X at time t=t0. Subsequent motion carries particle P to the "spatial" position x in the deformed or "after motion" state at time t. Although each state may be defined in terms of its own coordinate system, material and spatial coordinates are usually measured with respect to the same coordinate axes. The motion of a point is described by the relation of the coordinates between the two states. Mathematically, motion is expressed by a one-to-one mapping x symbolized as

x = x(X,t)   or   x_i = x_i(X_1,X_2,X_3,t)                        (1)

Figure 1. The motion of body B through space.

The deformation gradient of the motion x is defined as the derivative or gradient of x at X and is denoted by

F = ∂x/∂X   or   F_ij = ∂x_i/∂X_j                                 (2)

where F is the fundamental quantity for the analysis of local properties of deformation, and the physical components are, of course, dimensionless, as are all measures of relative configuration.

Alternatively, motion written in terms of the displacement vector u is

x = X + u(X,t)   or   x_i = X_i + u_i(X_1,X_2,X_3,t)              (3)

And the derivative of u, the displacement gradient, is

J = ∂u/∂X   or   J_ij = ∂u_i/∂X_j                                 (4)

If two particles in the reference configuration are an infinitesimal distance dX apart, then these two particles are an infinitesimal distance dx apart in the deformed configuration (see Figure 2). Tensor F maps the neighborhood of particle P in the reference configuration to its neighborhood in the deformed configuration. Thus, the linear transformation is

dx = F·dX   or   dx_i = F_ij dX_j                                 (5)

Figure 2. The motion of the neighborhood of particle P.

Unique mapping is guaranteed as long as the Jacobian J = |F| is greater than zero and finite.

Since vector dX has length dS and vector dx has length ds, Lagrangian strain E may be defined in terms of these lengths squared as

(ds)² − (dS)² = 2 dX·E·dX                                         (6)

The Green deformation tensor C, referred to the undeformed configuration, gives the new squared length (ds)² of the element into which the given element dX is deformed. Symbolically, the relationship is

(ds)² = dX·C·dX                                                   (7)

where

C = F^T·F                                                         (8)

Conversely, the Cauchy deformation tensor B⁻¹ gives the initial squared length (dS)² of an element dx in the deformed configuration. This relationship is written as

(dS)² = dx·B⁻¹·dx                                                 (9)

where

B⁻¹ = (F⁻¹)^T·(F⁻¹)                                               (10)

Lagrangian strain written in terms of the Green deformation tensor is

E = (1/2)[C − 1]                                                  (11)

In terms of the displacement gradient, E is

E_ij = (1/2)[∂u_i/∂X_j + ∂u_j/∂X_i + (∂u_k/∂X_i)(∂u_k/∂X_j)]      (12)

Strain, computed by a Taylor series [1], is

E_11 = sqrt[1 + 2(∂u/∂X) + (∂u/∂X)² + (∂v/∂X)² + (∂w/∂X)²] − 1

E_22 = sqrt[1 + 2(∂v/∂Y) + (∂u/∂Y)² + (∂v/∂Y)² + (∂w/∂Y)²] − 1    (13)

E_12 = (1/2) arcsin{[∂u/∂Y + ∂v/∂X + (∂u/∂X)(∂u/∂Y) + (∂v/∂X)(∂v/∂Y) + (∂w/∂X)(∂w/∂Y)] / [(1+E_11)(1+E_22)]}

E_21 = E_12



For many engineering applications, the so-called "small displacement" theory is adequate and much easier to implement. By this theory, strain is

ε_11 = ∂u/∂X

ε_12 = (1/2)(∂u/∂Y + ∂v/∂X)                                       (14)

ε_21 = ε_12

ε_22 = ∂v/∂Y

If the Jacobian J is greater than zero and finite, the deformation gradient written in terms of a rotation tensor R and the right stretch tensor U, a polar decomposition, is

F = R·U                                                           (15)

Tensors R and U are unique and represent rigid-body rotation and stretch, respectively. Rotation tensor R is orthogonal and produces a rigid-body rotation between the principal axes of C at X and the principal axes of B⁻¹ at x. The necessary and sufficient condition for no local rigid-body rotation (sometimes called "pure strain") during motion is R=1. Symmetric, positive-definite U produces the changes in vector lengths during motion and produces rotation, in addition to R, of all vectors except those in the principal directions of U. Note that U=1 is necessary and sufficient to define locally "pure" rigid-body motion (neglecting the trivial case when no motion occurs), and translation does not change a vector with respect to the common reference axis. Tensors U and R, written in terms of F and C, are

U = C^(1/2)                                                       (16)

R = F·U⁻¹                                                         (17)

As a final note, the deformation at any point may result from translation, a rigid-body rotation of the principal axes of strain, and stretches along these axes. These motions may occur in any order, but their tensorial measures may not. Mathematically, Lagrangian motion is assumed as a successive application of

1. stretch by U
2. rigid-body rotation by R
3. translation to x.

An illustrative numerical sketch of these tensor relations is given below.
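For illustration only, the following short Python sketch (NumPy assumed; it is not part of the original programs, and the sample F is an assumed value) evaluates equations (8), (11), and (15)-(17) for a plane deformation gradient:

    import numpy as np

    F = np.array([[1.05, 0.05],             # assumed sample deformation gradient
                  [0.05, 1.05]])

    C = F.T @ F                             # Green deformation tensor, eq. (8)
    E = 0.5 * (C - np.eye(2))               # Lagrangian strain, eq. (11)
    w, V = np.linalg.eigh(C)                # C is symmetric, positive-definite
    U = V @ np.diag(np.sqrt(w)) @ V.T       # right stretch, U = C^(1/2), eq. (16)
    R = F @ np.linalg.inv(U)                # rotation tensor, eq. (17)

    assert np.allclose(R @ R.T, np.eye(2))  # R is orthogonal, so F = R.U holds

Because U is recovered as the symmetric square root of C, the polar decomposition of equation (15) is reproduced for any F whose Jacobian is positive and finite.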


II.2 Image Processing and Pattern Recognition

II.2.1 Digital Images

A typical image processing system is shown in Figure 3. Conversion of the intensity distribution of a camera image to a form suitable for computers, digitizing, is accomplished by the digitizer. After digitization, the image is transferred to the computer for further processing. How an image is processed depends, of course, on the hardware configuration [15,18,35,49-51].

[Figure 3 diagram: object viewed by a CCD camera, a digitizer, and a microcomputer with internal digitizer and monitor.]

Figure 3. A typical PC-based image processing system.






A digital image is an approximation of a real, continuous image. For example, the real image of the spot in Figure 4a is represented in digital form as the matrix of integer numbers in Figure 4b. Each matrix element (or pixel, which is dimensionless) corresponds to a discrete point in the real image, and the value of the element (gray level) is the approximation of the average local illumination intensity. Notice that black has lower gray levels than white and that the edge region (boundary) between the two has intermediate values. The variation of gray levels in regions of constant illumination is the effect of noise. Gray levels range from zero to some maximum which depends on the hardware (e.g., camera, digitizer).



[Figure 4 matrix omitted: an array of integer gray levels with white background values near 150, black spot values near 50, and intermediate values along the spot boundary.]

Figure 4. A spot and its digital image. The values, I(x,y), in the digital image correspond to illumination intensity in the real image.






Noise corrupts the image gray levels, and the sources may be classified into two groups: optical and electronic. Two examples of optical noise are particulate matter in the air and laser speckle. Electronic noise comes from the discretization process, thermal sources in the hardware, interference from extraneous electromagnetic sources, etc. Noise is usually assumed statistically random and normally distributed with a zero mean, μ, and a standard deviation, σ. Unfortunately, no known method completely eliminates the effects of noise.

Each array element has neighbors which are commonly called the eight-neighbors [52]. In Figure 5, the center element, pixel (x,y), is the reference element or element of interest. The element directly above it is arbitrarily labeled neighbor 0, and the other elements are named in a counterclockwise direction. Although elements along the image's outer array edge do not have all 8 neighbors, the neighbors they do have are also labeled according to Figure 5.

    1      0      7

    2    (x,y)    6

    3      4      5

Figure 5. A pixel and its eight-neighbors.

Spatial and illumination sampling must be sufficient to represent adequately the real image. Too low a spatial sampling frequency averages intensity over too large an area (see Figure 2.7 in reference 15), while too high a sampling frequency results in the loss of information outside the field of view. In general, increasing intensity corresponds to increased gray levels, but intensities beyond the capability of the equipment are limited to the maximum gray level. An increase in the range of gray levels increases image detail (see Figure 2.8 in reference 15).

II.2.2 Preprocessing

After digitization, the digital image is corrected for geometric and radiometric distortion. Distortion, introduced by the processing system, is removed by using the appropriate calibration procedure [49,53]. This type of correction is frequently called preprocessing, and images in this study are assumed corrected unless otherwise stated.

II.2.3 Segmentation

The next step in image processing, segmentation, consists of identifying regions and boundaries of interest in the image [18,54]. Of course, the difficulty in segmenting an image depends on its complexity. Since the images developed in this study are relatively simple, they are easily segmented.

Histogramming. Segmentation usually begins with histogramming. When the frequency distribution of the gray levels in an image is computed for the entire image, contrast and the presence of multiple modes are readily observed. Figure 6, the bi-modal histogram of Figure 4, has an average black of 50 and an average white of 150. Note, the histogram yields no direct information about the location of the gray levels in the image [55].

[Figure 6 plot omitted: pixel count versus gray level, showing two modes, one near gray level 50 and one near 150, with the contrast range between them.]

Figure 6. The bi-modal histogram of Figure 4.






Usually, computing the histogram for the entire image is not considered computationally expensive. This is not true in the present example because the percent of the total computer time devoted to histogramming the entire image is significant. Since the images used here are designed to have histograms similar to Figure 6, the needed information is easily estimated without actually computing a histogram.

Thresholding. Thresholding follows histogramming and includes the task of determining the gray level which, in this case, reasonably separates regions of black from white. This is frequently accomplished interactively or by use of a priori information, and the choice of the optimum threshold is a subject of considerable interest [56-62].

The alternative approach in this study requires neither interactive thresholding nor an optimum threshold. Instead, a good estimate of gray levels that are "definitely black" and an estimate of the average white level are used. The estimates are derived from histogram statistics and based on the following assumption: the histogram is strongly bi-modal. In Figure 6, gray level 53 is assumed black and the average white gray level is about 151. The task of estimating these levels is a four-step process.

First, pixels are randomly selected and their average gray level I_A is computed along with the standard deviation σ_A. A sample size of 30 or more is statistically sufficient to estimate the mean and standard deviation of the entire histogram [63,64]. Obviously, darker pixels are to the left of the average and lighter pixels are to the right (see Figure 7).

In the second step, the standard deviation is subtracted from the average, resulting in I_B. Algebraically, this is

I_B = I_A − σ_A                                                   (18)





[Figure 7 plot omitted: the histogram of Figure 6 with the levels I_1, I_B, I_A, I_t, and I_2 marked.]

Figure 7. Black and white regions in a bi-modal histogram (I_A=132, I_B=98, I_1=53, I_2=151, I_t=102).





Another set of 30 or more pixels whose values are below I_B is randomly selected and used to compute the average I_1. Thus, the average black level is approximated by I_1.

Chebychev's theorem [63] states that, if a probability distribution has a mean μ and a standard deviation σ, the probability of obtaining a value x which deviates from the mean by at least K standard deviations is at most 1/K². Mathematically, this is expressed as

P(|x−μ| ≥ Kσ) ≤ 1/K²                                              (19)

Since only one standard deviation is subtracted from the average, finding pixels darker than I_B is highly probable. A skew of the histogram to the left further improves the odds of finding pixels less than I_B.

The third step is the initial estimation of the average white gray level, I_2. A sufficient random sample whose values are greater than I_A is used to compute the initial estimate of I_2.

Finally, the threshold value I_t is computed using the estimates for the black and white levels. Symbolically,

I_t = (I_1 + I_2)/2                                               (20)
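As a concrete (modern) illustration, a minimal Python sketch of the four-step estimate might read as follows; NumPy is assumed, the names are illustrative, and the original implementation is not reproduced here:

    import numpy as np

    rng = np.random.default_rng()

    def estimate_levels(img, n=30):
        # step 1: sample n >= 30 pixels for the mean I_A and deviation sigma_A
        pix = img.ravel().astype(float)
        sample = rng.choice(pix, size=n, replace=False)
        I_A, s_A = sample.mean(), sample.std()
        I_B = I_A - s_A                                # step 2, equation (18)
        # average of a second sample drawn from pixels below I_B ("black")
        I_1 = rng.choice(pix[pix < I_B], size=n, replace=False).mean()
        # step 3: average of a sample drawn from pixels above I_A ("white")
        I_2 = rng.choice(pix[pix > I_A], size=n, replace=False).mean()
        I_t = 0.5 * (I_1 + I_2)                        # step 4, equation (20)
        return I_1, I_2, I_t

For the image of Figure 4, such a procedure would return values near I_1=53, I_2=151, and I_t=102, the levels quoted in Figure 7.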



Images designed for this study are assumed to have two regions: black and white. In a sense, they are binary (region) images. Thus, segmentation is easily accomplished by locating and isolating the black regions of interest from the white background.

Raster scanning and border following. Beginning at the origin, the upper left corner, the image is raster scanned (i.e., along a row or column) until a black pixel is found. If at least one of the neighbors is black, a possible black region is assumed; spurious noise is assumed if the pixel has no black neighbors. If a black neighbor is found, the border between black and white is followed around the black region, thereby isolating it from the neighboring white.
Rosenfeld [18] has developed a general-purpose border-following algorithm. First, gray levels are reduced by thresholding to 0's for black or 1's for white. (This is quickly done if one has the appropriate hardware.) By comparing neighbors and locating the next edge pixel, the algorithm follows the border counterclockwise for an outer border and clockwise for an inner one. Since borders may overlap, provisions are made for backtracking.

In this study, the generality of Rosenfeld's algorithm is unnecessary, and the algorithm is modified for faster execution. The modified algorithm (see appendix) is designed to follow outer edges (no inner edges exist) in the clockwise direction, and reducing the image is eliminated by combining thresholding with border following.

The only information saved during border following is the y coordinate of the spot's left-most pixel, L; the y coordinate of the right-most pixel, R; the x coordinate of the highest pixel, T; and the x coordinate of the lowest pixel, B. Since the black region extends a little beyond the approximate border, 2 pixels are subtracted from L and T, while 2 pixels are added to R and B (see Figure 8). Thus, the local region of interest (black and intermediate gray) is "framed" for further processing.

Raster scanning resumes at the bottom of the frame and continues until another black region is found and it is framed. Scanning and framing continue until the entire image is covered. Thus, the image is completely segmented, and the information needed for feature extraction is available.
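The following minimal Python sketch (NumPy assumed; names are illustrative) conveys the scan-and-frame idea. For brevity, a flood fill stands in for the modified border follower of the appendix, since only the four extremes L, R, T, and B are kept:

    import numpy as np
    from collections import deque

    def frame_spots(img, I_t):
        dark = img < I_t                       # thresholding: black below I_t
        seen = np.zeros(img.shape, dtype=bool)
        frames = []
        for y in range(img.shape[1]):          # raster scan the image
            for x in range(img.shape[0]):
                if not dark[x, y] or seen[x, y]:
                    continue
                T = B = x
                L = R = y
                queue = deque([(x, y)])
                seen[x, y] = True
                npix = 0
                while queue:                   # visit the whole dark region
                    i, j = queue.popleft()
                    npix += 1
                    T, B = min(T, i), max(B, i)
                    L, R = min(L, j), max(R, j)
                    for di in (-1, 0, 1):      # check the eight-neighbors
                        for dj in (-1, 0, 1):
                            u, v = i + di, j + dj
                            if (0 <= u < img.shape[0] and 0 <= v < img.shape[1]
                                    and dark[u, v] and not seen[u, v]):
                                seen[u, v] = True
                                queue.append((u, v))
                if npix > 1:                   # lone pixels are spurious noise
                    frames.append((T - 2, B + 2, L - 2, R + 2))
        return frames

Each returned tuple is a frame widened by two pixels on every side, as described above.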


[Figure 8 diagram omitted: the digital image of Figure 4 with the framed region marked; the frame edges lie two pixels outside the left-most, right-most, highest, and lowest black pixels of the spot.]

Figure 8. Frame of spot in Figure 4.
II.2.4 Image Features

Images contain various types of information known as features, and, when properly selected, they adequately represent the images. Here, it is assumed that natural features are intrinsic to the real scene and artifacts are added as desired to aid subsequent processing. Features, both natural and artifact, are generally grouped into three classes: physical, structural, and mathematical [65].



Physical and structural features are, in this case, the artifacts shown in Figure 9 plus the pattern they form in the image. For simplicity, the (single) icon is considered a "special" spot, and all other spots are assumed uniform in size.

[Figure 9 diagram omitted: a round spot and the icon, a larger "special" spot.]

Figure 9. Image artifacts. The icon is a "special" spot.

Mathematical features are frame areas, spot centroids, and those concerning icon identification and rotation. These features allow extensive integer programming, which speeds the majority of the calculations and improves accuracy.

II.2.5 Feature Extraction

Here, feature extraction consists of isolating the artifacts and computing their desired mathematical features. Since the artifacts are extracted (by framing) during segmentation, all that remains is the computation.

Frame area. Frame (integer) area is easily computed using the information obtained during the segmentation process. The computation is

A = (R−L)*(B−T)                                                   (21)

Since the icon's frame area is much larger than the other spots', classifying each artifact as "spot" or "icon" is a simple, binary task.

Spot centroids. The centroids of all spots are used as definitions of spot position. If the gray levels in a spot's frame are assumed to be "weights," the x and y coordinates of the centroid are calculated [18] by

X̄ = [Σ_{x=T}^{B} Σ_{y=L}^{R} x I(x,y)] / [Σ_{x=T}^{B} Σ_{y=L}^{R} I(x,y)]

Ȳ = [Σ_{y=L}^{R} Σ_{x=T}^{B} y I(x,y)] / [Σ_{y=L}^{R} Σ_{x=T}^{B} I(x,y)]      (22)

Henceforth, these summation limits are implied unless otherwise noted.

How do gray levels, noise, and spot size affect spatial resolution of the centroid? Here, it is evident (see Figure 8) that merely summing with gray levels I(x,y) does not yield the proper result because the white pixels shift the spot's computed centroid away from the correct value. A transformation of all gray levels in the frame by the equation

I'(x,y) = I_w − I(x,y)                                            (23)

where I_w is the local average white level and I'(x,y) is the noisy gray level difference (between black and white), gives the approximate centroid coordinates. And since pixels on the frame edge are white, their average gives a good, local estimate for I_w. The substitution of I'(x,y) into equations (22) leads to

X̄ = [ΣΣ x I'(x,y)] / [ΣΣ I'(x,y)]

Ȳ = [ΣΣ y I'(x,y)] / [ΣΣ I'(x,y)]                                 (24)
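For illustration, a minimal Python version of equations (23) and (24) for one frame might read (NumPy assumed; the frame bounds T, B, L, R come from segmentation):

    import numpy as np

    def spot_centroid(img, T, B, L, R):
        sub = img[T:B+1, L:R+1].astype(float)
        # local white estimate I_w: average of the pixels on the frame edge
        edge = np.concatenate([sub[0], sub[-1], sub[1:-1, 0], sub[1:-1, -1]])
        I_w = edge.mean()
        Ip = I_w - sub                       # I'(x,y) = I_w - I(x,y), eq. (23)
        x = np.arange(T, B + 1)[:, None]     # row (x) coordinates
        y = np.arange(L, R + 1)[None, :]     # column (y) coordinates
        total = Ip.sum()
        return (x * Ip).sum() / total, (y * Ip).sum() / total    # eq. (24)

Subtracting the local white level makes the white background contribute roughly zero weight, so the weighted average is dominated by the dark spot pixels.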

To account for noise, assume each gray level in the image consists of the actual (i.e., uncorrupted by noise) value, r(x,y), plus zero-mean Gaussian noise, n(x,y). Thus,

I'(x,y) = I_w − r(x,y) + n(x,y)

I'(x,y) = H(x,y) + n(x,y)                                         (25)

where H(x,y) = I_w − r(x,y) is the actual gray level difference. Substituting equation (25) into equations (24) leads to

X̄ = [ΣΣxH + ΣΣxn] / [ΣΣH + ΣΣn]

Ȳ = [ΣΣyH + ΣΣyn] / [ΣΣH + ΣΣn]                                   (26)

If the expected value [63] for the noise is assumed zero,

E{n(x,y)} = E{ΣΣ n(x,y)} = E{Σ_x n(x,y)} = E{Σ_y n(x,y)} = 0      (27)

and substituted into equations (26), then the expected centroid is

E{X̄} = [ΣΣxH + E{n}Σx] / [ΣΣH + E{n}]

E{Ȳ} = [ΣΣyH + E{n}Σy] / [ΣΣH + E{n}]                             (28)

Since E{n}=0, equations (28) become

E{X̄} = ΣΣxH / ΣΣH

E{Ȳ} = ΣΣyH / ΣΣH                                                 (29)

In general, the summations in equations (27) are not equal to zero, which introduces random error. But if Σ_y n = 0, then X̄ = E{X̄}, and if Σ_x n = 0, then Ȳ = E{Ȳ}. Thus, the summation process used to compute the centroids greatly reduces, although in general it does not eliminate, the effect of noise.

To understand better how noise, spot diameter, and number of gray levels interact to affect spatial resolution, consider the one-dimension model in Figure 10. This model is essentially a column in the two-dimensional image (see Figure 8) with frame edges at x=T and x=B. For simplicity, edge pixels are always located at x=T+2 and x=B−2. (The extension to the general two-dimension case is straightforward.) Similar to equation (26), the centroid calculation is

X̄ = [Σ_x x H(x) + Σ_x x n(x)] / [Σ_x H(x) + Σ_x n(x)]             (30)


[Figure 10 diagram omitted: a line segment spanning pixels T+3 through B−3, with partially covered edge pixels at T+2 and B−2, inside a frame running from T to B.]

Figure 10. One-dimension model.

Assume

H(x) = κ_T H_c   at x = T+2
H(x) = κ_B H_c   at x = B−2
H(x) = H_c       for T+3 ≤ x ≤ B−3
H(x) = 0         otherwise

with 0 < κ_T ≤ 1, 0 < κ_B ≤ 1, 0 < H_c ≤ H_m, and T ≤ δ ≤ B, where H_c, a constant over the line length, is the gray level difference and H_m is the maximum possible gray level difference (contrast) between black and white.

The influence of noise and the edge pixels can be described in terms of resultants. The noise resultant, δn_c, is the product of random variables δ and n_c, i.e., Σ_x x n(x) = δn_c and Σ_x n(x) = n_c. Dimensionless parameter κ_T is the fraction of the x=T+2 edge pixel covered by the line, and κ_T H_c is the resultant. Assume the same for κ_B and κ_B H_c at edge x=B−2. During digitization, κ_T and κ_B are discretized (see Figure 11) to κ'_T and κ'_B, respectively. For a given line segment at a specified location, κ_T and κ_B are constants. The substitution of these values into equation (30) yields

X̄ = {Σ_{x=T+3}^{B−3} x H_c + H_c[κ_T(T+2) + κ_B(B−2)] + δn_c} / {Σ_{x=T+3}^{B−3} H(x) + H_c(κ_T + κ_B) + n_c}

If

Σ_{x=T+3}^{B−3} x = (T+3)+(T+4)+(T+5)+ ... +(B−5)+(B−4)+(B−3)
is added to

Σ_{x=T+3}^{B−3} x = (B−3)+(B−4)+(B−5)+ ... +(T+5)+(T+4)+(T+3)

then

2 Σ_{x=T+3}^{B−3} x = (T+B)+(T+B)+(T+B)+ ... +(T+B)+(T+B)+(T+B)

Σ_{x=T+3}^{B−3} x = (n/2)(T+B)

where n is the number of terms in the sum,

n = (B−3) − (T+3) + 1

Since B−3 > T+3 and n > 1,

n = B−T−5.

Therefore,

Σ_{x=T+3}^{B−3} x = (B−T−5)(T+B)/2

Σ_{x=T+3}^{B−3} H(x) = (B−T−5)H_c

Subsequently, equation (30) becomes

X̄ = {(B−T−5)(T+B)H_c + 2[δn_c + H_c(κ_T(T+2) + κ_B(B−2))]} / {2[(B−T−5)H_c + n_c + H_c(κ_T + κ_B)]}   (31)


After dividing the top and bottom by H_c, equation (31) becomes

X̄ = {(B−T−5)(T+B) + 2[δξ + κ_T(T+2) + κ_B(B−2)]} / {2[(B−T−5) + ξ + κ_T + κ_B]}   (32)

where ξ = n_c/H_c. Note, the ξ ratio is similar to the inverse of Pratt's [66] signal-to-noise ratio,

SNR = (H_c/σ)²                                                    (33)

[Figure 11 plot omitted: the discretized fractions κ' versus the fraction of the edge pixel covered by the line.]

Figure 11. The discretization of κ_T and κ_B. An increase in H_c improves spatial resolution of the edges.
After studying Figure 10 and equation (32), it is observed that δ=T or δ=B gives the most noise error, while

δ = {(B−T−5)(T+B) + 2[κ_T(T+2) + κ_B(B−2)]} / {2[(B−T−5) + κ_T + κ_B]}   (34)

gives none.

If a line length of d = B−T−5 is assumed, equation (32) becomes

X̄ = {d(d+2T+5) + 2[δξ + κ_T(T+2) + κ_B(d+T+3)]} / {2[d + ξ + κ_T + κ_B]}   (35)

If ξ=0,

X̄ = {d(d+2T+5) + 2[κ_T(T+2) + κ_B(d+T+3)]} / {2[d + κ_T + κ_B]}   (36)

which is the "exact" centroid. If T=0 is assumed and κ'_T and κ'_B are substituted into equation (35), the resolution error is

R_e = {d(d+5) + 2[2κ_T + κ_B(d+3)]} / {2[d + κ_T + κ_B]} − {d(d+5) + 2[δξ + 2κ'_T + κ'_B(d+3)]} / {2[d + ξ + κ'_T + κ'_B]}   (37)

The general relationship between ξ, d, and R_e is clearer if equation (37) is simplified. To assume the line position is a random variable implies the following expected values:

E{κ_T} = E{κ_B} = E{κ'_T} = E{κ'_B} = 1/2

After some algebra, equation (37) becomes

R_e = ξ(d+5−2δ) / [2(d+ξ+1)]                                      (38)

For ξ much smaller than d, equation (38) suggests an almost linear relation between ξ and R_e. A spatial dependency (perhaps difficult to verify experimentally) is also indicated.
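As a numeric illustration of equation (38) as reconstructed above (the values below are assumed, not taken from the study):

    # one-line evaluation of equation (38) for an assumed case
    d, xi, delta = 10, 0.1, 0.0   # line length, noise ratio n_c/H_c, noise position
    R_e = xi * (d + 5 - 2 * delta) / (2 * (d + xi + 1))
    print(R_e)                    # about 0.068 pixel of centroid error

so a noise resultant of a tenth of the gray level difference shifts the computed centroid by well under a tenth of a pixel, consistent with the subpixel registration objective of Section I.3.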


[Figure 12 diagram omitted: the icon with its rotation angle measured from the vertical through the icon centroid.]

Figure 12. Icon rotation measured from the vertical.

Icon rotation. Next, the icon's angle of rotation is computed (see Figure 12) using the second moment of inertia. The analysis is as follows:
I_xx = ΣΣ I'(x,y) d_y²

I_yy = ΣΣ I'(x,y) d_x²                                            (39)

P_xy = ΣΣ I'(x,y) d_x d_y

where I_xx = moment of inertia about the icon's x-axis
      I_yy = moment of inertia about the icon's y-axis
      P_xy = product of inertia about the icon's centroid
      d_x = x distance from pixel to icon centroid
      d_y = y distance from pixel to icon centroid.

Icon rotation is

θ = (1/2) Tan⁻¹[2P_xy / (I_yy − I_xx)]                            (40)

The above equation gives the angle to within a constant. For example, the analysis gives the same angle for both icon orientations in Figure 13. The difference is resolved by constructing line "A-A" through the icon centroid and computing the centroids of the upper and lower segments. Line "A-A" has the form

y_A = m x_A + b

where x_A and y_A are points on "A-A" with

m = −TAN(θ)

and

b = Y_Icon − m X_Icon
[Figure 13 diagram omitted: two icon orientations differing by a half revolution, each with line A-A through the icon centroid separating an upper segment from a lower segment.]

Figure 13. Icon rotation is determined to within a constant by equation (40).


The centroid for the upper segment is

X̄_U = [Σ_{x=T}^{x_A} Σ_{y=L}^{y_A} x I'(x,y)] / [Σ_{x=T}^{x_A} Σ_{y=L}^{y_A} I'(x,y)]

Ȳ_U = [Σ_{x=T}^{x_A} Σ_{y=L}^{y_A} y I'(x,y)] / [Σ_{x=T}^{x_A} Σ_{y=L}^{y_A} I'(x,y)]   (41)

and for the lower segment,

X̄_L = [Σ_{x=x_A}^{B} Σ_{y=y_A}^{R} x I'(x,y)] / [Σ_{x=x_A}^{B} Σ_{y=y_A}^{R} I'(x,y)]

Ȳ_L = [Σ_{x=x_A}^{B} Σ_{y=y_A}^{R} y I'(x,y)] / [Σ_{x=x_A}^{B} Σ_{y=y_A}^{R} I'(x,y)]   (42)

The distances from the icon centroid to the segment centroids are

d_U = sqrt[(X̄_U − X_Icon)² + (Ȳ_U − Y_Icon)²]

d_L = sqrt[(X̄_L − X_Icon)² + (Ȳ_L − Y_Icon)²]                     (43)

If d_U > d_L, θ computed by equation (40) is correct; otherwise θ must be shifted by

θ = θ − π   if θ > 0
θ = θ + π   if θ ≤ 0                                              (44)

Thus, a maximum rotation of one revolution is measurable.
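A compact Python sketch of equations (39), (40), and the sign resolution might read as follows (NumPy assumed; Ip is the background-subtracted icon frame, the centroid is local to that frame, and a mask along line A-A stands in for the explicit summation limits of equations (41) and (42)):

    import numpy as np

    def icon_angle(Ip, xc, yc):
        # d_x, d_y: distances from each pixel to the icon centroid
        dx = np.arange(Ip.shape[0])[:, None] - xc
        dy = np.arange(Ip.shape[1])[None, :] - yc
        Ixx = (Ip * dy**2).sum()                       # equation (39)
        Iyy = (Ip * dx**2).sum()
        Pxy = (Ip * dx * dy).sum()
        theta = 0.5 * np.arctan2(2 * Pxy, Iyy - Ixx)   # equation (40)
        # split along line A-A through the centroid, slope m = -tan(theta)
        upper = dy <= -np.tan(theta) * dx
        wu, wl = Ip * upper, Ip * ~upper
        du = np.hypot((dx * wu).sum() / wu.sum(), (dy * wu).sum() / wu.sum())
        dl = np.hypot((dx * wl).sum() / wl.sum(), (dy * wl).sum() / wl.sum())
        if du <= dl:                                   # equation (44)
            theta += -np.pi if theta > 0 else np.pi
        return theta

Because the moment ratio in equation (40) is invariant under a half revolution, it is the segment-centroid comparison that extends the measurable range to a full revolution.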

Information saved during feature extraction is, of course, frame area, the centroid coordinates of each spot, the number of spots, which spot is the icon, and icon rotation. The areas and the centroids are stored as linear arrays in the order the spots were found. Lastly, the problem of sorting this raw information and using it to match spots in the before- and after-motion images is solved using pattern recognition principles.

II.2.6 Known Patterns

In this study, patterns are classified as either "known" or "unknown." Known patterns contain structural features (natural, artifact, or both) that are recognizable before and after motion. Unknown patterns are patterns which never have recognizable structure or those in which features are obscured beyond recognition. Known patterns containing only artifacts are considered here.

Artifacts and their arrangement are selected to improve segmentation, feature extraction, mapping, and subsequent numerical analysis. Examples of simple, easily analyzed patterns are shown in Figure 14. These patterns are fine for small rotations and are easily recognized because the relative order of the spots does not change; e.g., the upper left spot remains in the same position, relative to the other spots, during motion. If rotation is 45 degrees or more for the square patterns and 90 degrees or more for the others, the relative order is lost. The patterns in Figure 15 allow spot identification after rotations up to one revolution. Once the patterns in the undeformed and deformed images have been

[Figure 14 patterns omitted: square and rectangular arrays of uniform spots.]

Figure 14. Patterns that are easily recognized and analyzed. Frames are for reference and not part of the pattern.

[Figure 15 patterns omitted: the patterns of Figure 14 with one spot replaced by an icon.]

Figure 15. Patterns of Figure 14 with icons. The icons are located for convenience.




recognized, subsequent analysis is simplified (especially for the Lagrangian description) because the undeformed spacing is approximately constant.

II.2.7 Syntactic Pattern Recognition

Pattern recognition is usually divided into two classes: decision-theoretic and syntactic. In the first class, a pattern is represented by feature vectors and recognized by a stochastic partitioning of feature space [67-69]. Syntactic recognition uses pattern structure and linguistic rules for analysis [70,71].

Syntactic recognition assumes a pattern is described by a language which contains valid sentences composed from a finite alphabet or set of symbols. The language is governed by a "pattern grammar" or set of grammar rules. In context-free languages, pattern grammar is unrestricted. Context-sensitive languages restrict pattern grammar on the number of sentences, sentence construction, or both.

Here, the pattern language is context-sensitive and has only one sentence, which is composed of a fixed number of ordered symbols. The alphabet symbols are the spots' relative positions, whose order is determined in the undeformed image. A spot's context is determined by its 0,2,4,6 (or alternatively by 0 through 7) neighbors. For example, (2,2) has neighbors (1,2), (2,1), (3,2), and (2,3). An attempt to map (2,2) in the undeformed image to, say, (1,1) in the deformed is "out of context" with the neighbors (see Figure 16). Thus, correct pattern mapping is assured if every spot maps in context with its neighbors.

[Figure 16: a 5 X 5 spot pattern (left) and its one-sentence language (right), the ordered symbols (1,1) (1,2) (1,3) (1,4) (1,5) ... (5,1) (5,2) (5,3) (5,4) (5,5).]

Figure 16. A pattern and its context-sensitive language. Note the alphabet (symbols) and context in the language.





Higher dimensional pattern grammars [70-72] provide the capability to formally describe, recognize, and map patterns similar to those in Figure 15. Since a record of each spot and its neighbors must be maintained and manipulated, generating and parsing formal grammar for large patterns becomes an intense process. If possible, trading some of the formal elegance for an economic approximation is highly desirable.

II.2.8 Approximation of Syntactic Pattern Mapping

Fortunately for many motions, spot relative positions change very little, which allows context to be determined from spot
"addresses." If both images are carefully divided into a number of

addresses subregionss), then a spot has the same address before and

after motion--or nearly so. The determination of a spot's address is

a simple calculation involving centroid coordinates, and recognition

is reduced to merely arranging the spot addresses in proper order.

Thus, context is recognized.

"Offsets" are the spot addresses relative to the icon address.

All offsets are defined according to an icon address of 0 (see Figure

17).


[Figure 17: a 5 X 5 pattern inside the image border, with the icon at address 0. Spot addresses increase down each column and by column from left to right:

    -18  -10   -2    6   14
    -17   -9   -1    7   15
    -16   -8    0    8   16
    -15   -7    1    9   17
    -14   -6    2   10   18  ]

Figure 17. Image addresses and offsets. Some addresses overlap image edges.
Pattern geometry affects the upper limits of measurable strain because large strain displacements may change the order of the offsets in the deformed image. Spreading spots further apart (in the undeformed image) increases the size of a spot's subregion. A larger subregion allows the spot more strain displacement before any change in offset order occurs.

The details are best illustrated by example (see Figure 18). It is assumed that the undeformed image is analyzed first and, with one exception, both images receive the same preliminary analysis.


[Figure 18: the undeformed 5 X 5 pattern (left) and the deformed pattern (right).]

Figure 18. A simple example of motion (∂u_i/∂X_j = 0.05, u = v = −10 pixels, θ = 120°, E_ij = 0.0525). The origin is in the upper left corner of each image.
Successful mapping is initially assumed and later verified by four final checks.

First, the image is segmented and the features are extracted. Scanning is by column, and the spot centroids are stored in the order they were found. (See the columns marked "Raw" in Tables 1 and 2.) The icon is identified, and its rotation is also computed.




Table 1. Unordered spot coordinates in the undeformed image.

        Raw                  Reference              From Icon
     X         Y          X         Y          X         Y

  58.0101   57.9660    57.4541   57.8310   -70.5459  -70.1690
  93.0079   58.0056    92.4517   57.9438   -35.5483  -70.0562
 163.0518   57.9950   162.4955   58.0797    34.4955  -69.9203
 197.9896   58.0041   197.4332   58.1617    69.4332  -69.8383
 127.9832   57.9801   127.4271   57.9914    -0.5729  -70.0086
  58.0481   93.0576    57.4187   92.9227   -70.5813  -35.0773
  93.0216   92.9791    92.3923   92.9173   -35.6077  -35.0827
 197.9616   93.0132   197.3321   93.1757    69.3321  -34.8243
 127.9396   92.9724   127.3103   92.9836    -0.6897  -35.0164
 163.0107   92.9915   162.3813   93.0760    34.3813  -34.9240
 128.7025  127.9873   128.0000  128.0000     0.0000    0.0000
  57.9599  128.0263    57.2575  127.8911   -70.7425   -0.1089
  93.0284  128.0002    92.3260  127.9383   -35.6740   -0.0617
 198.0660  128.0575   197.3632  128.2151    69.3632    0.2151
 163.0010  127.9894   162.2984  128.0738    34.2984    0.0738
  92.9964  163.0297    92.2207  162.9676   -35.7793   34.9676
 127.9581  162.9798   127.1824  162.9909    -0.8176   34.9909
 162.9653  163.0147   162.1896  163.0989    34.1896   35.0989
 197.9768  162.9531   197.2011  163.1105    69.2011   35.1105
  57.9856  162.9875    57.2101  162.8523   -70.7899   34.8523
  58.0635  198.0315    57.2148  197.8964   -70.7852   69.8964
  92.9969  198.0169    92.1481  197.9548   -35.8519   69.9548
 128.0099  198.0273   127.1610  198.0384    -0.8390   70.0384
 163.0027  198.0136   162.1538  198.0978    34.1538   70.0978
 197.9936  197.9982   197.1446  198.1556    69.1446   70.1556

NOTE: Units are pixels. "Raw" coordinates are the spot centroids in the image. "Reference" and "From Icon" values are computed by equations (46) and (47), respectively. See Figure 18.

Table 2. Unordered spot coordinates in the deformed image.

        Raw                  Reference              From Icon
     X         Y          X         Y          X         Y

  98.5336   33.4158    51.9634  192.1880   -76.0366   64.1880
 131.2724   50.2682    53.0816  155.3833   -74.9184   27.3833
  78.6761   64.3727    88.4532  196.7857   -39.5468   68.7857
 163.9343   67.1092    54.2228  118.6530   -73.7772   -9.3470
 111.3736   81.1960    89.5630  160.0309   -38.4370   32.0309
 196.6288   84.0388    55.4298   81.8551   -72.5702  -46.1449
  58.7113   95.2677   124.9335  201.5070    -3.0665   73.5070
 144.0612   98.0816    90.7333  123.2581   -37.2667   -4.7419
 229.4016  100.8559    56.5016   45.0350   -71.4984  -82.9650
  91.4833  112.1065   126.0252  164.6782    -1.9748   36.6782
 176.7262  114.9435    91.8919   86.5161   -36.1081  -41.4839
 123.7393  129.6776   128.0000  128.0000     0.0000    0.0000
  38.9059  126.2522   161.4258  206.0458    33.4258   78.0458
 209.4617  131.8162    93.0300   49.7056   -34.9700  -78.2944
  71.5500  143.1225   162.6010  169.3190    34.6010   41.3190
 156.8817  145.8710   128.3497   91.1147     0.3497  -36.8853
  18.9873  157.2419   197.9716  210.6846    69.9716   82.6846
 104.3150  160.0107   163.7403  132.4753    35.7403    4.4753
 189.5733  162.8025   129.5597   54.3186     1.5597  -73.6814
  51.6854  174.0585   199.0750  173.9321    71.0750   45.9321
 136.9841  176.8253   164.8545   95.7498    36.8545  -32.2502
  84.3163  190.9335   200.2602  137.2151    72.2602    9.2151
 169.6761  193.7303   166.0404   58.9647    38.0404  -69.0353
 117.1079  207.7552   201.3280  100.3761    73.3280  -27.6239
 149.7807  224.6135   202.4800   63.6284    74.4800  -64.3716

NOTE: Units are pixels. "Raw" coordinates are the spot centroids in the image. "Reference" and "From Icon" values are computed by equations (46) and (47), respectively. See Figure 18.



All spots are moved into a "reference position" by translation and rotation (see Figure 19). Translation is accomplished by adding XT to the X and YT to the Y coordinate of each spot; XT and YT are

    XT = Xcp - XIcon
                                                           (45)
    YT = Ycp - YIcon











where Xcp and Ycp are the coordinates of the icon in reference position (typically the image center). Rotation for each spot is computed from the x and y distance between it and the icon. The computation is

    Xp = X - XIcon

    Yp = Y - YIcon

    ρ = (Xp² + Yp²)^(1/2)
                                                           (46)
    ωz = Tan⁻¹(Yp/Xp)

    XRef = X + ρ[cos(ωz - θ) - cos ωz] + XT

    YRef = Y + ρ[sin(ωz - θ) - sin ωz] + YT

where XRef and YRef are the new coordinates of each spot in reference position. The results of translation and rotation are shown in Tables 1 and 2 under the "Reference" columns. Position relative to the icon, i.e., distance from the icon, is

    XRel = XRef - XIcon
                                                           (47)
    YRel = YRef - YIcon
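For concreteness, a minimal Python sketch of equations (45)-(47) follows; the names are illustrative, and atan2 replaces the arctangent of equation (46) only to resolve the quadrant ambiguity that Tan⁻¹ leaves open.

```python
import math

# A sketch of the reference-position transform of equations (45)-(47),
# assuming the icon centroid, its rotation theta (radians), and the
# target icon location (x_cp, y_cp) are already known.
def to_reference(spots, icon, theta, x_cp=128.0, y_cp=128.0):
    x_t = x_cp - icon[0]                       # translation, equation (45)
    y_t = y_cp - icon[1]
    ref, rel = [], []
    for x, y in spots:
        xp, yp = x - icon[0], y - icon[1]      # distances from the icon
        rho = math.hypot(xp, yp)
        wz = math.atan2(yp, xp)
        # equation (46): rotate each spot about the icon, then translate
        x_ref = x + rho * (math.cos(wz - theta) - math.cos(wz)) + x_t
        y_ref = y + rho * (math.sin(wz - theta) - math.sin(wz)) + y_t
        ref.append((x_ref, y_ref))
        rel.append((x_ref - x_cp, y_ref - y_cp))   # equation (47)
    return ref, rel
```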


















[Figure 19: the deformed spot pattern (left) and the same pattern after translation and rotation into reference position (right).]

Figure 19. The deformed image and its reference position.





The distances from the icon are shown in Tables 1 and 2 under the columns marked "From Icon." Although all spots have been located relative to the icon, recognizing the pattern they form and arranging them in proper order (by address) is not always easy--especially if rotation is more than a few degrees.

Address computation is based on "learning" the undeformed image. The spots nearest and farthest from the icon row and column are located in the undeformed image and used to calculate the address "offset," Aoffset, by the following:










    Xmax = MAX(X)

    Xmin = MIN(X)
                                                           (48)
    Ymax = MAX(Y)

    Ymin = MIN(Y)

    Nx = NINT(Xmax/Xmin)

    Ny = NINT(Ymax/Ymin)

    Xspace = Xmin + [Xmax - Nx(Xmin)]/(Nx + 1)             (49)

    Yspace = Ymin + [Ymax - Ny(Ymin)]/(Ny + 1)

    Nshift = NINT(Nrows/Xspace) + 1

    Aoffset = NINT(XRel/Xspace) + NINT(YRel/Yspace)(Nshift)    (50)

where

    Xmax   = maximum X centroid
    Xmin   = minimum X centroid
    Ymax   = maximum Y centroid
    Ymin   = minimum Y centroid
    Nx     = number of rows above or below icon
    Ny     = number of columns left or right of icon
    Nrows  = number of rows in the image
    Xspace = approximate row spacing
    Yspace = approximate column spacing
    Nshift = number of possible columns in image.

Function NINT rounds numbers to the nearest integer value. Values Xspace, Yspace, and Nshift are saved from the undeformed image and used in the deformed.
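A minimal sketch of equations (48)-(50) follows. It assumes, per the nearest/farthest wording above, that the MAX and MIN of equation (48) are taken over the magnitudes of the from-icon distances; the cutoff that excludes the icon's own row and column is likewise an illustrative assumption.

```python
def nint(x):
    # Fortran-style NINT: round half away from zero
    return int(x + 0.5) if x >= 0 else -int(-x + 0.5)

def learn_spacing(rel, n_rows=256):
    # rel holds the (XRel, YRel) pairs of the undeformed image
    dx = [abs(x) for x, y in rel if abs(x) > 1.0]
    dy = [abs(y) for x, y in rel if abs(y) > 1.0]
    x_max, x_min = max(dx), min(dx)            # equation (48)
    y_max, y_min = max(dy), min(dy)
    n_x = nint(x_max / x_min)                  # equation (49)
    n_y = nint(y_max / y_min)
    x_space = x_min + (x_max - n_x * x_min) / (n_x + 1)
    y_space = y_min + (y_max - n_y * y_min) / (n_y + 1)
    n_shift = nint(n_rows / x_space) + 1
    return x_space, y_space, n_shift

def offset(x_rel, y_rel, x_space, y_space, n_shift):
    # equation (50): one integer address per spot
    return nint(x_rel / x_space) + nint(y_rel / y_space) * n_shift
```

With the 35-pixel spacing of Figure 18, this yields Xspace ≈ 35, Nshift = 8, and the offsets listed in Tables 3 and 4.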

Fortunately, arranging the spots in proper sequence is a simple matter of bubble sorting [73] the offsets into ascending order. By equation (50), offsets (see column "Offset" in Tables 3 and 4) increase down columns and from left to right. Column "Key" is the order in which the spots were found. Note that the two-dimensional pattern context is now represented by a one-dimensional array, XRel, which corresponds to the ordered spot addresses.

Recognition of the two-dimensional context is accomplished by determining the row-column order of the spots (see the sketch below). The left-most column is assumed to begin with the lowest offset and continue with ascending offset until the X coordinate decreases. Column 2 begins with the next highest offset and continues until the X coordinate again decreases. The process continues until the offset array is exhausted. By incrementing row and column counters, the relative spot order is determined, which establishes context (see Tables 5-8).
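The ordering and column-walk just described might be sketched as follows; bubble sort is kept from the text [73], and the field names are illustrative.

```python
def assign_addresses(spots):
    # spots: dicts with keys "offset" and "x", in the order found
    s = list(spots)
    for i in range(len(s) - 1):                # bubble sort by offset [73]
        for j in range(len(s) - 1 - i):
            if s[j]["offset"] > s[j + 1]["offset"]:
                s[j], s[j + 1] = s[j + 1], s[j]
    # X increases down a column; a drop in X marks the top of the next one
    row, col, addressed = 1, 1, []
    for k, spot in enumerate(s):
        if k > 0 and spot["x"] < s[k - 1]["x"]:
            col += 1
            row = 1
        addressed.append((row, col, spot))
        row += 1
    return addressed
```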

Until now, both undeformed and deformed images were processed

the same (except for learning the undeformed image) and assumed to

have the same context. This assumption is verified by the following












Table 3. Ordered spot coordinates in the undeformed image.

                     Reference               From Icon

 Offset   Key       X          Y          X          Y

   -18     1     57.4541    57.8310   -70.5459   -70.1690
   -17     2     92.4517    57.9438   -35.5483   -70.0562
   -16     5    127.4271    57.9914    -0.5729   -70.0086
   -15     3    162.4955    58.0797    34.4955   -69.9203
   -14     4    197.4332    58.1617    69.4332   -69.8383
   -10     6     57.4187    92.9227   -70.5813   -35.0773
    -9     7     92.3923    92.9173   -35.6077   -35.0827
    -8     9    127.3103    92.9836    -0.6897   -35.0164
    -7    10    162.3813    93.0760    34.3813   -34.9240
    -6     8    197.3321    93.1757    69.3321   -34.8243
    -2    12     57.2575   127.8911   -70.7425    -0.1089
    -1    13     92.3260   127.9383   -35.6740    -0.0617
     0    11    128.0000   128.0000     0.0000     0.0000
     1    15    162.2984   128.0738    34.2984     0.0738
     2    14    197.3632   128.2151    69.3632     0.2151
     6    20     57.2101   162.8523   -70.7899    34.8523
     7    16     92.2207   162.9676   -35.7793    34.9676
     8    17    127.1824   162.9909    -0.8176    34.9909
     9    18    162.1896   163.0989    34.1896    35.0989
    10    19    197.2011   163.1105    69.2011    35.1105
    14    21     57.2148   197.8964   -70.7852    69.8964
    15    22     92.1481   197.9548   -35.8519    69.9548
    16    23    127.1610   198.0384    -0.8390    70.0384
    17    24    162.1538   198.0978    34.1538    70.0978
    18    25    197.1446   198.1556    69.1446    70.1556

NOTE: Units are pixels. Offsets are computed by equation (50). Keys are the order in which the spots were found. Undeformed spacing is 35 pixels in x and y. See Figure 18.










Table 4. Ordered spot coordinates in the deformed image.



Reference From Icon

Offset Key X Y X Y

-18 9 56.5016 45.0350 -71.4984 -82.9650
-17 14 93.0300 49.7056 -34.9700 -78.2944
-16 19 129.5597 54.3186 1.5597 -73.6814
-15 23 166.0404 58.9647 38.0404 -69.0353
-14 25 202.4800 63.6284 74.4800 -64.3716
-10 6 55.4298 81.8551 -72.5702 -46.1449
-9 11 91.8919 86.5161 -36.1081 -41.4839
-8 16 128.3497 91.1147 0.3497 -36.8853
-7 21 164.8545 95.7498 36.8545 -32.2502
-6 24 201.3280 100.3761 73.3280 -27.6239
-2 4 54.2228 118.6530 -73.7772 -9.3470
-1 8 90.7333 123.2581 -37.2667 -4.7419
0 12 128.0000 128.0000 0.0000 0.0000
1 18 163.7403 132.4753 35.7403 4.4753
2 22 200.2602 137.2151 72.2602 9.2151
6 2 53.0816 155.3833 -74.9184 27.3833
7 5 89.5630 160.0309 -38.4370 32.0309
8 10 126.0252 164.6782 -1.9748 36.6782
9 15 162.6010 169.3190 34.6010 41.3190
10 20 199.0750 173.9321 71.0750 45.9321
14 1 51.9634 192.1880 -76.0366 64.1880
15 3 88.4532 196.7857 -39.5468 68.7857
16 7 124.9335 201.5070 -3.0665 73.5070
17 13 161.4258 206.0458 33.4258 78.0458
18 17 197.9716 210.6846 69.9716 82.6846

NOTE: Units are pixels. Offsets are computed by equation (50). Keys are the order in which the spots were found. See Figure 18.














Table 5. Spot distance from origin in undeformed image.



COL 1 2 3 4 5

ROW 1 X: 58.0101 58.0481 57.9599 57.9856 58.0635
Y: 57.9660 93.0576 128.0263 162.9875 198.0315

COL 1 2 3 4 5

ROW 2 X: 93.0079 93.0216 93.0284 92.9964 92.9969
Y: 58.0056 92.9791 128.0002 163.0297 198.0169

COL 1 2 3 4 5

ROW 3 X: 127.9832 127.9396 128.7025 127.9581 128.0099
Y: 57.9801 92.9724 127.9873 162.9798 198.0273

COL 1 2 3 4 5

ROW 4 X: 163.0518 163.0107 163.0010 162.9653 163.0027
Y: 57.9950 92.9915 127.9894 163.0147 198.0136

COL 1 2 3 4 5

ROW 5 X: 197.9896 197.9616 198.0660 197.9763 197.9936
Y: 58.0041 93.0182 128.0575 162.9531 197.9982

NOTE: Units are pixels. The origin is the upper left corner of the
image. See Figure 18.














Table 6. Spot distance from origin in deformed image.


COL 1 2 3 4 5

ROW 1 X: 229.4016 196.6288 163.9343 131.2724 98.5336
Y: 100.8559 84.0388 67.1092 50.2682 33.4158

COL 1 2 3 4 5

ROW 2 X: 209.4617 176.7262 144.0612 111.3736 78.6761
Y: 131.8162 114.9435 98.0816 81.1960 64.3727

COL 1 2 3 4 5

ROW 3 X: 189.5733 156.8817 123.7393 91.4833 58.7113
Y: 162.8025 145.8710 129.6776 112.1065 95.2677

COL 1 2 3 4 5

ROW 4 X: 169.6761 136.9841 104.3150 71.5500 38.9059
Y: 193.7303 176.8253 160.0107 143.1225 126.2522

COL 1 2 3 4 5

ROW 5 X: 149.7807 117.1079 84.3163 51.6854 18.9873
Y: 224.6135 207.7552 190.9335 174.0585 157.2419

NOTE: Units are pixels. The origin is the upper left corner of the
image. See Figure 18.













Table 7. Spot distance from icon in undeformed image.


COL 1 2 3 4 5

ROW 1 X: -70.5459 -70.5813 -70.7425 -70.7899 -70.7852
Y: -70.1690 -35.0773 -0.1089 34.8523 69.8964

COL 1 2 3 4 5

ROW 2 X: -35.5483 -35.6077 -35.6740 -35.7793 -35.8519
Y: -70.0562 -35.0827 -0.0617 34.9676 69.9548

COL 1 2 3 4 5

ROW 3 X: -0.5729 -0.6897 0.0000 -0.8176 -0.8390
Y: -70.0086 -35.0164 0.0000 34.9909 70.0384

COL 1 2 3 4 5

ROW 4 X: 34.4955 34.3813 34.2984 34.1896 34.1538
Y: -69.9203 -34.9240 0.0738 35.0989 70.0978

COL 1 2 3 4 5

ROW 5 X: 69.4332 69.3321 69.3632 69.2011 69.1446
Y: -69.8383 -34.8243 0.2151 35.1105 70.1556

NOTE: Units are pixels. Spacing is 35 pixels in x and y. See
Figure 18.













Table 8. Spot distance from icon in deformed image.


COL 1 2 3 4 5

ROW 1 X: -71.4984 -72.5702 -73.7772 -74.9184 -76.0366
Y: -82.9650 -46.1449 -9.3470 27.3833 64.1880

COL 1 2 3 4 5

ROW 2 X: -34.9700 -36.1081 -37.2667 -38.4370 -39.5468
Y: -78.2944 -41.4839 -4.7419 32.0309 68.7857

COL 1 2 3 4 5

ROW 3 X: 1.5597 0.3497 0.0000 -1.9748 -3.0665
Y: -73.6814 -36.8853 0.0000 36.6782 73.5070

COL 1 2 3 4 5

ROW 4 X: 38.0404 36.8545 35.7403 34.6010 33.4258
Y: -69.0353 -32.2502 4.4753 41.3190 78.0458

COL 1 2 3 4 5

ROW 5 X: 74.4800 73.3280 72.2602 71.0750 69.9716
Y: -64.3716 -27.6239 9.2151 45.9321 82.6846

NOTE: Units are pixels. See Figure 18.










checks:

1. same number of rows in each image

2. same number of columns in each image

3. same number of spots in each image

4. rows times columns equal number of spots.

Of course, if any check fails, the images are "out of context." If strains are large enough, context similarities between the images cannot be recognized and an error message is printed. Assuming large local rotation does not occur in the material, the chance of recognizing a similar context between the two patterns when none exists is very remote. Thus, if no check fails, the approximation algorithm establishes a one-to-one relationship, or "context mapping," between spots in the two images.
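In code, the four checks reduce to simple count comparisons; a minimal sketch with hypothetical bookkeeping fields:

```python
def in_context(undeformed, deformed):
    return (undeformed.n_rows == deformed.n_rows           # check 1
            and undeformed.n_cols == deformed.n_cols       # check 2
            and undeformed.n_spots == deformed.n_spots     # check 3
            and undeformed.n_rows * undeformed.n_cols
                == undeformed.n_spots)                     # check 4
```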
















CHAPTER III
IMAGE GENERATION AND ANALYSIS



III.1 Synthetic Images

Computer generated images, or "synthetic" images, were used as

the primary development tool and to study the influence of various

parameters. The options data file shown in Table 9 was used to gen-

erate these 256X256 pixel images.

Motion is limited to affine transformations plus rigid-body

rotation about the image center. The transformations are



    x = c1X + c2Y + c3
                                                           (51)
    y = c4X + c5Y + c6
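The following sketch applies equation (51) plus the rigid-body rotation to one spot center; the coefficient values mirror the Table 9 example, and the rotation sign convention is an assumption.

```python
import math

def move_spot(X, Y, c=(1.05, 0.05, -10.0, 0.05, 1.05, -10.0),
              theta_deg=120.0, center=(128.0, 128.0)):
    x = c[0] * X + c[1] * Y + c[2]             # equation (51)
    y = c[3] * X + c[4] * Y + c[5]
    t = math.radians(theta_deg)                # rotation about image center
    dx, dy = x - center[0], y - center[1]
    return (center[0] + dx * math.cos(t) - dy * math.sin(t),
            center[1] + dx * math.sin(t) + dy * math.cos(t))
```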



There are three image output options available. The "BINARY"

option is the machine representation of the image. "DECIMAL" is the

machine version converted to a readable form (see, for example,

Figure 4b). "PLOT" displays a thresholded image, suitable for hard

copy, on a graphics terminal (see, for example, Figure 15).

"U123.DAT," "NIMG.DAT," "D123.DAT," and "DIMG.DAT" are user-defined

file names. The image generator automatically reads the input file

and generates the specified output thereby allowing the user a choice

of interactive or batch processing.










Table 9. Input file for synthetic image generator.



IMAGE GENERATOR INPUT DATA

SPOT GEOMETRY

   RADIUS   XSPACING   YSPACING
    4.00      35.00      35.00

PATTERN SIZE

   XCOLS   YROWS
     5       5

GRAY LEVELS

   BLACK   WHITE
   00000   00200

NOISE DATA

   DISTRIBUTION TYPE:  N       (N--NORMAL, U--UNIFORM)
   STANDARD DEVIATION: 007.000

IMAGE MOTION

   MOTION TYPE: 2
      1--POLYNOMIAL ONLY
      2--POLYNOMIAL AND ROTATION
      3--ROTATION ONLY

   POLYNOMIAL
               X           Y        TRANSLATION
      x:   +1.05000    +0.05000     -010.0000
      y:   +0.05000    +1.05000     -010.0000

   ROTATION (DEGREES): +120.000

IMAGE OUTPUT

   FORMAT     UNDEFORMED   DEFORMED    Y/N
   BINARY:    U123.DAT     D123.DAT     Y
   DECIMAL:   NIMG.DAT     DIMG.DAT     N
   PLOT:      *SCREEN*     *SCREEN*     N











III.2 Image Analysis

Similar to image generation, image analysis begins with an op-

tions file which is shown in Table 10. If "USE THESE FILE NAMES?" is

answered no, the user is automatically prompted for the file name.

Since no other information is required by the program, the user is

free to choose interactive or batch processing.






Table 10. Image analysis options file.



******** IMAGE ANALYSIS INPUT/OUTPUT OPTIONS ********

INPUT SPECIFICATIONS          UNDEFORMED   DEFORMED
                              ----------   --------
   IMAGE FILE NAME:           U123.DAT     D123.DAT
   USE THESE FILE NAMES?:     N            N

IMAGE OUTPUT                  UNDEFORMED   DEFORMED
                              ----------   --------
   UNORDERED SPOT DATA:       Y            Y
   DISTANCE FROM ORIGIN:      Y            Y
   DISTANCE FROM ICON:        Y            Y

DISPLACEMENT ANALYSIS         OUTPUT
                              ------
   TOTAL DISPLACEMENT:        Y
   DISPLACEMENT GRADIENTS:    Y
   LAGRANGIAN STRAIN:         Y
   TAYLOR SERIES STRAIN:      Y

MOTION ANALYSIS               OUTPUT
                              ------
   DEFORMATION GRADIENT:      Y
   GREEN DEFORM. TENSOR:      Y
   RIGHT STRETCH TENSOR:      Y
   ROTATION TENSOR:           Y
   LAGRANGIAN STRAIN:         Y









After segmentation, feature extraction, and context recognition,

automatic processing continues with motion analysis. The motion of

each spot is resolved and the displacement is computed by equation

(3) (see Table 11). Next, the deformation and displacement gradients

at each spot are computed by differentiating the functions used to

describe motion and displacement.



Table 11. Displacement of each spot.



COL 1 2 3 4 5

ROW 1 X: 171.3916 138.5808 105.9744 73.2868 40.4701
Y: 42.8899 -9.0188 -60.9171 -112.7193 -164.6158

COL 1 2 3 4 5

ROW 2 X: 116.4538 83.7046 51.0328 18.3772 -14.3208
Y: 73.8106 21.9643 -29.9186 -81.8337 -133.6442

COL 1 2 3 4 5

ROW 3 X: 61.5900 28.9421 -4.9632 -36.4747 -69.2986
Y: 104.8223 52.8985 1.6903 -50.8733 -102.7596

COL 1 2 3 4 5

ROW 4 X: 6.6243 -26.0267 -58.6861 -91.4154 -124.0968
Y: 135.7353 83.8338 32.0212 -19.8922 -71.7614

COL 1 2 3 4 5

ROW 5 X: -48.2088 -80.8538 -113.7497 -146.2914 -179.0063
Y: 166.6094 114.7370 62.8761 11.1054 -40.7563

NOTE: Units are pixels. Compare images in Figure 18. Displacements
are computed using the data in Tables 5 and 6 in equation (3).


The choice of a mapping model depends, of course, on the

application. Intuition, a priori knowledge, elasticity theory,










numerical efficiency, etc. are used to select the approximating func-

tions. In this study, simple polynomials, linear or second order,

adequately approximate the motion and displacement in the synthetic

and real images. The models are linear



    x = c11X + c21 + c31Y

    y = c12X + c22 + c32Y
                                                           (52)
    u = c13X + c23 + c33Y

    v = c14X + c24 + c34Y

and second order

    x = c11X² + c21X + c31 + c41Y² + c51Y + c61XY

    y = c12X² + c22X + c32 + c42Y² + c52Y + c62XY
                                                           (53)
    u = c13X² + c23X + c33 + c43Y² + c53Y + c63XY

    v = c14X² + c24X + c34 + c44Y² + c54Y + c64XY



These models are fitted to a small neighborhood of spots and differentiated to obtain the displacement and deformation gradients. Typically, the neighborhoods are 3X3, 5X5, or 7X7 and the gradients are computed for the center spot. The exceptions are spots near the edge of the pattern, whose gradients are computed from one-sided neighborhoods in the manner of a forward or backward difference.

For both models, the coefficients are determined by least squares evaluation. Based on a modified form of the Conte and de Boor [74] notation, the normal equations are

    Φᵀ c = f                                               (54)



where

    Φ = [ X1   X2   ...  XN  ]
        [  1    1   ...   1  ]      Linear
        [ Y1   Y2   ...  YN  ]
                                                           (55)
    Φ = [ X1²   X2²   ...  XN²  ]
        [ X1    X2    ...  XN   ]
        [  1     1    ...   1   ]   Second
        [ Y1²   Y2²   ...  YN²  ]   Order
        [ Y1    Y2    ...  YN   ]
        [ X1Y1  X2Y2  ...  XNYN ]

    c = [ c11  ...  c14 ]               c = [ c11  ...  c14 ]
        [ c21  ...  c24 ]   Linear,         [  .         .  ]   Second
        [ c31  ...  c34 ]                   [ c61  ...  c64 ]   Order
                                                           (56)
    f = [ x1   y1   u1   v1 ]
        [  .    .    .    . ]
        [ xN   yN   uN   vN ]

and N is the number of spots in the neighborhood.



Letting

    A = Φ Φᵀ    and    z = Φ f                             (57)

equations (54) become

    A c = z                                                (58)

which are solvable by Gauss-Jordan elimination.










Hornbeck [75] points out that Gauss-Jordan elimination may be used to solve equations (58), but unfortunately the set is extremely ill-conditioned. Taking his advice, double precision arithmetic is used to cope with the large variation in the magnitudes of the coefficients in any given row. In addition to his suggestions, Φ and f are scaled before solution and the c's are reconverted during gradient computation.

For a variety of programming reasons, solution of equations (58) is not performed directly by Gauss-Jordan elimination. Instead, the inverse of A is computed using Hornbeck's [75] Gauss-Jordan matrix inversion subroutine, with the addition of appropriate error traps, and c is solved by

    c = A⁻¹ z                                              (59)
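A minimal numpy sketch of the linear fit for one neighborhood is given below; lstsq stands in for the scaled, double-precision Gauss-Jordan inversion purely for brevity, and the equivalent normal-equation route of equations (57)-(59) is noted in a comment.

```python
import numpy as np

def fit_linear(X, Y, x, y):
    # X, Y: undeformed centroids; x, y: deformed centroids (same order)
    X, Y, x, y = map(np.asarray, (X, Y, x, y))
    u, v = x - X, y - Y                        # displacements, equation (3)
    phi_T = np.column_stack([X, np.ones_like(X), Y])   # equation (55), transposed
    f = np.column_stack([x, y, u, v])                  # equation (56)
    # Normal-equation route: c = inv(phi_T.T @ phi_T) @ (phi_T.T @ f)
    c, *_ = np.linalg.lstsq(phi_T, f, rcond=None)      # 3x4 coefficient matrix
    # Row 0 of c multiplies X, row 2 multiplies Y, so the gradients are:
    F = np.array([[c[0, 0], c[2, 0]],          # deformation gradient, eq (2)
                  [c[0, 1], c[2, 1]]])
    J = np.array([[c[0, 2], c[2, 2]],          # displacement gradient, eq (4)
                  [c[0, 3], c[2, 3]]])
    return c, F, J
```

Fed the nine spots of equations (60) and (61), this should reproduce the c of equation (62) to within round-off.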



How accurate are the values of c and how well do equations (52)

or (53) fit the data? Debugging experience indicates the number of

reliable digits (numerically speaking, not experimentally) runs in

the teens. In this study the solutions are known, but if they were

unknown, goodness-of-fit and the contribution of each term could be

easily tested by the methods suggested by Miller and Freund [63].

The example of Figure 18 is continued with the calculation of

c. Assume the linear model of equation (52) and a 3X3 neighborhood

centered at spot (3,3). A substitution of data from Table 5 into

equation (55) produces













    Φᵀ = [  93.0216   1.0000    92.9791 ]
         [  93.0284   1.0000   128.0002 ]
         [  92.9964   1.0000   163.0297 ]
         [ 127.9396   1.0000    92.9724 ]
         [ 128.7025   1.0000   127.9873 ]                  (60)
         [ 127.9581   1.0000   162.9798 ]
         [ 163.0107   1.0000    92.9915 ]
         [ 163.0010   1.0000   127.9894 ]
         [ 162.9653   1.0000   163.0147 ]


Likewise, data from Tables 6 and 11 in equation (57) lead to

    f = [ 176.7262   114.9435    83.7046    21.9643 ]
        [ 144.0612    98.0816    51.0328   -29.9186 ]
        [ 111.3736    81.1960    18.3772   -81.8337 ]
        [ 156.8817   145.8710    28.9421    52.8985 ]
        [ 123.7393   129.6776    -4.9632     1.6903 ]      (61)
        [  91.4833   112.1065   -36.4747   -50.8733 ]
        [ 136.9841   176.8253   -26.0267    83.8338 ]
        [ 104.3150   160.0107   -58.6861    32.0212 ]
        [  71.5500   143.1225   -91.4154   -19.8922 ]


The solution for c is

    c = [  -0.56849    0.88427   -1.56849    0.88427 ]
        [ 316.52130   77.50840  316.52130   77.50840 ]     (62)
        [  -0.93425   -0.48175   -0.93425   -1.48175 ]


The motion functions (i.e., the "mathematical" maps) in equations (52) become

    x = -0.56849X + 316.52130 - 0.93425Y
                                                           (63)
    y =  0.88427X +  77.50840 - 0.48175Y









and the displacement maps in equations (52) become

    u = -1.56849X + 316.52130 - 0.93425Y
                                                           (64)
    v =  0.88427X +  77.50840 - 1.48175Y


The deformation and displacement gradients, computed by equations (2) and (4), respectively, become

    F = [ -0.56849  -0.93425 ]
        [  0.88427  -0.48175 ]                             (65)

    [∂ui/∂Xj] = [ -1.56849  -0.93425 ]
                [  0.88427  -1.48175 ]                     (66)


Strain, computed by equations (11) or (12), is

    E = [ 0.05255  0.05256 ]
        [ 0.05256  0.05246 ]                               (67)

By the Taylor series method of equation (13), strain is

    E = [ 0.05124  0.04763 ]
        [ 0.04763  0.05115 ]                               (68)






Strain computed according to small displacement theory is, in this case, quite inaccurate. By equations (14),

    e = [ -1.56849  -0.02499 ]
        [ -0.02499  -1.48175 ]                             (69)

Rigid-body rotation is computed as follows. By equation (8), the Green deformation tensor is

    C = [ 1.10511  0.10511 ]
        [ 0.10511  1.10491 ]                               (70)


The eigenvalues of C are

    Cp = [ 1.21009  0.00000 ]
         [ 0.00000  0.99990 ]                              (71)

and Cp^(1/2) is

    Cp^(1/2) = [ 1.10004  0.00000 ]
               [ 0.00000  0.99995 ]                        (72)

Since Up = Cp^(1/2), then

    U = [ 1.05005  0.05005 ]
        [ 0.05005  1.04996 ]                               (73)





Rotation tensor R, computed by equation (17), is

    R = [ -0.50011  -0.86596 ]
        [  0.86596  -0.50011 ]



The remainder of the output for the example in Figure 18 is

shown in Tables 12 through 19.
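The decomposition just performed can be sketched compactly; eigh supplies the principal values Cp, and R follows from F = RU.

```python
import numpy as np

def polar_decompose(F):
    F = np.asarray(F, dtype=float)
    C = F.T @ F                        # Green deformation tensor, equation (8)
    w, Q = np.linalg.eigh(C)           # principal values and axes of C
    U = Q @ np.diag(np.sqrt(w)) @ Q.T  # right stretch tensor U = C^(1/2)
    R = F @ np.linalg.inv(U)           # rotation tensor, equation (17)
    return C, U, R

F = np.array([[-0.56849, -0.93425],
              [ 0.88427, -0.48175]])   # equation (65)
C, U, R = polar_decompose(F)           # reproduces equations (70)-(73) and R
```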















Table 12. Displacement gradients.


COL           1          2          3          4          5

ROW 1 ∂u/∂X: -1.56849   -1.56849   -1.56849   -1.56849   -1.56849
      ∂v/∂X:  0.88427    0.88427    0.88427    0.88427    0.88427
      ∂u/∂Y: -0.93425   -0.93425   -0.93425   -0.93425   -0.93425
      ∂v/∂Y: -1.48175   -1.48175   -1.48175   -1.48175   -1.48175

ROW 2 ∂u/∂X: -1.56849   -1.56849   -1.56849   -1.56849   -1.56849
      ∂v/∂X:  0.88427    0.88427    0.88427    0.88427    0.88427
      ∂u/∂Y: -0.93425   -0.93425   -0.93425   -0.93425   -0.93425
      ∂v/∂Y: -1.48175   -1.48175   -1.48175   -1.48175   -1.48175

ROW 3 ∂u/∂X: -1.56849   -1.56849   -1.56849   -1.56849   -1.56849
      ∂v/∂X:  0.88427    0.88427    0.88427    0.88427    0.88427
      ∂u/∂Y: -0.93425   -0.93425   -0.93425   -0.93425   -0.93425
      ∂v/∂Y: -1.48175   -1.48175   -1.48175   -1.48175   -1.48175

ROW 4 ∂u/∂X: -1.56849   -1.56849   -1.56849   -1.56849   -1.56849
      ∂v/∂X:  0.88427    0.88427    0.88427    0.88427    0.88427
      ∂u/∂Y: -0.93425   -0.93425   -0.93425   -0.93425   -0.93425
      ∂v/∂Y: -1.48175   -1.48175   -1.48175   -1.48175   -1.48175

ROW 5 ∂u/∂X: -1.56849   -1.56849   -1.56849   -1.56849   -1.56849
      ∂v/∂X:  0.88427    0.88427    0.88427    0.88427    0.88427
      ∂u/∂Y: -0.93425   -0.93425   -0.93425   -0.93425   -0.93425
      ∂v/∂Y: -1.48175   -1.48175   -1.48175   -1.48175   -1.48175

NOTE: Correct values are ∂u/∂X = -1.56830, ∂v/∂X = 0.88433, ∂u/∂Y = -0.93433, ∂v/∂Y = -1.48170.











Table 13. Lagrangian strain from displacement gradients.



COL 1 2 3 4 5

ROW 1 E11: 52554. 52554. 52554. 52554. 52554.
E12: 52557. 52557. 52557. 52557. 52557.
E22: 52459. 52459. 52459. 52459. 52459.

COL 1 2 3 4 5

ROW 2 E11: 52554. 52554. 52554. 52554. 52554.
E12: 52557. 52557. 52557. 52557. 52557.
E22: 52459. 52459. 52459. 52459. 52459.

COL 1 2 3 4 5

ROW 3 E11: 52554. 52554. 52554. 52554. 52554.
E12: 52557. 52557. 52557. 52557. 52557.
E22: 52459. 52459. 52459. 52459. 52459.

COL 1 2 3 4 5

ROW 4 E11: 52554. 52554. 52554. 52554. 52554.
E12: 52557. 52557. 52557. 52557. 52557.
E22: 52459. 52459. 52459. 52459. 52459.

COL 1 2 3 4 5

ROW 5 E11: 52554. 52554. 52554. 52554. 52554.
E12: 52557. 52557. 52557. 52557. 52557.
E22: 52459. 52459. 52459. 52459. 52459.

NOTE: Units are micro inches-per-inch. Correct values are Eij = 52,500.












Table 14. Strain from Taylor series.


COL 1 2 3 4 5

ROW 1 E11: 51241. 51241. 51241. 51241. 51241.
E12: 47634. 47634. 47634. 47634. 47634.
E22: 51151. 51151. 51151. 51151. 51151.

COL 1 2 3 4 5

ROW 2 E11: 51241. 51241. 51241. 51241. 51241.
E12: 47634. 47634. 47634. 47634. 47634.
E22: 51151. 51151. 51151. 51151. 51151.

COL 1 2 3 4 5

ROW 3 E11: 51241. 51241. 51241. 51241. 51241.
E12: 47634. 47634. 47634. 47634. 47634.
E22: 51151. 51151. 51151. 51151. 51151.

COL 1 2 3 4 5

ROW 4 E11: 51241. 51241. 51241. 51241. 51241.
E12: 47634. 47634. 47634. 47634. 47634.
E22: 51151. 51151. 51151. 51151. 51151.

COL 1 2 3 4 5

ROW 5 E11: 51241. 51241. 51241. 51241. 51241.
E12: 47634. 47634. 47634. 47634. 47634.
E22: 51151. 51151. 51151. 51151. 51151.

NOTE: Units are micro inches-per-inch. Correct values are E11 = E22 = 51,190, E12 = E21 = 47,582.












Table 15. Deformation gradient.


COL 1 2 3 4 5

ROW 1 F11: -0.56849 -0.56849 -0.56849 -0.56849 -0.56849
F12: -0.93425 -0.93425 -0.93425 -0.93425 -0.93425
F21: 0.88427 0.88427 0.88427 0.88427 0.88427
F22: -0.48175 -0.48175 -0.48175 -0.48175 -0.48175

COL 1 2 3 4 5

ROW 2 F11: -0.56849 -0.56849 -0.56849 -0.56849 -0.56849
F12: -0.93425 -0.93425 -0.93425 -0.93425 -0.93425
F21: 0.88427 0.88427 0.88427 0.88427 0.88427
F22: -0.48175 -0.48175 -0.48175 -0.48175 -0.48175

COL 1 2 3 4 5

ROW 3 F11: -0.56849 -0.56849 -0.56849 -0.56849 -0.56849
F12: -0.93425 -0.93425 -0.93425 -0.93425 -0.93425
F21: 0.88427 0.88427 0.88427 0.88427 0.88427
F22: -0.48175 -0.48175 -0.48175 -0.48175 -0.48175

COL 1 2 3 4 5

ROW 4 F11: -0.56849 -0.56849 -0.56849 -0.56849 -0.56849
F12: -0.93425 -0.93425 -0.93425 -0.93425 -0.93425
F21: 0.88427 0.88427 0.88427 0.88427 0.88427
F22: -0.48175 -0.48175 -0.48175 -0.48175 -0.48175

COL 1 2 3 4 5

ROW 5 F11: -0.56849 -0.56849 -0.56849 -0.56849 -0.56849
F12: -0.93425 -0.93425 -0.93425 -0.93425 -0.93425
F21: 0.88427 0.88427 0.88427 0.88427 0.88427
F22: -0.48175 -0.48175 -0.48175 -0.48175 -0.48175


NOTE: Correct values are F11 = -0.56830, F12 = -0.93433, F21 = 0.88433, F22 = -0.48170.













Table 16. Green deformation tensor


COL 1 2 3 4 5

ROW 1 C11: 1.10511 1.10511 1.10511 1.10511 1.10511
C12: 0.10511 0.10511 0.10511 0.10511 0.10511
C22: 1.10492 1.10492 1.10492 1.10492 1.10492

COL 1 2 3 4 5

ROW 2 C11: 1.10511 1.10511 1.10511 1.10511 1.10511
C12: 0.10511 0.10511 0.10511 0.10511 0.10511
C22: 1.10492 1.10492 1.10492 1.10492 1.10492

COL 1 2 3 4 5

ROW 3 C11: 1.10511 1.10511 1.10511 1.10511 1.10511
C12: 0.10511 0.10511 0.10511 0.10511 0.10511
C22: 1.10492 1.10492 1.10492 1.10492 1.10492

COL 1 2 3 4 5

ROW 4 C11: 1.10511 1.10511 1.10511 1.10511 1.10511
C12: 0.10511 0.10511 0.10511 0.10511 0.10511
C22: 1.10492 1.10492 1.10492 1.10492 1.10492

COL 1 2 3 4 5

ROW 5 C11: 1.10511 1.10511 1.10511 1.10511 1.10511
C12: 0.10511 0.10511 0.10511 0.10511 0.10511
C22: 1.10492 1.10492 1.10492 1.10492 1.10492

NOTE: Correct values are C11 = C22 = 1.10500, C12 = 0.10500.














Table 17. Right stretch tensor.


COL 1 2 3 4 5

ROW 1 U11: 1.05005 1.05005 1.05005 1.05005 1.05005
U12: 0.05005 0.05005 0.05005 0.05005 0.05005
U22: 1.04996 1.04996 1.04996 1.04996 1.04996

COL 1 2 3 4 5

ROW 2 U11: 1.05005 1.05005 1.05005 1.05005 1.05005
U12: 0.05005 0.05005 0.05005 0.05005 0.05005
U22: 1.04996 1.04996 1.04996 1.04996 1.04996

COL 1 2 3 4 5

ROW 3 U11: 1.05005 1.05005 1.05005 1.05005 1.05005
U12: 0.05005 0.05005 0.05005 0.05005 0.05005
U22: 1.04996 1.04996 1.04996 1.04996 1.04996

COL 1 2 3 4 5

ROW 4 U11: 1.05005 1.05005 1.05005 1.05005 1.05005
U12: 0.05005 0.05005 0.05005 0.05005 0.05005
U22: 1.04996 1.04996 1.04996 1.04996 1.04996

COL 1 2 3 4 5

ROW 5 U11: 1.05005 1.05005 1.05005 1.05005 1.05005
U12: 0.05005 0.05005 0.05005 0.05005 0.05005
U22: 1.04996 1.04996 1.04996 1.04996 1.04996

NOTE: Correct values are U11 = U22 = 1.05000, U12 = 0.05000.












Table 18. Rotation tensor.


COL 1 2 3 4 5

ROW 1 R11: -0.50011 -0.50011 -0.50011 -0.50011 -0.50011
R12: -0.86596 -0.86596 -0.86596 -0.86596 -0.86596
R21: 0.86596 0.86596 0.86596 0.86596 0.86596
R22: -0.50011 -0.50011 -0.50011 -0.50011 -0.50011

COL 1 2 3 4 5

ROW 2 R11: -0.50011 -0.50011 -0.50011 -0.50011 -0.50011
R12: -0.86596 -0.86596 -0.86596 -0.86596 -0.86596
R21: 0.86596 0.86596 0.86596 0.86596 0.86596
R22: -0.50011 -0.50011 -0.50011 -0.50011 -0.50011

COL 1 2 3 4 5

ROW 3 R11: -0.50011 -0.50011 -0.50011 -0.50011 -0.50011
R12: -0.86596 -0.86596 -0.86596 -0.86596 -0.86596
R21: 0.86596 0.86596 0.86596 0.86596 0.86596
R22: -0.50011 -0.50011 -0.50011 -0.50011 -0.50011

COL 1 2 3 4 5

ROW 4 R11: -0.50011 -0.50011 -0.50011 -0.50011 -0.50011
R12: -0.86596 -0.86596 -0.86596 -0.86596 -0.86596
R21: 0.86596 0.86596 0.86596 0.86596 0.86596
R22: -0.50011 -0.50011 -0.50011 -0.50011 -0.50011

COL 1 2 3 4 5

ROW 5 R11: -0.50011 -0.50011 -0.50011 -0.50011 -0.50011
R12: -0.86596 -0.86596 -0.86596 -0.86596 -0.86596
R21: 0.86596 0.86596 0.86596 0.86596 0.86596
R22: -0.50011 -0.50011 -0.50011 -0.50011 -0.50011


NOTE: Correct values are R11 = R22 = -0.50000, R12 = -0.86603, R21 = 0.86603.













Table 19. Lagrangian strain from deformation gradient.


COL 1 2 3 4 5

ROW 1 E11: 52554. 52554. 52554. 52554. 52554.
E12: 52557. 52557. 52557. 52557. 52557.
E22: 52459. 52459. 52459. 52459. 52459.

COL 1 2 3 4 5

ROW 2 E11: 52554. 52554. 52554. 52554. 52554.
E12: 52557. 52557. 52557. 52557. 52557.
E22: 52459. 52459. 52459. 52459. 52459.

COL 1 2 3 4 5

ROW 3 E11: 52554. 52554. 52554. 52554. 52554.
E12: 52557. 52557. 52557. 52557. 52557.
E22: 52459. 52459. 52459. 52459. 52459.

COL 1 2 3 4 5

ROW 4 E11: 52554. 52554. 52554. 52554. 52554.
E12: 52557. 52557. 52557. 52557. 52557.
E22: 52459. 52459. 52459. 52459. 52459.

COL 1 2 3 4 5

ROW 5 E11: 52554. 52554. 52554. 52554. 52554.
E12: 52557. 52557. 52557. 52557. 52557.
E22: 52459. 52459. 52459. 52459. 52459.

NOTE: Units are micro inches-per-inch. Correct values are Eij = 52,500.
















CHAPTER IV
EXPERIMENTS



IV.1 Test Specimen

The prismatic beam shown in Figure 20 was used for all experi-

ments. A clear plastic, CR-39, was chosen for its relatively low

modulus of elasticity and capacity for large, elastic strain.

The 11X11 test pattern on the beam was generated by the synthe-

tic image generator and applied by contact printing. First, a hard

copy of the pattern was photographed using Kodalith film. After

coating the beam with a buffer of polyurethane and naphtha, a Liquid

Light emulsion was applied, contact printed, and chemically fixed

(see Figure 21).




IV.2 Test Equipment

Three experimental set-ups were configured; two were for strain

tests and the third for rigid-body motion.

The strain set-up was a typical photoelastic load frame with

special fixtures (see Figure 22). The fixtures were for four-point

bending (see Figure 23) and cantilever beam (see Figure 24) experi-

ments.

The equipment shown in Figure 25 was used for the rigid-body

motion tests. A photocopy of a protractor was taped to the back of

the lens holder and rotation was measured relative to the tip of the






















[Figure 20: dimensioned drawing of the beam, showing the 11X11 spot pattern and the #28-drill mounting holes.]

Figure 20. Beam fabricated from CR-39.


[Figure 21: the contact-printed spot pattern on the beam.]

Figure 21. Spot pattern on beam.


Figure 22. Experimental set-up of load frame.


Figure 23. Set-up for four-point bending.






































Figure 24. Set-up for cantilever beam.


Figure 25. Rigid-body motion set-up.










clamped pointer. Measurements were accurate to about 1 degree. A

dial indicator measured translation in the horizontal direction with

an accuracy of 0.0005 inch.

Since image processing equipment was unavailable at the

University of Florida, the equipment for the rigid-body experiments

was transported to USC where the digital images were recorded.



IV.3 Experimental Procedure

Unfortunately, the strain tests were not done because the load frame was too large to transport to USC. The second-order model in equation (53) was intended for these tests.

The equipment shown in Figure 25 and an image processing system similar to the one shown in Figure 3 (based on the IBM PC) were used for the rigid-body experiments. The CCD camera was a Sony model XCM-38 with a zoom lens, and the digitizer board was an eight-bit Datacube "Frame Grabber" with 385 rows and 374 columns of pixels.

The set-up included several routine checks. First, the alignment between the rail, table, and camera was checked to minimize experimental error. Then, the zoom lens was adjusted to enlarge the pattern image as much as possible. Resolution in the horizontal direction was measured at 358 pixels per inch.

Lighting was adjusted to provide the usual illumination for tests in the USC lab, but the transparent specimen caused some lighting problems: contrast was impaired by the light transmitted through the beam.








After set-up, images were digitized and stored on floppy disk.

All images were transferred to magnetic tape via USC's VAX-11/780

computer for transport to the University of Florida.
















CHAPTER V
ANALYSIS OF RESULTS



The analysis of results has been divided into two sections:

numerical and experimental. The numerical section summarizes synthe-

tic image work and the experimental section summarizes the analysis

of the images digitized at USC.



V.1 Numerical

The analysis of synthetic images included rigid-body motion,

homogeneous strain, combined motion, and the influence of noise, gray

levels, and spot radius. To ensure a good random sample, the results

from a variety of spot spacings were averaged. The pattern was 5X5

with the icon being spot (3,3). Except as noted, the gray level

difference was 200, and the zero-mean noise was Gaussian with a

standard deviation of 7.0 gray levels. The absolute values of noisy

pixels with values less than zero were used instead of the negative

gray levels.
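The noise model is simple enough to state in a few lines; a sketch, with numpy's default generator standing in for whatever generator the original code used:

```python
import numpy as np

def add_noise(image, sigma=7.0, seed=None):
    # zero-mean Gaussian noise, sigma in gray levels; negative results
    # are folded to their absolute values, as described above
    rng = np.random.default_rng(seed)
    return np.abs(image + rng.normal(0.0, sigma, np.shape(image)))
```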

In the following tables, the average absolute error is the average of the absolute values of the errors. For example, if E11=E12=E22=0 and the computed values are E11=100μ, E12=-150μ, and E22=50μ, the average absolute error is 100μ.










Standard deviation, also in the tables, is the standard deviation of the computed values. For the strain example above, the standard deviation is 132μ.

For translations of various amounts in the x and y directions,

the average absolute error was 0.008 pixels with a standard deviation

of 0.011 pixels. Translations near whole pixels had slightly less

error. (Equation (38) suggested a spatial sensitivity.)

Rigid-body rotation was about the image center which was approx-

imately two-thirds of a pixel below the pattern center. Rotations

ranged from 0 to 180 degrees, and the results, computed using the

elements of the rotation tensor, were very consistent with an average

absolute error of 0.0039 degrees and a standard deviation of 0.0008

degrees.

Several strain cases were investigated; they are summarized in Table 20. Normal strains were E11 = E22 with E12 = E21 = 0; shear strains were E11 = E22 = 0 with E12 = E21; and combined strains were E11 = E22 = E12 = E21. See equation (12) for strain definitions.

Noise had a significant effect on results. As suggested by

equation (38), increasing SNR decreased translation and strain error

(see Table 21). Framing failures began occurring below SNR=100.

Gray level was another significant variable because of the SNR

and the digitization of illumination intensity. For constant noise,

increasing gray level difference increased the SNR and thereby in-

creased accuracy. With a noise standard deviation of 7.00 gray

levels, spots were frequently unidentifiable below a gray level













Table 20. Summary of strain analysis for synthetic images.

            Strain
            Displacement    Strain     Ave. Abs.   Standard
  Case      Gradient        Correct    Error       Deviation
            ∂ui/∂Xj

  Normal     0.000010           10        17          10
             0.000100          100        17          15
             0.001000         1001        22          27
             0.010000        10050        40          32
             0.100000       105000        40          49

  Shear      0.000010           10        24           0
             0.000100          100        29           0
             0.001000         1000        25           0
             0.010000        10000        33           0
             0.090000a       90000        34           0

  Combined   0.000010           10         3          14
             0.000100          100         7          28
             0.001000         1000        23          26

a Limited by pattern geometry.
NOTE: Rigid-body motion was zero. Strain units are micro inches-per-inch. SNR=816.








Table 21. The effect of SNR on translation and strain calculations.

                 Translation              Strain

           Ave. Abs.   Standard     Ave. Abs.   Standard
   SNR     Error       Deviation    Error       Deviation

    100     0.0049      0.0362         39          58
    200     0.0029      0.0253         19          42
    400     0.0021      0.0176         20          35
    600     0.0016      0.0143         14          32
    800     0.0012      0.0123         10          26
   1000     0.0012      0.0109          8          21
   2500     0.0008      0.0068          5           8
   5000     0.0006      0.0043          3           6
  10000     0.0004      0.0034          2           5
      ∞     0.0001      0.0001          0           0

NOTE: Translations were 10.5 pixels in x and y. No rotation. All strains were Eij=100μ.


Table 22. The effect of gray level difference on translation and strain calculations.

                 Translation              Strain

           Ave. Abs.   Standard     Ave. Abs.   Standard
    H      Error       Deviation    Error       Deviation

    32      0.0257      0.0270         **          **
    64      0.0043      0.0349         66          62
   128      0.0026      0.0172         20          45
   256      0.0009      0.0089          3          16
   512      0.0007      0.0043          5           8
  1024      0.0003      0.0022          4           5
  2048      0.0002      0.0011          2           2
  4096      0.0001      0.0006          1           2

NOTE: Translations were 10.5 pixels in x and y. No rotation. All strains were Eij=100μ.









difference of 32. Below a difference of 64, a strain of Eij=100μ was undetectable (see Table 22).

An increase in gray level difference reduced the error caused by digitization. Without noise, a difference of 16 gray levels produced an average absolute strain error of 12μ (for Eij=100μ) with a standard deviation of 0.031 and, above a difference of 64, the error was nil. However, SNR effects greatly overshadowed digitization error.

The effect of spot radius decreased rapidly with increasing radius (see Table 23). Equation (38) suggested this for small radii and, in fact, almost no change occurred for radii above 10 pixels.





Table 23. The effect of spot radius on translation and strain
calculations.



Translation Strain

Spot Ave. Abs. Standard Ave. Abs. Standard
Radius Error Deviation Error Deviation


2 0.0030 0.0233 29 20
4 0.0018 0.0272 10 32
6 0.0013 0.0085 7 15
8 0.0016 0.0069 8 13
10 0.0020 0.0052 8 19
12 0.0021 0.0059 5 11

NOTE: Translations were 10.5 pixels in x and y. No rotation. All strains were Eij=100μ. SNR=816.


How much VAX 11/750 computer time did the analysis of a 256X256

pixel image require? For a 3X3 spot pattern, CPU time was about 4.9