Citation
Visual Servo Tracking Control via a Lyapunov-Based Approach

Material Information

Title:
Visual Servo Tracking Control via a Lyapunov-Based Approach
Creator:
Hu, Guoqiang
Place of Publication:
[Gainesville, Fla.]
Florida
Publisher:
University of Florida
Publication Date:
Language:
english
Physical Description:
1 online resource (171 p.)

Thesis/Dissertation Information

Degree:
Doctorate ( Ph.D.)
Degree Grantor:
University of Florida
Degree Disciplines:
Mechanical Engineering
Mechanical and Aerospace Engineering
Committee Chair:
Dixon, Warren E.
Committee Members:
Lind, Richard C.
Crane, Carl D.
Burks, Thomas F.
Hutchinson, Seth

Subjects

Subjects / Keywords:
Calibration ( jstor )
Cameras ( jstor )
Coordinate systems ( jstor )
Matrices ( jstor )
Pixels ( jstor )
Quaternions ( jstor )
Robots ( jstor )
Servomotors ( jstor )
Simulations ( jstor )
Trajectories ( jstor )
Mechanical and Aerospace Engineering -- Dissertations, Academic -- UF
adaptive, autonomous, control, nonlinear, robotics, robust, tracking, visual
Genre:
bibliography ( marcgt )
theses ( marcgt )
government publication (state, provincial, territorial, dependent) ( marcgt )
born-digital ( sobekcm )
Electronic Thesis or Dissertation
Mechanical Engineering thesis, Ph.D.

Notes

Abstract:
Recent advances in image processing, computational technology and control theory are enabling visual servo control to become more prevalent in robotics and autonomous systems applications. In this dissertation, visual servo control algorithms and architectures are developed that exploit the visual feedback from a camera system to achieve a tracking or regulation control objective for a rigid-body object (e.g., the end-effector of a robot manipulator, a satellite, an autonomous vehicle) identified by a patch of feature points. The first two chapters present the introduction and background information for this dissertation. In the third chapter, a new visual servo tracking control method for a rigid-body object is developed by exploiting a combination of homography techniques, a quaternion parameterization, adaptive control techniques, and nonlinear Lyapunov-based control methods. The desired trajectory to be tracked is represented by a sequence of images (e.g., a video), which can be taken online or offline by a camera. This controller is singularity-free by using the homography techniques and the quaternion parameterization. In the fourth chapter, a new collaborative visual servo control method is developed to enable a rigid-body object to track a desired trajectory. In contrast to typical camera-to-hand and camera-in-hand visual servo control configurations, the proposed controller is developed using a moving on-board camera viewing a moving object to obtain feedback signals. This collaborative method weakens the field-of-view restriction and enables the control object to perform large area motion. In the fifth chapter, a visual servo controller is developed that yields an asymptotic tracking result for the completely nonlinear camera-in-hand central catadioptric camera system. A panoramic field-of-view is obtained by using the central catadioptric camera. In the sixth chapter, a robust visual servo control method is developed to achieve a regulation control objective in the presence of intrinsic camera calibration uncertainties. A quaternion-based estimate for the rotation error signal is developed and used in the controller development. The similarity relationship between the estimated and actual rotation matrices is used to construct the relationship between the estimated and actual quaternions. A Lyapunov-based stability analysis is provided that indicates a unique controller can be developed to achieve the regulation result despite a sign ambiguity in the developed quaternion estimate. In the seventh chapter, a new combined robust and adaptive visual servo control method is developed to asymptotically regulate the feature points in an image to the desired locations while also regulating the pose of the control object without calibrating the camera. These dual objectives are achieved by using a homography-based approach that exploits both image-space and reconstructed Euclidean information in the feedback loop. The robust rotation controller accommodates the time-varying uncertainties in the rotation error system, and the adaptive translation controller compensates for the unknown calibration parameters in the translation error system. Chapter 8 presents the conclusions of this dissertation. ( en )
General Note:
In the series University of Florida Digital Collections.
General Note:
Includes vita.
Bibliography:
Includes bibliographical references.
Source of Description:
Description based on online resource; title from PDF title page.
Source of Description:
This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Thesis:
Thesis (Ph.D.)--University of Florida, 2007.
Local:
Adviser: Dixon, Warren E.
Statement of Responsibility:
by Guoqiang Hu.

Record Information

Source Institution:
UFRGP
Rights Management:
Copyright Hu, Guoqiang. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Classification:
LD1780 2007 ( lcc )



The time-derivative of V(t) can be determined as

    V̇ = 2q̃_v^T q̃̇_v + 2(1 − q̃_0)(−q̃̇_0) + z_i^* e^T ė + γ^{-1} z̃_i^* ż̃_i^*

      = −q̃_v^T K_ω q̃_v − e^T K_v e,                                      (3-46)

where (3-34)-(3-43) were utilized, along with the fact that q̃_v^T q̃_v^× = 0 and the skew-symmetry of ω^× (so that the cross terms introduced by the angular-velocity coupling cancel). From (3-45) and (3-46), signal chasing arguments described in the previous section can be used to conclude that the control inputs and all the closed-loop signals are bounded. Barbalat's Lemma [96] can then be used to prove the result given in (3-44).
3.5 Simulation Results

A numerical simulation was performed to illustrate the performance of the tracking controller given in (3-19) and (3-21), and the adaptive update law in (3-22). In this simulation, the developed tracking controller aims to enable the control object to track the desired trajectory encoded by a sequence of images and to rotate by more than 360 degrees (see Figure 3-5).









7 COMBINED ROBUST AND ADAPTIVE HOMOGRAPHY-BASED VISUAL SERVO CONTROL VIA AN UNCALIBRATED CAMERA ........ 117

    7.1 Introduction ........ 117
    7.2 Camera Geometry and Assumptions ........ 118
    7.3 Open-Loop Error System ........ 120
        7.3.1 Rotation Error System ........ 120
        7.3.2 Translation Error System ........ 121
    7.4 Control Development ........ 123
        7.4.1 Rotation Control Development and Stability Analysis ........ 123
        7.4.2 Translation Control Development and Stability Analysis ........ 125
    7.5 Simulation Results ........ 128

8 CONCLUSIONS ........ 135

APPENDIX

A UNIT NORM PROPERTY FOR THE QUATERNION ERROR ........ 138

B ONE PROPERTY OF UNIT QUATERNIONS ........ 140

C OPEN-LOOP TRANSLATION ERROR SYSTEM ........ 141

D PROPERTY ON MATRIX NORM ........ 143

E COMPUTATION OF DEPTH RATIOS ........ 144

F INEQUALITY DEVELOPMENT ........ 147

G LINEAR PARAMETERIZATION OF TRANSLATION ERROR SYSTEM ........ 148

REFERENCES ........ 149

BIOGRAPHICAL SKETCH ........ 157









Once x_hr(t), α_i(t), n*(t), n̄*(t), m̄_i^*(t), and m̄_i(t) have been determined, the following relationship can be used to determine m̄_ri^*(t):

    m̄_ri^* = (z_i / z_ri^*) (R_r + x_hr n*^T) m̄_i,                        (4-39)

where the inverse of the ratio z_ri^*(t)/z_i(t) can be determined as

    z_ri^* / z_i = [ 0  0  1 ] (R_r + x_hr n*^T) m̄_i.                      (4-40)

4.5 Control Objective

The control objective is for a six DOF rigid-body object (e.g., an autonomous vehicle) identified by a planar patch of feature points to track a desired trajectory that is determined by a sequence of images taken by a fixed reference camera. This objective is based on the assumption that the linear and angular velocities of the camera are control inputs that can be independently controlled (i.e., unconstrained motion) and that the reference and desired cameras are calibrated (i.e., A_1 and A_2 are known). The control objective can be stated as m̄_i(t) → m̄_di(t) (i.e., the Euclidean feature points on π track the corresponding feature points on π_d). Equivalently, the control objective can also be stated in terms of the rotation and translation of the object as x_fr'(t) → x_frd(t) and R_r'(t) → R_rd(t). As stated previously, R_r'(t) and R_rd(t) can be computed by decomposing the projective homographies in (4-35)-(4-38) and using (4-1). Once these rotation matrices have been determined, the unit quaternion parameterization is used to describe the rotation matrix. This parameterization facilitates the subsequent problem formulation, control development, and stability analysis since the unit quaternion provides a global nonsingular parameterization of the corresponding rotation matrices. See Section 2.3 for some background about the unit quaternion.











From (C-3), the translation error system can be obtained as

    ė = (1/z_ri) L_v' R_r' (v_c + ω_c^× s_1i) − ṁ_ed,

and it can be further written as

    z_ri^* ė = α_i L_v' R_r' (v_c + ω_c^× s_1i) − z_ri^* ṁ_ed.















BIOGRAPHICAL SKETCH

Guoqiang Hu received his bachelor's degree in automation engineering from

the University of Science and Technology of China (USTC) in 2002, and he

received his Master of Philosophy in Automation and Computer-aided Engineering

from The Chinese University of Hong Kong (CUHK) in 2004.

Currently, he is pursuing his PhD degree in the nonlinear control and robotics

group at the University of Florida (UF) under the supervision of Dr. Warren

Dixon.

























Figure 3-19: Tracking error p(t) − p_d(t) (in pixels) of the four feature points in the tracking experiment. The upper figure is for the u(t) − u_d(t) component and the bottom figure is for the v(t) − v_d(t) component.

Figure 3-20: Linear camera velocity input v_c(t) in the tracking experiment.



















Figure 4-7: Rotation quaternion error q̃(t).

Figure 4-8: Linear camera velocity input v_c(t).









diagonal terms of R(q) can be obtained from (2-12) and (2-16) as

    R_11 = 1 − 2(q_v2^2 + q_v3^2)                                        (2-17)
    R_22 = 1 − 2(q_v1^2 + q_v3^2)                                        (2-18)
    R_33 = 1 − 2(q_v1^2 + q_v2^2).                                       (2-19)

By utilizing (2-12) and (2-17)-(2-19), the following expressions can be developed:

    q_0^2  = (R_11 + R_22 + R_33 + 1)/4
    q_v1^2 = (R_11 − R_22 − R_33 + 1)/4
    q_v2^2 = (R_22 − R_11 − R_33 + 1)/4                                  (2-20)
    q_v3^2 = (R_33 − R_11 − R_22 + 1)/4,

where q_0(t) is restricted to be non-negative without loss of generality (this restriction enables the minimum rotation to be obtained). As stated in [15], the greatest numerical accuracy for computing q_0(t) and q_v(t) is obtained by using the element in (2-20) with the largest value and then computing the remaining terms from it. For example, if q_0^2(t) has the maximum value in (2-20), then the greatest numerical accuracy can be obtained by computing q_0(t) and q_v(t) as

    q_0  = (1/2) sqrt(R_11 + R_22 + R_33 + 1)
    q_v1 = (R_23 − R_32)/(4 q_0)
    q_v2 = (R_31 − R_13)/(4 q_0)                                         (2-21)
    q_v3 = (R_12 − R_21)/(4 q_0).
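The extraction in (2-20) and (2-21) is straightforward to implement numerically. The following Python/NumPy sketch (the function name is illustrative, and only the branch in which q_0^2 is the largest element of (2-20) is shown) follows the sign convention written in (2-21):

    import numpy as np

    def quaternion_from_rotation(R):
        # Scalar part from (2-20), assuming the q0^2 entry is the largest one.
        q0 = 0.5 * np.sqrt(max(R[0, 0] + R[1, 1] + R[2, 2] + 1.0, 0.0))
        # Vector part from (2-21); indices are zero-based, so R[1, 2] is R_23, etc.
        qv = np.array([R[1, 2] - R[2, 1],
                       R[2, 0] - R[0, 2],
                       R[0, 1] - R[1, 0]]) / (4.0 * q0)
        return q0, qv

The remaining branches of (2-20) are handled analogously when another squared element is the largest, which avoids dividing by a small q_0(t).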









2.3 Unit Quaternion Representation of the Rotation Matrix

For a given rotation matrix, several different representations (e.g., Euler angle-axis, direction cosine matrix, Euler angles, unit quaternion (or Euler parameters), etc.) can be utilized to develop the error system. In previous homography-based visual servo control literature, the Euler angle-axis representation has been used to describe the rotation matrix. In the angle-axis parameters (φ, k), φ(t) ∈ R represents a rotation angle about a suitable unit vector k(t) ∈ R^3. The parameters (φ, k) can be easily calculated (e.g., using the algorithm shown in Spong and Vidyasagar [11]).

Given the unit vector k(t) and angle φ(t), the rotation matrix R(t) = e^{k^× φ} can be calculated using the Rodrigues formula

    R = e^{k^× φ} = I_3 + k^× sin(φ) + (k^×)^2 (1 − cos(φ)),             (2-9)

where I_3 is the 3 × 3 identity matrix, and the notation k^×(t) denotes the following skew-symmetric form of the vector k(t):

    k^× = [  0   −k_3   k_2
            k_3    0   −k_1 ]     ∀ k = [ k_1  k_2  k_3 ]^T.             (2-10)
           −k_2   k_1    0

The unit quaternion is a four dimensional vector which can be defined as [23]

    q ≜ [ q_0  q_v^T ]^T.                                                (2-11)

In (2-11), q_v(t) ≜ [ q_v1(t)  q_v2(t)  q_v3(t) ]^T, with q_0(t), q_vi(t) ∈ R ∀ i = 1, 2, 3. The unit quaternion must also satisfy the following nonlinear constraint:

    q^T q = 1.                                                           (2-12)
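For illustration, the skew operator in (2-10) and the Rodrigues formula in (2-9) can be evaluated with a few lines of Python/NumPy (the helper names are illustrative, not from the dissertation):

    import numpy as np

    def skew(k):
        # Skew-symmetric matrix k^x defined in (2-10).
        return np.array([[0.0, -k[2], k[1]],
                         [k[2], 0.0, -k[0]],
                         [-k[1], k[0], 0.0]])

    def rodrigues(k, phi):
        # Rotation matrix R = I3 + k^x sin(phi) + (k^x)^2 (1 - cos(phi)) from (2-9).
        kx = skew(np.asarray(k, dtype=float))
        return np.eye(3) + kx * np.sin(phi) + kx @ kx * (1.0 - np.cos(phi))

    # Example: a 90 degree rotation about the z-axis.
    R = rodrigues([0.0, 0.0, 1.0], np.pi / 2)
    assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)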









6.5 Control Development

6.5.1 Rotation Control

The rotation open-loop error system can be developed by taking the time derivative of q(t) as

    [ q̇_0 ]          [     −q_v^T      ]
    [      ] = (1/2)  [                 ] ω_c,                            (6-32)
    [ q̇_v ]          [ q_0 I_3 + q_v^× ]

where ω_c(t) ∈ R^3 denotes the angular velocity of the camera with respect to F* expressed in F. Based on the open-loop error system in (6-32) and the subsequent stability analysis, the angular velocity controller is designed as

    ω_c = −K_ω q̂_v,                                                       (6-33)

where K_ω ∈ R denotes a positive control gain. Substituting (6-33) into (6-32), the rotation closed-loop error system can be developed as

    q̇_0 = (1/2) K_ω q_v^T q̂_v                                             (6-34)
    q̇_v = −(1/2) K_ω (q_0 I_3 + q_v^×) q̂_v.                               (6-35)
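To illustrate the effect of (6-33) on the kinematics in (6-32), the short Python/NumPy sketch below propagates the quaternion error under the controller; for simplicity the estimate q̂_v is taken equal to q_v (i.e., perfect calibration), and the gain, initial condition, and integration step are illustrative only:

    import numpy as np

    def quaternion_derivative(q0, qv, omega_c):
        # Open-loop kinematics (6-32).
        qv_x = np.array([[0.0, -qv[2], qv[1]],
                         [qv[2], 0.0, -qv[0]],
                         [-qv[1], qv[0], 0.0]])
        return -0.5 * qv @ omega_c, 0.5 * (q0 * np.eye(3) + qv_x) @ omega_c

    K_omega = 2.0
    q0, qv = 0.8, np.array([0.2, 0.3, 0.4796])     # approximately a unit quaternion
    dt = 0.01
    for _ in range(500):                           # simple Euler integration
        omega_c = -K_omega * qv                    # controller (6-33) with q_hat_v = qv
        dq0, dqv = quaternion_derivative(q0, qv, omega_c)
        q0, qv = q0 + dt * dq0, qv + dt * dqv
    # qv decays toward zero while q0 approaches 1, as predicted by (6-34)-(6-35).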


6.5.2 Translation Control

The contribution of this chapter is the rotation estimate and associated control

development. The translation controller developed in this section is provided

for completeness. As stated previously, translation controllers such as the class

developed by Malis and Chaumette [21] and Fang et al. [87] can be combined with

the developed quaternion-based rotation controller. To facilitate the subsequent

stability analysis for the six DOF problem, a translation controller proposed in [87]

is provided in this section, which is given by


    v_c = K_v e,                                                          (6-36)















LIST OF TABLES

Table                                                                page

4-1 Coordinate frames relationships ........ 65















REFERENCES


[1] S. Hutchinson, G. Hager, and P. Corke, "A tutorial on visual servo control," IEEE Trans. Robot. Automat., vol. 12, no. 5, pp. 651-670, 1996.

[2] F. Chaumette and S. Hutchinson, "Visual servo control part I: Basic approaches," IEEE Robot. Automat. Mag., vol. 13, no. 4, pp. 82-90, 2006.

[3] ——, "Visual servo control part II: Advanced approaches," IEEE Robot. Automat. Mag., vol. 14, no. 1, pp. 109-118, 2006.

[4] G. Hu, N. Gans, and W. E. Dixon, Complexity and Nonlinearity in Autonomous Robotics, Encyclopedia of Complexity and System Science. Springer, to appear 2008, ch. Adaptive Visual Servo Control.

[5] N. J. Cowan and D. Koditschek, "Planar image-based visual servoing as a navigation problem," in Proc. IEEE Int. Conf. Robot. Automat., 2000, pp. 1720-1725.

[6] N. J. Cowan, J. D. Weingarten, and D. E. Koditschek, "Visual servoing via navigation functions," IEEE Trans. Robot. Automat., vol. 18, pp. 521-533, 2002.

[7] Y. Mezouar and F. Chaumette, "Path planning for robust image-based control," IEEE Trans. Robot. Automat., vol. 18, no. 4, pp. 534-549, 2002.

[8] E. Rimon and D. E. Koditschek, "Exact robot navigation using artificial potential functions," IEEE Trans. Robot. Automat., vol. 8, pp. 501-518, 1992.

[9] A. Ruf, M. Tonko, R. Horaud, and H.-H. Nagel, "Visual tracking of an end-effector by adaptive kinematic prediction," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 1997, pp. 893-898.

[10] J. Chen, D. M. Dawson, W. E. Dixon, and A. Behal, "Adaptive homography-based visual servo tracking for a fixed camera configuration with a camera-in-hand extension," IEEE Trans. Contr. Syst. Technol., vol. 13, no. 5, pp. 814-825, 2005.

[11] M. Spong and M. Vidyasagar, Robot Dynamics and Control. New York: John Wiley & Sons Inc., 1989.

[12] O. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint. Cambridge, Massachusetts: MIT Press, 1993.




















Figure 3-5: Desired image-space coordinates of the four feature points (i.e., p_d(t)) in the tracking Matlab simulation shown in a 3D graph. In the figure, "O" denotes the initial image-space positions of the 4 feature points in the desired trajectory, and "*" denotes the corresponding final positions of the feature points.













Figure 3-6: Current image-space coordinates of the four feature points (i.e., p(t)) in the tracking Matlab simulation shown in a 3D graph. In the figure, "O" denotes the initial image-space positions of the 4 feature points, and "*" denotes the corresponding final positions of the feature points.
























Figure 5-1: Central catadioptric projection relationship.


5.2 Geometric Model

A central catadioptric camera is composed of two elements: a camera and a

mirror which are calibrated to yield a single effective viewpoint. Geyer and Dani-

ilidis [36] developed a unifying theory that explains how all central catadioptric

systems are isomorphic to projective mappings from the sphere to a plane with a

projection center on the optical axis perpendicular to the plane. For the central

catadioptric camera depicted in Figure 5-1, the coordinate frames F_c and F_m are attached to the foci of the camera and mirror, respectively. Light rays incident to the focal point of the mirror (i.e., the origin of F_m) are reflected into rays incident with the focal point of the camera (i.e., the origin of F_c).

Without loss of generality, the subsequent development is based on the assumption that the reflection of four coplanar and non-collinear Euclidean feature points, denoted by O_i, of some stationary object is represented in the camera image plane by the image-space coordinates u_i(t), v_i(t) ∈ R ∀ i = 1, 2, 3, 4. The plane defined by the four feature points is denoted by π as depicted in Figure 5-1. The vector















CHAPTER 3
LYAPUNOV-BASED VISUAL SERVO TRACKING CONTROL VIA A
QUATERNION FORMULATION

3.1 Introduction

Previous visual servo controllers typically only address the regulation problem.

Motivated by the need for new advancements to meet visual servo tracking appli-

cations, previous research has concentrated on developing different types of path

planning techniques [5-9]. Recently, Chen et al. [10] developed a new formulation

of the tracking control problem. The homography-based adaptive visual servo

controller in [10] is developed to enable an actuated object to track a prerecorded

time-varying desired trajectory determined by a sequence of images, where the

Euler angle-axis representation is used to represent the rotation error system. Due

to the computational singularity limitation of the angle axis extraction algorithm

(see Spong and Vidyasagar [11]), rotation angles of ±π were not considered.

This chapter considers the previously unexamined problem of six DOF visual

servo tracking control with a nonsingular rotation parameterization. A homography

is constructed from image pairs and decomposed via textbook methods (e.g.,

Faugeras [12] and Hartley and Zisserman [13]) to obtain the rotation matrix. Once

the rotation matrix has been determined, the corresponding unit quaternion can

be determined from globally nonsingular and numerically robust algorithms (e.g.,

Hu et al. [14] and Shuster [15]). An error system is constructed in terms of the
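As a concrete (purely illustrative) instance of this pipeline, the homography between matched feature points can be estimated and decomposed with off-the-shelf routines such as OpenCV's, after which the recovered rotation is converted to a unit quaternion; this is not the specific implementation used in the dissertation, and the decomposition returns up to four candidate solutions from which the physically consistent one must still be selected (e.g., using positive depths and the known plane normal):

    import cv2
    import numpy as np

    def candidate_rotations(pts_ref, pts_cur, K):
        # pts_ref, pts_cur: N x 2 arrays of matched pixel coordinates (N >= 4).
        # K: 3 x 3 intrinsic calibration matrix (assumed known here).
        H, _ = cv2.findHomography(pts_ref, pts_cur, method=0)      # projective homography
        n_sols, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
        return rotations                                           # list of 3 x 3 candidates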

unit quaternion. An adaptive controller is then developed and proven to enable

a camera (attached to a rigid-body object) to track a desired trajectory that is

determined from a sequence of images. These images can be taken online or offline

by a camera. For example, a sequence of images of the reference object can be








The camera is assumed to view an object with four coplanar feature points with the following Euclidean coordinates (in [m]):

    O_1 = [ 0.15   0.15   0 ]^T     O_2 = [ 0.15   −0.15  0 ]^T          (3-47)
    O_3 = [ −0.15  0.15   0 ]^T     O_4 = [ −0.15  −0.15  0 ]^T.

The time-varying desired image trajectory was generated by the kinematics of the feature point plane, where the desired linear and angular velocities were selected as

    v_cd = [ 0.1 sin(t)   0.1 sin(t)   0 ]^T [m/s]
    ω_cd = [ 0   0   1.5 ]^T [rad/s].

The initial and desired image-space coordinates were artificially generated. For this example, consider an orthogonal coordinate frame I with the z-axis opposite to n* (see Figure 2-1) and with the x-axis and y-axis on the plane π. The rotation matrices R_1 between F and I, and R_2 between F* and I, were set as

    R_1 = R_x(120°) R_y(−20°) R_z(−80°)                                  (3-48)
    R_2 = R_x(160°) R_y(30°) R_z(30°),                                   (3-49)

where R_x(·), R_y(·) and R_z(·) ∈ SO(3) denote rotations by the given angle (in degrees) about the x-axis, y-axis and z-axis, respectively. The translation vectors x_f1 between F and I (expressed in F) and x_f2 between F* and I (expressed in F*) were selected as

    x_f1 = [ 0.5   0.5   4.0 ]^T                                         (3-50)
    x_f2 = [ 1.0   1.0   4.5 ]^T.                                        (3-51)
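The rotation matrices in (3-48) and (3-49) are products of elementary rotations about the coordinate axes. A minimal Python/NumPy sketch of how such a composition could be formed (helper names are illustrative):

    import numpy as np

    def Rx(deg):
        a = np.deg2rad(deg)
        return np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])

    def Ry(deg):
        a = np.deg2rad(deg)
        return np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])

    def Rz(deg):
        a = np.deg2rad(deg)
        return np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])

    R1 = Rx(120) @ Ry(-20) @ Rz(-80)     # (3-48)
    R2 = Rx(160) @ Ry(30) @ Rz(30)       # (3-49)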









Y_3(·) ∈ R^{1×n_3} satisfy the following relationships (see Appendix G): Y_1θ_1 collects the terms −(a_12/a_11) α_i v_c2 − ((a_13 − u_i)/a_11) α_i v_c3 together with a term proportional to z_i^* that depends only on u_i, v_i, and ω_c; Y_2θ_2 collects −((a_23 − v_i)/a_22) α_i v_c3 together with a similar z_i^*-proportional term; and Y_3θ_3 consists of a z_i^*-proportional term that depends only on u_i, v_i, and ω_c.

The control strategy is to design v_c3(t) to stabilize e_3(t), then design v_c2(t) to stabilize e_2(t) given v_c3(t), and then design v_c1(t) to stabilize e_1(t) given v_c3(t) and v_c2(t). Following this design strategy, the translation controller v_c(t) is designed as [89]

    v_c3 = (1/α_i) (k_v3 e_3 + Y_3(α_i, u_i, v_i, ω_c) θ̂_3)              (7-35)
    v_c2 = (1/α_i) (k_v2 e_2 + Y_2(α_i, u_i, v_i, ω_c, v_c3) θ̂_2)
    v_c1 = (1/α_i) (k_v1 e_1 + Y_1(α_i, u_i, v_i, ω_c, v_c2, v_c3) θ̂_1),

where the depth ratio α_i(t) > 0 ∀t. In (7-35), θ̂_1(t) ∈ R^{n_1}, θ̂_2(t) ∈ R^{n_2}, θ̂_3(t) ∈ R^{n_3} denote adaptive estimates that are designed according to the following adaptive update laws to cancel the respective terms in the subsequent stability analysis:

    θ̂̇_1 = Γ_1 Y_1^T e_1     θ̂̇_2 = Γ_2 Y_2^T e_2     θ̂̇_3 = Γ_3 Y_3^T e_3,   (7-36)

where Γ_1 ∈ R^{n_1×n_1}, Γ_2 ∈ R^{n_2×n_2}, Γ_3 ∈ R^{n_3×n_3} are diagonal matrices of positive constant adaptation gains. Based on (7-34) and (7-35), the closed-loop translation error system is

    (z_i^*/a_11) ė_1 = −k_v1 e_1 + Y_1(α_i, u_i, v_i, ω_c, v_c2, v_c3) θ̃_1   (7-37)
    (z_i^*/a_22) ė_2 = −k_v2 e_2 + Y_2(α_i, u_i, v_i, ω_c, v_c3) θ̃_2
    z_i^* ė_3 = −k_v3 e_3 + Y_3(α_i, u_i, v_i, ω_c) θ̃_3,








The initial rotation matrix R_3 and translation vector x_f3 between F_d and I were set as

    R_3 = R_x(240°) R_y(−90°) R_z(−30°)     x_f3 = [ 0.5   1   5.0 ]^T.  (3-52)

The initial (i.e., p_i(0)) and reference (i.e., p_i^*) image-space coordinates of the four feature points in (3-47) were computed as (in pixels)

    p_1(0) = [ 907.91   716.04   1 ]^T     p_2(0) = [ 791.93   728.95   1 ]^T
    p_3(0) = [ 762.84   694.88   1 ]^T     p_4(0) = [ 871.02   683.25   1 ]^T

    p_1^* = [ 985.70   792.70   1 ]^T      p_2^* = [ 1043.4   881.20   1 ]^T
    p_3^* = [ 980.90   921.90   1 ]^T      p_4^* = [ 922.00   829.00   1 ]^T.
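Pixel coordinates such as these follow from projecting the Euclidean feature points through the pinhole model p_i = A m_i. A hedged sketch of that projection (Python/NumPy; the intrinsic matrix A below is a placeholder rather than the calibration used to generate these numbers):

    import numpy as np

    def project_points(A, R, x_f, O):
        # O: n x 3 array of feature point coordinates in the object frame.
        pts_cam = (R @ O.T).T + x_f          # Euclidean coordinates in the camera frame
        m = pts_cam / pts_cam[:, [2]]        # normalized coordinates m_i
        return (A @ m.T).T                   # homogeneous pixel coordinates p_i = A m_i

    A = np.array([[800.0, 0.0, 640.0],       # placeholder intrinsics (illustrative only)
                  [0.0, 800.0, 480.0],
                  [0.0, 0.0, 1.0]])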

The initial (i.e., p_di(0)) image-space coordinates of the four feature points in (3-47) used to generate the desired trajectory were computed as (in pixels)

    p_d1(0) = [ 824.61   853.91   1 ]^T     p_d2(0) = [ 770.36   878.49   1 ]^T
    p_d3(0) = [ 766.59   790.50   1 ]^T     p_d4(0) = [ 819.03   762.69   1 ]^T.

The control gains K_ω in (3-19) and K_v in (3-21) and the adaptation gain γ in (3-22) were selected as

    K_ω = diag{3, 3, 3}     K_v = diag{15, 15, 15}     γ = 0.0002.


The desired and current image-space trajectories of the feature point plane are
shown in Figure 3-5 and Figure 3-6, respectively. The feature point plane rotates
more than 360 degrees as shown in these two figures. The resulting translation














Figure 4-2: This figure shows the initial positions of the cameras and the feature point planes. The initial positions of the cameras attached to I and I_R are denoted by "O". The feature points on the planes π, π* and π_d are denoted by ".". The origins of the coordinate frames F, F* and F_d are denoted by "*".


rotation tracking errors are plotted in Figure 4-6 and Figure 4-7, respectively. The

errors go to zero asymptotically. The translation and rotation control inputs are


shown in Figure 4-8 and Figure 4-9, respectively.









6-5 Image-space error in pixels between p_i(t) and p_i^* shown in a 3D graph. In the figure, "O" denotes the initial positions of the 4 feature points in the image, and "*" denotes the corresponding final positions of the feature points. ........ 115

6-6 Linear camera velocity control input. ........ 116

6-7 Angular camera velocity control input. ........ 116

7-1 Unitless translation error e(t). ........ 131

7-2 Quaternion rotation error q̃(t). ........ 131

7-3 Pixel coordinate p(t) (in pixels) of the current pose of the four feature points in the simulation. The upper figure is for the u(t) component and the bottom figure is for the v(t) component. ........ 132

7-4 Regulation error p(t) − p* (in pixels) of the four feature points in the simulation. The upper figure is for the u(t) − u*(t) component and the bottom figure is for the v(t) − v*(t) component. ........ 132

7-5 Image-space error in pixels between p(t) and p*. In the figure, "O" denotes the initial positions of the 4 feature points in the image, and "*" denotes the corresponding final positions of the feature points. ........ 133

7-6 Image-space error in pixels between p(t) and p* shown in a 3D graph. In the figure, "O" denotes the initial positions of the 4 feature points in the image, and "*" denotes the corresponding final positions of the feature points. ........ 133

7-7 Linear camera velocity control input v_c(t). ........ 134

7-8 Angular camera velocity control input ω_c(t). ........ 134










m_i(t), m_i^*, m_di(t) ∈ R^3 are defined as

    m_i ≜ [ x_i/z_i   y_i/z_i   1 ]^T
    m_i^* ≜ [ x_i^*/z_i^*   y_i^*/z_i^*   1 ]^T                           (2-3)
    m_di ≜ [ x_di/z_di   y_di/z_di   1 ]^T,

with the standard assumption that z_i(t), z_i^*, z_di(t) > ε, where ε is an arbitrarily small positive constant. From (2-3), the relationships in (2-2) can be expressed as

    m_i = (z_i^*/z_i) (R + x_h n*^T) m_i^* = α_i H m_i^*
                                                                          (2-4)
    m_di = (z_i^*/z_di) (R_d + x_hd n*^T) m_i^* = α_di H_d m_i^*,

where α_i(t), α_di(t) ∈ R are scaling terms, and H(t), H_d(t) ∈ R^{3×3} denote the Euclidean homographies.

2.2 Euclidean Reconstruction

Each feature point on π has a projected pixel coordinate p_i(t) ∈ R^3, p_i^* ∈ R^3 and p_di(t) ∈ R^3 in F, F* and F_d respectively, denoted by

    p_i ≜ [ u_i   v_i   1 ]^T
    p_i^* ≜ [ u_i^*   v_i^*   1 ]^T                                       (2-5)
    p_di ≜ [ u_di   v_di   1 ]^T,

where u_i(t), v_i(t), u_i^*, v_i^*, u_di(t), v_di(t) ∈ R. The projected pixel coordinates p_i(t), p_i^* and p_di(t) are related to the normalized task-space coordinates m_i(t), m_i^* and m_di(t) by the following global invertible transformation (i.e., the pinhole camera








3-29 Adaptive on-line estimate of z* in the regulation experiment. ........ 60

4-1 Geometric model. ........ 63

4-2 This figure shows the initial positions of the cameras and the feature point planes. The initial positions of the cameras attached to I and I_R are denoted by "O". The feature points on the planes π, π* and π_d are denoted by ".". The origins of the coordinate frames F, F* and F_d are denoted by "*". ........ 79

4-3 Pixel coordinate p_rd(t) of the four feature points on the plane π_d in a sequence of desired images taken by the camera attached to I_R. The upper figure is for the u_rd(t) component and the bottom figure is for the v_rd(t) component. ........ 80

4-4 Pixel coordinate p*(t) of the four feature points on the plane π* in a sequence of reference images taken by the moving camera attached to I. The upper figure is for the u*(t) component and the bottom figure is for the v*(t) component. ........ 80

4-5 Pixel coordinate p(t) of the four feature points on the plane π in a sequence of images taken by the moving camera attached to I. The upper figure is for the u(t) component and the bottom figure is for the v(t) component. ........ 81

4-6 Translation error e(t). ........ 81

4-7 Rotation quaternion error q̃(t). ........ 82

4-8 Linear camera velocity input v_c(t). ........ 82

4-9 Angular camera velocity input ω_c(t). ........ 83

5-1 Central catadioptric projection relationship. ........ 85

5-2 Projection model of the central catadioptric camera. ........ 86

5-3 Camera relationships represented in homography. ........ 88

6-1 Unitless translation error between m_1(t) and m_1^*. ........ 113

6-2 Quaternion rotation error. ........ 114

6-3 Quaternion rotation error for comparison with different sign. ........ 114

6-4 Image-space error in pixels between p_i(t) and p_i^*. In the figure, "O" denotes the initial positions of the 4 feature points in the image, and "*" denotes the corresponding final positions of the feature points. ........ 115









controller which is robust to the intrinsic calibration parameters. As in the previous

three problems, the quaternion parameterization will be used to represent the

rotation error system. Since the quaternion error cannot be measured precisely

due to the uncertain calibration, an estimated quaternion is required to develop

the controller. One of the challenges to develop a quaternion estimate is that the

estimated rotation matrix is not a true rotation matrix in general. To address this

challenge, the similarity relationship between the estimated and actual rotation

matrices is used to construct the relationship between the estimated and actual

quaternions. A Lyapunov-based stability analysis is provided that indicates a

unique controller can be developed to achieve the regulation result.

5) Combined robust and adaptive visual servo control.

This research is also motivated by the desire to compensate for uncertain cam-

era calibration. This controller has adaptive update terms that compensate

for the unknown calibration parameters. The open-loop error system is composed

of a rotation error system and a translation error system. One challenge is that the

rotation quaternion error is not measurable. To address this problem, an estimated

quaternion is obtained based on the image-space information and is used to develop

the controller. The transformation between the actual and estimated quaternions

is an upper triangular matrix determined by the calibration parameters and the

diagonal elements are positive. This fact is exploited to design a robust high-gain

controller. Another challenge is that the unknown calibration matrix is coupled in

the translation error system. To address this problem, the translation error system

is linearly parameterized in terms of the calibration parameters. An adaptive up-

date law is used to estimate the unknown calibration parameters, and a translation

controller containing the adaptive compensation terms is used to asymptotically

regulate the translation error.









societies, they will not be the focus of this dissertation. The development in this

dissertation is based on the assumption that images can be acquired, analyzed, and

the resulting data can be provided to the controller without restricting the control

rates.

The use of image-based feedback adds complexity and new challenges for

the control system design. The scope of this dissertation is focused on issues

associated with using reconstructed and estimated state information from a

sequence of images to develop a stable closed-loop error system. Particularly, this

dissertation focuses on the following problems: 1) how to design a visual servo

tracking controller that achieves asymptotic tracking via a quaternion formulation?

2) how to design a collaborative visual servo control scheme when both the camera

and the control object are moving? 3) how to design a visual servo controller using

a central catadioptric camera? and 4) how to design a visual servo controller that is

robust to camera calibration uncertainty?

1.2 Problem Statement

In this dissertation, visual servo control algorithms and architectures are

developed that exploit the visual feedback from a camera system to achieve a

tracking or regulation control objective for a six degrees of freedom (DOF) rigid-

body control object (e.g., the end-effector of a robot manipulator, a satellite, an

autonomous vehicle) identified by a patch of feature points. The tracking control

objective is for the control object to track a desired trajectory that is encoded by

a video obtained from a camera in either the camera-in-hand or camera-to-hand

configuration. This video can be taken online or offline by a camera. For example,

the motion of a control object can be prerecorded by a camera (for the camera-

to-hand configuration) beforehand and used as a desired trajectory, or, a video of

the reference object can be prerecorded as a desired trajectory while the camera

moves (for the camera-in-hand configuration). The regulation control objective is



































Figure 7-5: Image-space error in pixels between p(t) and p*. In the figure, "O" denotes the initial positions of the 4 feature points in the image, and "*" denotes the corresponding final positions of the feature points.

Figure 7-6: Image-space error in pixels between p(t) and p* shown in a 3D graph. In the figure, "O" denotes the initial positions of the 4 feature points in the image, and "*" denotes the corresponding final positions of the feature points.









* A new adaptive homography-based visual servo control method via a quater-

nion formulation is developed that achieves asymptotic tracking control. This

control scheme is singularity-free by exploiting the homography techniques

and a quaternion parameterization. The adaptive estimation term in the pro-

posed controller compensates for the unknown depth information dynamically

while the controller achieves the asymptotic tracking results.

* A new collaborative visual servo control method is developed to enable a

rigid-body object to track a desired trajectory via a daisy-chaining multi-view

geometry. In contrast to typical camera-to-hand and camera-in-hand visual

servo control configurations, the proposed controller is developed using a

moving on-board camera viewing a moving object to obtain feedback signals.

This collaborative method weakens the FOV restriction and enables the

control object to perform large area motion.

* A visual servo controller is developed that yields an asymptotic tracking

result for the complete nonlinear six DOF camera-in-hand central cata-

dioptric camera system. A panoramic FOV is obtained by using the central

catadioptric camera.

* A robust visual servo controller is developed to achieve a regulation control objective in the presence of intrinsic camera calibration uncertainties. A

quaternion-based estimate for the rotation error signal is developed and

used in the controller development. The similarity relationship between the

estimated and actual rotation matrices is used to construct the relationship

between the estimated and actual quaternions. A Lyapunov-based stability

analysis is provided that indicates a unique controller can be developed

to achieve the regulation result despite a sign ambiguity in the developed

quaternion estimate.


































Figure 3-25: Pixel coordinate p(t) (in pixels) of the current pose of the four feature points in the regulation experiment. The upper figure is for the u(t) component and the bottom figure is for the v(t) component.

Figure 3-26: Regulation error p(t) − p* (in pixels) of the four feature points in the regulation experiment. The upper figure is for the u(t) − u*(t) component and the bottom figure is for the v(t) − v*(t) component.
















APPENDIX C
OPEN-LOOP TRANSLATION ERROR SYSTEM

The translation error was defined as

    e = m_e − m_ed,

where m_e(t) and m_ed(t) are the extended normalized coordinates of m̄_ri(t) and m̄_rdi(t), respectively, and were defined as

    m_e ≜ [ x_ri/z_ri   y_ri/z_ri   ln(z_ri) ]^T     m_ed ≜ [ x_rdi/z_rdi   y_rdi/z_rdi   ln(z_rdi) ]^T.

Differentiating m_e(t) gives

    ṁ_e = (1/z_ri) L_v' (d/dt) m̄_ri,                                      (C-1)

where

    L_v' = [ 1   0   −x_ri/z_ri
             0   1   −y_ri/z_ri ].
             0   0        1

The derivative of m̄_ri(t) is given by

    (d/dt) m̄_ri = R_r' (v_c + ω_c^× s_1i),                                 (C-2)

where v_c(t) and ω_c(t) are the linear and angular velocities of π with respect to I_R expressed in F. Based on (C-1) and (C-2), ṁ_e(t) can be further written as

    ṁ_e = (1/z_ri) L_v' R_r' (v_c + ω_c^× s_1i).                            (C-3)









Given the rotation matrices R_r'(t) and R_rd(t), the corresponding unit quaternions q(t) and q_d(t) can be calculated by using the numerically robust method (e.g., see [14] and [15]) based on the corresponding relationships

    R_r' = (q_0^2 − q_v^T q_v) I_3 + 2 q_v q_v^T + 2 q_0 q_v^×            (4-41)
    R_rd = (q_0d^2 − q_vd^T q_vd) I_3 + 2 q_vd q_vd^T + 2 q_0d q_vd^×,    (4-42)

where I_3 is the 3 × 3 identity matrix, and the notation q_v^×(t) denotes the skew-symmetric form of the vector q_v(t) as in (2-10).

To quantify the rotation error between the feature points on π and π_d, the error between the rotation matrices R_r'(t) and R_rd(t) is defined as

    R̃ = R_r'^T R_rd = (q̃_0^2 − q̃_v^T q̃_v) I_3 + 2 q̃_v q̃_v^T − 2 q̃_0 q̃_v^×,   (4-43)

where the error quaternion q̃(t) = (q̃_0(t), q̃_v^T(t))^T is defined as

    [ q̃_0 ]   [       q_0 q_0d + q_v^T q_vd        ]
    [      ] = [                                    ].                    (4-44)
    [ q̃_v ]   [ q_0d q_v − q_0 q_vd + q_v^× q_vd   ]

Since q̃(t) is a unit quaternion, (4-43) can be used to quantify the rotation tracking objective as

    ‖q̃_v(t)‖ → 0  ⟹  R̃(t) → I_3  as  t → ∞.                               (4-45)

The translation error, denoted by e(t) ∈ R^3, is defined as [10, 20]

    e = m_e − m_ed,                                                        (4-46)

where m_e(t), m_ed(t) ∈ R^3 are defined as

    m_e ≜ [ x_ri/z_ri   y_ri/z_ri   ln(z_ri) ]^T     m_ed ≜ [ x_rdi/z_rdi   y_rdi/z_rdi   ln(z_rdi) ]^T.   (4-47)

In (4-47), z_ri(t) and z_rdi(t) can be computed as shown in the subsequent development.
















Figure 3-21: Angular camera velocity input ω_c(t) in the tracking experiment.

Figure 3-22: Adaptive on-line estimate of z* in the tracking experiment.









with weighted features that allows visibility changes in the image features (i.e.,

some features can come in and out of the FOV) during the control task. Some

researchers have also investigated methods to enlarge the FOV [60-64]. In [60-63],

image mosaicing is used to capture multiple images of the scene as a camera moves

and the images are stitched together to obtain a larger image. In Swaminathan and

Nayar [64], images from multiple cameras mounted so as to have minimally overlapping FOVs are fused.

An alternative solution to the aforementioned algorithmic approaches to

resolve the FOV issue is to use advanced optics such as omnidirectional cameras.

Catadioptric cameras (one type of omnidirectional camera) are devices which

use both mirrors (reflective or catadioptric elements) and lenses (refractive or

dioptric elements) to form images [32]. Catadioptric systems with a single effective

viewpoint are classified as central catadioptric systems, which are desirable because

they yield pure perspective images [33]. In Baker and Nayar [34], the complete

class of single-lens single-mirror catadioptric systems is derived that satisfy the

single viewpoint constraint. Recently, catadioptric systems have been investigated

to enlarge the FOV for visual servo control tasks (e.g., [35, 65-72]). Burschka

and Hager [65] addressed the visual servoing problem of mobile robots equipped

with central catadioptric cameras, in which an estimation of the feature height

to the plane of motion is required. Barreto et al. [73] developed a model-based

tracking approach of a rigid object using a central catadioptric camera. Mezouar

et al. [66] controlled a robotic system using the projection of 3D lines in the image

plane of a central catadioptric system. In [35, 65, 66], the inverse of the image

Jacobian is required in the controller development which may lead to a singularity

problem for certain configurations. Hadj-Abdelkader et al. [67] presented the 2 1/2

D visual servoing approach using omnidirectional cameras, in which the inverse

of an estimated image-Jacobian (containing potential singularities) is required in









(3-19), the rotation closed-loop error system can be determined as

    q̃̇_0 = (1/2) q̃_v^T K_ω q̃_v                                             (3-20)
    q̃̇_v = −(1/2) (q̃_0 I_3 + q̃_v^×) K_ω q̃_v.
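At each control cycle, the rotation controller (3-19) only requires the measurable error quaternion, the rotation R̃ recovered from the decomposed homographies, and the desired angular velocity feedforward. A hedged sketch of that evaluation (Python/NumPy; the default gain mirrors the value used in the simulation section and is otherwise illustrative):

    import numpy as np

    def angular_velocity_command(qv_t, R_tilde, omega_cd,
                                 K_omega=np.diag([3.0, 3.0, 3.0])):
        # Rotation tracking controller (3-19): omega_c = -K_omega qv_t + R_tilde omega_cd.
        return -K_omega @ qv_t + R_tilde @ omega_cd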


The contribution of this chapter is the development of the quaternion-based

rotation tracking controller. Several other homography-based translation controllers

could be combined with the developed rotation controller. For completeness, the

following development illustrates how the translation controller and adaptive

update law in [10] can be used to complete the six DOF tracking result.

Based on (3-17), the translation control input v_c(t) is designed as

    v_c = (1/α_i) L_v^{-1} (K_v e + ẑ_i^* (L_v m_i^× ω_c − ṗ_ed)),         (3-21)

where K_v ∈ R^{3×3} denotes a diagonal matrix of positive constant control gains. In (3-21), the parameter estimate ẑ_i^*(t) ∈ R for the unknown constant z_i^* is generated by the adaptive update law

    (d/dt) ẑ_i^* = γ e^T (L_v m_i^× ω_c − ṗ_ed),                           (3-22)

where γ ∈ R denotes a positive constant adaptation gain. The controller in (3-21) does not exhibit a singularity since L_v(t) is invertible and α_i(t) > 0. From (3-17) and (3-21), the translation closed-loop error system can be written as

    z_i^* ė = −K_v e + (L_v m_i^× ω_c − ṗ_ed) z̃_i^*,                       (3-23)

where z̃_i^*(t) ∈ R denotes the following parameter estimation error:

    z̃_i^* ≜ z_i^* − ẑ_i^*.                                                 (3-24)
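In a discrete-time implementation, the adaptive update law (3-22) is integrated over each control period. A small sketch (Python/NumPy; here Y denotes the measurable regressor appearing in (3-21)-(3-23), the step size is illustrative, and the gain matches the value reported in the simulation section):

    import numpy as np

    gamma, dt = 0.0002, 1.0 / 30.0     # adaptation gain and control period (period illustrative)

    def update_estimate(z_hat, e, Y):
        # One Euler step of (3-22): z_hat_dot = gamma * e^T Y.
        return z_hat + gamma * dt * float(np.dot(e, Y))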









where (q_0d(t), q_vd^T(t))^T and (q̇_0d(t), q̇_vd^T(t))^T are bounded, so ω_cd(t) is also bounded.

Based on (3-4), (3-5), (3-12) and (3-13), the open-loop rotation error system can be developed as

    [ q̃̇_0 ]          [      −q̃_v^T       ]
    [       ] = (1/2)  [                   ] (ω_c − R̃ ω_cd),               (3-15)
    [ q̃̇_v ]          [ q̃_0 I_3 + q̃_v^×  ]

where q̃(t) = (q̃_0(t), q̃_v^T(t))^T.

By using (2-5), (2-6), (3-6), (3-11), and the fact that [95]

    (d/dt) m̄_i = −v_c + m̄_i^× ω_c,                                         (3-16)

where v_c(t) ∈ R^3 denotes the actual linear velocity of the camera expressed in F, the open-loop translation error system can be derived as [10]

    z_i^* ė = −α_i L_v v_c + (L_v m_i^× ω_c − ṗ_ed) z_i^*,                  (3-17)

where L_v(t) ∈ R^{3×3} is defined as

    L_v ≜ ( A − [ 0  0  u_0 ] ) [ 1  0  −x_i/z_i ]
                [ 0  0  v_0 ]   [ 0  1  −y_i/z_i ].                         (3-18)
                [ 0  0   0  ]   [ 0  0      1    ]

The auxiliary term L_v(t) is an invertible upper triangular matrix.

3.3.2 Closed-Loop Error System

Based on the open-loop rotation error system in (3-15) and the subsequent

Lyapunov-based stability analysis, the angular velocity controller is designed as


    ω_c = −K_ω (I_3 + q̃_v^×)^{-1} q̃_v + R̃ ω_cd = −K_ω q̃_v + R̃ ω_cd,       (3-19)

where K_ω ∈ R^{3×3} denotes a diagonal matrix of positive constant control gains. See Appendix B for the proof that (I_3 + q̃_v^×)^{-1} q̃_v = q̃_v. Based on (3-11), (3-15) and




























Figure 3-15: Translation error e(t) in the tracking experiment.

Figure 3-16: Rotation quaternion error q̃(t) in the tracking experiment.


















Figure 7-7: Linear camera velocity control input v_c(t).

Figure 7-8: Angular camera velocity control input ω_c(t).









After differentiating both sides of (5-24) and using the equations (5-30), (5-31) and

    (d/dt) m̄_i = −v_c + m̄_i^× ω_c,

the open-loop translation error system can be derived as

    z_i^* ė = −β_i L_v v_c + z_i^* L_vω ω_c − (z_i^*/z_di) L_vd (d/dt) m̄_di,   (5-32)

where v_c(t) ∈ R^3 denotes the linear velocity input of the camera with respect to F* expressed in F, the Jacobian-like matrices L_v(t), L_vd(t), L_vω(t) ∈ R^{3×3} are defined as

    L_v = [ 1  0  −m_e1        L_vd = [ 1  0  −m_ed1
            0  1  −m_e2 ]               0  1  −m_ed2 ]                     (5-33)
            0  0    1                   0  0    1

    L_vω = [  m_e1 m_e2    −1 − m_e1^2    m_e2
             1 + m_e2^2    −m_e1 m_e2    −m_e1  ],                         (5-34)
              −m_e2           m_e1         0

and β_i(t) ∈ R is defined as

    β_i ≜ (z_i^*/L_i^*) / (z_i/L_i).

5.5.2 Closed-Loop Error System

Based on the open-loop rotation error system in (5-28) and the subsequent Lyapunov-based stability analysis, the angular velocity controller is designed as

    ω_c = −K_ω q̃_v + R̃ ω_cd,                                               (5-35)









During the experiment, the images from the camera are processed with a frame

rate of approximately 20 frames/second.

The resulting translation and rotation errors are plotted in Figure 3-15 and

Figure 3-16. The desired image-space trajectory (i.e., pdi(t)) is shown in Figure

3-17, and the current image-space trajectory (i.e., pi(t)) is shown in Figure 3-18.

The tracking error between the current and desired image-space trajectories is

shown in Figure 3-19. The translation and rotation control outputs are shown in

Figure 3-20 and Figure 3-21, respectively. The parameter estimate for z* is shown

in Figure 3-22.

In the tracking control experiment, the steady-state tracking error is approximately [ 15 pixel   10 pixel   0.01 ]^T. This steady-state error is caused by the image noise and the camera calibration error in the test-bed. To compute the tracking control error, two homographies are computed: one between the reference image and the current image, and one between the reference image and the desired image. Due to the image noise and camera calibration error, errors are introduced into both homographies, so the tracking error obtained from the mismatch between the two homographies is larger. In addition, the image noise and calibration error introduce error into the derivative of the desired pixel coordinates, which is used as a feedforward term in

the tracking controller. Furthermore, communication between the controller and

virtual reality system occurs via a TCP socket, introducing some amount of latency

into the system. Note that this pixel error represents less than 1.5% of the image

dimensions.

In the following regulation experiment, the derivative of the desired pixel

coordinates is equal to zero, and only one homography is computed between

the current image and desired set image. The influence of the image noise and

calibration error is weakened greatly.









From standard Euclidean geometry, the relationships between m̄_i(t), m̄_di(t) and m̄_i^* can be determined as

    m̄_i = x_f + R m̄_i^*     m̄_di = x_fd + R_d m̄_i^*,                       (5-12)

where x_f(t), x_fd(t) ∈ R^3 denote the translation vectors expressed in F and F_d, respectively, and R(t), R_d(t) ∈ SO(3) denote the orientation of F* with respect to F and F_d, respectively. As also illustrated in Figure 5-3, n* ∈ R^3 denotes the constant unit normal to the plane π, and the constant distance from the origin of F* to π along the unit normal n* is denoted by d* ∈ R, which is defined as

    d* ≜ n*^T m̄_i^*.                                                       (5-13)

By using (5-13), the relationships in (5-12) can be expressed as

    m̄_i = H m̄_i^*     m̄_di = H_d m̄_i^*,                                    (5-14)

where H(t), H_d(t) ∈ R^{3×3} are the Euclidean homographies defined as

    H = R + (x_f/d*) n*^T     H_d = R_d + (x_fd/d*) n*^T.                  (5-15)

Based on (5-1) and (5-8), the relationship between the Euclidean coordinates in (5-14) can be expressed in terms of the unit spherical surface coordinates as

    m_si = α_i H m_si^*     m_dsi = α_di H_d m_si^*,                       (5-16)

where α_i(t) ≜ L_i^*/L_i and α_di(t) ≜ L_i^*/L_di ∈ R are scaling terms.

5.3 Euclidean Reconstruction

The homogeneous pixel coordinates of the feature points with respect to the camera frames F, F* and F_d are denoted as p_i(t), p_i^* and p_di(t) ∈ R^3, respectively. They can be related to the normalized coordinates m_pi(t), m_pi^* and m_dpi(t) via the










different choice of sign of the quaternion estimate, the asymptotic result can still

be achieved. In contrast to the quaternion estimate in Figure 6-2, a quaternion

estimate with different sign is shown in Figure 6-3.



Figure 6-1: Unitless translation error between m_1(t) and m_1^*.















Figure 3-17: Pixel coordinate p_d(t) of the four feature points in a sequence of desired images in the tracking experiment. The upper figure is for the u_d(t) component and the bottom figure is for the v_d(t) component.

Figure 3-18: Pixel coordinate p(t) of the current pose of the four feature points in the tracking experiment. The upper figure is for the u(t) component and the bottom figure is for the v(t) component.















LIST OF FIGURES

Figure                                                                page

2-1 Coordinate frame relationships between a camera viewing a planar patch at different spatiotemporal instances. The coordinate frames F, F* and F_d are attached to the current, reference and desired locations, respectively. ........ 17

2-2 Coordinate frame relationships between a camera viewing a planar patch at different spatiotemporal instances. The coordinate frames F and F* are attached to the current and reference locations, respectively. ........ 18

3-1 Coordinate frame relationships between a fixed camera and the planes defined by the current, desired, and reference feature points (i.e., π, π_d, and π*). ........ 33

3-2 Block diagram of the experiment. ........ 42

3-3 The Sony XCD-710CR color firewire camera pointed at the virtual environment. ........ 42

3-4 Virtual reality environment example: a virtual recreation of the US Army's urban warfare training ground at Fort Benning. ........ 44

3-5 Desired image-space coordinates of the four feature points (i.e., p_d(t)) in the tracking Matlab simulation shown in a 3D graph. In the figure, "O" denotes the initial image-space positions of the 4 feature points in the desired trajectory, and "*" denotes the corresponding final positions of the feature points. ........ 48

3-6 Current image-space coordinates of the four feature points (i.e., p(t)) in the tracking Matlab simulation shown in a 3D graph. In the figure, "O" denotes the initial image-space positions of the 4 feature points, and "*" denotes the corresponding final positions of the feature points. ........ 48

3-7 Translation error e(t) in the tracking Matlab simulation. ........ 49

3-8 Rotation quaternion error q̃(t) in the tracking Matlab simulation. ........ 49

3-9 Pixel coordinate p_d(t) of the four feature points in a sequence of desired images in the tracking Matlab simulation. The upper figure is for the u_d(t) component and the bottom figure is for the v_d(t) component. ........ 50









given by

    A = [ 122.5   −3.77    100
            0     122.56   100 ].
            0       0        1

The best-guess estimate for A was selected as

    Â = [ 100   −4    80
            0   100   110 ].
            0     0     1

The camera is assumed to view an object with four coplanar feature points with the following Euclidean coordinates (in [m]):

    O_1 = [ 0.05   0.05   0 ]^T     O_2 = [ 0.05   −0.05  0 ]^T           (6-53)
    O_3 = [ −0.05  0.05   0 ]^T     O_4 = [ −0.05  −0.05  0 ]^T.

The normalized coordinates of the vanishing points were selected as

    [ 0.02    0.02   1 ]^T     [ 0.02   −0.02   1 ]^T
    [ −0.02   0.02   1 ]^T     [ −0.02  −0.02   1 ]^T.

Consider an orthogonal coordinate frame I with the z-axis opposite to n* (see Figure 2-2) and with the x-axis and y-axis on the plane π. The rotation matrices R_1 between F and I, and R_2 between F* and I, were set as

    R_1 = R_x(160°) R_y(30°) R_z(−30°)     R_2 = R_x(120°) R_y(−20°) R_z(80°),

where R_x(·), R_y(·) and R_z(·) ∈ SO(3) denote rotations by the given angle (in degrees) about the x-axis, y-axis and z-axis, respectively. The translation vectors x_f1(t) and x_f2(t) between F and I (expressed in F) and between F* and I (expressed in F*),









Based on (5-18) and (5-21), the rotation tracking control objective R(t) → R_d(t) can be formulated as

    ‖q̃_v(t)‖ → 0  ⟹  R̃(t) → I_3  as  t → ∞.                               (5-23)

To quantify the position mismatch between the actual and desired camera, the translation tracking error e(t) ∈ R^3 is defined as

    e ≜ m_e − m_ed = [ x_i/z_i − x_di/z_di    y_i/z_i − y_di/z_di    ln(z_i/z_di) ]^T,   (5-24)

where m_e(t), m_ed(t) ∈ R^3 are defined as

    m_e ≜ [ m_e1   m_e2   m_e3 ]^T ≜ [ x_i/z_i   y_i/z_i   ln(z_i) ]^T     (5-25)
    m_ed ≜ [ m_ed1   m_ed2   m_ed3 ]^T ≜ [ x_di/z_di   y_di/z_di   ln(z_di) ]^T.   (5-26)

Based on (5-23) and (5-24), the subsequent control development targets the following objectives:

    ‖q̃_v(t)‖ → 0  and  ‖e(t)‖ → 0  as  t → ∞.                              (5-27)

The error signal q̃_v(t) is measurable since it can be computed from R(t) and R_d(t) as in [24]. The first two elements of the translation error e(t) are measurable because

    m_e1 − m_ed1 = (x_i/L_i)/(z_i/L_i) − (x_di/L_di)/(z_di/L_di)
    m_e2 − m_ed2 = (y_i/L_i)/(z_i/L_i) − (y_di/L_di)/(z_di/L_di),

where x_i(t)/L_i(t), y_i(t)/L_i(t), z_i(t)/L_i(t) can be computed from (5-6) and (5-17), and x_di(t)/L_di(t), y_di(t)/L_di(t), z_di(t)/L_di(t) can be computed from similar relationships. The third element of the translation error is also measurable since

    z_i/z_di = [ (z_i/L_i) / (z_di/L_di) ] (α_di/α_i).














APPENDIX E
COMPUTATION OF DEPTH RATIOS

Based on (7-5) and (7-6), the following expression can be obtained:

    p_i = α_i (R̄ + x̄_h n̄*^T) p_i^*,                                        (E-1)

where R̄(t) is defined in (7-8), and x̄_h(t) ∈ R^3 and n̄*(t) ∈ R^3 are defined as

    x̄_h ≜ A x_f / d*     n̄*^T ≜ n*^T A^{-1}.

Using the four corresponding reference feature points, the expression in (E-1) can be written as

    p_i = α_i G p_i^* = α_i g_33 G_n p_i^*,                                 (E-2)

where g_33(t) ∈ R is the (assumed w.l.o.g.) positive third-row, third-column element of G(t), and G_n(t) ∈ R^{3×3} is defined as G(t)/g_33(t). Based on (E-2), twelve linear equations can be obtained for the four corresponding reference feature points. A set of twelve linear equations can then be developed to solve for α_i(t)g_33(t) and G_n(t).

To determine the scalar g_33(t), the following equation can be used:

    g_33 G_n = R̄ + x̄_h n̄*^T,                                               (E-3)

provided that R̄(t) is obtained using the four vanishing points, where

    R̄ = [ r̄_11  r̄_12  r̄_13        G_n = [ g_n11  g_n12  g_n13
          r̄_21  r̄_22  r̄_23 ]              g_n21  g_n22  g_n23 ]
          r̄_31  r̄_32  r̄_33                g_n31  g_n32    1

    x̄_h = [ x̄_h1  x̄_h2  x̄_h3 ]^T     n̄* = [ n̄_1*  n̄_2*  n̄_3* ]^T.
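The computation implied by (E-1)-(E-3) can be sketched as follows: the projective relation p_i = α_i g_33 G_n p_i^* is first estimated from the four reference correspondences, and the scaled depth ratio for each feature point then follows from the third row of G_n. The Python/NumPy sketch below uses a plain least-squares (DLT-style) fit purely for illustration; it returns the products α_i(t)g_33(t), from which α_i(t) is obtained once g_33(t) is computed via (E-3):

    import numpy as np

    def scaled_depth_ratios(p_star, p_cur):
        # p_star, p_cur: 4 x 3 arrays of homogeneous pixel coordinates (rows [u, v, 1]).
        # Estimate G up to scale from p_i ~ G p_i^*, then normalize by its (3,3) element.
        rows = []
        for ps, pc in zip(p_star, p_cur):
            rows.append(np.concatenate([np.zeros(3), -ps, pc[1] * ps]))
            rows.append(np.concatenate([ps, np.zeros(3), -pc[0] * ps]))
        _, _, Vt = np.linalg.svd(np.asarray(rows))
        G = Vt[-1].reshape(3, 3)
        Gn = G / G[2, 2]                                   # G_n = G / g33, as in (E-2)
        # Third row of p_i = (alpha_i g33) G_n p_i^*, with the third entry of p_i equal to 1:
        return np.array([1.0 / (Gn[2] @ ps) for ps in p_star])    # alpha_i * g33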









frame F that is attached to the control object as depicted in Figure 4-1. The

camera attached to I captures snapshots of the planar patches associated with F

and F*, respectively. The a priori motion of Fd represents the desired trajectory

of the coordinate system F, where F and Fd are attached to the same object

but at different points in time. The camera attached to I_R is a different camera
(with different calibration parameters) than the camera attached to I. The problem

considered in this chapter is to develop a kinematic controller for the object

attached to F so that the time-varying rotation and translation of F converges to

the desired time-varying rotation and translation of Fd, where the motion of F is

determined from the time-varying overhead camera attached to I.

4.3 Geometric Model

The relationships between the coordinate systems are as follows (see Table 4-1): R(t), R*(t), R_r(t), R_r'(t), R_rd(t), R_r* ∈ SO(3) denote the rotation from F to I, F* to I, I to I_R, F to I_R, F_d to I_R, and F* to I_R, respectively; x_f(t), x_f*(t) ∈ R^3 denote the respective time-varying translation from F to I and from F* to I with coordinates expressed in I; and x_fr(t), x_fr'(t), x_frd(t), x_fr* ∈ R^3 denote the respective translation from I to I_R, F to I_R, F_d to I_R, and from F* to I_R with coordinates expressed in I_R. From Figure 4-1, the translation x_fr'(t) and the rotation R_r'(t) can be expressed as

    x_fr' = x_fr* + R_r* R*^T (x_f − x_f*)     R_r' = R_r* R*^T R.         (4-1)



As illustrated in Figure 4-1, π, π_d and π* denote the planar patches of feature points associated with F, F_d, and F*, respectively. s_1i ∈ R^3 ∀ i = 1, 2, ..., n (n ≥ 4) denotes the constant Euclidean coordinates of the i-th feature point in F (and also F_d), and s_2i ∈ R^3 ∀ i = 1, 2, ..., n denotes the constant Euclidean coordinates of the i-th feature point in F*. From the geometry between









Based on the definition of γ(t) in (6-28), the inequalities (6-42) and (6-43), and the assumption that m̄_i^* and z_i are bounded, there exist two positive bounding constants c_1, c_2 ∈ R for the cross terms in the expression for V̇(t); hence, the control parameter K_ω can be selected large enough to ensure that V̇(t) is negative semi-definite as

    V̇ ≤ −(K_ω λ_0 − c_1) ‖q_v‖^2 − (λ_0 − c_2) ‖e‖^2.                      (6-52)


Based on (6-44) and (6-52), standard signal chasing arguments can be used to conclude that the control inputs and all the closed-loop signals are bounded. The expression in (6-52) can also be used to conclude that q_v(t) and e(t) ∈ L_2. Since q_v(t), q̇_v(t), e(t), ė(t) ∈ L_∞ and q_v(t), e(t) ∈ L_2, Barbalat's Lemma [96] can be used to prove the result given in (6-41).

By modifying the Lyapunov function in (6-44) so that the term (1 − q_0)^2 is replaced by (1 + q_0)^2, the same stability analysis arguments can be used to prove Theorem 6.1 for the case in which the opposite sign is selected for the quaternion estimate in (6-29). Since the sign ambiguity in (6-29) does not affect the control development and stability analysis, only the positive sign in (6-29) needs to be considered in the future control development for convenience.

6.7 Simulation Results

Numerical simulations were performed to illustrate the performance of the

controller given in (6-33) and (6-36). The intrinsic camera calibration matrix is









Figure 3-13: Angular camera velocity input ω_c(t) in the tracking Matlab simulation.




















Figure 3-14: Adaptive on-line estimate of z₁* in the tracking Matlab simulation.









[66] Y. Mezouar, H. H. Abdelkader, P. Martinet, and F. Chaumette, "Central
catadioptric visual servoing from 3D straight lines," in Proc. IEEE/RSJ Int.
Conf. Intell. Robots Syst., 2004, pp. 343-349.

[67] H. Hadj-Abdelkader, Y. Mezouar, N. Andreff, and P. Martinet, "2 1/2 D
visual servoing with central catadioptric cameras," in Proc. IEEE/RSJ Int.
Conf. Intell. Robots Syst., 2005, pp. 3572-3577.

[68] G. Mariottini, E. Alunno, J. Piazzi, and D. Prattichizzo, "Epipole-based
visual servoing with central catadioptric camera," in Proc. IEEE Int. Conf.
Robot. Automat., 2005, pp. 3516-3521.

[69] G. Mariottini, D. Prattichizzo, and G. Oriolo, "Image-based visual servoing
for nonholonomic mobile robots with central catadioptric camera," in Proc.
IEEE Int. Conf. Robot. Automat., 2006, pp. 538-544.

[70] S. Benhimane and E. Malis, "A new approach to vision-based robot control
with omni-directional cameras," in Proc. IEEE Int. Conf. Robot. Automat.,
2006, pp. 526-531.

[71] R. Tatsambon and F. Chaumette, "Visual servoing from spheres using a
spherical projection model," in Proc. IEEE Int. Conf. Robot. Automat., 2007,
pp. 2080-2085.

[72] ——, "Visual servoing from spheres with paracatadioptric cameras," in Int.
Conf. Advanced Robotics, August 2007.

[73] J. P. Barreto, F. Martin, and R. Horaud, "Visual servoing/tracking using
central catadioptric images," in Proc. Int. Symp. Experimental Robotics, 2002,
pp. 863-869.

[74] K. Hosoda and M. Asada, "Versatile visual servoing without knowledge of
true Jacobian," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 1994, pp.
186-193.

[75] M. Jagersand, O. Fuentes, and R. Nelson, "Experimental evaluation of
uncalibrated visual servoing for precision manipulation," in Proc. IEEE Int.
Conf. Robot. Automat., 1997, pp. 2874-2880.

[76] M. Shahamiri and M. Jagersand, "Uncalibrated visual servoing using a biased
Newton method for on-line singularity detection and avoidance," in Proc.
IEEE/RSJ Int. Conf. Intell. Robots Syst., 2005, pp. 3953-3958.

[77] J. A. Piepmeier and H. Lipkin, "Uncalibrated eye-in-hand visual servoing,"
Int. J. Robot. Res., vol. 22, pp. 805-819, 2003.

[78] J. A. Piepmeier, G. V. McMurray, and H. Lipkin, "Uncalibrated dynamic
visual servoing," IEEE Trans. Robot. Automat., vol. 24, no. 3, pp. 143-147,
2004.









4.6.2 Closed-Loop Error System

Based on the open-loop rotation error system in (4-49) and the subsequent

Lyapunov-based stability analysis, the angular velocity controller is designed as


\omega_c = -K_\omega \tilde{q}_v + \tilde{R} \omega_{cd},     (4-52)

where K_ω ∈ ℝ^{3×3} denotes a diagonal matrix of positive constant control gains.
From (4-49) and (4-52), the rotation closed-loop error system can be determined as

\dot{\tilde{q}}_0 = \frac{1}{2} \tilde{q}_v^T K_\omega \tilde{q}_v, \qquad
\dot{\tilde{q}}_v = -\frac{1}{2} \left( \tilde{q}_0 I_3 + \tilde{q}_v^\times \right) K_\omega \tilde{q}_v.     (4-53)

From (4-51), the translation control input v,(t) is designed as


Vc = -1 (Ke *fed) ) x 8s, (4-54)
ri

where K_v ∈ ℝ^{3×3} denotes a diagonal matrix of positive constant control gains. In

(4-54), the parameter estimate ẑ*_ri(t) ∈ ℝ for the unknown constant z*_ri is designed

as

*i = -Tye ,d, (4-55)

where γ ∈ ℝ denotes a positive constant adaptation gain. By using (4-51) and

(4-54), the translation closed-loop error system is


z e =-Ke m* (4-56)


where z̃*_ri(t) ∈ ℝ denotes the following parameter estimation error:

\tilde{z}^*_{ri} = z^*_{ri} - \hat{z}^*_{ri}.     (4-57)
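
As a concrete illustration of the rotation control law in (4-52), the following minimal sketch (not from the dissertation; the gain values and signal names are placeholders) evaluates the proportional-plus-feedforward angular velocity from the measured quaternion error, the estimated rotation, and the desired angular velocity.

```python
import numpy as np

def angular_velocity_control(q_tilde_v, R_tilde, w_cd, K_w=None):
    """Rotation controller of (4-52): w_c = -K_w * q~_v + R~ * w_cd.

    q_tilde_v : (3,) quaternion error vector
    R_tilde   : (3, 3) estimated rotation used in the feedforward term
    w_cd      : (3,) desired angular velocity
    K_w       : diagonal positive-definite gain matrix (placeholder default)
    """
    if K_w is None:
        K_w = np.diag([2.0, 2.0, 2.0])
    return -K_w @ q_tilde_v + R_tilde @ w_cd
```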









Expanding the right side of the above equation gives

\tilde{q}_0^2 + \tilde{q}_v^T \tilde{q}_v = (q_{v1}q_{vd1} + q_{v2}q_{vd2} + q_{v3}q_{vd3} + q_0 q_{0d})^2
 + (q_0 q_{vd1} + q_{v2}q_{vd3} - q_{v3}q_{vd2} - q_{v1}q_{0d})^2
 + (q_0 q_{vd2} - q_{v1}q_{vd3} + q_{v3}q_{vd1} - q_{v2}q_{0d})^2
 + (q_0 q_{vd3} + q_{v1}q_{vd2} - q_{v2}q_{vd1} - q_{v3}q_{0d})^2.

After expanding the squares, all of the cross terms cancel, and the remaining terms factor as

\tilde{q}_0^2 + \tilde{q}_v^T \tilde{q}_v = (q_0^2 + q_{v1}^2 + q_{v2}^2 + q_{v3}^2)(q_{0d}^2 + q_{vd1}^2 + q_{vd2}^2 + q_{vd3}^2).

Based on the fact that q(t) and q_d(t) are two unit quaternions (i.e., their norms are
equal to 1),

\tilde{q}_0^2 + \tilde{q}_v^T \tilde{q}_v = 1.
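
The unit-norm property is easy to confirm numerically. The sketch below (an illustration, not part of the original analysis) builds the error quaternion exactly as in (A-1)-(A-2) from two random unit quaternions and prints its norm.

```python
import numpy as np

def quaternion_error(q, qd):
    """Quaternion error of (A-1)-(A-2): tilde_q0 = q0*q0d + qv . qvd and
    tilde_qv = q0*qvd - q0d*qv + qv x qvd."""
    q0, qv = q[0], q[1:]
    q0d, qvd = qd[0], qd[1:]
    t0 = q0 * q0d + qv @ qvd
    tv = q0 * qvd - q0d * qv + np.cross(qv, qvd)
    return np.hstack([t0, tv])

# quick check with random unit quaternions
rng = np.random.default_rng(0)
q = rng.standard_normal(4); q /= np.linalg.norm(q)
qd = rng.standard_normal(4); qd /= np.linalg.norm(qd)
print(np.linalg.norm(quaternion_error(q, qd)))   # 1.0 up to round-off
```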









and the four unknowns are given by R̄₃₃(t)α_i(t). From the definition of R̄_n(t) in
(7-9), the fact that

\det(\bar{R}) = \det(A)\det(R)\det(A^{-1}) = 1     (7-10)

can be used to conclude that

\bar{R}_{33}^3 \det(\bar{R}_n) = 1,     (7-11)

and hence, based on (7-9) and (7-11),

\bar{R} = \frac{\bar{R}_n}{\sqrt[3]{\det(\bar{R}_n)}}.     (7-12)

After R̄(t) is obtained, the original four feature points on the reference plane can
be used to determine the depth ratio α_i(t) as shown in Appendix E.

7.3 Open-Loop Error System

7.3.1 Rotation Error System

If the rotation matrix R(t) introduced in (2-4) were known, then the corresponding unit quaternion q(t) ≜ [q₀(t)  q_vᵀ(t)]ᵀ could be calculated using the numerically robust method presented in Section 2.3. Given R(t), the quaternion q(t) can also be written as

q_0 = \frac{1}{2}\sqrt{1 + \mathrm{tr}(R)}     (7-13)

q_v = \frac{u}{2}\sqrt{3 - \mathrm{tr}(R)},     (7-14)

where u(t) ∈ ℝ³ is a unit eigenvector of R(t) with respect to the eigenvalue 1. The
open-loop rotation error system for q(t) can be obtained as (see Dixon et al. [23])

\begin{bmatrix} \dot{q}_0 \\ \dot{q}_v \end{bmatrix} = \frac{1}{2} \begin{bmatrix} -q_v^T \\ q_0 I_3 + q_v^\times \end{bmatrix} \omega_c,     (7-15)

where ω_c(t) ∈ ℝ³ defines the angular velocity of the camera expressed in F.
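
For reference, the open-loop kinematics in (7-15) can be propagated numerically as in the short sketch below; the explicit Euler step and the re-normalization are illustrative implementation choices, not part of the original development.

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def quaternion_rate(q0, qv, w_c):
    """Open-loop rotation kinematics of (7-15)."""
    q0_dot = -0.5 * qv @ w_c
    qv_dot = 0.5 * (q0 * np.eye(3) + skew(qv)) @ w_c
    return q0_dot, qv_dot

def euler_step(q0, qv, w_c, dt):
    """Single Euler integration step followed by re-normalization."""
    q0_dot, qv_dot = quaternion_rate(q0, qv, w_c)
    q = np.hstack([q0 + dt * q0_dot, qv + dt * qv_dot])
    q /= np.linalg.norm(q)
    return q[0], q[1:]
```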









and therefore

H = A R A^{-1} = \hat{R}.     (6-9)

Twelve linear equations can be obtained based on (6-4) for the four vanishing
points. Assume the 3rd row 3rd column element of H(t), denoted as H₃₃(t) ∈ ℝ, is
not equal to zero (w.l.o.g.). The normalized matrix H_n(t) ∈ ℝ^{3×3}, defined as

H_n \triangleq \frac{H}{H_{33}},     (6-10)

can be computed based on these twelve linear equations. Based on (6-9),

\det(H) = \det(A)\det(R)\det(A^{-1}) = 1.     (6-11)

From (6-10) and (6-11),

H_{33}^3 \det(H_n) = 1,     (6-12)

and hence,

H = \frac{H_n}{\sqrt[3]{\det(H_n)}},     (6-13)

which is equal to R̂(t).
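
A short numerical check of the normalization in (6-10)-(6-13) is sketched below; the calibration matrix and rotation are synthetic values chosen only for illustration (they are not the simulation parameters of Section 6.7).

```python
import numpy as np

def recover_R_hat(H_n):
    """Rescale the H33-normalized homography by the cube root of its determinant
    to recover R_hat = A R A^{-1}, cf. (6-10)-(6-13)."""
    return H_n / np.cbrt(np.linalg.det(H_n))

A = np.array([[800.0, 2.0, 320.0], [0.0, 810.0, 240.0], [0.0, 0.0, 1.0]])
theta = 0.2                                  # rotation about the y-axis
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
H = A @ R @ np.linalg.inv(A)                 # what the vanishing points determine
H_n = H / H[2, 2]                            # only the normalized matrix is measurable
assert np.allclose(recover_R_hat(H_n), H)    # R_hat equals A R A^{-1}
```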

6.3 Control Objective

As stated previously, the objective in this chapter is to develop a kinematic

controller to ensure the pose of the camera coordinate frame F is regulated to the

desired pose F* despite uncertainty in the intrinsic camera calibration matrix. This

objective is based on the assumption that the linear and angular velocities of the

camera are control inputs that can be independently controlled (i.e., unconstrained

motion). For example, the linear and angular camera velocities could be controlled

by the end-effector of a robotic manipulator. In addition to uncertainty in the

intrinsic camera calibration, uncertainty could also exist in the extrinsic camera

calibration (e.g., the uncertainty in the rotation and translation of the camera

with respect to the robot end-effector). The development in this chapter could









The quaternion q(t) given in (7-13)-(7-15) is not measurable since R(t) is
unknown. However, since R̄(t) can be determined as described in (7-12), the same
algorithm as shown in (7-13) and (7-14) can be used to determine a corresponding
measurable quaternion (q̂₀(t), q̂_vᵀ(t))ᵀ as

\hat{q}_0 = \frac{1}{2}\sqrt{1 + \mathrm{tr}(\bar{R})}     (7-16)

\hat{q}_v = \frac{\hat{u}}{2}\sqrt{3 - \mathrm{tr}(\bar{R})},     (7-17)

where û(t) ∈ ℝ³ is a unit eigenvector of R̄(t) with respect to the eigenvalue 1.
Based on (7-8), tr(R̄) = tr(ARA⁻¹) = tr(R), where tr(·) denotes the trace
of a matrix. Since R̄(t) and R(t) are similar matrices, the relationship between
(q̂₀(t), q̂_vᵀ(t)) and (q₀(t), q_vᵀ(t)) can be determined as

\hat{q}_0 = q_0, \qquad \hat{q}_v = \gamma A q_v,     (7-18)

where γ(t) ∈ ℝ is a positive, unknown, time-varying scalar that satisfies the
following inequalities (see Appendix F):

\underline{\gamma} \le \gamma(t) \le \bar{\gamma},     (7-19)

where γ̲, γ̄ ∈ ℝ are positive bounding constants. The inverse of the relationship
between q_v(t) and q̂_v(t) in (7-18) can be developed as

q_v = \frac{1}{\gamma} A^{-1} \hat{q}_v = \frac{1}{\gamma}
\begin{bmatrix}
\dfrac{1}{a_{11}}\hat{q}_{v1} - \dfrac{a_{12}}{a_{11}a_{22}}\hat{q}_{v2} + \left(\dfrac{a_{12}a_{23}}{a_{11}a_{22}} - \dfrac{a_{13}}{a_{11}}\right)\hat{q}_{v3} \\[2mm]
\dfrac{1}{a_{22}}\hat{q}_{v2} - \dfrac{a_{23}}{a_{22}}\hat{q}_{v3} \\[2mm]
\hat{q}_{v3}
\end{bmatrix}.     (7-20)

7.3.2 Translation Error System

The translation error, denoted by e(t) ∈ ℝ³, is defined as

e(t) = p_e(t) - p_e^*,     (7-21)









The constant coordinates of mT can be projected onto a unit spherical surface

expressed in F* and 0*, respectively, as

7- 7 7 T

_- mi i l i (i
TO* Zl [x
i L i i J
Si L L* L* L( 8

and the time-varying coordinates m di(t) can be projected onto a unit spherical

surface expressed in Fd and Odc, respectively, as
T
mdi Xdi Ydi Zdi
mdsi -
Ldi Ldi Ldi Ldi
SXdi Ydi Zdi +
Ld d Ldi Ldi \

*,7-
where mn, mii, mds (t) mdp (t) R3, and L7, Ldi (t) R are defined as


L* = 2 ,2 + y 2 +L = + y2, + z,2 (5-9)

The normalized coordinates of mdsi (t) denoted as mdsi(t) E IR3 is defined as

dsi Xdi (t) ydi (t) 1 (510)
m -dsi 1 (5-10)
Zdi (t) Zdi (t)

which will be used in the controller development. The signal mTdsi(t) in (5-10) is

measurable because 1mdsi (t) can be computed from the measurable and bounded

pixel coordinates pdi(t) using similar projective relationships as in (5-6) and (5

17). The normalized coordinates of T*i, m dpi (t) denoted as mTn, mdpi (t) E R3,

respectively, are defined as

[ i xI
L z+1 + L zi J
1 21 +
Xdi Ydi
mdpi = 1
zdi + Ldi zdi + Ldia















TABLE OF CONTENTS
page

ACKNOWLEDGMENTS ............................. iv

LIST OF TABLES ................... .............. viii

LIST OF FIGURES ................... ............. ix

ABSTRACT .................... ............... xiii

CHAPTER

1 INTRODUCTION .............................. 1

1.1 Motivation ................... ............. 1
1.2 Problem Statement ................... ........ 2
1.3 Literature Review ................... ......... 8
1.3.1 Basic Visual Servo Control Approaches ............ 8
1.3.2 Visual Servo Control Approaches to Enlarge the FOV .. 9
1.3.3 Robust and Adaptive Visual Servo Control .......... 11
1.4 Contributions .................... .......... 13

2 BACKGROUND AND PRELIMINARY DEVELOPMENT ........ 16

2.1 Geometric Model ................... ........ 16
2.2 Euclidean Reconstruction ................... .. 19
2.3 Unit Quaternion Representation of the Rotation Matrix ....... 21

3 LYAPUNOV-BASED VISUAL SERVO TRACKING CONTROL VIA A
QUATERNION FORMULATION ......................... 25

3.1 Introduction .................... ........... 25
3.2 Control Objective .................... ........ 26
3.3 Control Development ................... ...... 28
3.3.1 Open-Loop Error System ................. .... 28
3.3.2 Closed-Loop Error System .................... 30
3.3.3 Stability Analysis ........................ 32
3.4 Camera-To-Hand Extension . . . .. .. 33
3.4.1 Model Development . . . .... ..... 33
3.4.2 Control Formulation ............ ..... .... 35
3.5 Simulation Results ................... ....... 38
3.6 Experiment Results .................... ....... 41









[53] K. Hashimoto, "A review on vision-based control of robot manipulators,"
Advanced Robotics, vol. 17, no. 10, pp. 969-991, 2003.

[54] K. Deguchi, "Optimal motion control for image-based visual servoing by
decoupling translation and rotation," in Proc. IEEE/RSJ Int. Conf. Intell.
Robots Syst., 1998, pp. 705-711.

[55] E. Malis and F. Chaumette, "2 1/2 D visual servoing with respect to un-
known objects through a new estimation scheme of camera displacement,"
Int. J. Computer Vision, vol. 37, no. 1, pp. 79-97, 2000.

[56] F. Chaumette and E. Malis, "2 1/2 D visual servoing: a possible solution to
improve image-based and position-based visual servoings," in Proc. IEEE Int.
Conf. Robot. Automat., 2000, pp. 630-635.

[57] J. Chen, W. E. Dixon, D. M. Dawson, and M. McIntyre, "Homography-based
visual servo tracking control of a wheeled mobile robot," IEEE Trans. Robot.,
vol. 22, no. 2, pp. 406-415, 2006.

[58] N. Gans and S. Hutchinson, "Stable visual servoing through hybrid switched-
system control," IEEE Trans. Robot., vol. 23, no. 3, pp. 530-540, 2007.

[59] E. Malis, "Visual servoing invariant to changes in camera intrinsic parame-
ters," in Proc. IEEE Int. Conf. Computer Vision, 2001, pp. 704-709.

[60] Y. Y. Schechner and S. Nayar, "Generalized mosaicing: High dynamic range
in a wide field of view," Int. J. Computer Vision, vol. 53, no. 3, pp. 245-267,
2003.

[61] S. Hsu, H. S. Sawhney, and R. Kumar, "Automated mosaics via topology
inference," IEEE Computer Graphics and Application, vol. 22, no. 2, pp.
44-54, 2002.

[62] M. Irani, P. Anandan, J. Bergen, R. Kumar, and S. Hsu, "Efficient represen-
tations of video sequences and their application," Signal Processing: Image
communication, vol. 8, pp. 327-351, 1996.

[63] A. Smolic and T. Wiegand, "High-resolution image mosaicing," in Proc.
IEEE Int. Conf. Image Processing, 2001, pp. 872-875.

[64] R. Swaminathan and S. Nayar, "Non-metric calibration of wide-angle lenses
and polycameras," in Proc. IEEE Int. Conf. Computer Vision Pattern
Recognition, 2000, pp. 413-419.

[65] D. Burschka and G. Hager, "Vision-based control of mobile robots," in Proc.
IEEE Int. Conf. Robot. Automat., 2001, pp. 1707-1713.















CHAPTER 6
VISUAL SERVO CONTROL IN THE PRESENCE OF CAMERA CALIBRATION
ERROR

6.1 Introduction

Hu et al. [14] introduced a new quaternion-based visual servo controller for

the rotation error system, provided the camera calibration parameters are exactly

known. Since the results by Malis and Chaumette [21] and Fang et al. [87] rely

heavily on properties of the rotation parameterization to formulate state estimates

and a measurable closed-loop error system, the research in this chapter is motivated

by the question: Can state estimates and a measurable closed-loop error system be

crafted in terms of the quaternion parameterization when the camera calibration

parameters are unknown? To answer this question, a contribution of this chapter is

the development of a quaternion-based estimate for the rotation error system that

is related to the actual rotation error, the development of a new closed-loop error

system, and a new Lyapunov-based analysis that demonstrates the stability of the

quaternion error system. One of the challenges is to develop a quaternion estimate

from an estimated rotation matrix that is not a true rotation matrix in general.

To address this challenge, the similarity relationship between the estimated and

actual rotation matrices is used (as in [21] and [87]) to construct the relationship

between the estimated and actual quaternions. A Lyapunov-based stability analysis

is provided that indicates a unique controller can be developed to achieve the

regulation result despite a sign ambiguity in the developed quaternion estimate.

Simulation results are provided in Section 6.7 that illustrate the performance of the

developed controller.









[79] R. Kelly, "Robust asymptotically stable visual servoing of planar manipula-
tor," IEEE Trans. Robot. Automat., vol. 12, no. 5, pp. 759-766, 1996.

[80] B. Bishop and M. W. Spong, "Adaptive calibration and control of 2D
monocular visual servo system," in Proc. IFAC Symp. Robot Control, 1997,
pp. 525-530.

[81] L. Hsu and P. L. S. Aquino, "Adaptive visual tracking with uncertain
manipulator dynamics and uncalibrated camera," in Proc. IEEE Conf.
Decision Control, 1999, pp. 1248-1253.

[82] C. J. Taylor and J. P. Ostrowski, "Robust vision-based pose control," in Proc.
IEEE Int. Conf. Robot. Automat., 2000, pp. 2734-2740.

[83] E. Zergeroglu, D. M. Dawson, M. de Queiroz, and A. Behal, "Vision-based
nonlinear tracking controllers in the presence of parametric uncertainty,"
IEEE/ASME Trans. Mechatr., vol. 6, no. 3, pp. 322-337, 2001.

[84] W. E. Dixon, D. M. Dawson, E. Zergeroglu, and A. Behal, "Adaptive
tracking control of a wheeled mobile robot via an uncalibrated camera
system," IEEE Trans. Syst., Man, Cybern., Part B: Cybern., vol. 31, no. 3,
pp. 341-352, 2001.

[85] A. Astolfi, L. Hsu, M. Netto, and R. Ortega, "Two solutions to the adaptive
visual servoing problem," IEEE Trans. Robot. Automat., vol. 18, no. 3, pp.
387-392, 2002.

[86] Y. Liu, H. Wang, C. Wang, and K. Lam, "Uncalibrated visual servoing of
robots using a depth-independent interaction matrix," IEEE Trans. Robot.,
vol. 22, no. 4, pp. 804-817, 2006.

[87] Y. Fang, W. E. Dixon, D. M. Dawson, and J. Chen, "An exponential class
of model-free visual servoing controllers in the presence of uncertain camera
calibration," Int. J. Robot. Automat., vol. 21, 2006.

[88] G. Hu, N. Gans, and W. E. Dixon, "Quaternion-based visual servo control in
the presence of camera calibration error," in IEEE Multi-Conf. Syst. Control,
2007, pp. 1492-1497.

[89] J. Chen, A. Behal, D. M. Dawson, and W. E. Dixon, "Adaptive visual
servoing in the presence of intrinsic calibration uncertainty," in Proc. IEEE
Conf. Decision Control, 2003, pp. 5396-5401.

[90] B. Boufama and R. Mohr, "Epipole and fundamental matrix estimation
using virtual parallax," in Proc. IEEE Int. Conf. Computer Vision, 1995, pp.
1030-1036.

[91] J. Shi and C. Tomasi, "Good features to track," in Proc. IEEE Int. Conf.
Computer Vision Pattern Recognition, 1994, pp. 593-600.















CHAPTER 8
CONCLUSIONS

In this dissertation, visual servo control algorithms and architectures are

developed that exploit the visual feedback from a camera system to achieve a

tracking or regulation control objective for a rigid-body object (e.g., the end-

effector of a robot manipulator, a satellite, an autonomous vehicle) identified by a

patch of feature points. These algorithms and architectures can be used widely in

the navigation and control applications in robotics and autonomous systems. The

visual servo control problem in this dissertation was separated into five parts: 1)

visual servo tracking control via a quaternion formulation; 2) collaborative visual

servo tracking control using a daisy-chaining approach; 3) visual servo tracking

control using a central catadioptric camera; 4) robust visual servo control in

presence of camera calibration uncertainty; and 5) combined robust and adaptive

visual servo control via an uncalibrated camera.

An adaptive visual servo tracking control method via a quaternion formulation

is first developed that achieves asymptotic tracking of a rigid-body object to a

desired trajectory determined by a sequence of images. By developing the error

systems and controllers based on a homography decomposition, the singularity

associated with the typical image-Jacobian is eliminated. By utilizing the quater-

nion formulation, a singularity-free error system is obtained. A homography-based

rotation and translation controller is proven to yield the tracking result through

a Lyapunov-based stability analysis. Based on the result for the camera-in-hand

configuration problem, a camera-to-hand extension is given to enable a rigid-

body object to track a desired trajectory. Simulation and experiment results are

provided to show the performance of the proposed visual servo controllers.


















Figure 6-4: Image-space error in pixels between p_i(t) and p_i*. In the figure, "O"
denotes the initial positions of the 4 feature points in the image, and "*" denotes
the corresponding final positions of the feature points.












Figure 6-5: Image-space error in pixels between p_i(t) and p_i* shown in a 3D graph.
In the figure, "O" denotes the initial positions of the 4 feature points in the image,
and "*" denotes the corresponding final positions of the feature points.

















CHAPTER 4
COLLABORATIVE VISUAL SERVO TRACKING CONTROL VIA A
DAISY-CHAINING APPROACH

4.1 Introduction

In this chapter, a collaborative trajectory tracking problem is considered for

a six DOF rigid-body object (e.g., an autonomous vehicle) identified by a planar

patch of feature points. Unlike typical visual servo controllers that require either

the camera or the target to remain stationary, a unique aspect of the development

in this chapter is that a moving monocular camera (e.g., a camera mounted on

an unmanned air vehicle (UAV)) is used to provide feedback to a moving control

object. The control objective is for the object to track a desired trajectory that

is encoded by a prerecorded video obtained from a fixed camera (e.g., a camera

mounted on a satellite, a camera mounted on a building).

Several challenges must be resolved to achieve this unexplored control ob-

jective. The relative velocity between the moving planar patch of feature points

and the moving camera presents a significant challenge. By using a daisy-chaining

approach (e.g., [16-19]), Euclidean homography relationships between different

camera coordinate frames and feature point patch coordinate frames are developed.

These homographies are used to relate coordinate frames attached to the moving

camera, the reference object, the control object, and the object used to record

the desired trajectory. Another challenge is that for general six DOF motion by

both the camera and the control object, the normal to the planar patch associated

with the object is unknown. By decomposing the homography relationships, the

normal to the planar patch can be obtained. Likewise, the distance between the

moving camera, the moving control object, and the reference object are unknown.









7.2 Camera Geometry and Assumptions

The camera geometry for this chapter is shown in Figure 2-2 and the corre-

sponding Euclidean and image-space relationships are developed in Sections 2.1 and

2.2. For convenience in the following development, the camera calibration matrix A

is rewritten as

A = \begin{bmatrix} a & -a\cot\phi & u_0 \\ 0 & \dfrac{b}{\sin\phi} & v_0 \\ 0 & 0 & 1 \end{bmatrix}
 \triangleq \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & 1 \end{bmatrix}.     (7-1)

Based on the physical meaning of the elements of A, the diagonal calibration
elements are positive (i.e., a₁₁, a₂₂ > 0).

The following two assumptions are made for convenience in the controller
development. They are reasonable enough that they can be considered properties of
the vision system under consideration.

Assumption 1: The bounds of a₁₁ and a₂₂ are assumed to be known as

\underline{\zeta}_{a11} \le a_{11} \le \bar{\zeta}_{a11}, \qquad \underline{\zeta}_{a22} \le a_{22} \le \bar{\zeta}_{a22}.     (7-2)

The absolute values of a₁₂, a₁₃, a₂₃ are upper bounded as

|a_{12}| \le \bar{\zeta}_{a12}, \qquad |a_{13}| \le \bar{\zeta}_{a13}, \qquad |a_{23}| \le \bar{\zeta}_{a23}.     (7-3)

In (7-2) and (7-3), ζ̲_{a11}, ζ̄_{a11}, ζ̲_{a22}, ζ̄_{a22}, ζ̄_{a12}, ζ̄_{a13} and ζ̄_{a23} are known positive
constants.

Assumption 2: The reference plane is within the camera's FOV and not at
infinity. That is, there exist positive constants ζ̲_z and ζ̄_z such that

\underline{\zeta}_z \le z_i(t) \le \bar{\zeta}_z.     (7-4)








the moving feature point patch can be obtained. Likewise, the distance between the

moving camera, the moving planar patch, and a reference patch are unknown. By

using the depth ratios obtained from the homography decomposition, the unknown

distance is related to an unknown constant parameter. A Lyapunov-based adaptive

estimation law is designed to compensate for the unknown constant parameter.

Since the moving camera could be attached to a remotely piloted vehicle with

arbitrary rotations, another challenge is to eliminate potential singularities in

the rotation parameterization obtained from the homography decomposition.

To address this issue, homography-based visual servo control techniques (e.g.,

[10, 20-22]) are combined with quaternion-based control methods (e.g., [14,23,24]),

to eliminate singularities associated with the image Jacobian and the rotation

error system. By using the quaternion parameterization, the resulting closed-

loop rotation error system can be stabilized by a proportional rotation controller

combined with a feedforward term that is a function of the desired trajectory.

3) Visual servo tracking control using a central catadioptric camera.

Visual servo controllers require the image-space coordinates of some set of

Euclidean feature points in the control development; hence, the feature points

must remain in the camera's field-of-view (FOV). Since the FOV of conventional

perspective cameras (e.g., pinhole cameras) is restricted, keeping the feature points

in the FOV is a fundamental challenge for visual servo control algorithms. The fun-

damental nature of the FOV problem has resulted in a variety of control and path

planning methods (e.g., [6,7,25-31]). An alternative solution to the aforementioned

algorithmic approaches to resolve the FOV issue is to use advanced optics such

as omnidirectional cameras. Catadioptric cameras (one type of omnidirectional

camera) are devices which use both mirrors (reflective or catadioptric elements) and

lenses (refractive or dioptric elements) to form images [32]. Catadioptric cameras

with a single effective viewpoint are classified as central catadioptric cameras,


























Figure 2-1: Coordinate frame relationships between a camera viewing a planar
patch at different spatiotemporal instances. The coordinate frames F, F* and F_d
are attached to the current, reference and desired locations, respectively.

reference location for the camera, and the coordinate frame Fd that is attached
to the desired location of the camera. When the desired location of the camera
is a constant, Fd can be chosen the same as F* as shown in Figure 2-2. That
is, the tracking problem becomes a more particular regulation problem for the
configuration in Figure 2-2.
The vectors m̄_i(t), m̄_i*, m̄_di(t) ∈ ℝ³ in Figure 2-1 are defined as

\bar{m}_i \triangleq \begin{bmatrix} x_i(t) & y_i(t) & z_i(t) \end{bmatrix}^T, \quad
\bar{m}^*_i \triangleq \begin{bmatrix} x^*_i & y^*_i & z^*_i \end{bmatrix}^T, \quad
\bar{m}_{di} \triangleq \begin{bmatrix} x_{di}(t) & y_{di}(t) & z_{di}(t) \end{bmatrix}^T,     (2-1)

where x_i(t), y_i(t), z_i(t) ∈ ℝ, x_i*, y_i*, z_i* ∈ ℝ, and x_di(t), y_di(t), z_di(t) ∈ ℝ denote the
Euclidean coordinates of the feature points O_i expressed in the frames F, F* and
F_d, respectively. From standard Euclidean geometry, relationships between m̄_i(t),








Since the sign of qo(t) is restricted (i.e., assumed to be) positive, then a unique

solution for q (t) can be determined from (6-20) and (6-21).

Based on the similarity between R (t) and R (t) as stated in (6-7), the expres-

sions in (6-19) and (6-20) provide motivation to develop the quaternion estimate

as


qo 1 + tr(R) (6-22)
2

= u sin ( = 3 tr(R). (6-23)


In (6-22) and (6-23), R(t) is the estimated rotation matrix introduced in (6-6)

that is computed from the homography decomposition. Since R (t) is similar to

R (t) (see (6-7)), R(t) is guaranteed to have an eigenvalue of 1, where u (t) is the

unit eigenvector that can be computed from the eigenvalue of 1. Since R(t) is

not guaranteed to be a true rotation matrix (and it will not be in general), the

relationships in (2-15) and (6-21) can not be developed and used to eliminate the

sign ambiguity of the eigenvector u (t). However, the subsequent stability analysis

and simulation results indicate that the same stability result is obtained invariant

of the sign of u(t). Once the initial sign of u(t) is chosen, the same sign can be used

for subsequent computations.
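
A numerical sketch of this estimate construction is given below (illustrative only; the eigenvector routine, clamping of round-off, and sign handling are implementation choices rather than part of the development).

```python
import numpy as np

def quaternion_estimate(R_hat):
    """Form (q0_hat, qv_hat) from the estimated matrix R_hat = A R A^{-1}
    as in (6-22)-(6-23).  R_hat is similar to R, so tr(R_hat) = tr(R) and
    R_hat has an eigenvalue of 1; u_hat is the associated unit eigenvector.
    The sign of u_hat is undetermined, which is the ambiguity discussed above."""
    q0_hat = 0.5 * np.sqrt(max(1.0 + np.trace(R_hat), 0.0))
    w, V = np.linalg.eig(R_hat)
    k = int(np.argmin(np.abs(w - 1.0)))      # eigenvalue closest to 1
    u_hat = np.real(V[:, k])
    u_hat = u_hat / np.linalg.norm(u_hat)
    qv_hat = 0.5 * u_hat * np.sqrt(max(3.0 - np.trace(R_hat), 0.0))
    return q0_hat, qv_hat
```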

6.4.2 Estimate Relationships

Based on the fact that R (t) is similar to R (t) (see (6-7)), the properties that

similar matrices have the same trace and eigenvalues can be used to relate the

quaternion estimate and the actual quaternion. Since similar matrices have the

same trace, (6-19) and (6-22) can be used to conclude that


\hat{q}_0 = q_0.     (6-24)

As stated earlier, since similar matrices have the same eigenvalues, R(t) is guaran-

teed to have an eigenvalue of 1 with the associated eigenvector u (t). The following









where K_v ∈ ℝ denotes a positive control gain, and e(t) ∈ ℝ³ denotes the translation
error, which is defined in terms of m̄₁(t), m̄₁*, and the depth ratio z₁/z₁*, where m̄₁(t)
and m̄₁* can be computed from (6-1) and (6-2), respectively, and the ratio z₁/z₁* can
be computed from the decomposition of the estimated Euclidean homography in
(6-4). The open-loop translation error system can be determined as

\dot{e} = -\frac{1}{z_1} v_c + \bar{m}_1^\times \omega_c.     (6-38)

After substituting (6-33) and (6-36) into (6-38), the resulting closed-loop transla-

tion error system can be determined as


e = A + [K v]x) e K [mfl]x v. (639)


6.6 Stability Analysis

As stated previously, the quaternion estimate q̂(t) has a sign ambiguity, but
either choice of the sign will yield the same stability result. The following analysis
is developed for the case where

\hat{q}_v = \gamma A q_v.     (6-40)

A discussion is provided at the end of the analysis that describes how the stability
can be proven for the case when

\hat{q}_v = -\gamma A q_v.

Theorem 6.1: The controller given in (6-33) and (6-36) ensures asymptotic
regulation in the sense that

\|q_v(t)\| \to 0, \quad \|e(t)\| \to 0 \quad \text{as } t \to \infty,     (6-41)









which are desirable because they yield pure perspective images [33]. In Chapter 5,

a visual servo control scheme is presented that yields a tracking result for a camera-

in-hand central catadioptric camera system. The tracking controller is developed

based on the relative relationships of a central catadioptric camera between the

current, reference, and desired camera poses. To find the relative camera pose rela-

tionships, homographies are computed based on the projection model of the central

catadioptric camera [33-36]. Geyer and Daniilidis [36] proposed a unifying theory

to show that all central catadioptric systems are isomorphic to projective mappings

from the sphere to a plane with a projection center on the perpendicular axis to the

plane. By constructing links between the projected coordinates on the sphere, the

homographies up to scalar multiples can be obtained. Various methods can then

be applied to decompose the Euclidean homographies to find the corresponding

rotation matrices, and depth ratios. The rotation error system is based on the

quaternion formulation which has a full-rank interaction matrix. Lyapunov-based

methods are utilized to develop the controller and to prove asymptotic tracking.

4) Robust visual servo control.

In vision-based control, exact calibration is often required so that the image-

space sensor measurements can be related to the Euclidean or joint space for

control implementation. Specifically, a camera model (e.g., the pinhole model)

is often required to relate pixel coordinates from an image to the (normalized)

Euclidean coordinates. The camera model is typically assumed to be exactly known

(i.e., the intrinsic calibration parameters are assumed to be known); however,

despite the availability of several popular calibration methods (cf. [37-43]),

camera calibration can be time consuming, requires some level of expertise,

and has inherent inaccuracies. If the calibration parameters are not exactly known,

performance degradation and potential unpredictable response from the visual

servo controller may occur. The goal of this research is to develop a visual servo









3-10 Pixel coordinate p(t) of the current pose of the four feature points in the
tracking Matlab simulation. The upper figure is for the u(t) component
and the bottom figure is for the v(t) component. . . . . . . . . . . . . . . 50

3-11 Tracking error p(t) − p_d(t) (in pixels) of the four feature points in the
tracking Matlab simulation. The upper figure is for the u(t) − u_d(t) com-
ponent and the bottom figure is for the v(t) − v_d(t) component. . . . . . 51

3-12 Linear camera velocity input v_c(t) in the tracking Matlab simulation. . . 51

3-13 Angular camera velocity input ω_c(t) in the tracking Matlab simulation. . 52

3-14 Adaptive on-line estimate of z₁* in the tracking Matlab simulation. . . . 52

3-15 Translation error e(t) in the tracking experiment. . . . . . . . . . . . . . 53

3-16 Rotation quaternion error q(t) in the tracking experiment. . . . . . . . . 53

3-17 Pixel coordinate p_d(t) of the four feature points in a sequence of desired
images in the tracking experiment. The upper figure is for the u_d(t) com-
ponent and the bottom figure is for the v_d(t) component. . . . . . . . . . 54

3-18 Pixel coordinate p(t) of the current pose of the four feature points in the
tracking experiment. The upper figure is for the u(t) component and the
bottom figure is for the v(t) component. . . . . . . . . . . . . . . . . . . 54

3-19 Tracking error p(t) − p_d(t) (in pixels) of the four feature points in the
tracking experiment. The upper figure is for the u(t) − u_d(t) component
and the bottom figure is for the v(t) − v_d(t) component. . . . . . . . . . 55

3-20 Linear camera velocity input v_c(t) in the tracking experiment. . . . . . . 55

3-21 Angular camera velocity input ω_c(t) in the tracking experiment. . . . . . 56

3-22 Adaptive on-line estimate of z₁* in the tracking experiment. . . . . . . . 56

3-23 Translation error e(t) in the regulation experiment. . . . . . . . . . . . . 57

3-24 Rotation quaternion error q(t) in the regulation experiment. . . . . . . . 57

3-25 Pixel coordinate p(t) (in pixels) of the current pose of the four feature
points in the regulation experiment. The upper figure is for the u(t) com-
ponent and the bottom figure is for the v(t) component. . . . . . . . . . 58

3-26 Regulation error p(t) − p* (in pixels) of the four feature points in the reg-
ulation experiment. The upper figure is for the u(t) − u* component
and the bottom figure is for the v(t) − v* component. . . . . . . . . . . . 58

3-27 Linear camera velocity input v_c(t) in the regulation experiment. . . . . . 59

3-28 Angular camera velocity input ω_c(t) in the regulation experiment. . . . . 59






























To my mother Huanyin Zhang, my father Huade Hu, and my wife Yi Feng for their

endless love and support


















































Figure 6-6: Linear camera velocity control input.





Figure 6-7: Angular camera velocity control input.












Proof: Let V₁(q_v, q₀) ∈ ℝ denote the following non-negative function:

V_1 \triangleq q_v^T q_v + (1 - q_0)^2.     (7-28)

Based on the open-loop error system in (7-15), the time-derivative of V₁(t) can be
determined as

\dot{V}_1 = 2 q_v^T \dot{q}_v - 2(1 - q_0)\dot{q}_0 = q_v^T \omega_c = q_{v1}\omega_{c1} + q_{v2}\omega_{c2} + q_{v3}\omega_{c3}.     (7-29)

After substituting (7-20) for q_v(t) and substituting (7-25) for ω_c(t), the expression
in (7-29) can be simplified as


(k 11-, q+ k22 -I2 q2
a11 a22


+ k.. ::


, 12
a22


+ k,21 a1 2
a223
(7-30)


+ k -1223
\ a22


a13) qv1qv3 + k331all v3


1
[2 k22a23qv2qv3 + k.: ,,, J
a22

After completing the squares on each of the bracketed terms in (7-30), the expres-

sion in (7-30) can be written as


( 1 1 2 9
(k l- qa 1 + k,22-22 2 + k. :
a 011 a2112


S V1 -
all 1

ll
a11 [ v

+a11 (k31



a22 v


k a212 2
2a22


1
2


k1 2
4 ali


a12a23
a 22


a(12
Sa2


2


+ a~ k-21
a22 (

a13 ) v3 )

S)2
-23 ~13


+ a22 (k32


1 a2 2
4 a11a22





q]

!k2 a23 21
4 a22 q3


ll1
a1


(7 31)


11









[92] C. Tomasi and T. Kanade, "Detection and tracking of point features,"
Carnegie Mellon University, Tech. Rep., 1991.

[93] 0. Faugeras and F. Lustman, "Motion and structure from motion in a
piecewise planar environment," Int. J. Pattern Recognition and Artificial
Intelligence, vol. 2, no. 3, pp. 485-508, 1988.

[94] Z. Zhang and A. R. Hanson, "Scaled Euclidean 3D reconstruction based on
externally uncalibrated cameras," in IEEE Symp. Computer Vision, 1995, pp.
37-42.

[95] Y. Fang, A. Behal, W. E. Dixon, and D. M. Dawson, "Adaptive 2.5D visual
servoing of kinematically redundant robot manipulators," in Proc. IEEE
Conf. Decision Control, 2002, pp. 2860-2865.

[96] J. J. Slotine and W. Li, Applied Nonlinear Control. Englewood Cliff, NJ:
Prentice Hall, Inc., 1991.

[97] G. Bradski, "The OpenCV library," Dr. Dobb's Journal of Software Tools,
vol. 25, pp. 120, 122-125, 2000.

[98] M. Galassi, J. Davies, J. Theiler, B. Gough, G. Jungman, M. Booth, and
F. Rossi, GNU Scientific Library: Reference Manual, Network Theory Ltd.,
Bristol, UK, 2005.

[99] P. K. Allen, A. Timcenko, B. Yoshimi, and P. Michelman, "Automated
tracking and grasping of a moving object with a robotic hand-eye system,"
IEEE Trans. Robot. Automat., vol. 9, no. 2, pp. 152-165, 1993.

[100] G. D. Hager, W.-C. Chang, and A. S. Morse, "Robot hand-eye coordination
based on stereo vision," IEEE Contr. Syst. Mag., vol. 15, no. 1, pp. 30-39,
1995.

[101] S. Wijesoma, D. Wolfe, and R. Richards, "Eye-to-hand coordination for
vision-guided robot control applications," Int. J. Robot. Res., vol. 12, no. 1,
pp. 65-78, 1993.

[102] N. Gans, G. Hu, and W. E. Dixon, Complexity and Nonlinearity in Au-
tonomous Robotics, Encyclopedia of Complexity and System Science.
Springer, to appear 2008, ch. Image-Based State Estimation.

[103] A. Almansa, A. Desolneux, and S. Vamech, "Vanishing point detection
without any a priori information," IEEE Trans. Pattern Anal. Machine
Intell., vol. 25, no. 4, pp. 502-507, 2003.









































Figure 6-2: Quaternion rotation error.




Figure 6-3: Quaternion rotation error for comparison with different sign.










This parameterization facilitates the subsequent problem formulation, control
development, and stability analysis since the unit quaternion provides a globally
nonsingular parameterization of the rotation matrix.
Given the angle-axis parameters θ(t) and k(t), the unit quaternion vector q(t) can be constructed as

q(t) \triangleq \begin{bmatrix} q_0(t) \\ q_v(t) \end{bmatrix} = \begin{bmatrix} \cos\left(\theta(t)/2\right) \\ k(t)\sin\left(\theta(t)/2\right) \end{bmatrix}.     (2-13)

Based on (2-13), the rotation matrix in (2-9) can be expressed as

R(q) = I_3 + 2q_0 q_v^\times + 2(q_v^\times)^2 = (q_0^2 - q_v^T q_v) I_3 + 2 q_v q_v^T + 2 q_0 q_v^\times.     (2-14)

The rotation matrix in (2-14) is typical in the robotics literature, where the moving
coordinate system is expressed in terms of a fixed coordinate system (typically the
coordinate system attached to the base frame). However, the typical representation
of the rotation matrix in the aerospace literature (e.g., [15]) is

R(q) = (q_0^2 - q_v^T q_v) I_3 + 2 q_v q_v^T - 2 q_0 q_v^\times.     (2-15)

The difference is due to the fact that the rotation matrix in (2-15) (which is used
in the current dissertation) relates the moving coordinate frame F to the fixed
coordinate frame F* with the corresponding states expressed in F. The rotation
matrix in (2-15) can be expanded as

R(q) = \begin{bmatrix}
q_0^2 + q_{v1}^2 - q_{v2}^2 - q_{v3}^2 & 2(q_{v1}q_{v2} + q_{v3}q_0) & 2(q_{v1}q_{v3} - q_{v2}q_0) \\
2(q_{v1}q_{v2} - q_{v3}q_0) & q_0^2 - q_{v1}^2 + q_{v2}^2 - q_{v3}^2 & 2(q_{v2}q_{v3} + q_{v1}q_0) \\
2(q_{v1}q_{v3} + q_{v2}q_0) & 2(q_{v2}q_{v3} - q_{v1}q_0) & q_0^2 - q_{v1}^2 - q_{v2}^2 + q_{v3}^2
\end{bmatrix}.     (2-16)

From (2-16) various approaches could be used to determine qo(t) and qv(t);
however, numerical significance of the resulting computations can be lost if qo(t)
is close to zero [15]. Shuster [15] developed a method to determine qo(t) and

q (t) that provides robustness against such computational issues. Specifically, the









[13] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision.
New York: NY: Cambridge University Press, 2000.

[14] G. Hu, W. E. Dixon, S. Gupta, and N. Fitz-coy, "A quaternion formulation
for homography-based visual servo control," in Proc. IEEE Int. Conf. Robot.
Automat., 2006, pp. 2391-2396.

[15] M. Shuster, "A survey of attitude representations," J. Astronautical Sciences,
vol. 41, no. 4, pp. 439-518, 1993.

[16] G. Hu, S. Mehta, N. Gans, and W. E. Dixon, "Daisy chaining based visual
servo control part I: Adaptive quaternion-based tracking control," in IEEE
Multi-Conf. Syst. Control, 2007, pp. 1474-1479.

[17] G. Hu, N. Gans, S. Mehta, and W. E. Dixon, "Daisy chaining based visual
servo control part II: Extensions, applications and open problems," in IEEE
Multi-Conf. Syst. Control, 2007, pp. 729-734.

[18] S. Mehta, W. E. Dixon, D. MacArthur, and C. D. Crane, "Visual servo
control of an unmanned ground vehicle via a moving airborne monocular
camera," in Proc. American Control Conf., 2006, pp. 5276-5211.

[19] S. Mehta, G. Hu, N. Gans, and W. E. Dixon, "Adaptive vision-based
collaborative tracking control of an ugv via a moving airborne camera: A
daisy chaining approach," in Proc. IEEE Conf. Decision Control, 2006, pp.
3867-3872.

[20] E. Malis, F. Chaumette, and S. Bodet, "2 1/2 D visual servoing," IEEE
Trans. Robot. Automat., vol. 15, no. 2, pp. 238-250, 1999.

[21] E. Malis and F. Chaumette, "Theoretical improvements in the stability
analysis of a new class of model-free visual servoing methods," IEEE Trans.
Robot. Automat., vol. 18, no. 2, pp. 176-186, 2002.

[22] Y. Fang, W. E. Dixon, D. M. Dawson, and P. Chawda, "Homography-based
visual servoing of wheeled mobile robots," IEEE Trans. Syst., Man, Cybern.,
Part B: Cybern., vol. 35, no. 5, pp. 1041-1050, 2005.

[23] W. E. Dixon, A. Behal, D. M. Dawson, and S. Nagarkatti, Nonlinear Control
of Engineering Systems: A Lyapunov-Based Approach. Birkhauser Boston,
2003.

[24] G. Hu, S. Gupta, N. Fitz-coy, and W. E. Dixon, "Lyapunov-based visual
servo tracking control via a quaternion formulation," in Proc. IEEE Conf.
Decision Control, 2006, pp. 3861-3866.

[25] P. Corke and S. Hutchinson, "A new partitioned approach to image-based
visual servo control," IEEE Trans. Robot. Automat., vol. 17, no. 4, pp.
507-515, 2001.









By using the depth ratios obtained from the homography decomposition, the

unknown time-varying distance is related to an unknown constant parameter. A

Lyapunov-based adaptive estimation law is designed to compensate for the un-

known constant parameter. Since the moving camera could be attached to a remotely
piloted vehicle with arbitrary rotations, a parameterization is required that is
valid over a large (possibly unbounded) domain. Additionally, since this work is

motivated by problems in the aerospace community, homography-based visual servo

control techniques (e.g., [10, 20, 22]) are combined with quaternion-based control

methods (e.g., [14, 23, 24]) to facilitate large rotations. By using the quaternion

parameterization, the resulting closed-loop rotation error system can be stabilized

by a proportional rotation controller combined with a feedforward term that is a

function of the desired trajectory.

4.2 Problem Scenario

Over the past decade, a variety of visual servo controllers have been addressed

for both camera-to-hand and camera-in-hand configurations (e.g., see [1, 99-

101]). For visual servo control applications that exploit either of these camera

configurations, either the object or the camera is required to remain stationary.

In contrast to typical camera-to-hand or camera-in-hand visual servo control

configurations, a moving airborne monocular camera (e.g., a camera attached to

a remote controlled aircraft, a camera mounted on a satellite) is used by Mehta

et al. [18, 19] to provide pose measurements of a moving sensorless unmanned

ground vehicle (UGV) relative to a goal configuration. The results in [18, 19]

are restricted to three DOF, and the rotation error system is encoded by Euler

angle-axis parameterization.

Consider a stationary coordinate frame ZR that is attached to a camera and

a time-varying coordinate frame Fd that is attached to some object (e.g., an

autonomous vehicle) as depicted in Figure 4-1. The object is identified in an image









where z_i(t)/L_i(t) can be computed from (5-6) and (5-17), z_di(t)/L_di(t) can be computed from
similar relationships, and the depth ratios α_i(t), α_di(t) can be obtained from the
homography decompositions in (5-16).

5.5 Control Development

5.5.1 Open-Loop Error System

The open-loop rotation error system can be developed as [23]

\begin{bmatrix} \dot{q}_0 \\ \dot{q}_v \end{bmatrix} = \frac{1}{2} \begin{bmatrix} -q_v^T \\ q_0 I_3 + q_v^\times \end{bmatrix} \left( \omega_c - \tilde{R}\omega_{cd} \right),     (5-28)

where q(t) = (q₀(t), q_vᵀ(t))ᵀ, ω_c(t) ∈ ℝ³ denotes the camera angular velocity control
input, and ω_cd(t) ∈ ℝ³ denotes the desired angular velocity of the camera that is
assumed to be a priori generated as a bounded and continuous function (see [10] for
a discussion regarding the development of a smooth desired trajectory from a series
of images).

Based on (5-7) and (5-10), the derivative of mdi(t) is obtained as


mdi = ( -mdsi (5-29)

where Pdi(t) E IR is defined as

V*/L*
di -- di
Zdi Zdi/Ldi

Based on (5-29) and the fact that [22]

\dot{\bar{m}}_{di} = -v_{cd} + \bar{m}_{di}^\times \omega_{cd},     (5-30)

where Vcd (t) E IR3 denotes the desired linear velocity of the camera expressed in -d,

it can be obtained that

z* X d (
Vcd = cmdsWcd z -T i (5-31)
Pdi t Pdi
















APPENDIX A
UNIT NORM PROPERTY FOR THE QUATERNION ERROR

Property: The quaternion error (q̃₀(t), q̃_vᵀ(t))ᵀ defined in (3-5) has a unit
norm given that q(t) and q_d(t) are two unit quaternions.

Proof: The quaternion components in (3-5) can be expanded as

\tilde{q}_0 = q_0 q_{0d} + q_v^T q_{vd} = q_{v1}q_{vd1} + q_{v2}q_{vd2} + q_{v3}q_{vd3} + q_0 q_{0d}     (A-1)

and

\tilde{q}_v = q_0 q_{vd} - q_{0d} q_v + q_v^\times q_{vd}
 = \begin{bmatrix}
 q_0 q_{vd1} + q_{v2}q_{vd3} - q_{v3}q_{vd2} - q_{v1}q_{0d} \\
 q_0 q_{vd2} - q_{v1}q_{vd3} + q_{v3}q_{vd1} - q_{v2}q_{0d} \\
 q_0 q_{vd3} + q_{v1}q_{vd2} - q_{v2}q_{vd1} - q_{v3}q_{0d}
 \end{bmatrix}.     (A-2)

Based on (A-1) and (A-2),

\tilde{q}_0^2 + \tilde{q}_v^T \tilde{q}_v = (q_{v1}q_{vd1} + q_{v2}q_{vd2} + q_{v3}q_{vd3} + q_0 q_{0d})^2
 + (q_0 q_{vd1} + q_{v2}q_{vd3} - q_{v3}q_{vd2} - q_{v1}q_{0d})^2
 + (q_0 q_{vd2} - q_{v1}q_{vd3} + q_{v3}q_{vd1} - q_{v2}q_{0d})^2
 + (q_0 q_{vd3} + q_{v1}q_{vd2} - q_{v2}q_{vd1} - q_{v3}q_{0d})^2.















CHAPTER 7
COMBINED ROBUST AND ADAPTIVE HOMOGRAPHY-BASED VISUAL
SERVO CONTROL VIA AN UNCALIBRATED CAMERA

7.1 Introduction

In this chapter, a new combined robust and adaptive visual servo controller

is developed to asymptotically regulate the feature points of a rigid-body object

(identified a planar patch of feature points) in an image to the desired feature

point locations while also regulating the six DOF pose of the camera (which is

affixed to the object). These dual objectives are achieved by using a homography-

based approach that exploits both image-space and reconstructed Euclidean

information in the feedback loop. In comparison to pure image-based feedback

approaches, some advantages of using a homography-based method include:

realizable Euclidean camera trajectories (see Chaumette [48] and Corke and

Hutchinson [25] for a discussion of Chaumette's Conundrum); a nonsingular image-

Jacobian; and both the camera position and orientation and the feature point

coordinates are included in the error system. Since some image-space information is

used in the feedback-loop of the developed homography-based controller, the image

features are less likely to leave the FOV in comparison with pure position-based

approaches. The developed controller is composed of the same adaptive translation

controller as in the preliminary results in Chen et al. [89] and a new robust rotation

controller. The contribution of the result is the development of the robust angular

velocity controller that accommodates for the time-varying uncertain scaling factor

by exploiting the upper triangular form of the rotation error system and the fact

that the diagonal elements of the camera calibration matrix are positive.









algorithm is implemented for Jacobian estimation, and a dynamic Gauss-Newton

method is used to minimize the squared error in the image plane.

Robust control approaches based on static best-guess estimation of the

calibration matrix have been developed to solve the uncalibrated visual servo

regulation problem (cf. [21, 82, 87, 88]). Specifically, under a set of assumptions

on the rotation and calibration matrix, a kinematic controller was developed by

Taylor and Ostrowski [82] that utilizes a constant, best-guess estimate of the

calibration parameters to achieve local set-point regulation for the six DOF visual

servo control problem. Homography-based visual servoing methods using best-guess

estimation are used by Malis and Chaumette [21] and Fang et al. [87] to achieve

asymptotic or exponential regulation with respect to both camera and hand-eye

calibration errors for the six DOF problem.

The development of traditional adaptive control methods to compensate for

uncertainty in the camera calibration matrix is inhibited because of the time-

varying uncertainty injected in the transformation from the normalization of the

Euclidean coordinates. As a result, initial adaptive control results such as [79-85]

were limited to scenarios where the optic axis of the camera was assumed to be

perpendicular with the plane formed by the feature points (i.e., the time-varying

uncertainty is reduced to a constant uncertainty) or assumed an additional sensor

(e.g., ultrasonic sensors, laser-based sensors, additional cameras) could be used to

measure the depth information.

More recent approaches exploit geometric relationships between multiple

spatiotemporal views of an object to transform the time-varying uncertainty into

known time-varying terms multiplied by an unknown constant [9,21, 86-89]. In

Ruf et al. [9], an on-line calibration algorithm was developed for position-based

visual servoing. In Liu et al. [86], an adaptive image-based visual servo controller

was developed that regulated the feature points in an image to desired locations.















APPENDIX B
ONE PROPERTY OF UNIT QUATERNIONS

Property: (I₃ − q_v^×)⁻¹ q_v = q_v.

Proof: The term I₃ − q_v^× can be expanded as

I_3 - q_v^\times = \begin{bmatrix} 1 & q_{v3} & -q_{v2} \\ -q_{v3} & 1 & q_{v1} \\ q_{v2} & -q_{v1} & 1 \end{bmatrix}.     (B-1)

Taking the inverse of the expanded matrix in (B-1) gives

(I_3 - q_v^\times)^{-1} = \frac{1}{q_{v1}^2 + q_{v2}^2 + q_{v3}^2 + 1}
\begin{bmatrix}
q_{v1}^2 + 1 & -q_{v3} + q_{v1}q_{v2} & q_{v2} + q_{v1}q_{v3} \\
q_{v3} + q_{v1}q_{v2} & q_{v2}^2 + 1 & -q_{v1} + q_{v2}q_{v3} \\
-q_{v2} + q_{v1}q_{v3} & q_{v1} + q_{v2}q_{v3} & q_{v3}^2 + 1
\end{bmatrix}.     (B-2)

Multiplying q_v on both sides of (B-2) gives

(I_3 - q_v^\times)^{-1} q_v = \frac{1}{q_{v1}^2 + q_{v2}^2 + q_{v3}^2 + 1}
\begin{bmatrix}
q_{v1}(q_{v1}^2 + 1) + q_{v2}(-q_{v3} + q_{v1}q_{v2}) + q_{v3}(q_{v2} + q_{v1}q_{v3}) \\
q_{v1}(q_{v3} + q_{v1}q_{v2}) + q_{v2}(q_{v2}^2 + 1) + q_{v3}(-q_{v1} + q_{v2}q_{v3}) \\
q_{v1}(-q_{v2} + q_{v1}q_{v3}) + q_{v2}(q_{v1} + q_{v2}q_{v3}) + q_{v3}(q_{v3}^2 + 1)
\end{bmatrix}
 = q_v. \qquad \square








provided the control gains satisfy the sufficient conditions given in (7-26).
Proof: See proofs in Propositions 7.1 and 7.2.
7.5 Simulation Results
Numerical simulations were performed to illustrate the performance of the
controller given in (7-25) and (7-35) and the adaptive law given in (7-36). The
intrinsic camera calibration matrix is given by


A = \begin{bmatrix} 122.5 & -3.77 & 100 \\ 0 & 122.56 & 100 \\ 0 & 0 & 1 \end{bmatrix}.

The camera is assumed to view an object with four coplanar feature points with
the following Euclidean coordinates (in [m]):

O_1 = \begin{bmatrix} 0.1 & 0.1 & 0 \end{bmatrix}^T \quad
O_2 = \begin{bmatrix} 0.1 & -0.1 & 0 \end{bmatrix}^T \quad
O_3 = \begin{bmatrix} -0.1 & 0.1 & 0 \end{bmatrix}^T \quad
O_4 = \begin{bmatrix} -0.1 & -0.1 & 0 \end{bmatrix}^T.

The normalized coordinates of the vanishing points were selected as

\begin{bmatrix} 0.01 & 0.01 & 1 \end{bmatrix}^T, \quad
\begin{bmatrix} 0.01 & -0.01 & 1 \end{bmatrix}^T, \quad
\begin{bmatrix} -0.01 & 0.01 & 1 \end{bmatrix}^T, \quad
\begin{bmatrix} -0.01 & -0.01 & 1 \end{bmatrix}^T.     (7-40)
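
For illustration, the following sketch projects the feature points above to pixel coordinates through the pinhole model p_i = A m_i / z_i used throughout the dissertation. Only A and the object coordinates come from the text; the camera pose (R, t) below is a made-up value chosen purely so the example runs.

```python
import numpy as np

A = np.array([[122.5, -3.77, 100.0],
              [0.0, 122.56, 100.0],
              [0.0, 0.0, 1.0]])

# coplanar feature points of the object (object frame, meters)
O = np.array([[0.1, 0.1, 0.0], [0.1, -0.1, 0.0],
              [-0.1, 0.1, 0.0], [-0.1, -0.1, 0.0]])

R = np.eye(3)                  # hypothetical object-to-camera rotation
t = np.array([0.0, 0.0, 2.0])  # hypothetical translation: object 2 m in front

for s in O:
    m = R @ s + t              # Euclidean coordinates in the camera frame
    p = A @ (m / m[2])         # homogeneous pixel coordinates [u, v, 1]^T
    print(p[:2])
```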






























Figure 3-27: Linear camera velocity input v,(t) in the regulation experiment.










Figure 3-28: Angular camera velocity input wc(t) in the regulation experiment.












Likewise, if q_{v1}(t) has the maximum value in (2-20), then the greatest numerical
accuracy can be obtained by computing q₀(t) and q_v(t) as

q_0 = \frac{R_{23} - R_{32}}{4 q_{v1}}, \quad q_{v1} = \frac{1}{2}\sqrt{R_{11} - R_{22} - R_{33} + 1}, \quad
q_{v2} = \frac{R_{12} + R_{21}}{4 q_{v1}}, \quad q_{v3} = \frac{R_{13} + R_{31}}{4 q_{v1}},     (2-22)

where the sign of q_{v1}(t) is selected so that q₀(t) > 0. If q_{v2}(t) is the maximum, then

q_0 = \frac{R_{31} - R_{13}}{4 q_{v2}}, \quad q_{v1} = \frac{R_{12} + R_{21}}{4 q_{v2}}, \quad
q_{v2} = \frac{1}{2}\sqrt{R_{22} - R_{11} - R_{33} + 1}, \quad q_{v3} = \frac{R_{23} + R_{32}}{4 q_{v2}},     (2-23)

or if q_{v3}(t) is the maximum, then

q_0 = \frac{R_{12} - R_{21}}{4 q_{v3}}, \quad q_{v1} = \frac{R_{13} + R_{31}}{4 q_{v3}}, \quad
q_{v2} = \frac{R_{23} + R_{32}}{4 q_{v3}}, \quad q_{v3} = \frac{1}{2}\sqrt{R_{33} - R_{11} - R_{22} + 1},     (2-24)

where the sign of q_{v2}(t) or q_{v3}(t) is selected so that q₀(t) > 0.

The expressions in (2-21)-(2-24) indicate that given the rotation matrix R(t)

from the homography decomposition, the unit quaternion vector can be determined

that represents the rotation without introducing a singularity. The expressions in

(2-21)-(2-24) will be utilized in the subsequent control development and stability

analysis.
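
A compact implementation of this branch selection is sketched below. It is an illustration rather than the dissertation's own code: it follows the sign convention of (2-15)-(2-16), and the q₀-dominant branch uses the corresponding off-diagonal differences implied by (2-16).

```python
import numpy as np

def rotation_to_quaternion(R):
    """Numerically robust extraction of (q0, qv): pick the largest of
    4*q0^2, 4*qv1^2, 4*qv2^2, 4*qv3^2 from the diagonal of R so that no
    division by a near-zero quantity occurs (cf. (2-22)-(2-24))."""
    tr = np.trace(R)
    cand = np.array([1.0 + tr,
                     1.0 + R[0, 0] - R[1, 1] - R[2, 2],
                     1.0 - R[0, 0] + R[1, 1] - R[2, 2],
                     1.0 - R[0, 0] - R[1, 1] + R[2, 2]])
    k = int(np.argmax(cand))
    s = 0.5 * np.sqrt(cand[k])
    if k == 0:                        # q0 dominant
        q0 = s
        qv = np.array([R[1, 2] - R[2, 1],
                       R[2, 0] - R[0, 2],
                       R[0, 1] - R[1, 0]]) / (4.0 * q0)
    elif k == 1:                      # qv1 dominant, cf. (2-22)
        q0 = (R[1, 2] - R[2, 1]) / (4.0 * s)
        qv = np.array([s, (R[0, 1] + R[1, 0]) / (4.0 * s),
                       (R[0, 2] + R[2, 0]) / (4.0 * s)])
    elif k == 2:                      # qv2 dominant, cf. (2-23)
        q0 = (R[2, 0] - R[0, 2]) / (4.0 * s)
        qv = np.array([(R[0, 1] + R[1, 0]) / (4.0 * s), s,
                       (R[1, 2] + R[2, 1]) / (4.0 * s)])
    else:                             # qv3 dominant, cf. (2-24)
        q0 = (R[0, 1] - R[1, 0]) / (4.0 * s)
        qv = np.array([(R[0, 2] + R[2, 0]) / (4.0 * s),
                       (R[1, 2] + R[2, 1]) / (4.0 * s), s])
    if q0 < 0.0:                      # select the signs so that q0 >= 0
        q0, qv = -q0, -qv
    return q0, qv
```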









7.4 Control Development

7.4.1 Rotation Control Development and Stability Analysis

Based on the relationship in (7-18), the open-loop error system in (7-15), and

the subsequent stability analysis, the rotation controller is designed as


\omega_{c1} = -k_{w1}\hat{q}_{v1} \triangleq -k_{w11}\hat{q}_{v1}     (7-25)
\omega_{c2} = -k_{w2}\hat{q}_{v2} \triangleq -(k_{w21} + k_{w22})\hat{q}_{v2}
\omega_{c3} = -k_{w3}\hat{q}_{v3} \triangleq -(k_{w31} + k_{w32} + k_{w33})\hat{q}_{v3},

where k_{wi} ∈ ℝ, i = 1, 2, 3, and k_{wij} ∈ ℝ, i, j = 1, 2, 3, j ≤ i, are positive

constants. The expressed form of the controller in (7-25) is motivated by the use of

completing the squares in the subsequent stability analysis. In (7-25), the damping

control gains k_{w21}, k_{w31}, k_{w32} are selected according to the following sufficient

conditions to facilitate the subsequent stability analysis:
-2


1 1
31 2 1 ( a12-a23 + 2a13
k,31 > 4 1 k '-( + a13
-a11 a22
2
kw32 > kw2-2
-a22

where ζ̲_{a11}, ζ̄_{a11}, ζ̲_{a22}, ζ̄_{a22}, ζ̄_{a12}, ζ̄_{a13} and ζ̄_{a23} are defined in (7-2) and (7-3), and

k_{w11}, k_{w22}, k_{w33} are feedback gains that can be selected to adjust the performance of

the rotation control system.

Proposition 7.1: Provided the sufficient gain conditions given in (7-26) are

satisfied, the controller in (7-25) ensures asymptotic regulation of the rotation error

in the sense that


\|q_v(t)\| \to 0 \quad \text{as } t \to \infty.     (7-27)








expressed in F is defined as ω̃_c(t) ∈ ℝ³, where

\tilde{\omega}_c = \omega_c - \tilde{R}\omega_{cd}.     (3-11)

The camera angular velocities can be related to the time derivatives of q(t) and
q_d(t) as [23]

\begin{bmatrix} \dot{q}_0 \\ \dot{q}_v \end{bmatrix} = \frac{1}{2} \begin{bmatrix} -q_v^T \\ q_0 I_3 + q_v^\times \end{bmatrix} \omega_c     (3-12)

and

\begin{bmatrix} \dot{q}_{0d} \\ \dot{q}_{vd} \end{bmatrix} = \frac{1}{2} \begin{bmatrix} -q_{vd}^T \\ q_{0d} I_3 + q_{vd}^\times \end{bmatrix} \omega_{cd},     (3-13)

respectively.

As stated in Remark 3 in Chen et al. [10], a sufficiently smooth function can
be used to fit the sequence of feature points to generate the desired trajectory
p_di(t); hence, it is assumed that p_di(t) and ṗ_di(t) are bounded functions of time.
In practice, the a priori developed smooth functions α_di(t) and R_d(t) can
be constructed as bounded functions with bounded time derivatives. Based on
the assumption that R_d(t) is a bounded first-order differentiable function with a
bounded derivative, the algorithm for computing quaternions in [14] can be used to
conclude that (q₀d(t), q_vdᵀ(t)) are bounded first-order differentiable functions with
bounded derivatives; hence, (q₀d(t), q_vdᵀ(t)) and (q̇₀d(t), q̇_vdᵀ(t)) are bounded. In
the subsequent tracking control development, the desired signals ω_cd(t) and q_vd(t)
will be used as feedforward control terms. To avoid the computational singularity
in θ_d(t), the desired trajectory in [10] was generated by carefully choosing the
smooth function such that the workspace is limited to (−π, π). Unlike [10], the use
of the quaternion alleviates the restriction on the desired trajectory p_d(t).
From (3-13), the signal ω_cd(t) can be calculated as

\omega_{cd} = 2\left( q_{0d}\dot{q}_{vd} - \dot{q}_{0d} q_{vd} \right) - 2 q_{vd}^\times \dot{q}_{vd},     (3-14)
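
The computation in (3-14) and its consistency with (3-13) can be illustrated with the short sketch below; the random test values are, of course, arbitrary and used only to exercise the identity.

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])

def desired_angular_velocity(q0d, qvd, q0d_dot, qvd_dot):
    """Evaluate (3-14): w_cd reconstructed from the desired quaternion and its rate."""
    return 2.0 * (q0d * qvd_dot - q0d_dot * qvd) - 2.0 * skew(qvd) @ qvd_dot

# consistency check against (3-13): generate rates from a known w_cd and recover it
rng = np.random.default_rng(1)
qd = rng.standard_normal(4); qd /= np.linalg.norm(qd)
q0d, qvd = qd[0], qd[1:]
w_cd = rng.standard_normal(3)
q0d_dot = -0.5 * qvd @ w_cd
qvd_dot = 0.5 * (q0d * np.eye(3) + skew(qvd)) @ w_cd
assert np.allclose(desired_angular_velocity(q0d, qvd, q0d_dot, qvd_dot), w_cd)
```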









1.3 Literature Review

1.3.1 Basic Visual Servo Control Approaches

Different visual servo control methods can be divided into three main cate-

gories including: image-based, position-based, and approaches that make use of a

blend of image and position-based approaches. Image-based visual servo control

(e.g., [1, 44-47]) consists of a feedback signal that is composed of pure image-space

information (i.e., the control objective is defined in terms of an image pixel er-

ror). This approach is considered to be more robust to camera calibration and

robot kinematic errors and is more likely to keep the relevant image features in

the FOV than position-based methods because the feedback is directly obtained

from the image without the need to transfer the image-space measurement to

another space. A drawback of image-based visual servo control is that since the

controller is implemented in the robot joint space, an image-Jacobian is required

to relate the derivative of the image-space measurements to the camera's linear

and angular velocities. However, the image-Jacobian typically contains singularities

and local minima (see Chaumette [48]), and the controller stability analysis is

difficult to obtain in the presence of calibration uncertainty (see Espiau et al. [49]).

Another drawback of image-based methods is that since the controller is based
on image feedback, the robot could be commanded along a trajectory that is not
physically possible. This issue is described as Chaumette's conundrum. Further
discussion of Chaumette's conundrum is provided in Chaumette [48] and Corke
and Hutchinson [25].

Position-based visual servo control (e.g., [1, 44, 50-52]) uses reconstructed

Euclidean information in the feedback loop. For this approach, the image-Jacobian

singularity and local minima problems are avoided, and physically realizable

trajectories are generated. However, the approach is susceptible to inaccuracies

in the task-space reconstruction if the transformation is corrupted (e.g., uncertain






















Figure 4-5: Pixel coordinate p(t) of the four feature points on the plane π in a se-
quence of images taken by the moving camera attached to I. The upper figure is
for the u(t) component and the bottom figure is for the v(t) component.








Figure 4-6: Translation error e(t).











in the experiment is currently capable of displaying three simultaneous displays. A

picture of the displays can be seen in Figure 3-3.

The virtual reality simulator utilizes MultiGen-Paradigm's Vega Prime, an

OpenGL-based, commercial software package for Microsoft Windows. The virtual

environment in the experiment is a recreation of the U.S. Army's urban warfare

training ground at Fort Benning, Georgia. The environment has a dense polygon

count, detailed textures, high frame rate, and the effects of soft shadows, resulting

in very realistic images. A scene from the Fort Benning environment can be seen in

Figure 3-4.

The visual sensor in the experiment is a Sony XCD-710CR color firewire

camera with a resolution of 1280 x 768 pixels and fitted with a 12.5mm lens. The

camera captures the images on the large screens as shown in Figure 3-3. The

images are processed in a vision processing workstation. An application written

in C++ acquires images from the camera and processes them to locate and track the feature points (the initial feature points were chosen manually; the application then identifies and tracks the feature points on its own). The C++

application generates the current and desired pixel coordinates, which can be used

to formulate the control command.
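As a sketch of the first step in formulating that command (the pixel values are made up, and the direct linear transform shown is one standard way to estimate a homography; this is not code from the experiment), a projective homography can be estimated from the four current/desired pixel pairs and subsequently decomposed to obtain the rotation and scaled translation used by the controller:

import numpy as np

def homography_dlt(p_src, p_dst):
    """Estimate H (up to scale) such that p_dst ~ H @ p_src, via the direct linear transform."""
    rows = []
    for (x, y), (u, v) in zip(p_src, p_dst):
        rows.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        rows.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows))
    H = vt[-1].reshape(3, 3)          # null vector of the stacked constraints
    return H / H[2, 2]

# Made-up current and desired pixel coordinates of the four tracked feature points.
p_cur = [(410.0, 660.0), (454.0, 624.0), (491.0, 668.0), (402.0, 742.0)]
p_des = [(420.0, 650.0), (460.0, 620.0), (485.0, 665.0), (400.0, 745.0)]
G = homography_dlt(p_des, p_cur)      # projective homography relating desired to current image
print(G)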

In addition to the image processing application, a control command generation

application programmed in Matlab also runs in this workstation. The Matlab appli-

cation communicates data with the image processing application (written in C++)

via shared memory buffers. The Matlab application reads the current and desired

pixel coordinates from the shared memory buffers, and writes linear and angular

camera velocity input into the shared memory buffers. The C++ application writes

the current and desired pixel coordinates into the shared memory, and reads cam-

era velocity input from the shared memory buffer. The linear and angular camera

velocity control inputs are sent from the vision processing workstation to the virtual









and m̄*_ri ∈ R³ denotes the constant Euclidean coordinates of the feature points on the planar patch π* expressed in I_R as

m̄*_ri ≜ [x_ri  y_ri  z_ri]^T.


After some algebraic manipulation, the expressions in (4-2)-(4-4) can be rewritten

as

m1 = xT + R ni (4-10)

mi = Xf + Rm, mrd = Xd + Rrdm* (4-11)
*ri = Xfr + Rrfi i = Xfr + Rrf(i, (4-12)


where R, (t), R (t), Rrd(t), Rr(t) E SO (3) and x,(t), Xf (t), Xfrd(t), Xf,(t) E 3 are

new rotational and translational variables, respectively, defined as


R, = R*RT R = RR*T (4-13)

Rrd = rdR*T Rr = RR*T

and



x = x} Rn (xf R (2i sli)) (4-14)

Xf = Xf R (x} + R* (s2i si)) (4-15)

Xfrd = Xfrd Rd (Xfr + R (S2i Sli)) (4-16)

Xfr = X, Rrf = Xf, RrXf. (4-17)

Note that Rn (t), R(t) and Rd (t) in (4-13) are the rotation matrices between F

and F*, F* and F, and F* and Ed, respectively, but Xn(t), xf(t) and Xfrd(t) in

(4-14)-(4-16) are not the translation vectors between the corresponding coordinate















ACKNOWLEDGMENTS

Thank you to my advisor, Dr. Warren Dixon, for his guidance and encour-

agement, which will benefit me for a lifetime. As an advisor, he kept polishing my

methodology and skills in resolving problems and formulating new problems. As a

mentor, he helped me develop professional skills and gave me opportunities to get

exposed to a professional working environment.

Thanks to my committee members Dr. Thomas Burks, Dr. Carl Crane III, Dr.

Seth Hutchinson, and Dr. Rick Lind, for the time and help that they provided.

Thanks to Dr. Nick Gans and Sid Mehta for all the insightful discussions.

Thanks to Dr. Joe Kehoe for his suggestions in formatting my defense presentation

and dissertation. Finally, thanks to my NCR lab fellows for their friendship during

the past three years of joy.









where θ̃_1(t) ∈ R^{n1}, θ̃_2(t) ∈ R^{n2}, θ̃_3(t) ∈ R^{n3} denote the intrinsic calibration parameter mismatch defined as

θ̃_1(t) = θ_1 − θ̂_1(t)    θ̃_2(t) = θ_2 − θ̂_2(t)    θ̃_3(t) = θ_3 − θ̂_3(t).

Proposition 7.2: The controller given in (7-35) along with the adaptive update law in (7-36) ensures asymptotic regulation of the translation error system in the sense that

‖e(t)‖ → 0 as t → ∞.

Proof: Let V_2(e, θ̃_1, θ̃_2, θ̃_3) ∈ R denote the following non-negative function:

V_2 = (1/2) e^T e + (1/(2γ_1)) θ̃_1^T θ̃_1 + (1/(2γ_2)) θ̃_2^T θ̃_2 + (1/(2γ_3)) θ̃_3^T θ̃_3.    (7-38)

After taking the time derivative of (7-38) and substituting for the closed-loop error system developed in (7-37), the following simplified expression can be obtained:

V̇_2 = −k_1 e_1² − k_2 e_2² − k_3 e_3².    (7-39)

Based on (7-38) and (7-39), e_1(t), e_2(t), e_3(t) ∈ L_∞ ∩ L_2, and θ̃_i(t), θ̂_i(t) ∈ L_∞, i = 1, 2, 3. Based on the assumption that ε < z_i(t) < C_z, the expression in (7-35) can be used to conclude that v_c(t) ∈ L_∞. Based on the previous stability analysis for the rotation controller, ω_c(t) ∈ L_∞; hence, (7-37) can be used to conclude that ė_1(t), ė_2(t), ė_3(t) ∈ L_∞ (i.e., e_1(t), e_2(t), e_3(t) are uniformly continuous). Barbalat's lemma [96] can now be used to show that e_1(t), e_2(t), e_3(t) → 0 as t → ∞.
Based on Propositions 7.1 and 7.2, the main result can be stated as follows.

Theorem 7.1: The controller given in (7-25) and (7-35) along with the adap-
tive update law in (7-36) ensures asymptotic translation and rotation regulation in
the sense that


‖q_v(t)‖ → 0 and ‖e(t)‖ → 0 as t → ∞,









for the object to go to a desired pose that is encoded by a prerecorded image. The

regulation problem can be considered as a particular case of the tracking problem.

For example, when all the images in the sequence of desired images are identical,

the tracking problem becomes a regulation problem. The dissertation will address

the following problems of interest: 1) visual servo tracking control via a quaternion

formulation; 2) collaborative visual servo tracking control using a daisy-chaining

approach; 3) visual servo tracking control using a central catadioptric camera; 4)

robust visual servo control in presence of camera calibration uncertainty; and 5)

combined robust and adaptive visual servo control via an uncalibrated camera. The

control development in the dissertation is proven by using nonlinear Lyapunov-

based methods and is demonstrated by Matlab simulation and/or experimental

results.

1) Visual servo tracking control via a quaternion formulation.

Most of the previous visual servo controllers have only been designed to

address the regulation problem. Motivated by the need for new advancements to

meet visual servo tracking applications, previous research has concentrated on

developing different types of path planning techniques [5-9]. Recently, Chen et al.

[10] provided a new formulation of the tracking control problem. A homography-

based adaptive visual servo controller is developed to enable a robot end-effector

to track a prerecorded time-varying reference trajectory determined by a sequence

of images. The Euler angle-axis representation is used to represent the rotation

error system. Due to the computational singularity limitation of the angle axis

extraction algorithm (see Spong and Vidyasagar [11]), rotation angles of ±π

were not considered. Motivated by the desire to avoid the rotation singularity

completely, an error system and visual servo tracking controller are developed in

Chapter 3 based on the quaternion formulation. A homography is constructed

from image pairs and decomposed via textbook methods (e.g., Faugeras [12] and








and I_R were set as

R(0) = R_x(180°) R_y(0°) R_z(40°)
R*(0) = R_x(180°) R_y(0°) R_z(−20°)
R_rd(0) = R_x(180°) R_y(0°) R_z(20°)
R*_r = R_x(180°) R_y(0°) R_z(80°).

The initial translation vectors x_f(0) between F and I (expressed in I), x*_f(0) between F* and I (expressed in I), and x_frd(0) between F_d and I_R (expressed in I_R), and the constant translation vector x*_fr between F* and I_R (expressed in I_R), were selected as

x_f(0) = [−0.5  0.5  4.0]^T
x*_f(0) = [1.0  1.5  3.5]^T
x_frd(0) = [0.5  1.5  6.0]^T
x*_fr = [−1.0  1.5  4.0]^T.

The initial Euclidean relationship between the cameras, the reference object, the
control object, and the object that was used to generate the desired trajectory is
shown in Figure 4-2.
The initial image-space coordinates (i.e., pi(0)) of the four feature points
attached to the plane π, expressed in I, were computed as (in pixels)



p_1(0) = [409.62  660.75  1]^T    p_2(0) = [454.00  623.51  1]^T
p_3(0) = [491.25  667.89  1]^T    p_4(0) = [402.48  742.38  1]^T.
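As an illustrative sketch of how such pixel coordinates follow from the Euclidean setup through the pinhole model p_i = A m̄_i / z_i (the calibration matrix A and the feature-point offset s_1 below are assumed values, not those used in the simulation):

import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

A = np.array([[800.0, 0.0, 400.0],      # assumed intrinsic calibration matrix
              [0.0, 800.0, 300.0],
              [0.0, 0.0, 1.0]])
R0 = rot_x(np.deg2rad(180.0)) @ rot_z(np.deg2rad(40.0))   # R(0) above (the R_y(0°) factor is the identity)
xf0 = np.array([-0.5, 0.5, 4.0])                          # x_f(0) above
s1 = np.array([0.1, 0.1, 0.0])                            # assumed offset of one feature point on the patch

m_bar = xf0 + R0 @ s1          # Euclidean coordinates of the feature point expressed in I
m = m_bar / m_bar[2]           # normalized Euclidean coordinates
p = A @ m                      # homogeneous pixel coordinates [u, v, 1]
print(p)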













APPENDIX D
PROPERTY ON MATRIX NORM
Property: ‖[ζ]_×‖₂ = ‖ζ‖, ∀ζ ∈ R³.

Proof: The spectral norm ‖A‖₂ of a real matrix A is the square root of the largest eigenvalue of the matrix multiplied by its transpose, i.e.,

‖A‖₂ = sqrt( λ_max{A^T A} ).

For any given vector ζ = [ζ_1  ζ_2  ζ_3]^T ∈ R³,

[ζ]_× =
[  0    −ζ_3   ζ_2 ]
[  ζ_3   0    −ζ_1 ]
[ −ζ_2   ζ_1   0   ].

The Euclidean norm of [ζ]_× is given by

‖[ζ]_×‖₂ = sqrt( λ_max{ [ζ]_×^T [ζ]_× } ) = sqrt( ζ_1² + ζ_2² + ζ_3² ).

The norm of ζ is given by

‖ζ‖ = sqrt( ζ_1² + ζ_2² + ζ_3² ).

Hence, ‖[ζ]_×‖₂ = ‖ζ‖.
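The property can also be checked numerically; the short Python sketch below (with an arbitrary test vector) is an illustration only, not part of the proof:

import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

zeta = np.array([0.3, -1.2, 2.5])
print(np.linalg.norm(skew(zeta), 2))   # spectral norm of [zeta]_x
print(np.linalg.norm(zeta))            # Euclidean norm of zeta; the two values agree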









feature points on π, π* and π_d can be expressed in terms of I, respectively, as

m̄_i(t) ≜ [x_i(t)  y_i(t)  z_i(t)]^T    m̄*_i ≜ [x*_i  y*_i  z*_i]^T    m̄_di(t) ≜ [x_di(t)  y_di(t)  z_di(t)]^T    (3-28)

under the standard assumption that the distances from the origin of I to the feature points remain positive (i.e., z_i(t), z*_i, z_di(t) > ε, where ε denotes an arbitrarily small positive constant). Orthogonal coordinate systems F, F*, and F_d are attached to the planes π, π*, and π_d, respectively. To relate the coordinate systems, let R(t), R*, R_d(t) ∈ SO(3) denote the orientations of F, F* and F_d with respect to I, respectively, and let x_f(t), x*_f, x_fd(t) ∈ R³ denote the respective translation vectors expressed in I. As also illustrated in Figure 3-1, n* ∈ R³ denotes the constant unit normal to the plane π* expressed in I, and s_i ∈ R³ denotes the constant coordinates of the i-th feature point expressed in the corresponding coordinate frames F, F*, and F_d.

From the geometry between the coordinate frames depicted in Figure 3-1, the following relationships can be developed

m̄_i = x̄_f + R̄ m̄*_i    m̄_di = x̄_fd + R̄_d m̄*_i,    (3-29)

where R̄(t), R̄_d(t) ∈ SO(3) and x̄_f(t), x̄_fd(t) ∈ R³ denote new rotation and translation variables, respectively, defined as

R̄ = R (R*)^T    R̄_d = R_d (R*)^T    (3-30)
x̄_f = x_f − R̄ x*_f    x̄_fd = x_fd − R̄_d x*_f.

Similar to the camera-in-hand configuration, the relationships in (3-29) can be expressed as

m̄_i = ( R̄ + (x̄_f n*^T)/(n*^T m̄*_i) ) m̄*_i    m̄_di = ( R̄_d + (x̄_fd n*^T)/(n*^T m̄*_i) ) m̄*_i.
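A minimal numerical sketch of the last relationship is given below; the poses, the coplanar feature point, and the choice of plane normal are made-up assumptions, and the sketch only confirms that the matrix R̄ + x̄_f n*^T/(n*^T m̄*_i) maps m̄*_i to m̄_i:

import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R_star, R = rot_z(0.2), rot_z(0.5)                 # assumed orientations of F* and F w.r.t. I
xf_star, xf = np.array([0.0, 0.0, 2.0]), np.array([0.3, -0.1, 2.5])
s_i = np.array([0.1, -0.2, 0.0])                   # feature point expressed in the object frame (on the patch)

m_star = xf_star + R_star @ s_i                    # Euclidean coordinates at the reference pose
m_cur = xf + R @ s_i                               # Euclidean coordinates at the current pose

R_bar = R @ R_star.T                               # (3-30)
x_bar = xf - R_bar @ xf_star
n_star = R_star[:, 2]                              # unit normal to pi* expressed in I (patch z-axis)
H = R_bar + np.outer(x_bar, n_star) / (n_star @ m_star)
print(np.allclose(H @ m_star, m_cur))              # True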








provided K_v is selected sufficiently large (see the subsequent proof), and the following inequalities are satisfied

λ_min{ (Ā + Ā^T)/2 } > λ_0    (6-42)

λ_max{ (Ā + Ā^T)/2 } < λ_1,    (6-43)

where λ_0, λ_1 ∈ R are positive constants, and λ_min{·} and λ_max{·} denote the minimal and maximal eigenvalues of (Ā + Ā^T)/2, respectively.

Proof: Let V(t) ∈ R denote the following differentiable non-negative function (i.e., a Lyapunov candidate):

V = q_v^T q_v + (1 − q_0)² + (1/2) e^T e.    (6-44)

After cancelling common terms, V(t) can be expressed as



Vi = 2qq, 2(1 qo)qo + eT
= --Kq (qol3 + qx) Aq, ,(1 qo)q Aq,

+eT ( -KA + 7 [KwAq-] ) e -7Kwe [mf]x Aq

= -Kq ( [- (qo3 + qx) (1 qo)3] Aqv

+eCT ( -K-A + 7 [KAq] ) e- 7Ke [mn]x Aqv

= -7KqAqv K, e Ae + yeT A e x yKeT [n]< Aqv. (6-45)
zi

By using the inequality (6-42), the term −γ K_q q_v^T Ā q_v satisfies

−γ K_q q_v^T Ā q_v = −γ K_q q_v^T ( (Ā + Ā^T)/2 ) q_v ≤ −γ K_q λ_min{ (Ā + Ā^T)/2 } ‖q_v‖² ≤ −γ K_q λ_0 ‖q_v‖².    (6-46)
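As a numerical illustration only (the matrix standing in for Ā and the test vector are arbitrary assumptions chosen so that the symmetric part is positive definite), the bound used in (6-46) can be checked as follows:

import numpy as np

A_bar = np.array([[2.0, 0.3, -0.1],    # arbitrary stand-in for the uncertain matrix A-bar
                  [0.1, 1.5, 0.2],
                  [-0.2, 0.0, 1.8]])
sym = 0.5 * (A_bar + A_bar.T)
lam0 = np.linalg.eigvalsh(sym).min()   # a valid lambda_0 satisfying (6-42)

q = np.array([0.4, -0.7, 0.2])
quad = q @ A_bar @ q                   # equals q @ sym @ q
print(quad >= lam0 * (q @ q))          # True: the quadratic form is bounded below as in (6-46)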









Based on the development in [36], m̄_si(t) can be expressed in the coordinate frame O_c, which is attached to the reprojection center, as

m̄_pi = [ x_i/‖m̄_i‖    y_i/‖m̄_i‖    z_i/‖m̄_i‖ + ξ ]^T,    (5-3)

where ξ ∈ R is a known intrinsic parameter of the central catadioptric camera that represents the distance between the single effective viewpoint and the reprojection center (see Figure 5-2). The normalized coordinates of m̄_pi(t) in (5-3), denoted by m_pi(t) ∈ R³, can be expressed as

m_pi = [ x_i/(z_i + ξ‖m̄_i‖)    y_i/(z_i + ξ‖m̄_i‖)    1 ]^T.    (5-4)

By using (5-4) and the following relationship:

1 = (x_i/‖m̄_i‖)² + (y_i/‖m̄_i‖)² + (z_i/‖m̄_i‖)²,    (5-5)

the coordinates of the feature points on the unit spherical surface m̄_si(t) can be determined as

m̄_si = [ (ξ + √(1 + (1 − ξ²)(m_pix² + m_piy²))) m_pix / (m_pix² + m_piy² + 1)
         (ξ + √(1 + (1 − ξ²)(m_pix² + m_piy²))) m_piy / (m_pix² + m_piy² + 1)
         (ξ + √(1 + (1 − ξ²)(m_pix² + m_piy²))) / (m_pix² + m_piy² + 1) − ξ ]^T,    (5-6)

where m_pix(t) and m_piy(t) ∈ R are the first two elements of m_pi(t). The bijective mapping in (5-6) is unique (i.e., there is no sign ambiguity) because 0 < ξ < 1 [36], and the geometry of the central catadioptric camera given in Figure 5-2 guarantees that the third element of m̄_pi(t) is positive; otherwise, the feature point O_i cannot be projected onto the image plane.
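The projection and lifting relations (5-3)-(5-6) can be illustrated with the following sketch, in which the mirror parameter ξ and the feature-point coordinates are arbitrary assumed values; it verifies that the point recovered from the normalized image coordinates coincides with the original point on the unit sphere:

import numpy as np

xi = 0.8                                  # assumed mirror parameter, 0 < xi < 1
m_bar = np.array([0.4, -0.2, 2.0])        # assumed Euclidean coordinates of a feature point
lam = np.linalg.norm(m_bar)               # ||m_bar||

m_s = m_bar / lam                         # point on the unit sphere
m_p = np.array([m_bar[0] / (m_bar[2] + xi * lam),
                m_bar[1] / (m_bar[2] + xi * lam),
                1.0])                     # normalized image coordinates, as in (5-4)

# Inverse mapping, as in (5-6): recover the sphere point from m_p (no sign ambiguity).
mx, my = m_p[0], m_p[1]
eta = (xi + np.sqrt(1.0 + (1.0 - xi**2) * (mx**2 + my**2))) / (mx**2 + my**2 + 1.0)
m_s_rec = np.array([eta * mx, eta * my, eta - xi])

print(np.allclose(m_s, m_s_rec))          # True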









attached to F and F* are the same object), the assumption is generally restrictive

and is the focus of future research. As described by Hu et al. [17], this assumption

can be avoided by using the geometric reconstruction approach [102] under an

alternative assumption that the distance between two feature points is precisely

known.

4.4 Euclidean Reconstruction

The relationships given by (4-19)-(4-23) provide a means to quantify a translation and rotation error between the different coordinate systems. Since the pose of π, π_d, and π* cannot be directly measured, a Euclidean reconstruction is developed to obtain the pose error by comparing multiple images acquired from the hovering monocular vision system. To facilitate the subsequent development, the normalized Euclidean coordinates of the feature points in π and π* can be expressed in terms of I as m_i(t) ∈ R³ and m*_i(t) ∈ R³, respectively, as

m_i ≜ m̄_i / z_i    m*_i ≜ m̄*_i / z*_i.    (4-24)

Similarly, the normalized Euclidean coordinates of the feature points for the current, desired, and reference image can be expressed in terms of I_R as m_ri(t), m_rdi(t), m*_ri ∈ R³, respectively, as

m_ri ≜ m̄_ri / z_ri    m_rdi ≜ m̄_rdi / z_rdi    m*_ri ≜ m̄*_ri / z*_ri.    (4-25)
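For illustration (the calibration values are assumed, not those of the simulation), normalized coordinates of this kind are obtained from measured pixel coordinates through the inverse of the camera calibration matrix:

import numpy as np

A = np.array([[800.0, 0.0, 400.0],       # assumed calibration matrix of the camera
              [0.0, 800.0, 300.0],
              [0.0, 0.0, 1.0]])
p_i = np.array([409.62, 660.75, 1.0])    # homogeneous pixel coordinates of one feature point
m_i = np.linalg.solve(A, p_i)            # normalized Euclidean coordinates [x/z, y/z, 1]
print(m_i)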

From the expressions given in (4-20) and (4-24), the rotation and translation

between the coordinate systems F and F*, between F* and Fd, and between I and




PAGE 15

CHAPTER1 INTRODUCTION 1.1Motivation Controlsystemsthatuseinformationacquiredfromanimagingsourceinthe feedbacklooparede nedasvisualservocontrolsystems.Visualservocontrolhas developedintoalargesubsetofroboticsliterature(see[1]forareview)because oftheenablingcapabilitiesitcanprovideforautonomy.Recentadvancesinimage processing,computationaltechnologyan dcontroltheoryareenablingvisualservo controltobecomemoreprevalentinautonomoussystemsapplications(e.g.,the autonomousgroundvehiclesgrandchallengeandurbanchallengesponsoredbythe U.S.DefenseAdvancedResearchProjectsAgency(DARPA)).Insteadofrelying solelyonaglobalpositioningsystem(GPS)orinertialmeasurementunits(IMU) fornavigationandcontrol,image-basedmethodsareapromisingapproachto provideautonomousvehicleswithpositionandorientation(i.e.,pose)information. Speci cally,ratherthanobtainaninertialmeasurementofanautonomoussystem, visionsystemscanbeusedtorecastthenavigationandcontrolprobleminterms oftheimagespace.Inadditiontoprovidingfeedbackrelatingthelocalposeof thecamerawithrespecttosometarget,animagesensorcanalsobeusedtorelate localsensorinformationtoaninertialreferenceframeforglobalcontroltasks. Visualservoingrequiresmultidisciplinaryexpertisetointegrateavisionsystem withthecontrollerfortasksincluding:selectingtheproperimaginghardware; extractingandprocessingimagesatratesamenabletoclosed-loopcontrol;image analysisandfeaturepointextraction/tracking;andrecovering/estimatingnecessary stateinformationfromanimage,etc.Whileeachoftheaforementionedtasks areactivetopicsofresearchinterestincomputervisionandimageprocessing 1

PAGE 16

2 societies,theywillnotbethefocusofthisdissertation.Thedevelopmentinthis dissertationisbasedontheassumptionthatimagescanbeacquired,analyzed,and theresultingdatacanbeprovidedtotheco ntrollerwithoutrestrictingthecontrol rates. Theuseofimage-basedfeedbackaddscomplexityandnewchallengesfor thecontrolsystemdesign.Thescopeofthisdissertationisfocusedonissues associatedwithusingreconstructedandestimatedstateinformationfroma sequenceofimagestodevelopastableclosed-looperrorsystem.Particularly,this dissertationfocusesonthefollowingproblems:1)howtodesignavisualservo trackingcontrollerthatachievesasymptotictrackingviaaquaternionformulation? 2)howtodesignacollaborativevisualservocontrolschemewhenboththecamera andthecontrolobjectaremoving?3)howtodesignavisualservocontrollerusing acentralcatadioptriccamera?4)andhowtodesignavisualservocontrollerthatis robusttocameracalibrationuncertainty? 1.2ProblemStatement Inthisdissertation,visualservocontrolalgorithmsandarchitecturesare developedthatexploitthevisualfeedbackfromacamerasystemtoachievea trackingorregulationcontrolobjectiveforasixdegreesoffreedom(DOF)rigidbodycontrolobject(e.g.,theend-e ectorofarobotmanipulator,asatellite,an autonomousvehicle)identi edbyapatchoffeaturepoints.Thetrackingcontrol objectiveisforthecontrolobjecttotrackadesiredtrajectorythatisencodedby avideoobtainedfromacameraineithert hecamera-in-handorcamera-to-hand con guration.Thisvideocanbetakenonlineoro inebyacamera.Forexample, themotionofacontrolobjectcanbeprerecordedbyacamera(forthecamerato-handcon guration)beforehandandusedasadesiredtrajectory,or,avideoof thereferenceobjectcanbeprerecordedasadesiredtrajectorywhilethecamera moves(forthecamera-in-handcon guration).Theregulationcontrolobjectiveis

PAGE 17

3 fortheobjecttogotoadesiredposethatisencodedbyaprerecordedimage.The regulationproblemcanbeconsideredasaparticularcaseofthetrackingproblem. Forexample,whenalltheimagesinthesequenceofdesiredimagesareidentical, thetrackingproblembecomesaregulationproblem.Thedissertationwilladdress thefollowingproblemsofinterest:1)visualservotrackingcontrolviaaquaternion formulation;2)collaborativevisualservotrackingcontrolusingadaisy-chaining approach;3)visualservotrackingcontrolusingacentralcatadioptriccamera;4) robustvisualservocontrolinpresenceofcameracalibrationuncertainty;and5) combinedrobustandadaptivevisualservocontrolviaanuncalibratedcamera.The controldevelopmentinthedissertationisprovenbyusingnonlinearLyapunovbasedmethodsandisdemonstratedbyMat labsimulationand/orexperimental results. 1) Visualservotrackingcontrolviaaquaternionformulation Muchofthepreviousvisualservocontrollershaveonlybeendesignedto addresstheregulationproblem.Motivatedbytheneedfornewadvancementsto meetvisualservotrackingapplications,previousresearchhasconcentratedon developingdi erenttypesofpathplanningtechniques[5].Recently,Chenetal. [10]providedanewformulationofthetrackingcontrolproblem.Ahomographybasedadaptivevisualservocontrollerisdevelopedtoenablearobotend-e ector totrackaprerecordedtime-varyingreferencetrajectorydeterminedbyasequence ofimages.TheEulerangle-axisrepresentationisusedtorepresenttherotation errorsystem.Duetothecomputationalsingularitylimitationoftheangleaxis extractionalgorithm(seeSpongandV idyasagar[11]),rotationanglesof werenotconsidered.Motivatedbythede siretoavoidtherotationsingularity completely,anerrorsystemandvisualservotrackingcontrollerisdevelopedin Chapter3basedonthequaternionformulation.Ahomographyisconstructed fromimagepairsanddecomposedviatextbookmethods(e.g.,Faugeras[12]and

PAGE 18

4 HartleyandZisserman[13])todeterminetherotationmatrix.Oncetherotation matrixhasbeendetermined,thecorrespondingunitquaternioncanbeobtainedby numericallyrobustalgorithms(seeHuetal.[14]andShuster[15]).Thenanerror systemisconstructedintermsoftheunitquaternion,whichisvoidofsingularities. Anadaptivecontrolleristhendevelopedandproventomakeacameratracka desiredtrajectorythatisdeterminedfr omasequenceofimages.Thecontroller containsanadaptivefeedforwardtermtocompensatefortheunknowndistance fromthecameratotheobservedplanarpatch.Aquaternion-basedLyapunov functionisdevelopedtofacilitatethecontroldesignandthestabilityanalysis. 2) Collaborativevisualservotrackingcontrolusingadaisy-chainingapproach. Unliketypicalvisualservocontrollersincamera-in-handandcamera-to-hand con gurations,auniqueaspectofthedevelopmentforthisproblemisthata movingcamera(e.g.,acameramountedonanunmannedairvehicle)isusedto providevisualfeedbacktoamovingautonomousvehicle.Thecontrolobjective isfortheautonomousvehicle(identi edbyaplanarpatchoffeaturepoints)to tracktheposeofadesiredvehicletrajectorythatisencodedbyaprerecorded videoobtainedfroma xedcamera(e.g.,acameramountedonasatellite,a cameramountedonabuilding).Severalchallengesmustberesolvedtoachieve thisunexploredcontrolobjective.Therelativevelocitybetweenthemovingfeature pointpatchandthemovingcamerapresentsasigni cantchallenge.Byusing adaisy-chainingapproach(e.g.,[1619] ) ,Euclideanhomographyrelationships betweendi erentcameracoordinateframesandfeaturepointpatchcoordinate framesaredeveloped.Thesehomographiesareusedtorelatecoordinateframes attachedtothemovingcamera,thereferenceobject,thecontrolobject,andthe objectusedtorecordthedesiredtrajector y.Anotherchallengeisthatforgeneral sixDOFmotionbyboththecameraandtheplanarpatch,thenormaltoplanar patchisunknown.Bydecomposingthehomographyrelationships,thenormalto

PAGE 19

5 themovingfeaturepointpatchcanbeobtained.Likewise,thedistancebetweenthe movingcamera,themovingplanarpatch,andareferencepatchareunknown.By usingthedepthratiosobtainedfromthehomographydecomposition,theunknown distanceisrelatedtoanunknownconstantparameter.ALyapunov-basedadaptive estimationlawisdesignedtocompensatefortheunknownconstantparameter. Sincethemovingcameracouldbeattachedtoaremotelypilotedvehiclewith arbitraryrotations,anotherchallengeistoeliminatepotentialsingularitiesin therotationparameterizationobtainedfromthehomographydecomposition. Toaddressthisissue,homography-basedvisualservocontroltechniques(e.g., [10,20])arecombinedwithquaternion-b asedcontrolmethods(e.g.,[14,23,24]), toeliminatesingularitiesassociatedwiththeimageJacobianandtherotation errorsystem.Byusingthequaternionparameterization,theresultingclosedlooprotationerrorsystemcanbestabilizedbyaproportionalrotationcontroller combinedwithafeedforwardtermthatisafunctionofthedesiredtrajectory. 3) Visualservotrackingcontrolusingacentralcatadioptriccamera Visualservocontrollersrequiretheimage-spacecoordinatesofsomesetof Euclideanfeaturepointsinthecontroldevelopment;hence,thefeaturepoints mustremaininthecameras eld-of-view(FOV).SincetheFOVofconventional perspectivecameras(e.g.,pinholecameras)isrestricted,keepingthefeaturepoints intheFOVisafundamentalchallengeforvisualservocontrolalgorithms.ThefundamentalnatureoftheFOVproblemhasresultedinavarietyofcontrolandpath planningmethods(e.g.,[6,7,25]).Anal ternativesolutiontotheaforementioned algorithmicapproachestoresolvetheFOVissueistouseadvancedopticssuch asomnidirectionalcameras.Catadioptriccameras(onetypeofomnidirectional camera)aredeviceswhi chusebothmirrors(re ectiveorcatadioptricelements)and lenses(refractiveordioptricelements)toformimages[32].Catadioptriccameras withasinglee ectiveviewpointareclassi edascentralcatadioptriccameras,

PAGE 20

6 whicharedesirablebecausetheyyieldpureperspectiveimages[33].InChapter5, avisualservocontrolschemeispresentedthatyieldsatrackingresultforacamerain-handcentralcatadioptriccamerasystem.Thetrackingcontrollerisdeveloped basedontherelativerelationshipsofacentralcatadioptriccamerabetweenthe current,reference,anddesiredcameraposes.To ndtherelativecameraposerelationships,homographiesarecomputedbasedontheprojectionmodelofthecentral catadioptriccamera[33].GeyerandDaniilidis[36]proposedaunifyingtheory toshowthatallcentralcatadioptricsystemsareisomorphictoprojectivemappings fromthespheretoaplanewithaprojectioncenterontheperpendicularaxistothe plane.Byconstructinglinksbetweentheprojectedcoordinatesonthesphere,the homographiesuptoscalarmultiplescanbeobtained.Variousmethodscanthen beappliedtodecomposetheEuclideanhomographiesto ndthecorresponding rotationmatrices,anddepthratios.Therotationerrorsystemisbasedonthe quaternionformulationwhichhasafull-rankinteractionmatrix.Lyapunov-based methodsareutilizedtodevelopthecontrollerandtoproveasymptotictracking. 4) Robustvisualservocontrol. Invision-basedcontrol,exactcalibrationisoftenrequiredsothattheimagespacesensormeasurementscanberelatedtotheEuclideanorjointspacefor controlimplementation.Speci cally,acameramodel(e.g.,thepinholemodel) isoftenrequiredtorelatepixelcoordinatesfromanimagetothe(normalized) Euclideancoordinates.Thecameramodelistypicallyassumedtobeexactlyknown (i.e.,theintrinsiccalibrationparametersareassumedtobeknown);however, despitetheavailabilityofseveralpopularcalibrationmethods(cf.[37]), cameracalibrationcanbetimeconsuming,requiressomelevelofexpertise, andhasinherentinaccuracies.Ifthecalibrationparametersarenotexactlyknown, performancedegradationandpotentialunpredictableresponsefromthevisual servocontrollermayoccur.Thegoalofthisresearchistodevelopavisualservo

PAGE 21

7 controllerwhichisrobusttotheintrinsiccalibrationparameters.Asintheprevious threeproblems,thequaternionparameterizationwillbeusedtorepresentthe rotationerrorsystem.Sincethequaternionerrorcannotbemeasuredprecisely duetotheuncertaincalibration,anestimatedquaternionisrequiredtodevelop thecontroller.Oneofthechallengestodevelopaquaternionestimateisthatthe estimatedrotationmatrixisnotatruerotationmatrixingeneral.Toaddressthis challenge,thesimilarityrelationshipbetweentheestimatedandactualrotation matricesisusedtoconstructtherelati onshipbetweentheestimatedandactual quaternions.ALyapunov-basedstabilityanalysisisprovidedthatindicatesa uniquecontrollercanbedevelopedtoachievetheregulationresult. 5) Combinedrobustandadaptivevisualservocontrol. Thisresearchisalsomotivatedbythedesiretocompensateforuncertaincameracalibration.Thiscontrollerhasadap tiveupdatedtermswhichcancompensate fortheunknowncalibrationparameters.Theopen-looperrorsystemiscomposed ofarotationerrorsystemandatranslationerrorsystem.Onechallengeisthatthe rotationquaternionerrorisnotmeasurable.Toaddressthisproblem,anestimated quaternionisobtainedbasedontheimage-spaceinformationandisusedtodevelop thecontroller.Thetransformationbetweentheactualandestimatedquaternions isanuppertriangularmatrixdeterminedbythecalibrationparametersandthe diagonalelementsarepositive.Thisfactisexploitedtodesignarobusthigh-gain controller.Anotherchallengeisthattheunknowncalibrationmatrixiscoupledin thetranslationerrorsystem.Toaddressthisproblem,thetranslationerrorsystem islinearlyparameterizedintermsofthecalibrationparameters.Anadaptiveupdatelawisusedtoestimatetheunknowncalibrationparameters,andatranslation controllercontainingtheadaptivecompensationtermsisusedtoasymptotically regulatethetranslationerror.

PAGE 22

8 1.3LiteratureReview 1.3.1BasicVisualServoControlApproaches Di erentvisualservocontrolmethodscanbedividedintothreemaincategoriesincluding:image-based,position-based,andapproachesthatmakeuseofa blendofimageandposition-basedapproaches.Image-basedvisualservocontrol (e.g.,[1,44])consistsofafeedbacksignalthatiscomposedofpureimage-space information(i.e.,thecontrolobjectiveisde nedintermsofanimagepixelerror).Thisapproachisconsideredtobe morerobusttocameracalibrationand robotkinematicerrorsandi smorelikelytokeeptherelevantimagefeaturesin theFOVthanposition-basedmethodsbecausethefeedbackisdirectlyobtained fromtheimagewithouttheneedtotransfertheimage-spacemeasurementto anotherspace.Adrawbackofimage-basedvisualservocontrolisthatsincethe controllerisimplementedintherobotjointspace,animage-Jacobianisrequired torelatethederivativeoftheimage-spacemeasurementstothecameraslinear andangularvelocities.However,theimage-Jacobiantypicallycontainssingularities andlocalminima(seeChaumette[48]),andthecontrollerstabilityanalysisis di culttoobtaininthepresenceofcalibrationuncertainty(seeEspiauetal.[49]). Anotherdrawbackofimage-basedmethodsarethatsincethecontrollerisbased onimage-feedback,therobotcouldbecommandedalongatrajectorythatisnot physicallypossible.ThisissueisdescribedasChaumettesconundrum.Further discussionoftheChaumettesconundrumisprovidedinChaumette[48]andCorke andHutchinson[25]. Position-basedvisualservocontrol(e.g.,[1,44,5052])usesreconstructed Euclideaninformationinthefeedbackloop .Forthisapproach,theimage-Jacobian singularityandlocalminimaproblemsareavoided,andphysicallyrealizable trajectoriesaregenerated.However,theapproachissusceptibletoinaccuracies inthetask-spacereconstructionifthetransformationiscorrupted(e.g.,uncertain

PAGE 23

9 cameracalibration).Also,sincethecontrollerdoesnotdirectlyusetheimage featuresinthefeedback,thecommandedrobottrajectorymaycausethefeature pointstoleavetheFOV.Areviewofthesetwoapproachesisprovidedin[1,2,53]. Thethirdclassofvisualservocontrollersusesomeimage-spaceinformationcombinedwithsomereconstructedinformationasameanstocombine theadvantagesofthesetwoapproache swhileavoidingtheirdisadvantages (e.g.,[10,20,24,25,54 58]).Oneparticularapproachwascoined2.5Dvisual servocontrolin[20,21,55,56]becausethis classofcontrollersexploitstwodimensionalimagefeedbackandreconstructedthree-dimensionalfeedback.Thisclassof controllersisalsocalledhomography-ba sedvisualservocontrolin[10,22,24,57] becauseoftheunderlyingrelianceoftheconstructionanddecompositionofa homography. 1.3.2VisualServoControlApproachestoEnlargetheFOV Visualservocontrollersoftenrequiretheimage-spacecoordinatesofsomeset ofEuclideanfeaturepointsinthecontroldevelopment;hence,thefeaturepoints mustremaininthecamerasFOV.SincetheFOVofconventionalperspective cameras(e.g.,pinholecameras)isrestricted,keepingthefeaturepointsintheFOV isafundamentalchallengeforvisualservocontrolalgorithms.Thefundamental natureoftheFOVproblemhasresultedinavarietyofcontrolandpathplanning methods(e.g.,[6,7,25]).CorkeandHutchinson[25]andChesietal.[26] usedpartitionedorswitchingvisualservoingmethodstokeeptheobjectin theFOV.In[6,7,27],potential elds(ornavigationfunctions)areusedto ensurethevisibilityofallfeaturesduringthecontroltask.InBenhimaneand Malis[30],thefocallengthofthecamerawasautomaticallyadjusted(i.e.,zoom control)tokeepallfeaturesintheFOVduringthecontroltaskbyusingan intrinsic-freevisualservoingapproachdevelopedbyMalis[59].InGarcka-Aracilet al.[31],acontinuouscontrollerisobtainedbyusinganewsmoothtaskfunction

PAGE 24

10 withweightedfeaturesthatallowsvisibilitychangesintheimagefeatures(i.e., somefeaturescancomeinandoutoftheFOV)duringthecontroltask.Some researchershavealsoinvestigatedmethod stoenlargetheFOV[60].In[60], imagemosaicingisusedtocapturemultipleimagesofthesceneasacameramoves andtheimagesarestitchedtogethertoobtainalargerimage.InSwaminathanand Nayar[64],multipleimagesarefusedfrommultiplecamerasmountedinorderto haveminimallyoverlappingFOV. Analternativesolutiontotheaforem entionedalgorithmicapproachesto resolvetheFOVissueistouseadvancedopticssuchasomnidirectionalcameras. Catadioptriccameras(onetypeofomnidirectionalcamera)aredeviceswhich usebothmirrors(re ectiveorcatadioptricelements)andlenses(refractiveor dioptricelements)toformimages[32].Catadioptricsystemswithasinglee ective viewpointareclassi edascentralcatadioptricsystems,whicharedesirablebecause theyyieldpureperspectiveimages[33].InBakerandNayar[34],thecomplete classofsingle-lenssingle-mirrorcatadioptricsystemsisderivedthatsatisfythe singleviewpointconstraint.Recently,catadioptricsystemshavebeeninvestigated toenlargetheFOVforvisualservocontroltasks(e.g.,[35,652]).Burschka andHager[65]addressedthevisualservoingproblemofmobilerobotsequipped withcentralcatadioptriccameras,inwhichanestimationofthefeatureheight totheplaneofmotionisrequired.Barretoetal.[73]developedamodel-based trackingapproachofarigidobjectusingacentralcatadioptriccamera.Mezouar etal.[66]controlledaroboticsystemusingtheprojectionof3Dlinesintheimage planeofacentralcatadioptricsystem. In[35,65,66],theinverseoftheimage Jacobianisrequiredinthecontrollerdevelopmentwhichmayleadtoasingularity problemforcertaincon gurations.Hadj-Abdelkaderetal.[67]presentedthe21/2 Dvisualservoingapproachusingomnidirectionalcameras,inwhichtheinverse ofanestimatedimage-Jacobian(containingpotentialsingularities)isrequiredin

PAGE 25

11 thecontroller.Mariottinietal.[68,69]developedanimage-basedvisualservoing strategyusingepipolargeometryforathreeDOFmobilerobotequippedwitha centralcatadioptriccamera.Particular ly,thesingularityproblemintheimage JacobianwasaddressedbyMariottinietal.[69]usingepipolargeometry.In BenhimaneandMalis[70],anewapproachtovisualservoregulationforomnidirectionalcameraswasdevelopedthatusesthefeedbackofahomographydirectly (withoutrequiringadecomposition)thatalsodoesnotrequireanymeasureofthe 3Dinformationontheobservedscene.However,theresultin[70]isrestrictedtobe localsincethecontrollerisdevelopedbasedonalinearizedopen-looperrorsystem attheorigin,andthetaskfunctionisonlyisomorphictoarestrictedregionwithin theomnidirectionalview.InTatsambonandChaumette[71,72],anewoptimal combinationofvisualfeaturesisproposedforvisualservoingfromspheresusing centralcatadioptricsystems. 1.3.3RobustandAdaptiveVisualServoControl Motivatedbythedesiretoincorporaterobustnesstocameracalibration, di erentcontrolapproachesthatdonotdependonexactcameracalibrationhave beenproposed(cf.[9,21,748]).E ortssuchas[74]haveinvestigatedthe developmentofmethodstoestimatetheimageandrobotmanipulatorJacobians ThesemethodsarecomposedofsomeformofrecursiveJacobianestimationlaw andacontrollaw.Speci cally,HosodaandAsada[74]developedavisualservo controllerbasedonaweightedrecursiveleast-squaresupdatelawtoestimate theimageJacobian.InJagersandetal.[75],aBroydenJacobianestimatoris appliedandanonlinearleast-squareoptimizationmethodisusedforthevisual servocontroldevelopment.ShahamiriandJagersand[76]usedanullspace-biased Newton-stepvisualservostrategywithaBroydenJacobianestimationforonline singularitydetectionandavoidanceinanuncalibratedvisualservocontrolproblem. InPiepmeierandLipkin[77]andPiepmeieretal.[78],arecursiveleast-squares

PAGE 26

12 algorithmisimplementedforJacobianestimation,andadynamicGauss-Newton methodisusedtominimizethesquarederrorintheimageplane. Robustcontrolapproachesbasedonstaticbest-guessestimationofthe calibrationmatrixhavebeendevelopedtosolvetheuncalibratedvisualservo regulationproblem(cf.[21,82,87,88]).Speci cally,underasetofassumptions ontherotationandcalibrationmatrix,akinematiccontrollerwasdevelopedby TaylorandOstrowski[82]thatutilizesaconstant,best-guessestimateofthe calibrationparameterstoachievelocalset-pointregulationforthesixDOFvisual servocontrolproblem.Homography-basedvisualservoingmethodsusingbest-guess estimationareusedbyMalisandChaumette[21]andFangetal.[87]toachieve asymptoticorexponentialregulationwithrespecttobothcameraandhand-eye calibrationerrorsforthesixDOFproblem. Thedevelopmentoftraditionaladaptivecontrolmethodstocompensatefor uncertaintyinthecameracalibrationmatrixisinhibitedbecauseofthetimevaryinguncertaintyinjectedinthetransformationfromthenormalizationofthe Euclideancoordinates.Asaresult,initialadaptivecontrolresultssuchas[795] werelimitedtoscenarioswheretheopticaxisofthecamerawasassumedtobe perpendicularwiththeplaneformedbythefeaturepoints(i.e.,thetime-varying uncertaintyisreducedtoaconstantuncertainty)orassumedanadditionalsensor (e.g.,ultrasonicsensors,laser-basedsensors,additionalcameras)couldbeusedto measurethedepthinformation. Morerecentapproachesexploitgeometricrelationshipsbetweenmultiple spatiotemporalviewsofanobjecttotransformthetime-varyinguncertaintyinto knowntime-varyingtermsmultipliedbyanunknownconstant[9,21,869].In Rufetal.[9],anon-linecalibrationalgorithmwasdevelopedforposition-based visualservoing.InLiuetal.[86],anadaptiveimage-basedvisualservocontroller wasdevelopedthatregulatedthefeaturepointsinanimagetodesiredlocations.

PAGE 27

13 Oneproblemwithmethodsbasedontheimage-Jacobianisthattheestimated image-Jacobianmaycontainsingularities.Thedevelopmentin[86]exploitsan additionalpotentialforcefunctiontodrivetheestimatedparametersawayfromthe valuesthatresultinasingularJacobianmatrix.InChenetal.[89],anadaptive homography-basedcontrollerwasproposedtoaddressproblemsofuncertainty intheintrinsiccameracalibrationpara metersandlackofdepthmeasurements. Speci cally,anadaptivecontrolstrategywasdevelopedfromaLyapunov-based approachthatexploitsthetriangularstructureofthecalibrationmatrix.Tothe bestofourknowledge,theresultin[89]wasthe rstresultthatregulatesthe robotend-e ectortoadesiredposition/orientationthroughvisualservoingby activelycompensatingforthelackofde pthmeasurementsanduncertaintyin thecameraintrinsiccalibrationmatrixwithregardtothesixDOFregulation problem.However,therelationshipbetweentheestimatedrotationaxisandthe actualrotationaxisisnotcorrectlydeveloped.Atime-varyingscalingfactorwas omittedwhichisrequiredtorelatetheestimatedrotationmatrixandtheactual rotationmatrix.Speci cally,theestimatedrotationmatrixandtheactualrotation matrixwereincorrectlyrelatedthrougheigenvectorsthatareassociatedwiththe eigenvalueof1.Anunknowntime-varyingscalarisrequiredtorelatethesevectors, andthemethodsdevelopedin[89]donotappeartobesuitabletoaccommodate forthisuncertainty. 1.4Contributions Themaincontributionofthisdissertationisthedevelopmentofvisualservo controlalgorithmsandarchitecturesthatexploitthevisualfeedbackfromacamera systemtoachieveatrackingorregulationcontrolobjectiveforarigid-bodycontrol object(e.g.,theend-e ectorofarobotmanipulator,asatellite,anautonomous vehicle)identi edbyapatchoffeaturepoints.Intheprocessofachievingthemain contribution,thefollowingcontributionsweremade:

PAGE 28

14 Anewadaptivehomography-basedvisualservocontrolmethodviaaquaternionformulationisdevelopedthatachievesasymptotictrackingcontrol.This controlschemeissingularity-freebyexploitingthehomographytechniques andaquaternionparameterization.Theadaptiveestimationtermintheproposedcontrollercompensatesfortheunknowndepthinformationdynamically whilethecontrollerachievestheasymptotictrackingresults. Anewcollaborativevisualservocontrolmethodisdevelopedtoenablea rigid-bodyobjecttotrackadesiredtrajectoryviaadaisy-chainingmulti-view geometry.Incontrasttotypicalcamera -to-handandcamera-in-handvisual servocontrolcon gurations,theproposedcontrollerisdevelopedusinga movingon-boardcameraviewingamovingobjecttoobtainfeedbacksignals. ThiscollaborativemethodweakenstheFOVrestrictionandenablesthe controlobjecttoperformlargeareamotion. Avisualservocontrollerisdevelopedthatyieldsanasymptotictracking resultforthecompletenonlinearsixDOFcamera-in-handcentralcatadioptriccamerasystem.ApanoramicFOVisobtainedbyusingthecentral catadioptriccamera. Arobustvisualservocontrollerisdevelopedtoachievearegulationcontrolobjectiveinpresenceofintrinsiccameracalibrationuncertainties.A quaternion-basedestimatefortherotationerrorsignalisdevelopedand usedinthecontrollerdevelopment.Thesimilarityrelationshipbetweenthe estimatedandactualrotationmatricesisusedtoconstructtherelationship betweentheestimatedandactualquaternions.ALyapunov-basedstability analysisisprovidedthatindicatesauniquecontrollercanbedeveloped toachievetheregulationresultdespiteasignambiguityinthedeveloped quaternionestimate.

PAGE 29

15 Anewcombinedrobustandadaptivevisualservocontrolmethodisdevelopedtoasymptoticallyregulatethefeaturepointsinanimagetothe desiredlocationswhilealsoregulatingthesixDOFposeofthecontrolobject withoutcalibratingthecamera.Thesedualobjectivesareachievedbyusing ahomography-basedapproachthatex ploitsbothimage-spaceandreconstructedEuclideaninformationinthefeedbackloop.Therobustrotation controllerthataccommodatesforthet ime-varyinguncertainscalingfactor isdevelopedbyexploitingtheuppertriangularformoftherotationerror systemandthefactthatthediagonalelementsofthecameracalibration matrixarepositive.Theadaptivetranslationcontrollerthatcompensatesfor theconstantunknownparametersinthetranslationerrorsystemisdeveloped byacertainty-equivalence-basedadaptivecontrolmethodandanonlinear Lyapunov-baseddesignapproach.

PAGE 30

CHAPTER2 BACKGROUNDANDPRELIMINARYDEVELOPMENT Thepurposeofthischapteristoprovidesomebackgroundinformation pertainingtothecamerageometricmodel,Euclideanreconstruction,andunit quaternionparameterizationapproach.Sections2.1and2.2developthenotation andframeworkforthecamerageometricmodelandEuclideanreconstructionused inChapters3,6and7.TheirextensionsarealsousedinChapters4and5.Section 2.3reviewstheunitquaternion,arotationrepresentationapproachthatisused throughoutthisdissertation. 2.1GeometricModel Imageprocessingtechniquescanoftenbeusedtoselectcoplanarandnoncollinearfeaturepointswithinanimage.However,iffourcoplanarfeaturepoints arenotavailablethenthesubsequentdevelopmentcanalsoexploittheclassic eight-pointsalgorithmwithnofouroftheeightfeaturepointsbeingcoplanar (seeHartleyandZisserman[13])orthevirtualparallaxmethod(seeBoufama andMohr[90]andMalis[55])wherethenon-coplanarpointsareprojectedonto avirtualplane.Withoutlossofgenerality,thesubsequentdevelopmentisbased ontheassumptionthatanobject(e.g.,theend-e ectorofarobotmanipulator, anaircraft,atumblingsatellite,anautonomousvehicle,etc.)hasfourcoplanar andnon-collinearfeaturepointsdenotedby =1 2 3 4 ,andthefeature pointscanbedeterminedfromafeaturepointtrackingalgorithm(e.g.,KanadeLucas-Tomasi(KLT)algorithmdiscussedbyShiandTomasi[91]andTomasi andKanade[92]).Theplanede nedbythefourfeaturepointsisdenotedby asdepictedinFigure2.Thecoordinateframe F inFigure21isa xed toacameraviewingtheobject,thestationarycoordinateframe Fdenotesa 16

PAGE 31

Figure 2-1: Coordinate frame relationships between a camera viewing a planar patch at different spatiotemporal instances. The coordinate frames $\mathcal{F}$, $\mathcal{F}^*$ and $\mathcal{F}_d$ are attached to the current, reference and desired locations, respectively.

reference location for the camera, and the coordinate frame $\mathcal{F}_d$ is attached to the desired location of the camera. When the desired location of the camera is a constant, $\mathcal{F}_d$ can be chosen the same as $\mathcal{F}^*$ as shown in Figure 2-2. That is, the tracking problem becomes a more particular regulation problem for the configuration in Figure 2-2.

The vectors $\bar{m}_i(t)$, $\bar{m}_i^*$, $\bar{m}_{di}(t) \in \mathbb{R}^3$ in Figure 2-1 are defined as

$\bar{m}_i \triangleq [\,x_i \;\; y_i \;\; z_i\,]^T, \qquad \bar{m}_i^* \triangleq [\,x_i^* \;\; y_i^* \;\; z_i^*\,]^T, \qquad \bar{m}_{di} \triangleq [\,x_{di} \;\; y_{di} \;\; z_{di}\,]^T$   (2-1)

where $x_i(t), y_i(t), z_i(t) \in \mathbb{R}$, $x_i^*, y_i^*, z_i^* \in \mathbb{R}$ and $x_{di}(t), y_{di}(t), z_{di}(t) \in \mathbb{R}$ denote the Euclidean coordinates of the feature points $O_i$ expressed in the frames $\mathcal{F}$, $\mathcal{F}^*$ and $\mathcal{F}_d$, respectively. From standard Euclidean geometry, relationships between $\bar{m}_i(t)$, $\bar{m}_i^*$,

Figure 2-2: Coordinate frame relationships between a camera viewing a planar patch at different spatiotemporal instances. The coordinate frames $\mathcal{F}$ and $\mathcal{F}^*$ are attached to the current and reference locations, respectively.

and $\bar{m}_{di}(t)$ can be determined as

$\bar{m}_i = x_f + R\,\bar{m}_i^*, \qquad \bar{m}_{di} = x_{fd} + R_d\,\bar{m}_i^*$   (2-2)

where $R(t), R_d(t) \in SO(3)$ denote the orientations of $\mathcal{F}^*$ with respect to $\mathcal{F}$ and $\mathcal{F}_d$, respectively, and $x_f(t), x_{fd}(t) \in \mathbb{R}^3$ denote translation vectors from $\mathcal{F}$ to $\mathcal{F}^*$ and from $\mathcal{F}_d$ to $\mathcal{F}^*$ expressed in the coordinates of $\mathcal{F}$ and $\mathcal{F}_d$, respectively. As also illustrated in Figure 2-1, $n^* \in \mathbb{R}^3$ denotes the constant unit normal to the plane $\pi$, and the constant distance from the origin of $\mathcal{F}^*$ to $\pi$ along the unit normal is denoted by $d^* \in \mathbb{R}$. The normalized Euclidean coordinates, denoted by

$m_i(t)$, $m_i^*$, $m_{di}(t) \in \mathbb{R}^3$, are defined as

$m_i \triangleq \dfrac{\bar{m}_i}{z_i}, \qquad m_i^* \triangleq \dfrac{\bar{m}_i^*}{z_i^*}, \qquad m_{di} \triangleq \dfrac{\bar{m}_{di}}{z_{di}}$   (2-3)

with the standard assumption that $z_i(t), z_i^*, z_{di}(t) > \varepsilon$, where $\varepsilon$ is an arbitrarily small positive constant. From (2-3), the relationships in (2-2) can be expressed as

$m_i = \underbrace{\dfrac{z_i^*}{z_i}}_{\alpha_i} \underbrace{\left(R + \dfrac{x_f}{d^*}\,n^{*T}\right)}_{H} m_i^*, \qquad m_{di} = \underbrace{\dfrac{z_i^*}{z_{di}}}_{\alpha_{di}} \underbrace{\left(R_d + \dfrac{x_{fd}}{d^*}\,n^{*T}\right)}_{H_d} m_i^*$   (2-4)

where $\alpha_i(t), \alpha_{di}(t) \in \mathbb{R}$ are scaling terms, and $H(t), H_d(t) \in \mathbb{R}^{3\times3}$ denote the Euclidean homographies.

2.2 Euclidean Reconstruction

Each feature point on $\pi$ has a projected pixel coordinate $p_i(t) \in \mathbb{R}^3$, $p_i^* \in \mathbb{R}^3$ and $p_{di}(t) \in \mathbb{R}^3$ in $\mathcal{F}$, $\mathcal{F}^*$ and $\mathcal{F}_d$, respectively, denoted by

$p_i \triangleq [\,u_i \;\; v_i \;\; 1\,]^T, \qquad p_i^* \triangleq [\,u_i^* \;\; v_i^* \;\; 1\,]^T, \qquad p_{di} \triangleq [\,u_{di} \;\; v_{di} \;\; 1\,]^T$   (2-5)

where $u_i(t), v_i(t), u_i^*, v_i^*, u_{di}(t), v_{di}(t) \in \mathbb{R}$. The projected pixel coordinates $p_i(t)$, $p_i^*$ and $p_{di}(t)$ are related to the normalized task-space coordinates $m_i(t)$, $m_i^*$ and $m_{di}(t)$ by the following global invertible transformation (i.e., the pinhole camera

model)

$p_i = A\,m_i, \qquad p_i^* = A\,m_i^*, \qquad p_{di} = A\,m_{di}$   (2-6)

where $A \in \mathbb{R}^{3\times3}$ is a constant, upper triangular, and invertible intrinsic camera calibration matrix that is explicitly defined as [13]

$A \triangleq \begin{bmatrix} \alpha & -\alpha\cot\phi & u_0 \\ 0 & \dfrac{\beta}{\sin\phi} & v_0 \\ 0 & 0 & 1 \end{bmatrix}.$   (2-7)

In (2-7), $u_0, v_0 \in \mathbb{R}$ denote the pixel coordinates of the principal point (i.e., the image center that is defined as the frame buffer coordinates of the intersection of the optical axis with the image plane), $\alpha, \beta \in \mathbb{R}$ represent the product of the camera scaling factors and the focal length, and $\phi \in \mathbb{R}$ is the skew angle between the camera axes.

Based on (2-6), the Euclidean relationship in (2-4) can be expressed in terms of the image coordinates as

$p_i = \alpha_i \underbrace{\left(A H A^{-1}\right)}_{G}\, p_i^*, \qquad p_{di} = \alpha_{di} \underbrace{\left(A H_d A^{-1}\right)}_{G_d}\, p_i^*.$   (2-8)

By using the feature point pairs $\left(p_i^*, p_i(t)\right)$ and $\left(p_i^*, p_{di}(t)\right)$, the projective homographies up to a scalar multiple (i.e., $G(t)$ and $G_d(t)$) can be determined (see Chen et al. [10]). Various methods can then be applied (e.g., see Faugeras and Lustman [93] and Zhang and Hanson [94]) to decompose the Euclidean homographies to obtain the rotation matrices $R(t)$, $R_d(t)$ and the depth ratios $\alpha_i(t)$, $\alpha_{di}(t)$.
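The projective homography can be estimated from four (or more) feature-point correspondences by stacking the linear constraints implied by (2-8) and solving in a least-squares sense. The following MATLAB sketch (not taken from the dissertation; the variable names are illustrative) recovers $G$ up to scale and forms the Euclidean homography $A^{-1}GA$, which can then be decomposed by the methods of [93, 94].

```matlab
% Minimal sketch: estimate the Euclidean homography H (up to scale) from pixel
% correspondences.  p and pstar are 3xN arrays of homogeneous pixel coordinates
% in the current and reference images, and Acal is the calibration matrix A.
function H = euclidean_homography(p, pstar, Acal)
    N = size(p, 2);
    M = zeros(2*N, 9);
    for i = 1:N
        ps = pstar(:, i)';              % reference point as a row vector
        u  = p(1, i);   v = p(2, i);    % current pixel coordinates
        M(2*i-1, :) = [ps, zeros(1, 3), -u*ps];
        M(2*i,   :) = [zeros(1, 3), ps, -v*ps];
    end
    [~, ~, V] = svd(M);                 % least-squares null vector of M
    G = reshape(V(:, end), 3, 3)';      % projective homography G, up to scale
    H = Acal \ G * Acal;                % Euclidean homography H = A^{-1} G A
end
```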


2.3 Unit Quaternion Representation of the Rotation Matrix

For a given rotation matrix, several different representations (e.g., Euler angle-axis, direction cosines matrix, Euler angles, unit quaternion (or Euler parameters), etc.) can be utilized to develop the error system. In previous homography-based visual servo control literature, the Euler angle-axis representation has been used to describe the rotation matrix $R(t)$. In the angle-axis parameters $(u(t), \theta(t))$, $\theta(t) \in \mathbb{R}$ represents a rotation angle about a suitable unit vector $u(t) \in \mathbb{R}^3$. The parameters $(u(t), \theta(t))$ can be easily calculated (e.g., using the algorithm shown in Spong and Vidyasagar [11]).

Given the unit vector $u(t)$ and angle $\theta(t)$, the rotation matrix $R(u, \theta)$ can be calculated using the Rodrigues formula

$R(u, \theta) = I_3 + u^\times \sin\theta + \left(u^\times\right)^2\left(1 - \cos\theta\right)$   (2-9)

where $I_3$ is the $3\times3$ identity matrix, and the notation $u^\times(t)$ denotes the following skew-symmetric form of the vector $u(t)$:

$u^\times = \begin{bmatrix} 0 & -u_3 & u_2 \\ u_3 & 0 & -u_1 \\ -u_2 & u_1 & 0 \end{bmatrix}, \qquad u = [\,u_1 \;\; u_2 \;\; u_3\,]^T.$   (2-10)

The unit quaternion is a four-dimensional vector which can be defined as [23]

$q \triangleq [\,q_0 \;\; q_v^T\,]^T, \qquad q_v \triangleq [\,q_{v1} \;\; q_{v2} \;\; q_{v3}\,]^T.$   (2-11)

In (2-11), $q_0(t), q_{vi}(t) \in \mathbb{R}$, $i = 1, 2, 3$. The unit quaternion must also satisfy the following nonlinear constraint

$q^T q = 1.$   (2-12)

This parameterization facilitates the subsequent problem formulation, control development, and stability analysis since the unit quaternion provides a globally nonsingular parameterization of the rotation matrix.

Given $(u(t), \theta(t))$, the unit quaternion vector $q(t)$ can be constructed as

$\begin{bmatrix} q_0(t) \\ q_v(t) \end{bmatrix} = \begin{bmatrix} \cos\dfrac{\theta(t)}{2} \\ u(t)\sin\dfrac{\theta(t)}{2} \end{bmatrix}.$   (2-13)

Based on (2-13), the rotation matrix in (2-9) can be expressed as

$R(q) = I_3 + 2 q_0\, q_v^\times + 2\left(q_v^\times\right)^2 = \left(q_0^2 - q_v^T q_v\right) I_3 + 2 q_v q_v^T + 2 q_0\, q_v^\times.$   (2-14)

The rotation matrix in (2-14) is typical in the robotics literature where the moving coordinate system is expressed in terms of a fixed coordinate system (typically the coordinate system attached to the base frame). However, the typical representation of the rotation matrix in the aerospace literature (e.g., [15]) is

$R(q) = \left(q_0^2 - q_v^T q_v\right) I_3 + 2 q_v q_v^T - 2 q_0\, q_v^\times.$   (2-15)

The difference is due to the fact that the rotation matrix in (2-14) (which is used in the current dissertation) relates the moving coordinate frame $\mathcal{F}$ to the fixed coordinate frame $\mathcal{F}^*$ with the corresponding states expressed in $\mathcal{F}$. The rotation matrix in (2-15) can be expanded as

$R(q) = \begin{bmatrix} q_0^2 + q_{v1}^2 - q_{v2}^2 - q_{v3}^2 & 2\left(q_{v1}q_{v2} + q_{v3}q_0\right) & 2\left(q_{v1}q_{v3} - q_{v2}q_0\right) \\ 2\left(q_{v1}q_{v2} - q_{v3}q_0\right) & q_0^2 - q_{v1}^2 + q_{v2}^2 - q_{v3}^2 & 2\left(q_{v2}q_{v3} + q_{v1}q_0\right) \\ 2\left(q_{v1}q_{v3} + q_{v2}q_0\right) & 2\left(q_{v2}q_{v3} - q_{v1}q_0\right) & q_0^2 - q_{v1}^2 - q_{v2}^2 + q_{v3}^2 \end{bmatrix}.$   (2-16)

From (2-16), various approaches could be used to determine $q_0(t)$ and $q_v(t)$;

however, the numerical significance of the resulting computations can be lost if $q_0(t)$ is close to zero [15]. Shuster [15] developed a method to determine $q_0(t)$ and $q_v(t)$ that provides robustness against such computational issues. Specifically, the diagonal terms of $R(q)$ can be obtained from (2-12) and (2-16) as

$R_{11} = 1 - 2\left(q_{v2}^2 + q_{v3}^2\right)$   (2-17)

$R_{22} = 1 - 2\left(q_{v1}^2 + q_{v3}^2\right)$   (2-18)

$R_{33} = 1 - 2\left(q_{v1}^2 + q_{v2}^2\right).$   (2-19)

By utilizing (2-12) and (2-17)-(2-19), the following expressions can be developed:

$q_0^2 = \dfrac{R_{11} + R_{22} + R_{33} + 1}{4}, \quad q_{v1}^2 = \dfrac{R_{11} - R_{22} - R_{33} + 1}{4}, \quad q_{v2}^2 = \dfrac{R_{22} - R_{11} - R_{33} + 1}{4}, \quad q_{v3}^2 = \dfrac{R_{33} - R_{11} - R_{22} + 1}{4}$   (2-20)

where $q_0(t)$ is restricted to be non-negative without loss of generality (this restriction enables the minimum rotation to be obtained). As stated in [15], the greatest numerical accuracy for computing $q_0(t)$ and $q_v(t)$ is obtained by using the element in (2-20) with the largest value and then computing the remaining terms accordingly. For example, if $q_0^2(t)$ has the maximum value in (2-20), then the greatest numerical accuracy can be obtained by computing $q_0(t)$ and $q_v(t)$ as

$q_0 = \sqrt{\dfrac{R_{11} + R_{22} + R_{33} + 1}{4}}, \quad q_{v1} = \dfrac{R_{23} - R_{32}}{4 q_0}, \quad q_{v2} = \dfrac{R_{31} - R_{13}}{4 q_0}, \quad q_{v3} = \dfrac{R_{12} - R_{21}}{4 q_0}.$   (2-21)

Likewise, if $q_{v1}^2(t)$ has the maximum value in (2-20), then the greatest numerical accuracy can be obtained by computing $q_0(t)$ and $q_v(t)$ as

$q_0 = \dfrac{R_{23} - R_{32}}{4 q_{v1}}, \quad q_{v1} = \sqrt{\dfrac{R_{11} - R_{22} - R_{33} + 1}{4}}, \quad q_{v2} = \dfrac{R_{12} + R_{21}}{4 q_{v1}}, \quad q_{v3} = \dfrac{R_{13} + R_{31}}{4 q_{v1}}$   (2-22)

where the sign of $q_{v1}(t)$ is selected so that $q_0(t) \geq 0$. If $q_{v2}^2(t)$ is the maximum, then

$q_0 = \dfrac{R_{31} - R_{13}}{4 q_{v2}}, \quad q_{v1} = \dfrac{R_{12} + R_{21}}{4 q_{v2}}, \quad q_{v2} = \sqrt{\dfrac{R_{22} - R_{11} - R_{33} + 1}{4}}, \quad q_{v3} = \dfrac{R_{23} + R_{32}}{4 q_{v2}}$   (2-23)

or if $q_{v3}^2(t)$ is the maximum, then

$q_0 = \dfrac{R_{12} - R_{21}}{4 q_{v3}}, \quad q_{v1} = \dfrac{R_{13} + R_{31}}{4 q_{v3}}, \quad q_{v2} = \dfrac{R_{23} + R_{32}}{4 q_{v3}}, \quad q_{v3} = \sqrt{\dfrac{R_{33} - R_{11} - R_{22} + 1}{4}}$   (2-24)

where the sign of $q_{v2}(t)$ or $q_{v3}(t)$ is selected so that $q_0(t) \geq 0$.

The expressions in (2-21)-(2-24) indicate that, given the rotation matrix $R(t)$ from the homography decomposition, the unit quaternion vector that represents the rotation can be determined without introducing a singularity. The expressions in (2-21)-(2-24) will be utilized in the subsequent control development and stability analysis.
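The following MATLAB sketch (not taken verbatim from the dissertation) illustrates the numerically robust conversion of (2-21)-(2-24): the squared quaternion element with the largest value in (2-20) is computed first, and the remaining elements follow from the off-diagonal entries of $R$.

```matlab
% Minimal sketch of the rotation-matrix-to-quaternion conversion of (2-21)-(2-24).
function q = rot2quat(R)
    s = [1 + R(1,1) + R(2,2) + R(3,3);      % 4*q0^2
         1 + R(1,1) - R(2,2) - R(3,3);      % 4*qv1^2
         1 - R(1,1) + R(2,2) - R(3,3);      % 4*qv2^2
         1 - R(1,1) - R(2,2) + R(3,3)];     % 4*qv3^2
    [~, k] = max(s);
    r = sqrt(s(k));
    switch k
        case 1      % q0 dominant, as in (2-21)
            q = [r/2; (R(2,3)-R(3,2))/(2*r); (R(3,1)-R(1,3))/(2*r); (R(1,2)-R(2,1))/(2*r)];
        case 2      % qv1 dominant, as in (2-22)
            q = [(R(2,3)-R(3,2))/(2*r); r/2; (R(1,2)+R(2,1))/(2*r); (R(1,3)+R(3,1))/(2*r)];
        case 3      % qv2 dominant, as in (2-23)
            q = [(R(3,1)-R(1,3))/(2*r); (R(1,2)+R(2,1))/(2*r); r/2; (R(2,3)+R(3,2))/(2*r)];
        otherwise   % qv3 dominant, as in (2-24)
            q = [(R(1,2)-R(2,1))/(2*r); (R(1,3)+R(3,1))/(2*r); (R(2,3)+R(3,2))/(2*r); r/2];
    end
    if q(1) < 0, q = -q; end                % enforce q0 >= 0 (minimum rotation)
end
```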


CHAPTER 3
LYAPUNOV-BASED VISUAL SERVO TRACKING CONTROL VIA A QUATERNION FORMULATION

3.1 Introduction

Previous visual servo controllers typically only address the regulation problem. Motivated by the need for new advancements to meet visual servo tracking applications, previous research has concentrated on developing different types of path planning techniques [5]. Recently, Chen et al. [10] developed a new formulation of the tracking control problem. The homography-based adaptive visual servo controller in [10] is developed to enable an actuated object to track a prerecorded time-varying desired trajectory determined by a sequence of images, where the Euler angle-axis representation is used to represent the rotation error system. Due to the computational singularity limitation of the angle-axis extraction algorithm (see Spong and Vidyasagar [11]), rotation angles of $\pm\pi$ were not considered.

This chapter considers the previously unexamined problem of six-DOF visual servo tracking control with a nonsingular rotation parameterization. A homography is constructed from image pairs and decomposed via textbook methods (e.g., Faugeras [12] and Hartley and Zisserman [13]) to obtain the rotation matrix. Once the rotation matrix has been determined, the corresponding unit quaternion can be determined from globally nonsingular and numerically robust algorithms (e.g., Hu et al. [14] and Shuster [15]). An error system is constructed in terms of the unit quaternion. An adaptive controller is then developed and proven to enable a camera (attached to a rigid-body object) to track a desired trajectory that is determined from a sequence of images. These images can be taken online or offline by a camera. For example, a sequence of images of the reference object can be

prerecorded as the camera moves (a camera-in-hand configuration), and these images can be used as a desired trajectory in a later real-time tracking control. The camera is attached to a rigid-body object (e.g., the end-effector of a robot manipulator, a satellite, an autonomous vehicle, etc.) that can be identified by a planar patch of feature points. The controller contains an adaptive feedforward term to compensate for the unknown distance from the camera to the observed features. A quaternion-based Lyapunov function is developed to facilitate the control design and the stability analysis.

The remainder of this chapter is organized as follows. In Section 3.2, the control objective is formulated in terms of the unit quaternion representation. In Section 3.3, the controller is developed, and a closed-loop stability analysis is given based on Lyapunov-based methods. In Section 3.4, the control development is extended to the camera-to-hand configuration. In Sections 3.5 and 3.6, Matlab simulations and tracking experiments that were performed in a virtual-reality test-bed for unmanned systems at the University of Florida are used to show the performance of the proposed visual servo tracking controller.

3.2 Control Objective

The control objective is for a camera to track a desired trajectory that is determined by a sequence of images. This objective is based on the assumption that the linear and angular velocities of the camera are control inputs that can be independently controlled (i.e., unconstrained motion) and that the camera is calibrated (i.e., $A$ is known). The signals in (2-5) are the only required measurements to develop the controller.

One of the outcomes of the homography decomposition is the rotation matrices $R(t)$ and $R_d(t)$. From these rotation matrices, several different representations can be utilized to develop the error system. In previous homography-based visual servo control literature, the Euler angle-axis representation has been used to describe the

rotation matrix. In this chapter, the unit quaternion parameterization will be used to describe the rotation matrix. This parameterization facilitates the subsequent problem formulation, control development, and stability analysis since the unit quaternion provides a global nonsingular parameterization of the corresponding rotation matrices. Section 2.3 provides background, definitions and development related to the unit quaternion.

Given the rotation matrices $R(t)$ and $R_d(t)$, the corresponding unit quaternions $q(t)$ and $q_d(t)$ can be calculated by using the numerically robust method (see [14] and [15]) based on the corresponding relationships

$R(q) = \left(q_0^2 - q_v^T q_v\right) I_3 + 2 q_v q_v^T + 2 q_0\, q_v^\times$   (3-1)

$R_d(q_d) = \left(q_{0d}^2 - q_{vd}^T q_{vd}\right) I_3 + 2 q_{vd} q_{vd}^T + 2 q_{0d}\, q_{vd}^\times$   (3-2)

where $I_3$ is the $3\times3$ identity matrix, and the notation $q_v^\times(t)$ denotes the skew-symmetric form of the vector $q_v(t)$ as in (2-10).

To quantify the error between the actual and desired camera orientations, the mismatch between the rotation matrices $R(t)$ and $R_d(t)$ is defined as

$\tilde{R} = R^T R_d.$   (3-3)

Based on (3-1)-(3-3),

$\tilde{R} = \left(\tilde{q}_0^2 - \tilde{q}_v^T \tilde{q}_v\right) I_3 + 2 \tilde{q}_v \tilde{q}_v^T - 2 \tilde{q}_0\, \tilde{q}_v^\times$   (3-4)

where the error quaternion $\left(\tilde{q}_0(t),\, \tilde{q}_v^T(t)\right)^T$ is defined as

$\tilde{q}_0 = q_0 q_{0d} + q_v^T q_{vd}$   (3-5)

$\tilde{q}_v = q_{0d}\, q_v - q_0\, q_{vd} + q_v^\times q_{vd}.$
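A minimal MATLAB sketch of the error quaternion in (3-5) is given below, assuming $q$ and $q_d$ are $4\times1$ unit quaternions $[q_0;\, q_v]$ obtained from $R(t)$ and $R_d(t)$ (e.g., with the conversion sketched in Section 2.3).

```matlab
% Minimal sketch of the error quaternion defined in (3-5).
function qt = quat_error(q, qd)
    q0  = q(1);   qv  = q(2:4);
    q0d = qd(1);  qvd = qd(2:4);
    qt0 = q0*q0d + qv.'*qvd;                    % scalar part of (3-5)
    qtv = q0d*qv - q0*qvd + cross(qv, qvd);     % vector part of (3-5)
    qt  = [qt0; qtv];
end
```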


The definition of $\tilde{q}_0(t)$ and $\tilde{q}_v(t)$ in (3-5) makes $\left(\tilde{q}_0(t),\, \tilde{q}_v^T(t)\right)^T$ a unit quaternion based on the fact that $q(t)$ and $q_d(t)$ are two unit quaternions (see Appendix A).

The translation error, denoted by $e(t) \in \mathbb{R}^3$, is defined as

$e \triangleq p_e - p_{ed}$   (3-6)

where $p_e(t), p_{ed}(t) \in \mathbb{R}^3$ are defined as

$p_e \triangleq [\,u_i \;\; v_i \;\; -\ln(\alpha_i)\,]^T, \qquad p_{ed} \triangleq [\,u_{di} \;\; v_{di} \;\; -\ln(\alpha_{di})\,]^T$   (3-7)

where $i \in \{1, \ldots, 4\}$.

In the Euclidean space (see Figure 2-1), the tracking objective can be quantified as

$\tilde{R}(t) \to I_3 \quad \text{as} \quad t \to \infty$   (3-8)

and

$\|e(t)\| \to 0 \quad \text{as} \quad t \to \infty.$   (3-9)

Since $\tilde{q}(t)$ is a unit quaternion, (3-4), (3-5) and (3-8) can be used to quantify the rotation tracking objective as the desire to regulate $\tilde{q}_v(t)$ as

$\|\tilde{q}_v(t)\| \to 0 \quad \text{as} \quad t \to \infty.$   (3-10)

The subsequent section will target the control development based on the objectives in (3-9) and (3-10).

3.3 Control Development

3.3.1 Open-Loop Error System

The actual angular velocity of the camera expressed in $\mathcal{F}$ is defined as $\omega_c(t) \in \mathbb{R}^3$, the desired angular velocity of the camera expressed in $\mathcal{F}_d$ is defined as $\omega_{cd}(t) \in \mathbb{R}^3$, and the relative angular velocity of the camera with respect to $\mathcal{F}_d$

expressed in $\mathcal{F}$ is defined as $\tilde{\omega}_c(t) \in \mathbb{R}^3$, where

$\tilde{\omega}_c = \omega_c - \tilde{R}\,\omega_{cd}.$   (3-11)

The camera angular velocities can be related to the time derivatives of $\tilde{q}(t)$ and $q_d(t)$ as [23]

$\begin{bmatrix} \dot{\tilde{q}}_0 \\ \dot{\tilde{q}}_v \end{bmatrix} = \dfrac{1}{2} \begin{bmatrix} -\tilde{q}_v^T \\ \tilde{q}_0 I_3 + \tilde{q}_v^\times \end{bmatrix} \tilde{\omega}_c$   (3-12)

and

$\begin{bmatrix} \dot{q}_{0d} \\ \dot{q}_{vd} \end{bmatrix} = \dfrac{1}{2} \begin{bmatrix} -q_{vd}^T \\ q_{0d} I_3 + q_{vd}^\times \end{bmatrix} \omega_{cd},$   (3-13)

respectively.

As stated in Remark 3 in Chen et al. [10], a sufficiently smooth function can be used to fit the sequence of feature points to generate the desired trajectory $p_{di}(t)$; hence, it is assumed that $p_{di}(t)$ and $\dot{p}_{di}(t)$ are bounded functions of time. In practice, the a priori developed smooth functions $\alpha_{di}(t)$, $R_d(t)$, and $p_{di}(t)$ can be constructed as bounded functions with bounded time derivatives. Based on the assumption that $R_d(t)$ is a bounded first-order differentiable function with a bounded derivative, the algorithm for computing quaternions in [14] can be used to conclude that $q_{0d}(t)$, $q_{vd}(t)$ are bounded first-order differentiable functions with bounded derivatives; hence, $q_{0d}(t)$, $q_{vd}(t)$ and $\dot{q}_{0d}(t)$, $\dot{q}_{vd}(t)$ are bounded. In the subsequent tracking control development, the desired signals $\dot{p}_{ed}(t)$ and $\omega_{cd}(t)$ will be used as feedforward control terms. To avoid the computational singularity in $\theta_d(t)$, the desired trajectory in [10] was generated by carefully choosing the smooth function such that the workspace is limited to $-\pi < \theta_d(t) < \pi$. Unlike [10], the use of the quaternion alleviates the restriction on the desired trajectory $R_d(t)$.

From (3-13), the signal $\omega_{cd}(t)$ can be calculated as

$\omega_{cd} = 2\left(q_{0d}\,\dot{q}_{vd} - \dot{q}_{0d}\,q_{vd}\right) - 2\,q_{vd}^\times\,\dot{q}_{vd}$   (3-14)

where $\dot{q}_{0d}(t)$, $\dot{q}_{vd}(t)$, $q_{0d}(t)$, $q_{vd}(t)$ are bounded, so $\omega_{cd}(t)$ is also bounded.
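A minimal MATLAB sketch of the feedforward signal in (3-14) is shown below, assuming $q_d$ and its time derivative are available (e.g., by numerically differentiating a smooth fit of the desired quaternion trajectory); the variable names are illustrative only.

```matlab
% Minimal sketch of the desired angular velocity computation in (3-14).
% qd and qd_dot are 4x1 vectors holding [q0d; qvd] and its time derivative.
function w_cd = desired_angular_velocity(qd, qd_dot)
    q0d  = qd(1);      qvd  = qd(2:4);
    q0dd = qd_dot(1);  qvdd = qd_dot(2:4);
    w_cd = 2*(q0d*qvdd - q0dd*qvd) - 2*cross(qvd, qvdd);   % (3-14)
end
```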


Based on (3-4), (3-5), (3-12) and (3-13), the open-loop rotation error system can be developed as

$\begin{bmatrix} \dot{\tilde{q}}_0 \\ \dot{\tilde{q}}_v \end{bmatrix} = \dfrac{1}{2} \begin{bmatrix} -\tilde{q}_v^T \\ \tilde{q}_0 I_3 + \tilde{q}_v^\times \end{bmatrix} \tilde{\omega}_c$   (3-15)

where $\tilde{q}(t) = \left(\tilde{q}_0(t),\, \tilde{q}_v^T(t)\right)^T$.

By using (2-5), (2-8), (3-6), (3-7), and the fact that [95]

$\dot{\bar{m}}_i = -v_c + \bar{m}_i^\times\,\omega_c$   (3-16)

where $v_c(t) \in \mathbb{R}^3$ denotes the actual linear velocity of the camera expressed in $\mathcal{F}$, the open-loop translation error system can be derived as [10]

$z_i^*\,\dot{e} = -\alpha_i A_e \mathcal{L}_v\, v_c + z_i^*\left(A_e \mathcal{L}_v\, m_i^\times\,\omega_c - \dot{p}_{ed}\right)$   (3-17)

where $A_e, \mathcal{L}_v(t) \in \mathbb{R}^{3\times3}$ are defined as

$A_e \triangleq \begin{bmatrix} \alpha & -\alpha\cot\phi & 0 \\ 0 & \dfrac{\beta}{\sin\phi} & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad \mathcal{L}_v \triangleq \begin{bmatrix} 1 & 0 & -m_{i1} \\ 0 & 1 & -m_{i2} \\ 0 & 0 & 1 \end{bmatrix}$   (3-18)

where $m_{i1}(t)$, $m_{i2}(t)$ denote the first two elements of $m_i(t)$. The auxiliary term $A_e \mathcal{L}_v(t)$ is an invertible upper triangular matrix.

3.3.2 Closed-Loop Error System

Based on the open-loop rotation error system in (3-15) and the subsequent Lyapunov-based stability analysis, the angular velocity controller is designed as

$\omega_c = -K_\omega\left(I_3 + \tilde{q}_v^\times\right)^{-1}\tilde{q}_v + \tilde{R}\,\omega_{cd} = -K_\omega\,\tilde{q}_v + \tilde{R}\,\omega_{cd}$   (3-19)

where $K_\omega \in \mathbb{R}^{3\times3}$ denotes a diagonal matrix of positive constant control gains. See Appendix B for the proof that $\left(I_3 + \tilde{q}_v^\times\right)^{-1}\tilde{q}_v = \tilde{q}_v$. Based on (3-11), (3-15) and

(3-19), the rotation closed-loop error system can be determined as

$\dot{\tilde{q}}_0 = \dfrac{1}{2}\,\tilde{q}_v^T K_\omega \tilde{q}_v$   (3-20)

$\dot{\tilde{q}}_v = -\dfrac{1}{2}\left(\tilde{q}_0 I_3 + \tilde{q}_v^\times\right) K_\omega \tilde{q}_v.$

The contribution of this chapter is the development of the quaternion-based rotation tracking controller. Several other homography-based translation controllers could be combined with the developed rotation controller. For completeness, the following development illustrates how the translation controller and adaptive update law in [10] can be used to complete the six-DOF tracking result.

Based on (3-17), the translation control input $v_c(t)$ is designed as

$v_c = \dfrac{1}{\alpha_i}\left(A_e \mathcal{L}_v\right)^{-1}\left(K_v e + \hat{z}_i^*\left(A_e \mathcal{L}_v\, m_i^\times\,\omega_c - \dot{p}_{ed}\right)\right)$   (3-21)

where $K_v \in \mathbb{R}^{3\times3}$ denotes a diagonal matrix of positive constant control gains. In (3-21), the parameter estimate $\hat{z}_i^*(t) \in \mathbb{R}$ for the unknown constant $z_i^*$ is defined as

$\dot{\hat{z}}_i^* = \gamma\, e^T\left(A_e \mathcal{L}_v\, m_i^\times\,\omega_c - \dot{p}_{ed}\right)$   (3-22)

where $\gamma \in \mathbb{R}$ denotes a positive constant adaptation gain. The controller in (3-21) does not exhibit a singularity since $A_e \mathcal{L}_v(t)$ is invertible and $\alpha_i(t) > 0$. From (3-17) and (3-21), the translation closed-loop error system can be written as

$z_i^*\,\dot{e} = -K_v e + \tilde{z}_i^*\left(A_e \mathcal{L}_v\, m_i^\times\,\omega_c - \dot{p}_{ed}\right)$   (3-23)

where $\tilde{z}_i^*(t) \in \mathbb{R}$ denotes the following parameter estimation error:

$\tilde{z}_i^* = z_i^* - \hat{z}_i^*.$   (3-24)
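The MATLAB sketch below (not the dissertation's implementation) shows one sample-time update of the control laws (3-19), (3-21) and the adaptive law (3-22). It assumes that the error quaternion qt, the rotation mismatch Rt, the feedforward signal w_cd, the depth ratio alpha_i, the normalized coordinates m_i, the product Ae_Lv, the translation error e, the desired feedforward ped_dot, and the current estimate z_hat are available from the image processing and homography decomposition at each sample; dt is the sample period, and the gains are those used in the simulation of Section 3.5.

```matlab
Kw    = diag([3 3 3]);          % rotation gains K_omega
Kv    = diag([15 15 15]);       % translation gains K_v
gamma = 2e-4;                   % adaptation gain

qv_t  = qt(2:4);
w_c   = -Kw*((eye(3) + skew(qv_t)) \ qv_t) + Rt*w_cd;      % (3-19)
Theta = Ae_Lv*skew(m_i)*w_c - ped_dot;                     % feedforward term in (3-17)
v_c   = (1/alpha_i)*(Ae_Lv \ (Kv*e + z_hat*Theta));        % (3-21)
z_hat = z_hat + gamma*(e.'*Theta)*dt;                      % (3-22), Euler integration

function S = skew(w)            % skew-symmetric matrix of a 3-vector, as in (2-10)
    S = [    0  -w(3)   w(2);
          w(3)      0  -w(1);
         -w(2)   w(1)      0 ];
end
```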


3.3.3 Stability Analysis

Theorem 3.1: The controller given in (3-19) and (3-21), along with the adaptive update law in (3-22), ensures global asymptotic tracking in the sense that

$\|\tilde{q}_v(t)\| \to 0, \qquad \|e(t)\| \to 0 \qquad \text{as} \quad t \to \infty.$   (3-25)

Proof: Let $V(t) \in \mathbb{R}$ denote the following differentiable non-negative function (i.e., a Lyapunov candidate):

$V \triangleq \tilde{q}_v^T \tilde{q}_v + \left(1 - \tilde{q}_0\right)^2 + \dfrac{z_i^*}{2} e^T e + \dfrac{1}{2\gamma}\,\tilde{z}_i^{*2}.$   (3-26)

The time derivative of $V(t)$ can be determined as

$\dot{V} = 2\tilde{q}_v^T\dot{\tilde{q}}_v + 2\left(1 - \tilde{q}_0\right)\left(-\dot{\tilde{q}}_0\right) + z_i^* e^T\dot{e} - \dfrac{1}{\gamma}\tilde{z}_i^*\dot{\hat{z}}_i^*$
$\;\; = -\tilde{q}_v^T\left(\tilde{q}_0 I_3 + \tilde{q}_v^\times\right)K_\omega\tilde{q}_v - \left(1 - \tilde{q}_0\right)\tilde{q}_v^T K_\omega\tilde{q}_v + e^T\left(-K_v e + \tilde{z}_i^*\left(A_e\mathcal{L}_v m_i^\times\omega_c - \dot{p}_{ed}\right)\right) - \tilde{z}_i^* e^T\left(A_e\mathcal{L}_v m_i^\times\omega_c - \dot{p}_{ed}\right)$
$\;\; = -\tilde{q}_v^T K_\omega\tilde{q}_v - e^T K_v e$   (3-27)

where (3-20) and (3-22)-(3-24) were utilized. It can be seen from (3-27) that $\dot{V}(t)$ is negative semi-definite.

Based on (3-26) and (3-27), $e(t)$, $\tilde{q}_v(t)$, $\tilde{q}_0(t)$, $\tilde{z}_i^*(t) \in \mathcal{L}_\infty$ and $e(t)$, $\tilde{q}_v(t) \in \mathcal{L}_2$. Since $\tilde{z}_i^*(t) \in \mathcal{L}_\infty$, it is clear from (3-24) that $\hat{z}_i^*(t) \in \mathcal{L}_\infty$. Based on the fact that $e(t) \in \mathcal{L}_\infty$, (2-5), (2-8), (3-6) and (3-7) can be used to prove that $p_i(t)$, $\alpha_i(t) \in \mathcal{L}_\infty$. Since $p_i(t) \in \mathcal{L}_\infty$, (3-18) implies that $\mathcal{L}_v(t)$, $\mathcal{L}_v^{-1}(t) \in \mathcal{L}_\infty$. Based on the fact that $\tilde{q}_v(t)$, $\tilde{q}_0(t) \in \mathcal{L}_\infty$, (3-19) can be used to prove that $\omega_c(t) \in \mathcal{L}_\infty$. Since $\omega_c(t) \in \mathcal{L}_\infty$ and $\omega_{cd}(t)$ is a bounded function, (3-11) can be used to conclude that $\tilde{\omega}_c(t) \in \mathcal{L}_\infty$. Since $\omega_c(t)$, $e(t)$, $\hat{z}_i^*(t)$, $m_i(t)$, $\mathcal{L}_v^{-1}(t) \in \mathcal{L}_\infty$ and $\dot{p}_{ed}(t)$ is assumed to be

bounded, (3-21) can be utilized to prove that $v_c(t) \in \mathcal{L}_\infty$. From the previous results, (3-15)-(3-17) can be used to prove that $\dot{e}(t)$, $\dot{\tilde{q}}_v(t) \in \mathcal{L}_\infty$. Since $e(t)$, $\tilde{q}_v(t) \in \mathcal{L}_\infty \cap \mathcal{L}_2$, and $\dot{e}(t)$, $\dot{\tilde{q}}_v(t) \in \mathcal{L}_\infty$, Barbalat's Lemma [96] can be used to conclude the result given in (3-25).

3.4 Camera-To-Hand Extension

Figure 3-1: Coordinate frame relationships between a fixed camera and the planes defined by the current, desired, and reference feature points (i.e., $\pi$, $\pi_d$, and $\pi^*$).

3.4.1 Model Development

For the fixed camera problem, consider the fixed plane $\pi^*$ that is defined by a reference image of the object. In addition, consider the actual and desired motion of the planes $\pi$ and $\pi_d$ (see Figure 3-1). To develop a relationship between the planes, an inertial coordinate system, denoted by $\mathcal{I}$, is defined whose origin coincides with the center of a fixed camera. The Euclidean coordinates of the

feature points on $\pi$ and $\pi_d$ can be expressed in terms of $\mathcal{I}$, respectively, as

$\bar{m}_i(t) \triangleq [\,x_i(t) \;\; y_i(t) \;\; z_i(t)\,]^T, \qquad \bar{m}_{di}(t) \triangleq [\,x_{di}(t) \;\; y_{di}(t) \;\; z_{di}(t)\,]^T$   (3-28)

under the standard assumption that the distances from the origin of $\mathcal{I}$ to the feature points remain positive (i.e., $z_i(t), z_{di}(t) > \varepsilon$, where $\varepsilon$ denotes an arbitrarily small positive constant). Orthogonal coordinate systems $\mathcal{F}$, $\mathcal{F}_d$, and $\mathcal{F}^*$ are attached to the planes $\pi$, $\pi_d$, and $\pi^*$, respectively. To relate the coordinate systems, let $R(t)$, $R_d(t)$, $R^* \in SO(3)$ denote the orientations of $\mathcal{F}$, $\mathcal{F}_d$ and $\mathcal{F}^*$ with respect to $\mathcal{I}$, respectively, and let $x_f(t)$, $x_{fd}(t)$, $x_f^* \in \mathbb{R}^3$ denote the respective translation vectors expressed in $\mathcal{I}$. As also illustrated in Figure 3-1, $n^* \in \mathbb{R}^3$ denotes the constant unit normal to the plane $\pi^*$ expressed in $\mathcal{I}$, and $s_i \in \mathbb{R}^3$ denotes the constant coordinates of the $i$-th feature point expressed in the corresponding coordinate frames $\mathcal{F}$, $\mathcal{F}_d$, and $\mathcal{F}^*$.

From the geometry between the coordinate frames depicted in Figure 3-1, the following relationships can be developed

$\bar{m}_i = x_f + R\, s_i, \qquad \bar{m}_{di} = x_{fd} + R_d\, s_i$   (3-29)

where $\bar{R}(t)$, $\bar{R}_d(t) \in SO(3)$ and $\bar{x}_f(t)$, $\bar{x}_{fd}(t) \in \mathbb{R}^3$ denote new rotation and translation variables, respectively, defined as

$\bar{R} = R\left(R^*\right)^T, \qquad \bar{R}_d = R_d\left(R^*\right)^T, \qquad \bar{x}_f = x_f - \bar{R}\, x_f^*, \qquad \bar{x}_{fd} = x_{fd} - \bar{R}_d\, x_f^*.$   (3-30)

Similar to the camera-in-hand configuration, the relationships in (3-29) can be expressed as

$\bar{m}_i = \bar{x}_f + \bar{R}\,\bar{m}_i^*, \qquad \bar{m}_{di} = \bar{x}_{fd} + \bar{R}_d\,\bar{m}_i^*.$   (3-31)

The rotation matrices $\bar{R}(t)$, $\bar{R}_d(t)$ and the depth ratios $\alpha_i(t)$ and $\alpha_{di}(t)$ can be obtained as described in Section 2.2. The constant rotation matrix $R^*$ can be obtained a priori using various methods (e.g., a second camera, Euclidean measurements) [10]. Based on (3-30), $R(t)$ and $R_d(t)$ can then be determined. The orientations of $\mathcal{F}$ and $\mathcal{F}_d$ with respect to $\mathcal{F}^*$ can be expressed as $\bar{R}(t)$ and $\bar{R}_d(t)$, respectively. To quantify the error between the actual and desired plane orientations, the mismatch between the rotation matrices $\bar{R}(t)$ and $\bar{R}_d(t)$ is defined as

$\tilde{R} = \bar{R}^T \bar{R}_d.$   (3-32)

Similar to the development for the camera-in-hand configuration, the following expression can be obtained:

$\tilde{R} = \left(\tilde{q}_0^2 - \tilde{q}_v^T\tilde{q}_v\right) I_3 + 2\tilde{q}_v\tilde{q}_v^T - 2\tilde{q}_0\,\tilde{q}_v^\times$   (3-33)

where the error quaternion $\left(\tilde{q}_0(t),\, \tilde{q}_v^T(t)\right)^T$ is defined as

$\tilde{q}_0 = q_0 q_{0d} + q_v^T q_{vd}$   (3-34)

$\tilde{q}_v = q_{0d}\, q_v - q_0\, q_{vd} + q_v^\times q_{vd}$

where $\left(q_0(t),\, q_v^T(t)\right)^T$ and $\left(q_{0d}(t),\, q_{vd}^T(t)\right)^T$ are unit quaternions computed from the rotation matrices $\bar{R}(t)$ and $\bar{R}_d(t)$ following the method given in [14].

3.4.2 Control Formulation

By expressing the translation vectors, angular velocity and linear velocity in the body-fixed coordinate frame $\mathcal{F}$, and by defining the rotation matrices $\bar{R}(t)$ and $\bar{R}_d(t)$, the control objective in this section can be formulated in the same manner as for the camera-in-hand configuration problem. So, the rotation and translation errors can be described the same as those for the camera-in-hand configuration problem. The actual angular velocity of the object expressed in $\mathcal{F}$ is

defined as $\omega_e(t) \in \mathbb{R}^3$, the desired angular velocity of the object expressed in $\mathcal{F}_d$ is defined as $\omega_{ed}(t) \in \mathbb{R}^3$, and the relative angular velocity of the object with respect to $\mathcal{F}_d$ expressed in $\mathcal{F}$ is defined as $\tilde{\omega}_e(t) \in \mathbb{R}^3$, where

$\tilde{\omega}_e = \omega_e - \tilde{R}\,\omega_{ed}$   (3-35)

where $\tilde{R}(t)$ has the same form as $\tilde{R}(t)$ in (3-33). The open-loop rotation error system is

$\begin{bmatrix} \dot{\tilde{q}}_0 \\ \dot{\tilde{q}}_v \end{bmatrix} = \dfrac{1}{2}\begin{bmatrix} -\tilde{q}_v^T \\ \tilde{q}_0 I_3 + \tilde{q}_v^\times \end{bmatrix}\tilde{\omega}_e$   (3-36)

and the translation error system is [10]

$z_i^*\,\dot{e} = \alpha_i A_e\mathcal{L}_v R\left(v_e + \omega_e^\times s_i\right) - z_i^*\,\dot{p}_{ed}$   (3-37)

where $v_e(t) \in \mathbb{R}^3$ denotes the linear velocity of the object expressed in $\mathcal{F}$.

Based on the open-loop error systems (3-36) and (3-37), and the subsequent stability analysis, the angular and linear velocity control inputs for the object are defined as

$\omega_e = -K_\omega\,\tilde{q}_v + \tilde{R}\,\omega_{ed}$   (3-38)

$v_e = \dfrac{1}{\alpha_i}\,R^T\left(A_e\mathcal{L}_v\right)^{-1}\left(-K_v e + \hat{z}_i^*\,\dot{p}_{ed}\right) - \omega_e^\times\hat{s}_i.$   (3-39)

In (3-38) and (3-39), $K_\omega$, $K_v \in \mathbb{R}^{3\times3}$ denote diagonal matrices of positive constant control gains, and the parameter estimates $\hat{z}_i^*(t) \in \mathbb{R}$, $\hat{s}_i(t) \in \mathbb{R}^3$ for the unknown constants $z_i^*$ and $s_i$ are generated according to the following adaptive update laws

$\dot{\hat{z}}_i^* = -\gamma_1\, e^T\dot{p}_{ed}$   (3-40)

$\dot{\hat{s}}_i = -\Gamma\,\alpha_i\,\omega_e^\times R^T\left(A_e\mathcal{L}_v\right)^T e$   (3-41)

where $\gamma_1 \in \mathbb{R}$ denotes a positive constant adaptation gain and $\Gamma \in \mathbb{R}^{3\times3}$ denotes a positive constant diagonal adaptation gain matrix.

From (3-36) and (3-38), the rotation closed-loop error system can be determined as

$\dot{\tilde{q}}_0 = \dfrac{1}{2}\,\tilde{q}_v^T K_\omega\tilde{q}_v$   (3-42)

$\dot{\tilde{q}}_v = -\dfrac{1}{2}\left(\tilde{q}_0 I_3 + \tilde{q}_v^\times\right)K_\omega\tilde{q}_v.$

Based on (3-37) and (3-39), the translation closed-loop error system is given as

$z_i^*\,\dot{e} = -K_v e - \tilde{z}_i^*\,\dot{p}_{ed} + \alpha_i A_e\mathcal{L}_v R\,\omega_e^\times\tilde{s}_i$   (3-43)

where the parameter estimation error signals $\tilde{z}_i^*(t) \in \mathbb{R}$ and $\tilde{s}_i(t) \in \mathbb{R}^3$ are defined as

$\tilde{z}_i^* = z_i^* - \hat{z}_i^*, \qquad \tilde{s}_i = s_i - \hat{s}_i.$   (3-44)

Theorem 3.2: The controller given in (3-38) and (3-39), along with the adaptive update laws in (3-40) and (3-41), ensures global asymptotic tracking in the sense that

$\|\tilde{q}_v(t)\| \to 0, \qquad \|e(t)\| \to 0 \qquad \text{as} \quad t \to \infty.$   (3-45)

Proof: Let $V(t) \in \mathbb{R}$ denote the following differentiable non-negative function (i.e., a Lyapunov candidate):

$V \triangleq \tilde{q}_v^T\tilde{q}_v + \left(1 - \tilde{q}_0\right)^2 + \dfrac{z_i^*}{2} e^T e + \dfrac{1}{2\gamma_1}\tilde{z}_i^{*2} + \dfrac{1}{2}\tilde{s}_i^T\Gamma^{-1}\tilde{s}_i.$   (3-46)

The time derivative of $V(t)$ can be determined as

$\dot{V} = 2\tilde{q}_v^T\dot{\tilde{q}}_v + 2\left(1 - \tilde{q}_0\right)\left(-\dot{\tilde{q}}_0\right) + z_i^* e^T\dot{e} - \dfrac{1}{\gamma_1}\tilde{z}_i^*\dot{\hat{z}}_i^* - \tilde{s}_i^T\Gamma^{-1}\dot{\hat{s}}_i = -\tilde{q}_v^T K_\omega\tilde{q}_v - e^T K_v e$   (3-47)

where (3-40)-(3-44) were utilized. From (3-46) and (3-47), signal chasing arguments described in the previous section can be used to conclude that the control inputs and all the closed-loop signals are bounded. Barbalat's Lemma [96] can then be used to prove the result given in (3-45).

3.5 Simulation Results

A numerical simulation was performed to illustrate the performance of the tracking controller given in (3-19) and (3-21) and the adaptive update law in (3-22). In this simulation, the developed tracking controller aims to enable the control object to track the desired trajectory encoded by a sequence of images and to rotate more than 360 degrees (see Figures 3-5 and 3-6).

The camera is assumed to view an object with four coplanar feature points with the following Euclidean coordinates (in [m]):

$s_1, \ldots, s_4 = [\,\pm 0.15 \;\; \pm 0.15 \;\; 0\,]^T$   (3-48)

(i.e., the four corners of a square in the feature point plane). The time-varying desired image trajectory was generated by the kinematics of the feature point plane, where the desired linear and angular velocities were selected as

$v_{cd} = [\,0.1\sin(t) \;\; 0.1\sin(t) \;\; 0\,]^T \ \mathrm{[m/s]}, \qquad \omega_{cd} = [\,0 \;\; 0 \;\; 1.5\,]^T \ \mathrm{[rad/s]}.$

The initial and desired image-space coordinates were artificially generated. For this example, consider an orthogonal coordinate frame $\mathcal{I}$ with the $z$-axis opposite to $n^*$ (see Figure 2-1) and with the $x$-axis and $y$-axis on the plane $\pi$. The rotation matrices $R_1$ and $R_2$ between the camera frames and $\mathcal{I}$ were set as

$R_1 = R_x(120^\circ)\, R_y(20^\circ)\, R_z(80^\circ)$   (3-49)

$R_2 = R_x(160^\circ)\, R_y(30^\circ)\, R_z(30^\circ)$   (3-50)

where $R_x(\cdot)$, $R_y(\cdot)$ and $R_z(\cdot) \in SO(3)$ denote rotations of the argument angle (in degrees) about the $x$-axis, $y$-axis and $z$-axis, respectively. The corresponding translation vectors $P_1$ and $P_2$ (each expressed in the corresponding camera frame) were selected as

$P_1 = [\,0.5 \;\; 0.5 \;\; 4.0\,]^T$   (3-51)

$P_2 = [\,1.0 \;\; 1.0 \;\; 4.5\,]^T.$   (3-52)
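The sketch below illustrates, in MATLAB, how a desired image trajectory of this kind can be generated by integrating the desired kinematics and projecting the feature points through the pinhole model (2-6). It is a minimal sketch under stated assumptions, not the dissertation's simulation code: R2 and P2 are assumed to be the initial pose of the desired camera frame, s is assumed to hold the four feature point coordinates of (3-48), and A is the calibration matrix from Section 3.6.1.

```matlab
% Minimal sketch: generate desired pixel trajectories pd(:,i,k) by integrating
% the desired camera kinematics and projecting the (static) feature points.
dt = 0.01;  T = 10;  t = 0:dt:T;
Rd = R2;  xfd = P2;                           % assumed initial desired pose
pd = zeros(3, 4, numel(t));
for k = 1:numel(t)
    vcd = [0.1*sin(t(k)); 0.1*sin(t(k)); 0];  % desired linear velocity [m/s]
    wcd = [0; 0; 1.5];                        % desired angular velocity [rad/s]
    for i = 1:4
        mbar = xfd + Rd*s(:, i);              % Euclidean coordinates in the desired frame
        pd(:, i, k) = A*(mbar/mbar(3));       % projected pixel coordinates, (2-6)
    end
    % kinematics of a static point expressed in the moving camera frame
    Wx  = [0 -wcd(3) wcd(2); wcd(3) 0 -wcd(1); -wcd(2) wcd(1) 0];
    Rd  = Rd  - Wx*Rd*dt;                     % Euler integration of the orientation
    xfd = xfd + (-vcd - Wx*xfd)*dt;           % Euler integration of the translation
end
```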


The initial rotation matrix $R_3$ and translation vector $P_3$ between the camera frame and $\mathcal{I}$ were set as

$R_3 = R_x(240^\circ)\, R_y(90^\circ)\, R_z(30^\circ), \qquad P_3 = [\,0.5 \;\; 1 \;\; 5.0\,]^T.$   (3-53)

The initial (i.e., $p_i(0)$) and reference (i.e., $p_i^*$) image-space coordinates of the four feature points in (3-48) were computed as (in pixels)

$p_1(0) = [\,907.91 \;\; 716.04 \;\; 1\,]^T, \qquad p_2(0) = [\,791.93 \;\; 728.95 \;\; 1\,]^T,$
$p_3(0) = [\,762.84 \;\; 694.88 \;\; 1\,]^T, \qquad p_4(0) = [\,871.02 \;\; 683.25 \;\; 1\,]^T,$
$p_1^* = [\,985.70 \;\; 792.70 \;\; 1\,]^T, \qquad p_2^* = [\,1043.4 \;\; 881.20 \;\; 1\,]^T,$
$p_3^* = [\,980.90 \;\; 921.90 \;\; 1\,]^T, \qquad p_4^* = [\,922.00 \;\; 829.00 \;\; 1\,]^T.$

The initial (i.e., $p_{di}(0)$) image-space coordinates of the four feature points in (3-48) for generating the desired trajectory were computed as (in pixels)

$p_{d1}(0) = [\,824.61 \;\; 853.91 \;\; 1\,]^T, \qquad p_{d2}(0) = [\,770.36 \;\; 878.49 \;\; 1\,]^T,$
$p_{d3}(0) = [\,766.59 \;\; 790.50 \;\; 1\,]^T, \qquad p_{d4}(0) = [\,819.03 \;\; 762.69 \;\; 1\,]^T.$

The control gains $K_\omega$ in (3-19) and $K_v$ in (3-21) and the adaptation gain $\gamma$ in (3-22) were selected as

$K_\omega = \mathrm{diag}\{3,\, 3,\, 3\}, \qquad K_v = \mathrm{diag}\{15,\, 15,\, 15\}, \qquad \gamma = 0.0002.$

The desired and current image-space trajectories of the feature point plane are shown in Figure 3-5 and Figure 3-6, respectively. The feature point plane rotates more than 360 degrees as shown in these two figures. The resulting translation

and rotation errors are plotted in Figure 3-7 and Figure 3-8, respectively. The errors go to zero asymptotically. The desired image-space trajectory (i.e., $p_d(t)$) and the current image-space trajectory (i.e., $p(t)$) are shown in Figure 3-9 and Figure 3-10, respectively. The tracking error between the current and desired image-space trajectories is shown in Figure 3-11. Figures 3-9 through 3-11 show that the current trajectory tracks the desired trajectory asymptotically. The translation and rotation control inputs are shown in Figure 3-12 and Figure 3-13, respectively. The parameter estimate for $z_1^*$ is shown in Figure 3-14.

3.6 Experiment Results

Simulations verified the performance of the tracking controller given in (3-19) and (3-21) and the adaptive update law in (3-22). Experiments were then performed to test robustness and performance in the presence of signal noise, measurement error, calibration error, etc. The experiments were performed in a test-bed at the University of Florida for simulation, design and implementation of vision-based control systems. For the test-bed, a 3D environment can be projected onto large monitors or screens and viewed by a physical camera. Communication between the camera and control processing computers and the environment rendering computers allows closed-loop control of the virtual scene.

3.6.1 Experiment Configurations

A block diagram describing the experimental test-bed is provided in Figure 3-2. The test-bed is based on a virtual environment generated by a virtual reality simulator, composed of five workstations and a database server running virtual reality software. This allows multiple instances of the virtual environment to run at the same time. In this way, camera views can be rigidly connected in a mosaic for a large FOV. Alternately, multiple, independent camera views can pursue their own tasks, such as coordinated control of multiple vehicles. The virtual reality simulator

Figure 3-2: Block diagram of the experiment.

Figure 3-3: The Sony XCD-710CR color firewire camera pointed at the virtual environment.

in the experiment is currently capable of displaying three simultaneous displays. A picture of the displays can be seen in Figure 3-3.

The virtual reality simulator utilizes MultiGen-Paradigm's Vega Prime, an OpenGL-based, commercial software package for Microsoft Windows. The virtual environment in the experiment is a recreation of the U.S. Army's urban warfare training ground at Fort Benning, Georgia. The environment has a dense polygon count, detailed textures, a high frame rate, and the effects of soft shadows, resulting in very realistic images. A scene from the Fort Benning environment can be seen in Figure 3-4.

The visual sensor in the experiment is a Sony XCD-710CR color firewire camera with a resolution of 1280x768 pixels, fitted with a 12.5 mm lens. The camera captures the images on the large screens as shown in Figure 3-3. The images are processed in a vision processing workstation. An application written in C++ acquires images from the camera and processes the images to locate and track the feature points (the initial feature points were chosen manually, then the application identifies and tracks the feature points on its own). The C++ application generates the current and desired pixel coordinates, which can be used to formulate the control command.

In addition to the image processing application, a control command generation application programmed in Matlab also runs in this workstation. The Matlab application communicates data with the image processing application (written in C++) via shared memory buffers. The Matlab application reads the current and desired pixel coordinates from the shared memory buffers, and writes the linear and angular camera velocity inputs into the shared memory buffers. The C++ application writes the current and desired pixel coordinates into the shared memory, and reads the camera velocity input from the shared memory buffer. The linear and angular camera velocity control inputs are sent from the vision processing workstation to the virtual

reality simulator via a TCP socket connection. This development makes extensive use of Intel's Open Source Computer Vision (OpenCV) Library (see Bradski [97]) and the GNU Scientific Library (GSL) (see Galassi et al. [98]).

Figure 3-4: Virtual reality environment example: a virtual recreation of the US Army's urban warfare training ground at Fort Benning.

Algorithms, such as the homography decomposition, are implemented as if the virtual environment is a true 3D scene which the physical camera is viewing. Of course, the camera does not look at the 3D scene directly. The camera views consist of a 3D scene that is projected onto a 2D plane, which is then projected onto the image plane. That is, the projective homography needed for control exists between the on-screen current image and the on-screen goal image, but what are given are the camera views of the on-screen images. Thus, there exists an additional transformation between the points on the screen and the points in the camera image. A constant screen-camera calibration matrix can be used to describe this transformation relationship.

Every point on the screen corresponds to only one point in the image. Thus, the constant screen-camera calibration matrix is a homography and can be determined through a calibration procedure, and it effectively replaces the standard calibration of the

physical camera. In the experiment, this matrix is determined to be

$\begin{bmatrix} 0.9141 & 0.0039 & 90.9065 \\ 0.0375 & 0.9358 & 50.7003 \\ 0 & 0 & 1 \end{bmatrix}.$

In addition to the screen-camera calibration, the camera calibration matrix $A$, corresponding to the virtual camera within Vega Prime, is still required. This matrix can be determined from the settings of the virtual reality program. In this experiment, $A$ was determined to be

$A = \begin{bmatrix} 1545.1 & 0 & 640 \\ 0 & 1545.1 & 512 \\ 0 & 0 & 1 \end{bmatrix}.$

3.6.2 Experiment for Tracking

The desired trajectory is in the format of a prerecorded video (a sequence of images). As the viewpoint in Vega Prime moves (which can be implemented by some chosen velocity functions or manually), the images captured by the camera change. The pixel coordinates of the feature points in the images are recorded as the desired trajectory. The control objective in this tracking experiment is to send control commands to the virtual reality simulator such that the current pose of the feature points tracks the desired pose. The constant reference image was taken as the first image in the sequence.

The control gains $K_\omega$ in (3-19) and $K_v$ in (3-21), and the adaptation gain $\gamma$ in (3-22), were selected as

$K_\omega = \mathrm{diag}\{0.1,\, 0.1,\, 1.5\}, \qquad K_v = \mathrm{diag}\{0.5,\, 0.5,\, 0.5\}, \qquad \gamma = 0.005.$

During the experiment, the images from the camera are processed with a frame rate of approximately 20 frames/second.

The resulting translation and rotation errors are plotted in Figure 3-15 and Figure 3-16. The desired image-space trajectory (i.e., $p_d(t)$) is shown in Figure 3-17, and the current image-space trajectory (i.e., $p(t)$) is shown in Figure 3-18. The tracking error between the current and desired image-space trajectories is shown in Figure 3-19. The translation and rotation control outputs are shown in Figure 3-20 and Figure 3-21, respectively. The parameter estimate for $z_1^*$ is shown in Figure 3-22.

In the tracking control, the steady-state tracking error is approximately 15 [pixel], 10 [pixel], and 0.01 in the three components of $e(t)$. This steady-state error is caused by the image noise and the camera calibration error in the test-bed. To find the tracking control error, two homographies are computed between the reference image and the current image and desired image, respectively. Due to the image noise and camera calibration error, error is inserted into the two homographies, so the tracking error obtained from the mismatch between the two homographies has a larger error. Also, the image noise and calibration error insert error into the derivative of the desired pixel coordinates, which is used as a feedforward term in the tracking controller. Furthermore, communication between the controller and the virtual reality system occurs via a TCP socket, introducing some amount of latency into the system. Note that this pixel error represents less than 1.5% of the image dimensions.

In the following regulation experiment, the derivative of the desired pixel coordinates is equal to zero, and only one homography is computed between the current image and the desired set image. The influence of the image noise and calibration error is therefore weakened greatly.

3.6.3 Experiment for Regulation

When the desired pose is a constant, the tracking problem becomes a regulation problem. The control objective in this regulation experiment is to send control commands to the virtual reality simulator such that the current pose of the feature points is regulated to the desired set pose.

In the experiment, the control gains $K_\omega$ in (3-19) and $K_v$ in (3-21), and the adaptation gain $\gamma$ in (3-22), were selected as

$K_\omega = \mathrm{diag}\{0.4,\, 0.4,\, 0.9\}, \qquad K_v = \mathrm{diag}\{0.5,\, 0.25,\, 0.25\}, \qquad \gamma = 0.005.$

The resulting translation and rotation errors are plotted in Figure 3-23 and Figure 3-24, respectively. The errors go to zero asymptotically. The current image-space trajectory (i.e., $p(t)$) is shown in Figure 3-25. The regulation error between the current and desired set image-space pose is shown in Figure 3-26. The translation and rotation control outputs are shown in Figure 3-27 and Figure 3-28, respectively. The parameter estimate for $z_1^*$ is shown in Figure 3-29.

Figure 3-5: Desired image-space coordinates of the four feature points (i.e., $p_d(t)$) in the tracking Matlab simulation shown in a 3D graph. In the figure, "O" denotes the initial image-space positions of the 4 feature points in the desired trajectory, and "*" denotes the corresponding final positions of the feature points.

Figure 3-6: Current image-space coordinates of the four feature points (i.e., $p(t)$) in the tracking Matlab simulation shown in a 3D graph. In the figure, "O" denotes the initial image-space positions of the 4 feature points, and "*" denotes the corresponding final positions of the feature points.

Figure 3-7: Translation error $e(t)$ in the tracking Matlab simulation.

Figure 3-8: Rotation quaternion error $\tilde q(t)$ in the tracking Matlab simulation.

Figure 3-9: Pixel coordinate $p_d(t)$ of the four feature points in a sequence of desired images in the tracking Matlab simulation. The upper figure is for the $u_d(t)$ component and the bottom figure is for the $v_d(t)$ component.

Figure 3-10: Pixel coordinate $p(t)$ of the current pose of the four feature points in the tracking Matlab simulation. The upper figure is for the $u(t)$ component and the bottom figure is for the $v(t)$ component.

Figure 3-11: Tracking error $p(t) - p_d(t)$ (in pixels) of the four feature points in the tracking Matlab simulation. The upper figure is for the $u(t) - u_d(t)$ component and the bottom figure is for the $v(t) - v_d(t)$ component.

Figure 3-12: Linear camera velocity input $v_c(t)$ in the tracking Matlab simulation.

Figure 3-13: Angular camera velocity input $\omega_c(t)$ in the tracking Matlab simulation.

Figure 3-14: Adaptive on-line estimate of $z_1^*$ in the tracking Matlab simulation.

Figure 3-15: Translation error $e(t)$ in the tracking experiment.

Figure 3-16: Rotation quaternion error $\tilde q(t)$ in the tracking experiment.

Figure 3-17: Pixel coordinate $p_d(t)$ of the four feature points in a sequence of desired images in the tracking experiment. The upper figure is for the $u_d(t)$ component and the bottom figure is for the $v_d(t)$ component.

Figure 3-18: Pixel coordinate $p(t)$ of the current pose of the four feature points in the tracking experiment. The upper figure is for the $u(t)$ component and the bottom figure is for the $v(t)$ component.

Figure 3-19: Tracking error $p(t) - p_d(t)$ (in pixels) of the four feature points in the tracking experiment. The upper figure is for the $u(t) - u_d(t)$ component and the bottom figure is for the $v(t) - v_d(t)$ component.

Figure 3-20: Linear camera velocity input $v_c(t)$ in the tracking experiment.

Figure 3-21: Angular camera velocity input $\omega_c(t)$ in the tracking experiment.

Figure 3-22: Adaptive on-line estimate of $z_1^*$ in the tracking experiment.

Figure 3-23: Translation error $e(t)$ in the regulation experiment.

Figure 3-24: Rotation quaternion error $\tilde q(t)$ in the regulation experiment.

Figure 3-25: Pixel coordinate $p(t)$ (in pixels) of the current pose of the four feature points in the regulation experiment. The upper figure is for the $u(t)$ component and the bottom figure is for the $v(t)$ component.

Figure 3-26: Regulation error (in pixels) of the four feature points in the regulation experiment. The upper figure is for the $u(t) - u_d$ component and the bottom figure is for the $v(t) - v_d$ component.

Figure 3-27: Linear camera velocity input $v_c(t)$ in the regulation experiment.

Figure 3-28: Angular camera velocity input $\omega_c(t)$ in the regulation experiment.

Figure 3-29: Adaptive on-line estimate of $z_1^*$ in the regulation experiment.

CHAPTER 4
COLLABORATIVE VISUAL SERVO TRACKING CONTROL VIA A DAISY-CHAINING APPROACH

4.1 Introduction

In this chapter, a collaborative trajectory tracking problem is considered for a six-DOF rigid-body object (e.g., an autonomous vehicle) identified by a planar patch of feature points. Unlike typical visual servo controllers that require either the camera or the target to remain stationary, a unique aspect of the development in this chapter is that a moving monocular camera (e.g., a camera mounted on an unmanned air vehicle (UAV)) is used to provide feedback to a moving control object. The control objective is for the object to track a desired trajectory that is encoded by a prerecorded video obtained from a fixed camera (e.g., a camera mounted on a satellite or on a building).

Several challenges must be resolved to achieve this unexplored control objective. The relative velocity between the moving planar patch of feature points and the moving camera presents a significant challenge. By using a daisy-chaining approach (e.g., [16]), Euclidean homography relationships between different camera coordinate frames and feature point patch coordinate frames are developed. These homographies are used to relate coordinate frames attached to the moving camera, the reference object, the control object, and the object used to record the desired trajectory. Another challenge is that, for general six-DOF motion by both the camera and the control object, the normal to the planar patch associated with the object is unknown. By decomposing the homography relationships, the normal to the planar patch can be obtained. Likewise, the distances between the moving camera, the moving control object, and the reference object are unknown.

By using the depth ratios obtained from the homography decomposition, the unknown time-varying distance is related to an unknown constant parameter. A Lyapunov-based adaptive estimation law is designed to compensate for the unknown constant parameter. The moving camera could be attached to a remotely piloted vehicle with arbitrary rotations, and this requires a parameterization that is valid over a large (possibly unbounded) domain. Additionally, since this work is motivated by problems in the aerospace community, homography-based visual servo control techniques (e.g., [10, 20, 22]) are combined with quaternion-based control methods (e.g., [14, 23, 24]) to facilitate large rotations. By using the quaternion parameterization, the resulting closed-loop rotation error system can be stabilized by a proportional rotation controller combined with a feedforward term that is a function of the desired trajectory.

4.2 Problem Scenario

Over the past decade, a variety of visual servo controllers have been addressed for both camera-to-hand and camera-in-hand configurations (e.g., see [1, 99-101]). For visual servo control applications that exploit either of these camera configurations, either the object or the camera is required to remain stationary. In contrast to typical camera-to-hand or camera-in-hand visual servo control configurations, a moving airborne monocular camera (e.g., a camera attached to a remote controlled aircraft, or a camera mounted on a satellite) is used by Mehta et al. [18, 19] to provide pose measurements of a moving sensorless unmanned ground vehicle (UGV) relative to a goal configuration. The results in [18, 19] are restricted to three DOF, and the rotation error system is encoded by the Euler angle-axis parameterization.

Consider a stationary coordinate frame $\mathcal{I}_R$ that is attached to a camera and a time-varying coordinate frame $\mathcal{F}_d$ that is attached to some object (e.g., an autonomous vehicle) as depicted in Figure 4-1. The object is identified in an image

Figure 4-1: Geometric model.

by a collection of feature points that are assumed (without loss of generality) to be coplanar and non-collinear (i.e., a planar patch of feature points). The camera attached to $\mathcal{I}_R$ a priori records a series of snapshots (i.e., a video) of the motion of the object attached to $\mathcal{F}_d$ until it comes to rest (or the video stops recording). A stationary coordinate frame $\mathcal{F}^*$ is attached to a reference object identified by another planar patch of feature points that are assumed to be visible in every frame of the video recorded by the camera. For example, the camera attached to $\mathcal{I}_R$ is on board a stationary satellite that takes a series of snapshots of the relative motion of $\mathcal{F}_d$ with respect to $\mathcal{F}^*$. Therefore, the desired motion of $\mathcal{F}_d$ can be encoded as a series of relative translations and rotations with respect to the stationary frame $\mathcal{F}^*$ a priori. Spline functions or filter algorithms can be used to generate a smooth desired feature point trajectory [10].

Consider a time-varying coordinate frame $\mathcal{I}$ that is attached to a camera (e.g., a camera attached to a remote controlled aircraft) and a time-varying coordinate

frame $\mathcal{F}$ that is attached to the control object as depicted in Figure 4-1. The camera attached to $\mathcal{I}$ captures snapshots of the planar patches associated with $\mathcal{F}$ and $\mathcal{F}^*$, respectively. The a priori motion of $\mathcal{F}_d$ represents the desired trajectory of the coordinate system $\mathcal{F}$, where $\mathcal{F}$ and $\mathcal{F}_d$ are attached to the same object but at different points in time. The camera attached to $\mathcal{I}$ is a different camera (with different calibration parameters) than the camera attached to $\mathcal{I}_R$. The problem considered in this chapter is to develop a kinematic controller for the object attached to $\mathcal{F}$ so that the time-varying rotation and translation of $\mathcal{F}$ converges to the desired time-varying rotation and translation of $\mathcal{F}_d$, where the motion of $\mathcal{F}$ is determined from the time-varying overhead camera attached to $\mathcal{I}$.

4.3 Geometric Model

The relationships between the coordinate systems are summarized in Table 4-1: six rotation matrices in $SO(3)$ denote the rotations from $\mathcal{F}$ to $\mathcal{I}$, from $\mathcal{F}^*$ to $\mathcal{I}$, from $\mathcal{I}$ to $\mathcal{I}_R$, from $\mathcal{F}$ to $\mathcal{I}_R$, from $\mathcal{F}^*$ to $\mathcal{I}_R$, and from $\mathcal{F}_d$ to $\mathcal{I}_R$, respectively; two time-varying translation vectors denote the translations from $\mathcal{F}$ to $\mathcal{I}$ and from $\mathcal{F}^*$ to $\mathcal{I}$ with coordinates expressed in $\mathcal{I}$; and the remaining translation vectors denote the translations from $\mathcal{I}$ to $\mathcal{I}_R$, from $\mathcal{F}$ to $\mathcal{I}_R$, from $\mathcal{F}^*$ to $\mathcal{I}_R$, and from $\mathcal{F}_d$ to $\mathcal{I}_R$ with coordinates expressed in $\mathcal{I}_R$. From Figure 4-1, the translation and rotation of $\mathcal{F}$ with respect to $\mathcal{I}_R$ can be expressed by composing the pose of $\mathcal{F}$ with respect to $\mathcal{I}$ with the pose of $\mathcal{I}$ with respect to $\mathcal{I}_R$ (4-1).

65 Motion Frames ( ) ( ) FtoIinI ( ) ( ) FtoIinI ( ) ( ) ItoI 0( ) 0( ) FtoIinI FtoIinI ( ) ( ) FtoIinI Table41:Coordinateframesrelationships thecoordinateframesdepictedinFigure4,thefollowingrelationshipscanbe developed = + 1 = + 1 (42) = + 2 0= 0 + 01 (43) = + 2 (44) In(42)-(4), ( ) ( )R3denotetheEuclideancoordinatesofthefeature pointson and ,respectively,expressedinIas ( ) ( ) ( ) ( ) (45) ( ) ( ) ( ) ( ) (46) 0( ) ( )R3denotetheactualanddesiredtime-varyingEuclideancoordinates,respectively,ofthefeaturepointson expressedinIas 0( ) 0( ) 0( ) 0( ) (47) ( ) ( ) ( ) ( ) (48)


66 and R3denotestheconstantEuclideancoordinatesofthefeaturepointson theplanarpatch expressedinIas (49) Aftersomealgebraicmanipulation,theexpressionsin(42)-(44)canberewritten as = + (4) = + = + (4) = + 0= + (4) where ( ) ( ) ( ) ( ) (3) and ( ) ( ) ( ) ( )R3are newrotationalandtranslationalvariables,respectively,de nedas = = (4) = = and = ( ( 2 1 )) (4) = + ( 2 1 ) (4) = + ( 2 1 ) (4) = = 0 (4) Notethat ( ) ( ) and ( ) in(4)aretherotationmatricesbetweenFandF,FandF,andFandF,respectively,but ( ) ( ) and ( ) in (4)-(4)arenotthetra nslationvectorsbetweenthecorrespondingcoordinate


67 frames.However,thiswillnota ectthefollowingcontrollerdesignbecauseonly therotationmatriceswillbeusedinthecontrollerdevelopment. TofacilitatethedevelopmentofarelationshipbetweentheactualEuclidean translationofFtotheEuclideantranslationtha tisreconstructedfromtheimage information,thefollowingprojectiverelationshipsaredeveloped: ( )= ( )= = (4) where ( )R representsthedistancefromtheoriginofIto alongtheunit normal(expressedinI)to denotedas ( )R3, ( )R representsthe distancefromtheoriginofIto alongtheunitnormal(expressedinI)to denotedas ( )R3,and R representsthedistancefromtheoriginofIto alongtheunitnormal(expressedinI)to denotedas R3where ( )= ( ) .In(4), ( ) ( ) forsomepositiveconstant R Basedon(418),therelationshipsin(410)-(42)canbeexpressedas = + (4) = + (4) = + (4) = + (4) 0= + (4) AsinChenetal.[10],thesubsequentdevelopmentrequiresthattheconstant rotationmatrix beknown.Theconstantrotationmatrix canbeobtaineda prioriusingvariousmethods(e.g.,asecondcamera,Euclideanmeasurements).The subsequentdevelopmentalsoassumesthatthedi erencebetweentheEuclidean distances ( 2 1 ) isaconstant =1 .Whiletherearemanypractical applicationsthatsatisfythisassumption(e.g.,asimplescenarioisthattheobjects


68 attachedto and arethesameobject),theassumptionisgenerallyrestrictive andisthefocusoffutureresearch.AsdescribedbyHuetal.[17],thisassumption canbeavoidedbyusingthegeometricreconstructionapproach[102]underan alternativeassumptionthatthedistancebetweentwofeaturepointsisprecisely known. 4.4EuclideanReconstruction Therelationshipsgivenby(4)-(4)provideameanstoquantifya translationandrotationerrorbetweenthedi erentcoordinatesystems.Sincethe poseof ,and cannotbedirectlymeasured,aEuclideanreconstructionis developedtoobtaintheposeerrorbycomparingmultipleimagesacquiredfrom thehoveringmonocularvisionsystem.Tofacilitatethesubsequentdevelopment, thenormalizedEuclideancoordinatesofthefeaturepointsin and canbe expressedintermsofIas ( )R3and ( )R3,respectively,as (4) Similarly,thenormalizedEuclideancoordinatesofthefeaturepointsforthe current,desired,andreferenceimagecanbeexpressedintermsofIas 0( ) ( ) R3,respectively,as 0( ) 0( ) 0( ) ( ) ( ) ( ) (4) Fromtheexpressionsgivenin(4)and( 4),therotationandtranslation betweenthecoordinatesystemsFandF,betweenFandF,andbetweenIand


69IcannowberelatedintermsofthenormalizedEuclideancoordinatesasfollows: = + (4) = 1 + (4) = + (4) = + (4) where ( ) ( ) ( )R denotedepthratiosde nedas = = = and ( ) ( ) ( ) ( )R3denotescaledtranslationvectorsthatare de nedas = = (4) = = SincethenormalizedEuclideancoo rdinatesin(4)-(4)cannotbe directlymeasured,thefollowingrelationships(i.e.,thepin-holecameramodel)are usedtodeterminethenormalizedEuclideancoordinatesfrompixelinformation = 1 = 1 (4) = 2 = 2 (4) where 12R3 3areknown,constant,andinvertibleintrinsiccameracalibrationmatricesofthecurrentcameraandt hereferencecamera,respectively.In (4)and(4), ( ) and ( )R3representtheimage-spacecoordinatesof theEuclideanfeaturepointson and expressedintermsofIas 1 1 (4)


70 respectively,where ( ) ( ) ( ) ( )R .Similarly, ( ) and R3representtheimage-spacecoordinatesoftheEuclideanfeatureson and expressedintermsofIas 1 1 (4) respectively,where ( ) ( ) R .Byusing(4)-(4)and (4)-(4),thefollowingre lationshipscanbedeveloped: = 1 + 1 1 | {z } (4) = 1 1 + 1 1 | {z } (4) = 2 + 1 2 | {z } (4) = 2 + 1 1 | {z } (4) where ( ) ( ) ( ) ( )R3 3denoteprojectivehomographies.Setsof linearequationscanbedevelopedfrom(435)-(438)todeterminetheprojective homographiesuptoascalarmultiple.Varioustechniquescanbeused(e.g., seeFaugerasandLustman[93]andZhangandHanson[94])todecomposethe Euclideanhomographies,toobtain ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) .Giventhattheconstantrotation matrix isassumedtobeknown,theexpressionsfor ( ) and ( ) in(4 13)canbeusedtodetermine ( ) and ( ) .Once ( ) isdetermined,the expressionfor ( ) in(413)canbeusedtodetermine ( ) .Also,once ( ) and ( ) havebeendetermined,(41)canbeusedtodetermine 0( ) .Since ( ) ,


71 ( ) ( ) ( ) ( ) ( ) ,and ( ) canbedetermined,thefollowing relationshipcanbeusedtodetermine 0( ) : 0= 0 + (4) wheretheinverseoftheratio ( ) 0( ) canbedeterminedas 0 = 001 + (4) 4.5ControlObjective ThecontrolobjectiveisforasixDOFrigid-bodyobject(e.g.,anautonomous vehicle)identi edbyaplanarpatchoffeaturepointstotrackadesiredtrajectory thatisdeterminedbyasequenceofimagestakenbya xedreferencecamera.This objectiveisbasedontheassumptionthatthelinearandangularvelocitiesofthe cameraarecontrolinputsthatcanbeindep endentlycontrolled(i.e.,unconstrained motion)andthatthereferenceanddesiredcamerasarecalibrated(i.e., 1and 2areknown).Thecontrolobjectivecanbestatedas 0( ) ( ) (i.e., theEuclideanfeaturepointson trackthecorrespondingfeaturepointson ). Equivalently,thecontrolobjectivecanalsobestatedintermsoftherotation andtranslationoftheobjectas 0( )( ) and 0( )( ) .Asstated previously, 0( ) and ( ) canbecomputedbydecomposingtheprojective homographiesin(4)-(4)andusi ng(4).Oncetheserotationmatrices havebeendetermined,theunitquaternionparameterizationisusedtodescribe therotationmatrix.Thisparameterizationfacilitatesthesubsequentproblem formulation,controldevelopment,andstabilityanalysissincetheunitquaternion providesaglobalnonsingularparameterizationofthecorrespondingrotation matrices.SeeSection2.3forsomeback groundabouttheunitquaternion.


72 Giventherotationmatrices 0( ) and ( ) ,thecorrespondingunitquaternions ( ) and ( ) canbecalculatedbyusingthenumericallyrobustmethod (e.g.,see[14]and[15])basedonthecorrespondingrelationships 0= 2 0 3+2 +2 0 (4) = 2 0 3+2 +2 0 (4) where 3isthe 33 identitymatrix,andthenotation ( ) denotesthefollowing skew-symmetricformofthevector ( ) asin(2). Toquantifytherotationerrorbetweenthefeaturepointson and ,the errorbetweenrotationmatrices 0( ) and ( ) isde nedas = 0 = 2 0 3+2 2 0 (4) wheretheerrorquaternion ( )=( 0( ) ( ))isde nedas = 0 = 00 + 0 0+ (4) Since ( ) isaunitquaternion,(443)canbeusedtoquantifytherotationtracking objectiveask ( )k 0= ( )3as (4) Thetranslationerror,denotedby ( )R3,isde nedas[10,20] = (4) where ( ) ( )R3arede nedas = 0 00 0ln( 0 ) = ln( ) (4) In(3), 0( ) ( ) and ( ) ( ) canbecomputedasbelow


73 0 = 0 = 0 1 = 1 Basedon(445)and(446),thesubsequentcontroldevelopmenttargetsthe followingobjectives:k ( )k 0 andk ( )k 0 as (4) 4.6ControlDevelopment 4.6.1Open-LoopErrorSystem From(43)and(44),theopen-looprotationerrorsystemcanbedeveloped as = 1 2 03+ (4) where ( ) denotestheangularvelocityof expressedinFthatcanbe calculatedas[23] =2( 0 0 )2 (4) where 0 ( ) ( ) 0 ( ) ( ) areassumedtobebounded;hence, ( ) isalsobounded.Theopen-looptranslationerrorsystemcanbederivedas(see AppendixC) = 000 + (4) where ( ) ( )R3denotethelinearandangularvelocityvectorsof expressed inF,respectively,andtheauxiliarymeasurableterm 0( )R3 3isde nedas 0= 100 0010 0001


74 4.6.2Closed-LoopErrorSystem Basedontheopen-looprotationerrorsystemin(449)andthesubsequent Lyapunov-basedstabilityanalysis,theangularvelocitycontrollerisdesignedas = + (4) where R3 3denotesadiagonalmatrixofpositiveconstantcontrolgains. From(49)and(42),therotationclosed-looperrorsystemcanbedeterminedas 0= 1 2 (4) =1 2 03+ =1 2 0 From(4),thetranslationcontrolinput ( ) isdesignedas =0 00 1 ( ) (4) where R3 3denotesadiagonalmatrixofpositiveconstantcontrolgains.In (44),theparameterestimate ( )R fortheunknownconstant isdesigned as = (4) where R denotesapositiveconstantadaptationgain.Byusing(41)and (4),thetranslationc losed-looperrorsystemis = (4) where ( )R denotesthefollowingparameterestimationerror: = (4)


75 4.6.3StabilityAnalysis Theorem4.1 :Thecontrollergivenin(4)and(4),alongwiththe adaptiveupdatelawin(455)ensuresasymptotictrackinginthesensethatk ( )k 0 k ( )k 0 (4) Proof :Let ( )R denotethefollowingdi erentiablenon-negativefunction (i.e.,aLyapunovcandidate): = +(1 0)2+ 2 + 1 2 2 (4) Thetime-derivativeof ( ) canbedeterminedas = 0 (1 0) + ( )+ = ( 03+(1 0) 3) = (4) where(4)and(4)-(4)wereu tilized.Basedon(4)and(4), ( ) ( ) 0( ) ( ) Land ( ) ( ) L2.Since ( ) L,itis clearfrom(4)that ( ) L.Basedonthefactthat ( ) L,(4) and(4)canbeusedtoprovethat 0( ) L,andthen 0( ) 0 1 ( ) L. Basedonthefactthat ( ) Land ( ) isaboundedfunction,(4)canbe usedtoconcludethat ( ) L.Since ( ) ( ) 0( ) 0( ) 0 1 ( ) Land ( ) isbounded,(44)canbeutilizedtoprovethat ( ) L.Fromthe previousresults,(4)-(4 51)canbeusedtoprovethat ( ) ( ) L.Since ( ) ( ) L L2,and ( ) ( ) L,BarbalatsLemma[96]canbeusedto concludetheresultgivenin(4).


4.7 Simulation Results

A numerical simulation was performed to illustrate the performance of the tracking controller given in (4-52) and (4-54) and the adaptive update law in (4-55). The camera calibration parameters were chosen as

$A_1 = A_2 = \begin{bmatrix} 1545.1 & 0 & 640 \\ 0 & 1545.1 & 512 \\ 0 & 0 & 1 \end{bmatrix}.$

The origins of the coordinate frames $\mathcal{F}$, $\mathcal{F}^*$ and $\mathcal{F}_d$, and the four coplanar feature points on the planes $\pi$, $\pi^*$ and $\pi_d$, are chosen such that the feature points have the same constant Euclidean coordinates (in [m]) in each of $\mathcal{F}$, $\mathcal{F}^*$ and $\mathcal{F}_d$, with each coordinate component equal to 0 or $\pm 0.15$ m. The time-varying desired image trajectory was generated by the kinematics of the feature point plane, where the desired linear and angular velocities were selected as

$v_{cd} = [\,0.1\sin(t) \;\; 0.1\sin(t) \;\; 0.1\sin(t)\,]^T \ \mathrm{[m/s]}, \qquad \omega_{cd} = [\,0 \;\; 0 \;\; 0.5\,]^T \ \mathrm{[rad/s]}.$

The moving camera attached to $\mathcal{I}$ is assumed to have a linear velocity of

$[\,0.1\sin(t) \;\; 0 \;\; 0\,]^T \ \mathrm{[m/s]}.$

The initial rotation matrices between $\mathcal{F}$ and $\mathcal{I}$, between $\mathcal{F}^*$ and $\mathcal{I}$, and between $\mathcal{F}_d$ and $\mathcal{I}_R$, and the constant rotation matrix between $\mathcal{F}^*$

77 andI,weresetas (0)= (180) (0) (40) (0)= (180) (0) (20) (0)= (180) (0) (20) = (180) (0) (80) Theinitialtranslationvectors (0) betweenFandI(expressedinI), (0) betweenFandI(expressedinI),and (0) betweenFandI(expressedinI),andtheconstanttranslationvector (0) betweenFandI(expressedinI),wereselectedas (0)= 0 50 54 0 (0)= 1 01 53 5 (0)= 0 51 56 0 (0)= 1 01 54 0 TheinitialEuclideanrelationshipbetweenthecameras,thereferenceobject,the controlobject,andtheobjectthatwasusedtogeneratethedesiredtrajectoryis showninFigure42. Theinitialimage-spacecoordinates(i.e., (0) )ofthefourfeaturepoints attachedtotheplane ,expressedinI,werecomputedas(inpixels) 1(0)= 409 62660 751 2(0)= 454 00623 511 3(0)= 491 25667 891 4(0)= 402 48742 381


78 Theinitialreferenceimage-spacecoordinates(i.e., (0) and )ofthefour featurepointsattachedtotheplane ,expressedinIandI,respectively,were computedas(inpixels) 1(0)= 1104 11112 01 2(0)= 1166 31134 61 3(0)= 1143 71196 81 4(0)= 1019 21151 51 1= 196 71081 41 2= 206 71024 31 3= 263 81034 41 4= 243 71148 51 Theinitialimage-spacecoordinates(i.e., (0) )ofthefourfeaturepointsattached totheplane ,expressedinI,werecomputedas(inpixels) 1(0)= 755 5862 01 2(0)= 791 8848 81 3(0)= 805 1885 11 4(0)= 732 5911 51 Thecontrolgains in(4)and in(4)andadaptationgain in (45)wereselectedas = diag{1 1 1}= diag{5 5 5} =20 Thedesiredimage-spacetrajectoryofthefeaturepointplane ,takenbythe cameraattachedtoI,isshowninFigure4.Thecurrentimage-spacetrajectory ofthefeaturepointplane ,takenbythecameraattachedtoI,isshowninFigure 45.Thereferenceimage-spacetrajectoryofthereferenceplane ,takenby thecameraattachedtoI,isshowninFigure4.Theresultingtranslationand


Figure 4-2: This figure shows the initial positions of the cameras and the feature point planes. The initial positions of the cameras attached to $\mathcal{I}$ and $\mathcal{I}_R$ are denoted by "O". The feature points on the planes $\pi$, $\pi_d$ and $\pi^*$, and the origins of the coordinate frames $\mathcal{F}$, $\mathcal{F}_d$ and $\mathcal{F}^*$, are also marked.

rotation tracking errors are plotted in Figure 4-6 and Figure 4-7, respectively. The errors go to zero asymptotically. The translation and rotation control inputs are shown in Figure 4-8 and Figure 4-9, respectively.

Figure 4-3: Pixel coordinate $p_d(t)$ of the four feature points on the plane $\pi_d$ in a sequence of desired images taken by the camera attached to $\mathcal{I}_R$. The upper figure is for the $u_d(t)$ component and the bottom figure is for the $v_d(t)$ component.

Figure 4-4: Pixel coordinate $p^*(t)$ of the four feature points on the plane $\pi^*$ in a sequence of reference images taken by the moving camera attached to $\mathcal{I}$. The upper figure is for the $u^*(t)$ component and the bottom figure is for the $v^*(t)$ component.

Figure 4-5: Pixel coordinate $p(t)$ of the four feature points on the plane $\pi$ in a sequence of images taken by the moving camera attached to $\mathcal{I}$. The upper figure is for the $u(t)$ component and the bottom figure is for the $v(t)$ component.

Figure 4-6: Translation error $e(t)$.

Figure 4-7: Rotation quaternion error $\tilde q(t)$.

Figure 4-8: Linear camera velocity input $v_c(t)$.

Figure 4-9: Angular camera velocity input $\omega_c(t)$.

CHAPTER5 ADAPTIVEVISUALSERVOTRACKINGCONTROLUSINGACENTRAL CATADIOPTRICCAMERA 5.1Introduction Inthischapter,anadaptivehomographybasedvisualservotrackingcontrol schemeispresentedforacamera-in-hand centralcatadioptriccamerasystem.By usingthecentralcatadioptriccamera,afullpanoramicFOVisobtained.The literaturereviewforvisualservocontrolusingcentralcatadioptriccamerasare presentedinSection1.3.2.Inthischapter,thetrackingcontrollerisdeveloped basedontherelativerelationshipsofacentralcatadioptriccamerabetweenthe current,reference,anddesiredcameraposes.To ndtherelativecamerapose relationships,homographiesarecomputedbasedontheprojectionmodelofthe centralcatadioptriccamera[33].A sstatedbyGeyerandDaniilidis[36],a unifyingtheorywasproposedtoshowthatallcentralcatadioptricsystemsare isomorphictoprojectivemappingsfromthespheretoaplanewithaprojection centerontheperpendiculartotheplane.Byconstructinglinksbetweenthe projectedcoordinatesonthesphere,thehomographiesuptoscalarmultiplescan beobtained.VariousmethodscanthenbeappliedtodecomposetheEuclidean homographiesto ndthecorrespondingrotationmatrices,anddepthratios.The rotationerrorsysteminthischapterisbasedonthequaternionformulationwhich hasafull-rank43interactionmatrix.Lyapunov-basedmethodsareutilizedto developthecontrollerandtoproveasymptotictracking. 84


85 Y( u v )i iT V UoicF cXcZcZmimage planeYm mreference plane mirrormFX Figure5:Centralcatadioptricprojectionrelationship. 5.2GeometricModel Acentralcatadioptriccameraiscomposedoftwoelements:acameraanda mirrorwhicharecalibratedtoyieldasinglee ectiveviewpoint.GeyerandDaniilidis[36]developedaunifyingtheorythatexplainshowallcentralcatadioptric systemsareisomorphictoprojectivemappingsfromthespheretoaplanewitha projectioncenterontheopticalaxisperpendiculartotheplane.Forthecentral catadioptriccameradepictedinFigure5,thecoordinateframesFandFare attachedtothefociofthecameraandmirror,respectively.Lightraysincidentto thefocalpointofthemirror(i.e.,theoriginofF)arere ectedintoraysincident withthefocalpointofthecamera(i.e.,theoriginofF). Withoutlossofgenerality,thesubsequentdevelopmentisbasedonthe assumptionthatthere ectionoffourcoplanarandnon-collinearEuclideanfeature pointsdenotedby ofsomestationaryobjectisrepresentedinthecameraimage planebyimagespacecoordinates ( ) ( )R =1 2 3 4 .Theplanede ned bythefourfeaturepointsisdenotedby asdepictedinFigure5.Thevector


86 ( u v )i iTV U image plane unitar y s p here2 mii( x y z )iiT=oiFocsimZ Y Z XFXFYFo o o Figure52:Projectionmodelofthecentralcatadioptriccamera. ( )R3inFigure52isde nedas where ( ) ( ) ( )R denotetheEuclideancoordinateofthefeaturepoints expressedintheframeFwhichisa xedtothesinglee ectiveviewpoint.The projectedcoordinateof ( ) canbeexpressedas = = (51) where ( )R3denotestheEuclideancoordinatesof projectedontoaunit sphericalsurfaceexpressedinF,and ( )R isde nedas = q 2 + 2 + 2 (52)


87 Basedonthedevelopmentin[36], ( ) canbeexpressedinthecoordinateframe ,whichisattachedtothereprojectioncenter,as = + (53) where R isaknownintrinsicparameterofthecentralcatadioptriccamerathat representsthedistancebetweenthesinglee ectiveviewpointandthereprojection center(seeFigure5).Thenormalizedcoordinatesof ( ) in(5),denotedby ( )R3,canbeexpressedas = + + 1 = + + 1 (54) Byusing(54)andthefollowingrelationship: 1= 2+ 2+ 2(55) thecoordinatesofthefeaturepointsontheunitsphericalsurface ( ) canbe determinedas = + q 2 + 2 (12)+1 2 + 2 +1 + q 2 + 2 (12)+1 2 + 2 +1 + + q 2 + 2 (12)+1 2 + 2 +1 (56) where ( ) and ( )R arethe rsttwoelementsof ( ) .Thebijective mappingin(56)isunique(i.e.,thereisnosignambiguity)because 0 1 [36], andthegeometryofthecentralcatadioptriccameragiveninFigure5guarantees thatthethirdelementof ( ) ispositive;otherwise,thefeaturepoint cannot beprojectedontotheimageplane.
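To make the projection relationships in (5-3)-(5-6) concrete, the following minimal Python sketch lifts a normalized image-plane point onto the unit sphere and projects it back. It is an illustration added here, not part of the original development: the mirror parameter value, the test point, and the names project_to_normalized and lift_to_sphere are assumptions; only the algebraic form follows the equations above.

```python
import numpy as np

def project_to_normalized(m_s, xi):
    """Map a point on the unit sphere (expressed at the single effective
    viewpoint) to normalized image-plane coordinates, as in (5-4)."""
    x, y, z = m_s
    return np.array([x / (z + xi), y / (z + xi), 1.0])

def lift_to_sphere(m_prime, xi):
    """Inverse mapping in the form of (5-6): recover the unit-sphere coordinates
    from the normalized coordinates [u', v', 1]^T.  The positive square root is
    taken, consistent with a feature point that projects onto the image plane."""
    u, v = m_prime[0], m_prime[1]
    s = u**2 + v**2 + 1.0
    lam = (xi + np.sqrt(1.0 + (1.0 - xi**2) * (s - 1.0))) / s
    return np.array([lam * u, lam * v, lam - xi])

if __name__ == "__main__":
    xi = 0.8                                  # assumed mirror parameter, 0 < xi <= 1
    p = np.array([0.4, -0.2, 1.5])            # an assumed Euclidean feature point
    m_s = p / np.linalg.norm(p)               # projection onto the unit sphere, as in (5-1)
    m_prime = project_to_normalized(m_s, xi)  # normalized image-plane coordinates
    print(np.allclose(lift_to_sphere(m_prime, xi), m_s))   # True: the mapping is bijective
```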


88 mioir e f e r e n c e p l a n e F oc *o* cmsi F od cms i md s id n*H( R x )fHd( R x )fd dd*mi*mdiF* Figure53:Camerarelationshipsrepresentedinhomography. AsshowninFigure5,thestationarycoordinateframeFdenotesaconstantreferencecameraposethatisde nedbyareferenceimage,andthecoordinateframeFrepresentsthedesiredtime-varyingcameraposetrajectoryde ned byaseriesofimages(e.g.,avideo).Thevectors ( )R3inFigure5are de nedas ( ) ( ) ( ) (57) where R and ( ) ( ) ( )R denotetheEuclideancoordinates ofthefeaturepoints expressedintheframesFandF,respectively.


89 Theconstantcoordinatesof canbeprojectedontoaunitsphericalsurface expressedinFandO ,respectively,as = = (58) = = + andthetime-varyingcoordinates ( ) canbeprojectedontoaunitspherical surfaceexpressedinFandO,respectively,as = = = + where ( ) ( )R3 and ( )R arede nedas = q 2 + 2 + 2 = q 2 + 2 + 2 (59) Thenormalizedcoordinatesof ( ) denotedas ( )R3isde nedas ( ) ( ) ( ) ( ) 1 (5) whichwillbeusedinthecontrollerdevelopment.Thesignal ( ) in(5)is measurablebecause ( ) canbecomputedfromthemeasurableandbounded pixelcoordinates ( ) usingsimilarprojectiverelationshipsasin(5)and(5 17).Thenormalizedcoordinatesof ( ) denotedas ( )R3, respectively,arede nedas = + + 1 (5) = + + 1


90 FromstandardEuclideangeometry,therelationshipsbetween ( ) ( ) and canbedeterminedas = + = + (5) where ( ) ( )R3denotethetranslationvectorsexpressedinFandF, respectively,and ( ) ( ) (3) denotetheorientationofFwithrespect toFandF,respectively.AsalsoillustratedinFigure53, R3denotesthe constantunitnormaltotheplane ,andtheconstantdistancefromtheoriginofFto alongtheunitnormal isdenotedby R isde nedas (5) Byusing(513),therelationshipsin(52)canbeexpressedas = = (5) where ( ) ( )R3 3aretheEuclideanhomographiesde nedas = + = + (5) Basedon(51)and(5),therelationshipbetweentheEuclideancoordinatesin (54)canbeexpressedintermsoftheunitsphericalsurfacecoordinatesas = = (5) where ( ) ( )R and ( ) ( ) arescalingterms. 5.3EuclideanReconstruction Thehomogenouspixelcoordinatesofthefeaturespointswithrespecttothe cameraframesF,FandFaredenotedas ( ) and ( )R3,respectively. Theycanberelatedtothenormalizedcoordinates ( ) and ( ) viathe


91 followinglinearrelationshipas = = = (5) where R3 3containsthecalibratedintrinsicparametersofthecameraandthe mirror(seeBarretoandAraujo[35]).Sin cethecameraandmirrorarecalibrated (i.e., isknown), ( ) and ( ) canbecomputedfromthemeasurable pixelcoordinates ( ) and ( ) basedon(517).Theexpressiongivenin (5)canthenbeusedtocompute ( ) from ( ) .Similarly, and ( ) canbecomputedfrom and ( ) ,respectively.Thenbasedon(516),aset of12linearlyindependentequationsgivenbythe4featurepointpairs ( ( )) with3independentequationsperfeaturepointcanbedevelopedtodetermine thehomographyuptoascalarmultiple.Variousmethodscanthenbeapplied (e.g.,seeFaugerasandLustman[93]andZhangandHanson[94])todecompose theEuclideanhomographytoobtain ( ) ( ) ( ) ( ) ,and .Similarly, ( ) ( ) ( ) ( ) canbeobtainedfrom(516)using ( ( )) .The rotationmatrices ( ) ( ) andthedepthratios ( ) ( ) willbeusedinthe subsequentcontroldesign. 5.4ControlObjective Thecontrolobjectiveisforacameraattachingtoanobject(i.e.,camera-inhandcon guration)whichcanbeidenti edbyaplanarpatchoffeaturepointsto trackadesiredtrajectorythatisdeterminedfromasequenceofdesiredimages takenduringtheaprioricameramotion.Thisobjectiveisbasedontheassumption thatthelinearandangularvelocitiesofthecameraarecontrolinputsthatcan beindependentlycontrolled(i.e.,unconstrainedmotion)andthatthecamerais calibrated(i.e., isknown).Thecontrolobjectivecanbestatedas ( ) ( ) (i.e.,Euclideanfeaturepointson trackthecorrespondingfeaturepoints on ).Equivalently,thecontrolobjectivecanalsobestatedintermsofthe


92 rotationandtranslationoftheobjectas ( )( ) and ( )( ) .As statedpreviously, ( ) and ( ) canbecomputedbydecomposingtheprojective homographiesin(56).Oncetheserotationmatriceshavebeendetermined, theunitquaternionparameterizationisusedtodescribetherotationmatrix. Thisparameterizationfacilitatesthesubsequentproblemformulation,control development,andstabilityanalysissincetheunitquaternionprovidesaglobal nonsingularparameterizationofthecorrespondingrotationmatrices. Toquantifytheerrorbetweentheactualanddesiredcameraorientations,the mismatchbetweentherotationmatrices ( ) and ( ) ,denotedby ( )R3,is de nedas = (5) Giventherotationmatrices ( ) and ( ) fromthehomographydecomposition, thecorrespondingunitquaternions ( ) and ( ) canbecomputedbyusingthe numericallyrobustmethod(e.g.,see[14]and[15])as ( )= 2 0 3+2 2 0 (5) ( )= 2 0 3+2 2 0 (5) where 3isthe 33 identitymatrix,andthenotation ( ) denotestheskewsymmetricformofthevector ( ) asin(2).Basedon(5)-(5),the rotationmismatchcanbeexpressedas = 2 0 3+2 2 0 (5) wheretheerrorquaternion ( 0( ) ( ))isde nedas[23] 0= 00 + (5) = 0 0+
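The quaternion computations described above reduce to a few lines of code. The sketch below is an illustrative Python realization, not taken from the dissertation: rot_to_quat uses the familiar trace/largest-diagonal branching, which is one common form of a numerically robust conversion, and quat_error composes the error quaternion from (q0, qv) and (q0d, qvd) in the same way the mismatch is expanded in Appendix A. All names are illustrative.

```python
import numpy as np

def rot_to_quat(R):
    """Convert a rotation matrix to a unit quaternion (q0, qv) using the
    trace / largest-diagonal branching; q and -q encode the same rotation."""
    tr = np.trace(R)
    if tr > 0.0:
        s = 2.0 * np.sqrt(tr + 1.0)
        q0 = 0.25 * s
        qv = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]]) / s
    else:
        i = int(np.argmax(np.diag(R)))
        j, k = (i + 1) % 3, (i + 2) % 3
        s = 2.0 * np.sqrt(1.0 + R[i, i] - R[j, j] - R[k, k])
        q0 = (R[k, j] - R[j, k]) / s
        qv = np.zeros(3)
        qv[i] = 0.25 * s
        qv[j] = (R[j, i] + R[i, j]) / s
        qv[k] = (R[k, i] + R[i, k]) / s
    q = np.concatenate(([q0], qv))
    return q / np.linalg.norm(q)

def quat_error(q, qd):
    """Error quaternion (q~0, q~v) quantifying the mismatch between the current
    quaternion q and the desired quaternion qd (same composition as Appendix A)."""
    q0, qv, qd0, qdv = q[0], q[1:], qd[0], qd[1:]
    e0 = q0 * qd0 + qv @ qdv
    ev = qd0 * qv - q0 * qdv + np.cross(qv, qdv)
    return np.concatenate(([e0], ev))
```

Feeding rot_to_quat the rotation matrices returned by the homography decomposition, and then forming quat_error, yields the rotation error signal; since q and -q describe the same rotation, the branch-dependent sign of the conversion does not change the physical mismatch.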


93 Basedon(5)and(5),therotationtrackingcontrolobjective ( )( ) canbeformulatedask ( )k 0= ( )3as (5) Toquantifythepositionmismatchbetweentheactualanddesiredcamera,the translationtrackingerror ( )R3isde nedas = = ln (5) where ( ) ( )R3arede nedas 1 2 3= ln (5) 1 2 3= ln (5) Basedon(523)and(524),thesubsequentcontroldevelopmenttargetsthe followingobjectives:k ( )k 0 andk ( )k 0 as (5) Theerrorsignal ( ) ismeasurablesinceitcanbecomputedfromthe ( ) and ( ) asin[24].The rsttwoelementsofthetranslationerror ( ) are measurablebecause 1 1= 2 2= where ( ) ( ) ( ) ( ) ( ) ( ) canbecomputedfrom(5)and(5),and ( ) ( ) ( ) ( ) ( ) ( ) canbecomputedfromsimilarrelationships.Thethirdelementof thetranslationerrorisalsomeasurablesince = ( ) ( ) =


94 where ( ) ( ) canbecomputedfrom(5)and(5), ( ) ( ) canbecomputedfrom similarrelationships,andthedepthratios ( ) ( ) canbeobtainedfromthe homographydecompositionsin(5). 5.5ControlDevelopment 5.5.1Open-LoopErrorSystem Theopen-looprotationerrorsystemcanbedevelopedas[23] = 1 2 03+ (5) where ( )=( 0( ) ( )), ( )R3denotesthecameraangularvelocitycontrol input,and ( )R3denotesthedesiredangularvelocityofthecamerathatis assumedtobeapriorigeneratedasaboundedandcontinuousfunction(see[10]for adiscussionregardingthedevelopmentofasmoothdesiredtrajectoryfromaseries ofimages). Basedon(5)and(5 ),thederivativeof ( ) isobtainedas = 1 (5) where ( )R isde nedas = Basedon(5)andthefactthat[22] =+ (5) where ( )R3denotesthedesiredlinearvelocityofthecameraexpressedinF, itcanbeobtainedthat = 1 (5)


95 Afterdi erentiatingbothsidesof(5)andusingtheequations(5),(5) and =+ theopen-looptranslationerrorsystemcanbederivedas =+ 1 (5) where ( )R3denotesthelinearvelocityinputofthecamerawithrespecttoFexpressedinF,theJacobian-likematrices ( ) ( ) ( )R3 3arede ned as = 10 101 2001 = 10 101 2001 (5) = 1 212 1 21+ 2 2 1 2 1 2 10 (5) and ( )R isde nedas = 5.5.2Closed-LoopErrorSystem Basedontheopen-looprotationerrorsystemin(528)andthesubsequent Lyapunov-basedstabilityanalysis,theangularvelocitycontrollerisdesignedas = + (5)


96 where R3 3denotesadiagonalmatrixofpositiveconstantcontrolgains. From(58)and(55),therotationclosed-looperrorsystemcanbedeterminedas 0= 1 2 (5) =1 2 03+ Basedon(5),thetranslationcontrolinput ( ) isdesignedas = 1 + 1 (5) where R3 3denotesadiagonalmatrixofpositiveconstantcontrolgains.In (57),theparameterestimate ( )R fortheunknownconstant isde nedas = 1 (5) where R denotesapositiveconstantadaptationgain.Byusing(52)and (5),thetranslationc losed-looperrorsystemis = + 1 (5) where ( )R denotesthefollowingpara meterestimationerror: = (5) 5.5.3StabilityAnalysis Theorem5.1 :Thecontrollergivenin(5)and(5),alongwiththe adaptiveupdatelawin(538)ensuresasymptotictrackinginthesensethatk ( )k 0 k ( )k 0 (5)
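As a sanity check on the rotation part of the design, the short Python sketch below propagates the closed-loop quaternion error kinematics under an angular-velocity law of the form ω_c = -K_ω q̃_v + R̃ ω_cd, which is the structure used above (the translation and adaptive loops are omitted). The gain, step size, horizon, and initial error are illustrative assumptions, not values from the text.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w]_x, so that [w]_x a = w x a."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

# Illustrative gain, step size, horizon, and initial error (not from the text).
K_w = np.diag([1.0, 1.0, 1.0])            # rotation gain matrix
q = np.array([0.5, 0.5, 0.5, -0.5])       # initial (q~0, q~v), unit norm
dt, T = 0.001, 20.0

for _ in range(int(T / dt)):
    q0, qv = q[0], q[1:]
    # Under the law omega_c = -K_w q~v + R~ omega_cd, the term driving the
    # error kinematics is omega_c - R~ omega_cd = -K_w q~v.
    w_err = -K_w @ qv
    q0_dot = -0.5 * qv @ w_err                          # closed-loop scalar-part kinematics
    qv_dot = 0.5 * (q0 * np.eye(3) + skew(qv)) @ w_err  # closed-loop vector-part kinematics
    q = q + dt * np.concatenate(([q0_dot], qv_dot))
    q = q / np.linalg.norm(q)             # keep the quaternion on the unit sphere

print(np.round(q, 4))   # approaches [1, 0, 0, 0]: the rotation tracking error vanishes
```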


97 Proof :Let ( )R denotethefollowingdi erentiablenon-negativefunction (i.e.,aLyapunovcandidate): = +(1 0)2+ 2 + 1 2 2 (5) Thetime-derivativeof ( ) canbedeterminedas = 03+ (1 0) + 1 1 = 03+ +(1 0) 3 = (5) where(5)and(5)-(5)wereu tilized.Basedon(5)and(5), ( ) ( ) 0( ) ( ) Land ( ) ( ) L2.Since ( ) L,itisclear from(5)that ( ) L.Basedonthefactthat ( ) L,(5)and(5) canbeusedtoprovethat ( ) L,and ( ) 1 ( ) ( ) L.Since ( ) Land ( ) isaboundedfunction,(5)canbeusedtoconcludethat ( ) L.From ( ) ( ) ( ) ( ) 1 ( ) ( ) Land ( ) 1 arebounded,(537)canbeutilizedtoprovethat ( ) L.From thepreviousresults,(5)-(5)canbeusedtoprovethat ( ) ( ) L. Since ( ) ( ) L L2,and ( ) ( ) L,BarbalatsLemma[96]canbe usedtoconcludetheresultgivenin(5).


CHAPTER6 VISUALSERVOCONTROLINTHEPRESENCEOFCAMERACALIBRATION ERROR 6.1Introduction Huetal.[14]introducedanewquaternion-basedvisualservocontrollerfor therotationerrorsystem,providedthecameracalibrationparametersareexactly known.SincetheresultsbyMalisandChaumette[21]andFangetal.[87]rely heavilyonpropertiesoftherotationparam eterizationtoformulatestateestimates andameasurableclosed-looperrorsystem,theresearchinthischapterismotivated bythequestion: Canstateestimatesandameasurableclosed-looperrorsystembe craftedintermsofthequaternionparameterizationwhenthecameracalibration parametersareunknown ?Toanswerthisquestion,acontributionofthischapteris thedevelopmentofaquaternion-basedestimatefortherotationerrorsystemthat isrelatedtotheactualrotationerror,thedevelopmentofanewclosed-looperror system,andanewLyapunov-basedanalysisthatdemonstratesthestabilityofthe quaternionerrorsystem.Oneofthechallengesistodevelopaquaternionestimate fromanestimatedrotationmatrixthatisnotatruerotationmatrixingeneral. Toaddressthischallenge,thesimilarityrelationshipbetweentheestimatedand actualrotationmatricesisused(asin[21]and[87])toconstructtherelationship betweentheestimatedandactualquaternions.ALyapunov-basedstabilityanalysis isprovidedthatindicatesauniquecontrollercanbedevelopedtoachievethe regulationresultdespiteasignambiguityinthedevelopedquaternionestimate. SimulationresultsareprovidedinSection6.7thatillustratetheperformanceofthe developedcontroller. 98


99 6.2FeedbackControlMeasurements Theobjectiveinthischapteristodevelopakinematiccontroller(i.e.,the controlinputsareconsideredthelinearandangularcameravelocities)toensure theposition/orientationofthecameracoordinateframeFisregulatedtothe desiredposition/orientationF.ThecamerageometryisshowninFigure2 andthecorrespondingEuclideanandimage-spacerelationshipsaredeveloped inSections2.1and2.2.Theonlyrequiredsensormeasurementsforthecontrol developmentaretheimagecoordinatesofthedeterminedfeaturepoints(i.e., measurementofthesignalsin(2)),wherethestaticfeaturepointcoordinates inthedesiredimagearegivenapriori.Bymeasuringthecurrentimagefeature pointsandgiventhedesiredfeaturepoints,therelationshipin(26)canbeused todeterminethenormalizedEuclideancoordinatesof providedtheintrinsic cameracalibrationmatrixisperfectlyknown.Unfortunately,anyuncertaintyin willleadtoacorruptedmeasurementof ( ) and .Thecomputednormalized coordinatesareactuallyestimates,denotedby ( ) R3,ofthetruevalues sinceonlyabest-guessestimateof ,denotedby R3 3,isavailableinpractice. Thenormalizedcoordinateestimatescanbeexpressedas[21] = 1= (61) = 1 = (62) wherethecalibrationerrormatrix R3 3isde nedas = 1 = 11 12 130 22 23001 (63) where 11, 12, 13, 22, 23R denoteunknownintrinsiccalibrationmismatch constants.Since ( ) and cannotbeexactlydetermined,theestimatesin


100 (6)and(62)canbesubstitutedinto(24)toobtainthefollowingrelationship = (64) where ( )R3 3denotestheestimatedEuclideanhomographyde nedas = 1 (65) Since ( ) and canbedeterminedfrom(61)and(6),asetoftwelvelinear equationscanbedevelopedfromthefourimagepointpairs,and(6)canbeused tosolvefor ( ) Asstatedin[21],providedadditionalinformationisavailable(e.g.,atleast 4vanishingpoints),varioustechniques(e.g.,seeFaugerasandLustman[93]and ZhangandHanson[94])canbeusedtodecompose ( ) toobtaintheestimated rotationandtranslationcomponentsas = 1+ 1= + (66) where ( )R3 3isde nedas = 1 (67) and ( )R3, R3denotetheestimateof ( ) and ,respectively,de ned as = = 1 where R denotesthefollowingpositiveconstant = Forthefourvanishingpoints(seeAlmansaetal.[103]foradescriptionofhow todeterminevanishingpointsinanimage), =,sothat = + = (68)


101 andtherefore = 1= (69) Twelvelinearequationscanbeobtainedbasedon(64)forthefourvanishing points.Assumethe3rdrow3rdcolumnelementof ( ) ,denotedas 33( )R ,is notequaltozero(w.l.o.g.).Thenormalizedmatrix ( )R3 3,de nedas = 33 (6) canbecomputedbasedonthesetwelvelinearequations.Basedon(69) det =det( )det( )det( 1)=1 (6) From(6)and(6), 3 33det =1 (6) andhence, = 3q det( ) (6) whichisequalto ( ) 6.3ControlObjective Asstatedpreviously,theobjectiveinthischapteristodevelopakinematic controllertoensuretheposeofthecameracoordinateframeFisregulatedtothe desiredposeFdespiteuncertaintyintheintrinsiccameracalibrationmatrix.This objectiveisbasedontheassumptionthatthelinearandangularvelocitiesofthe cameraarecontrolinputsthatcanbeindep endentlycontrolled(i.e.,unconstrained motion).Forexample,thelinearandangularcameravelocitiescouldbecontrolled bytheend-e ectorofaroboticmanipulator.I nadditiontouncertaintyinthe intrinsiccameracalibration,uncertaintycouldalsoexistintheextrinsiccamera calibration(e.g.,theuncertaintyintherotationandtranslationofthecamera withrespecttotherobotend-e ector).Thedevelopmentinthischaptercould


102 bedirectlymodi edasdescribedin[21]and[87]tocompensatefortheextrinsic calibration.Therefore,thee ectsofamismatchintheextrinsiccalibrationarenot consideredinthesubsequentdevelopmentforsimplicity. IntheEuclideanspace,therotationcontrolobjectivecanbequanti edas ( )3as (6) Thesubsequentdevelopmentisformulatedintermsofthefourdimensionalunit quaternion ( ) .Giventherotationmatrix ( ) ,thecorrespondingunitquaternion ( ) canbecomputedbyusingthenumericallyrobustmethodpresentedinSection 2.3.From(2)and(2),therotationregulationobjectivein(6)canalsobe quanti edasthedesiretoregulate ( ) ask( )k 0 as (6) Thefocusandcontributionofthischa pterliesintheabilitytodevelopand provethestabilityofaquaternion-basedrotationcontrollerinthepresenceof uncertaintyinthecameracalibration.ThetranslationcontrollerdevelopedbyFang etal.[87]isalsopresentedandincorporatedinthestabilityanalysistoprovidean exampleofhowthenewclassofquaternion-basedrotationcontrollerscanbeused inconjunctionwithtranslationcontrollersthatarerobusttocameracalibration uncertaintyincluding(forexample):theasymptotictranslationcontrollersin[21], andtheexponentialtranslationcontrollersin[87].Thetranslationerror,denoted by ( )R3,isde nedas = (6) where canbechosenasanynumberwithin{1 4} Thetranslationobjective canbestatedask ( )k 0 as (6)


103 Thesubsequentsectionwilltargetthecontroldevelopmentbasedontheobjectives in(6)and(6). 6.4QuaternionEstimation Amethodispresentedinthissectiontodevelopaquaternion-basedrotation estimatethatcanberelatedtotheactualrotationmismatchtofacilitatethe controldevelopment. 6.4.1EstimateDevelopment Theunitquaternionisrelatedtotheangle-axisrepresentationas 0=cos 2 = sin 2 (6) where ( ) and ( ) arethecorrespondingrotationangleandunitaxis.Byusing (2)and(2),the rstelementofthequaternioncanalsobeexpressedin termsoftherotationmatrix ( ) as 2 0= ( )+1 4 where 0( ) isrestrictedtobenon-negativeas 0= 1 2 p 1+ ( ) (6) withoutlossofgenerality(thisrestrictionenablestheminimumrotationtobe obtained),and ( ) denotesthetraceof ( ) .Basedon(6)and(6), ( ) canbedeterminedas = s 1cos2 2 =1 2 p 3 ( ) (6) wheretherotationaxis ( ) istheuniteigenvectorwithrespecttotheeigenvalue 1 of ( ) .Forthequaternionvectorin(60),thesignambiguitycanberesolved. Speci cally,(215)canbeusedtodevelopthefollowingexpression: =4 0 (6)


104 Sincethesignof 0( ) isrestricted(i.e.,assumedtobe)positive,thenaunique solutionfor ( ) canbedeterminedfrom(6)and(6). Basedonthesimilaritybetween ( ) and ( ) asstatedin(67),theexpressionsin(6)and(6)providemotivationtodevelopthequaternionestimate as 0= 1 2 q 1+ ( ) (6) = sin 2 =1 2 q 3 ( ) (6) In(6)and(6), ( ) istheestimatedrotationmatrixintroducedin(66) thatiscomputedfromthehomographydecomposition.Since ( ) issimilarto ( ) (see(67)), ( ) isguaranteedtohaveaneigenvalueof 1 ,where ( ) isthe uniteigenvectorthatcanbecomputedfromtheeigenvalueof 1 .Since ( ) is notguaranteedtobeatruerotationmat rix(anditwillnotbeingeneral),the relationshipsin(2)and(6)cannotbedevelopedandusedtoeliminatethe signambiguityoftheeigenvector ( ) .However,thesubsequentstabilityanalysis andsimulationresultsindicatethatthesamestabilityresultisobtainedinvariant ofthesignof ( ) .Oncetheinitialsignof ( ) ischosen,thesamesigncanbeused forsubsequentcomputations. 6.4.2EstimateRelationships Basedonthefactthat ( ) issimilarto ( ) (see(6)),thepropertiesthat similarmatriceshavethesametraceandeigenvaluescanbeusedtorelatethe quaternionestimateandtheactualquaternion.Sincesimilarmatriceshavethe sametrace,(6)and(6)canbeusedtoconcludethat 0= 0 (6) Asstatedearlier,sincesimilarmatriceshavethesameeigenvalues, ( ) isguaranteedtohaveaneigenvalueof 1 withtheassociatedeigenvector ( ) .Thefollowing


105 relationshipscanbedevelopedbasedon(67) = = 1 (6) Premultiplying 1onbothsidesof(6)yields 1 = 1 (6) Hence, 1 ( ) isaneigenvectorwithrespecttotheeigenvalue 1 of ( ) thatcan beexpressedas 1 = (6) where R isde nedas = 1 (6) Basedon(6),(6),and(6),the estimatedquaternionvectorcannowbe relatedtotheactualquaternionvectoras = (6) Byusing(2),(6),(6)and(6), 2 0+k k2= 2 0+ 1 2 2 (6) Basedon(6)andthefactthat ( ) isaunitvector, =kk sin 2 sin 2 =kk (6) From(6)and(6), 2 0+k k2= 2 0+kk2 2 2=1
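The relationships above can be checked numerically. The Python sketch below builds a true rotation R, forms the similar matrix R̂ = Ã R Ã^{-1} as in (6-7), computes the quaternion estimate from the trace of R̂ and its unit eigenvector associated with the eigenvalue 1, and verifies that the scalar parts agree and that the estimated vector part is parallel to Ã q_v up to the sign ambiguity. The calibration matrices are assumed values loosely patterned on the simulation in Section 6.7, and all names are illustrative.

```python
import numpy as np

def quat_estimate(R_hat):
    """Quaternion estimate built from the estimated rotation matrix
    R_hat = A~ R A~^{-1}, which is similar to, but in general not, a rotation
    matrix: the scalar part comes from the trace, and the vector part is the
    unit eigenvector associated with the eigenvalue 1 (its sign is ambiguous)."""
    q0_hat = 0.5 * np.sqrt(1.0 + np.trace(R_hat))
    w, V = np.linalg.eig(R_hat)
    u_hat = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    u_hat = u_hat / np.linalg.norm(u_hat)
    qv_hat = 0.5 * np.sqrt(3.0 - np.trace(R_hat)) * u_hat
    return q0_hat, qv_hat

# Assumed calibration, best-guess calibration, and a true rotation (illustrative).
A     = np.array([[122.5, 3.77, 100.0], [0.0, 122.56, 100.0], [0.0, 0.0, 1.0]])
A_hat = np.array([[100.0, 4.0,   80.0], [0.0, 100.0,  110.0], [0.0, 0.0, 1.0]])
A_tld = np.linalg.inv(A_hat) @ A                             # calibration error matrix (6-3)

theta = 0.9
u = np.array([1.0, 2.0, -1.0]) / np.sqrt(6.0)                # true rotation angle/axis
K = np.array([[0, -u[2], u[1]], [u[2], 0, -u[0]], [-u[1], u[0], 0]])
R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)   # Rodrigues formula

q0, qv = np.cos(theta / 2), np.sin(theta / 2) * u            # true quaternion
R_hat = A_tld @ R @ np.linalg.inv(A_tld)                     # similar matrix, as in (6-7)
q0_hat, qv_hat = quat_estimate(R_hat)

print(np.isclose(q0_hat, q0))                                # True: scalar parts agree
cosang = qv_hat @ (A_tld @ qv) / (np.linalg.norm(qv_hat) * np.linalg.norm(A_tld @ qv))
print(np.isclose(abs(cosang), 1.0))                          # True: qv_hat is parallel to A~ qv up to sign
```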


106 6.5ControlDevelopment 6.5.1RotationControl Therotationopen-looperrorsystemcanbedevelopedbytakingthetime derivativeof ( ) as 0 = 1 2 03+ (6) where ( )R3denotestheangularvelocityofthecamerawithrespecttoFexpressedinF.Basedontheopen-looperrorsystemin(6)andthesubsequent stabilityanalysis,theangularvelocitycontrollerisdesignedas = (6) where R denotesapositivecontrolgain.Substituting(633)into(62),the rotationclosed-looperrorsystemcanbedevelopedas 0= 1 2 (6) =1 2 03+ (6) 6.5.2TranslationControl Thecontributionofthischapteristherotationestimateandassociatedcontrol development.Thetranslationcontrollerdevelopedinthissectionisprovided forcompleteness.Asstatedpreviously,translationcontrollerssuchastheclass developedbyMalisandChaumette[21]andFangetal.[87]canbecombinedwith thedevelopedquaternion-basedrotationcontroller.Tofacilitatethesubsequent stabilityanalysisforthesixDOFproblem,atranslationcontrollerproposedin[87] isprovidedinthissection,whichisgivenby = (6)


107 where R denotesapositivecontrolgain,and ( )R3isde nedas = (6) where ( ) and canbecomputedfrom(6)and(6),respectively,andthe ratio canbecomputedfromthedecompositionoftheestimatedEuclidean homographyin(6).Theopen-looptranslationerrorsystemcanbedeterminedas =1 +[ ].(6) Aftersubstituting(633)and(66)into(638),theresultingclosed-looptranslationerrorsystemcanbedeterminedas = 1 +[ ] [ ] (6) 6.6StabilityAnalysis Asstatedpreviously,thequaternionestimate ( ) hasasignambiguity,but eitherchoiceofthesignwillyieldthesamestabilityresult.Thefollowinganalysis isdevelopedforthecasewhere = (6) Adiscussionisprovidedattheendoftheanalysis,thatdescribeshowthestability canbeprovenforthecasewhen = Theorem6.1 :Thecontrollergivenin(6)and(6)ensuresasymptotic regulationinthesensethatk( )k 0 k ( )k 0 as (6)
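Before the formal analysis in Section 6.6, the sign-ambiguity claim can be previewed numerically. The Python sketch below simulates the rotation error kinematics under the controller ω_c = -λ_ω q̂_v of (6-33), with the quaternion estimate related to the true quaternion through an assumed calibration error matrix; running it with either choice of sign drives ||q_v|| to zero (the scalar part converges to +1 or -1, which represent the same orientation). The matrix, gain, initial condition, and horizon are illustrative assumptions.

```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]], [w[2], 0.0, -w[0]], [-w[1], w[0], 0.0]])

# Assumed calibration error matrix: upper triangular (consistent with (6-3))
# with a positive-definite symmetric part.
A_tld = np.array([[1.225, -0.011, 0.204],
                  [0.0,    1.226, -0.100],
                  [0.0,    0.0,    1.000]])
lam_w, dt, T = 5.0, 0.001, 10.0

for sign in (+1.0, -1.0):                          # both choices of the ambiguous sign
    q = np.array([0.5, 0.5, -0.5, 0.5])            # initial (q0, qv), unit norm
    for _ in range(int(T / dt)):
        q0, qv = q[0], q[1:]
        n = np.linalg.norm(qv)
        # Quaternion estimate: parallel to A~ qv with the same norm as qv,
        # and with an undetermined sign.
        qv_hat = sign * (n / np.linalg.norm(A_tld @ qv)) * (A_tld @ qv) if n > 1e-12 else np.zeros(3)
        w_c = -lam_w * qv_hat                      # rotation controller (6-33)
        q0_dot = -0.5 * qv @ w_c                   # open-loop quaternion kinematics
        qv_dot = 0.5 * (q0 * np.eye(3) + skew(qv)) @ w_c
        q = q + dt * np.concatenate(([q0_dot], qv_dot))
        q = q / np.linalg.norm(q)
    print(sign, np.round(np.linalg.norm(q[1:]), 4))   # ||qv|| -> 0 for either sign
```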


108 provided isselectedsu cientlylarge(seethesubsequentproof),andthe followinginequalitiesaresatis ed min 1 2 + 0(6) max 1 2 + 1 (6) where 01R arepositiveconstants,and min{}and max{}denotethe minimalandmaximaleigenvaluesof 1 2 + ,respectively. Proof :Let ( )R denotethefollowingdi erentiablenon-negativefunction (i.e.,aLyapunovcandidate): = +(10)2+ (6) Aftercancellingcommonterms, ( ) canbeexpressedas 1=2 2(10) 0+ = 03+ (10) + 1 + h i [ ] = 03+ (10) 3 + 1 + h i [ ] = 1 + h i [ ] (6) Byusingtheinequality(642),theterm satis es = 1 2 + min 1 2 + kk20kk2 (6)


109 Since = 1 2 + 0kk2 theterm1 satis es1 1 0kk2 (6) Basedonthepropertythat [ ] 2=kk R3(seeAppendixD)andk k 1 theterm h i satis es h i = [ ] k k2kk2k kkk2kk2 (6) From(6),theterm[ ] satis es[ ] = kk [ ] kk [ ] 2kk = k kkkkk (6) Byusing(6)-(6),theexpress ionin(6)canbeupperboundedas 0kk21 ( 1+ 2) 0kk2+ kk2+ k kkkkk (6) wherethecontrolgain isseparatedintotwodi erentcontrolgainsas = 1+ 2.Thefollowinginequalitycanbeobtainedaftercompletingthesquares: k kkkkk 2 k k2 4 10kk2+ 1 10kk2 (6) From(61),theinequality( 6)canberewrittenas 0 1 1 k k2 4 2 0 !kk20 2 0kk2


110 Basedonthede nitionof ( ) in(6),theinequalities(6)and(6),and theassumptionthat and arebounded,thereexisttwopositivebounding constant 1and 2R satisfyingthefollowinginequalities: k k2 4 2 0 1and 02thecontrolparameter canbeselectedlargeenoughtoensurethat ( ) is negativesemi-de niteas 0 1kk20 kk2 (6) Basedon(6)and(6),standards ignalchasingargumentscanbeusedto concludethatthecontrolinputsandalltheclosed-loopsignalsarebounded.The expressionin(6)canalsobeusedtoconcludethat ( ) and ( ) L2.Since ( ) ( ) ( ) ( ) Land ( ) ( ) L2,BarbalatsLemma[96]canbeused toprovetheresultgivenin(6). BymodifyingtheLyapunovfunctionin(64)as = +(1+ 0)2+ thesamestabilityanalysisargumentscanbeusedtoproveTheorem6.1forthe casewhen = Sincethesignambiguityin(6)doesnota ectthecontroldevelopmentand stabilityanalysis,onlythepositivesignin(629)needstobeconsideredinthe futurecontroldevelopmentforconvenience. 6.7SimulationResults Numericalsimulationswereperformedtoillustratetheperformanceofthe controllergivenin(633)and(66).Theintrinsiccameracalibrationmatrixis


111 givenby = 122 53 77100 0122 56100 001 Thebest-guessestimationfor wasselectedas = 100480 0100110 001 Thecameraisassumedtoviewanobjectwithfourcoplanarfeaturepointswith thefollowingEuclideancoordinates(in[m]): 1= 0 050 050 2= 0 050 050 (6) 3= 0 050 050 4= 0 050 050 Thenormalizedcoordinatesofthevanishingpointswereselectedas 0 020 021 0 020 021 0 020 021 0 020 021 ConsideranorthogonalcoordinateframeIwiththe -axisoppositeto (see Figure2)withthe -axisand -axisontheplane .Therotationmatrices 1betweenFandI,and 2betweenFandIweresetas 1= (160) (30) (30) 2= (120) (20) (80) where () () and () (3) denoterotationofangle (degrees) alongthe -axis, -axisand -axis,respectively.Thetranslationvectors 1( ) and 2( ) betweenFandI(expressedinF)andbetweenFandI(expressedinF),


112 respectively,wereselectedas 1= 0 50 52 5 2= 1 01 03 5 Theinitial(i.e., (0) )anddesired(i.e., )image-spacecoordinatesofthefour featurepointsin(6)werecomputedas(inpixels) 1(0)= 126 50123 641 2(0)= 124 24127 911 3(0)= 120 92125 401 4(0)= 123 25121 111 1= 132 17133 171 2= 135 72133 611 3= 135 71136 911 4= 132 10136 441 Theinitial(i.e., (0) )anddesired(i.e., )image-spacecoordinatesofthefour vanishingpointsin(6)werecomputedas(inpixels) 1(0)= 124 02139 341 2(0)= 129 02141 611 3(0)= 131 02136 541 4(0)= 126 03134 351 1= 102 37102 451 2= 102 5397 551 3= 97 6397 551 4= 97 47102 451 Thecontrolgains in(6)and in(66)wereselectedas =5 =5 TheresultingtranslationandrotationerrorsareplottedinFigure6and Figure62,respectively.Theimage-spacepixelerror(i.e., ( ) ) isshownin Figure6,andisalsodepictedinFigure6ina3Dformat.Thetranslationand rotationcontroloutputsareshowninFigure6andFigure67,respectively.For


different choice of sign of the quaternion estimate, the asymptotic result can still be achieved. In contrast to the quaternion estimate in Figure 6-2, a quaternion estimate with a different sign is shown in Figure 6-3.

[Figure 6-1: Unitless translation error e(t) (computed from the first feature point).]

[Figure 6-2: Quaternion rotation error.]

[Figure 6-3: Quaternion rotation error for comparison, computed with the other choice of sign.]

[Figure 6-4: Image-space error in pixels between p_i(t) and p_i*. In the figure, "O" denotes the initial positions of the four feature points in the image, and "*" denotes the corresponding final positions of the feature points.]

[Figure 6-5: Image-space error in pixels between p_i(t) and p_i*, shown in a 3-D graph with time on the vertical axis. "O" denotes the initial positions of the four feature points and "*" denotes the corresponding final positions.]

[Figure 6-6: Linear camera velocity control input.]

[Figure 6-7: Angular camera velocity control input.]

CHAPTER 7
COMBINED ROBUST AND ADAPTIVE HOMOGRAPHY-BASED VISUAL SERVO CONTROL VIA AN UNCALIBRATED CAMERA

7.1 Introduction

In this chapter, a new combined robust and adaptive visual servo controller is developed to asymptotically regulate the feature points of a rigid-body object (identified by a planar patch of feature points) in an image to the desired feature point locations while also regulating the six DOF pose of the camera (which is affixed to the object). These dual objectives are achieved by using a homography-based approach that exploits both image-space and reconstructed Euclidean information in the feedback loop. In comparison to pure image-based feedback approaches, some advantages of using a homography-based method include: realizable Euclidean camera trajectories (see Chaumette [48] and Corke and Hutchinson [25] for a discussion of Chaumette's Conundrum); a nonsingular image-Jacobian; and the inclusion of both the camera position and orientation and the feature point coordinates in the error system. Since some image-space information is used in the feedback loop of the developed homography-based controller, the image features are less likely to leave the FOV in comparison with pure position-based approaches. The developed controller is composed of the same adaptive translation controller as in the preliminary results of Chen et al. [89] and a new robust rotation controller. The contribution of the result is the development of the robust angular velocity controller, which accommodates the time-varying uncertain scaling factor by exploiting the upper triangular form of the rotation error system and the fact that the diagonal elements of the camera calibration matrix are positive.

117


118 7.2CameraGeometryandAssumptions ThecamerageometryforthischapterisshowninFigure22andthecorrespondingEuclideanandimage-spacerelationshipsaredevelopedinSections2.1and 2.2.Forconvenienceinthefollowingdevelopment,thecameracalibrationmatrix isrewrittenas cot 00 sin 0001 = 1112130 2223001 (71) Basedonthephysicalmeaningoftheelementsof ,thediagonalcalibration elementsarepositive(i.e., 1122 0 ). Thefollowingtwoassumptionsaremadefortheconvenienceofthecontroller development.Theyaresoreasonablesuchthattheycanbeconsideredpropertiesof theconsideredvisionsystem. Assumption1 :Theboundsof 11and 22areassumedtobeknownas 1111 11 2222 22 (72) Theabsolutevaluesof 121323areupperboundedas|12| 12|13| 13|23| 23 (73) In(7)and(7), 11 11 22 22 12 13and 23areknownpositive constants. Assumption2 :ThereferenceplaneiswithinthecamerasFOVandnotat in nity.Thatis,thereexistpositiveconstants and suchthat ( ) (74)


119 Basedon(2)-(2),thehomographyrelationshipbasedonmeasurablepixel coordinatesis: = 1 (75) Since isunknown,standardhomographycomputationanddecomposition algorithmscantbeappliedtoextracttherotationandtranslationfromthe homography. AsstatedinMalisandChaumette[21],ifsomeadditionalinformationis known,suchasfourvanishingpoints,therotationmatrixcanbeobtained.Forthe vanishingpoints(seeAlmansaetal.[103]foradescriptionofhowtodetermine vanishingpointsinanimage), =,sothat = + = (76) Basedon(76),therelationshipin(75)canbeexpressedas = (77) where ( )R3 3isde nedas = 1 (78) Forthefourvanishingpoints,twelvelinearequationscanbeobtainedbasedon (7).Afternormalizing ( ) byonenonzeroelement(e.g., 33( )R which isassumedtobethethirdrowthirdcolumnelementof ( ) withoutlossof generality)twelveequationscanbeusedtosolvefortwelveunknowns.Thetwelve unknownsaregivenbytheeightunknownelementsofthenormalized ( ) ,denoted by ( )R3 3de nedas 33(79)


120 andthefourunknownsaregivenby 33( ) ( ) .Fromthede nitionof ( ) in (7),thefactthat det =det( )det( )det( 1)=1 (7) canbeusedtoconcludethat 3 33det =1 (7) andhencebasedon(7)and(7), = 3p det( ) (7) After ( ) isobtained,theoriginalfourfeaturepointsonthereferenceplanecan beusedtodeterminethedepthratio ( ) asshowninAppendixE. 7.3Open-LoopErrorSystem 7.3.1RotationErrorSystem Iftherotationmatrix ( ) introducedin(24)wereknown,thenthecorrespondingunitquaternion ( ) 0( ) ( ) canbecalculatedusingthe numericallyrobustmethodpresentedpresentedinSection2.3.Given ( ) ,the quaternion ( ) canalsobewrittenas 0= 1 2 p 1+ ( ) (7) = 1 2 p 3 ( ) (7) where ( )R3isauniteigenvectorof ( ) withrespecttotheeigenvalue 1 .The open-looprotationerrorsystemfor ( ) canbeobtainedas(seeDixonetal.[23]) 0 = 1 2 03+ (7) where ( )R3de nestheangularvelocityofthecameraexpressedinF.
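The vanishing-point construction above determines G only up to scale, and the determinant relation then pins the scale down. The Python sketch below is one standard way to implement this step (a direct linear transformation built from cross-product constraints); the dissertation only states that twelve linear equations are available, so the specific construction and all numerical values (calibration, rotation, vanishing points) are illustrative assumptions.

```python
import numpy as np

def estimate_GR_from_vanishing_points(p_list, p_star_list):
    """Estimate G up to scale from four vanishing-point pairs p_i ~ G p_i*
    (homogeneous pixel coordinates) with a cross-product DLT, normalize by the
    (3,3) element as in (7-9), and fix the scale with det(A R A^{-1}) = 1."""
    rows = []
    for p, ps in zip(p_list, p_star_list):
        px = np.array([[0.0, -p[2], p[1]],
                       [p[2], 0.0, -p[0]],
                       [-p[1], p[0], 0.0]])     # [p]_x, so [p]_x G p* = 0
        rows.append(np.kron(px, ps))            # three linear equations in the entries of G
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    G = Vt[-1].reshape(3, 3)                    # null vector of the 12 x 9 system (row-major G)
    G_bar = G / G[2, 2]                         # normalized matrix
    g33 = 1.0 / np.cbrt(np.linalg.det(G_bar))   # scale recovered from the determinant condition
    return g33 * G_bar                          # estimate of A R A^{-1}

# Synthetic self-check (all numerical values are assumptions, not from the text).
A = np.array([[122.5, 3.77, 100.0], [0.0, 122.56, 100.0], [0.0, 0.0, 1.0]])
theta = 0.6
K = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])          # skew form of the z-axis
R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
G_true = A @ R @ np.linalg.inv(A)

m_star = [np.array([0.01, 0.01, 1.0]), np.array([0.01, -0.01, 1.0]),
          np.array([-0.01, 0.01, 1.0]), np.array([-0.01, -0.01, 1.0])]
p_star = [A @ m for m in m_star]                                # reference vanishing points
p_cur = [s * (G_true @ ps) for s, ps in zip((0.7, 1.3, 2.0, 0.9), p_star)]  # arbitrary scales

print(np.allclose(estimate_GR_from_vanishing_points(p_cur, p_star), G_true))   # True
```

In practice the homogeneous pixel coordinates would come from vanishing points detected in the current and reference images rather than from synthetic data, but the normalization and determinant-based scale recovery are unchanged.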


121 Thequaternion ( ) givenin(7)-(7)isnotmeasurablesince ( ) is unknown.However,since ( ) canbedeterminedasdescribedin(712),thesame algorithmasshowninequations(7)and(7)canbeusedtodeterminea correspondingmeasurablequaternion 0( ) ( ) as 0= 1 2 q 1+ ( ) (7) = 1 2 q 3 ( ) (7) where ( )R3isauniteigenvectorof ( ) withrespecttotheeigenvalue 1 Basedon(78), = ( 1)= ( ) ,where () denotesthetrace ofamatrix.Since ( ) and ( ) aresimilarmatrices,therelationshipbetween 0( ) ( ) and 0( ) ( ) canbedeterminedas 0= 0 =kk kk, (7) where ( )R isapositive,unknown,time-varyingscalarthatsatis esthe followinginequalities(seeAppendixF) ( ) (7) where R arepositiveboundingconstants.Theinverseoftherelationship between ( ) and ( ) in(7)canbedevelopedas = 1 1 = 1 1 11 112 1122 2 13 111223 1122 31 22 223 22 3 3 (7) 7.3.2TranslationErrorSystem Thetranslationerror,denotedby ( )R3,isde nedas ( )= ( ) (7)


122 where ( ) R3arede nedas = ln( ) = 0 (7) where {1 4} Thetranslationerror ( ) ismeasurablesincethe rst twoelementsareimagecoordinates,and ( ) isobtainedfromthehomography decompositionasdescribedinAppendixA.Theopen-looptranslationerrorsystem canbeobtainedbytakingthetimederivativeof ( ) andmultiplyingtheresulting expressionby as[89] =+ 1 (7) where ( )R3de nesthelinearvelocityofthecameraexpressedinF,and ( )R3 3isde nedas = 1112130 2223001 Tofacilitatethecontroldevelopment,thetranslationerrorsystemcanbelinearly parameterizedas 1 2 3 = 11 1+ 12 2+ 3( 13) 22 2+ 3( 23) 3 + 1( )_ 2( )_ 3( )_ (7) where ()R1 =1 2 3 areknownregressorvectorsthatdonotdepend onthecalibrationparameters,and Risavectorofconstantunknown parameters.
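The role of the linear parameterization is that every calibration-dependent term enters the translation error dynamics through a known regressor multiplying constant unknowns, which is exactly the structure that the certainty-equivalence adaptive design in Section 7.4.2 exploits. The Python sketch below illustrates that structure on a deliberately simplified scalar system; it is not the error system above, and the regressor, gains, and parameter values are invented for the illustration.

```python
import numpy as np

# Simplified scalar plant:  z_dot = W(t) @ theta + v,  with theta constant and unknown.
theta = np.array([0.5, -1.2, 0.8])                   # "true" parameters (simulation only)
theta_hat = np.zeros(3)                              # adaptive estimate
Gamma = np.diag([2.0, 2.0, 2.0])                     # adaptation gain matrix
k, dt, T = 2.0, 0.001, 20.0
z = 1.5                                              # initial error

for i in range(int(T / dt)):
    t = i * dt
    W = np.array([np.sin(t), np.cos(0.5 * t), 1.0])  # known regressor, unknown weights
    v = -k * z - W @ theta_hat                       # certainty-equivalence control input
    z += dt * (W @ theta + v)                        # plant update
    theta_hat += dt * (Gamma @ (W * z))              # gradient update driven by regressor and error

print(round(z, 4))                                   # the error z converges toward zero
```

The gradient update is chosen so that the parameter-mismatch term cancels in the Lyapunov derivative, which is the same mechanism used for the update laws of Section 7.4.2.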


123 7.4ControlDevelopment 7.4.1RotationControlDevelopmentandStabilityAnalysis Basedontherelationshipin(7),theopen-looperrorsystemin(715),and thesubsequentstabilityanalysis,therotationcontrollerisdesignedas 1= 1 1=( 11+2) 1(7) 2= 2 2=( 21+ 22+1) 2 3= 3 3=( 31+ 32+ 33) 3 where R =1 2 3 and R =1 2 3 arepositive constants.Theexpressedformofthecontrollerin(725)ismotivatedbytheuseof completingthesquaresinthesubsequentstabilityanalysis.In(725),thedamping controlgains 21 31 32areselectedaccordingtothefollowingsu cient conditionstofacilitatethesubsequentstabilityanalysis 21 1 4 2 1 2 12 11 22(7) 31 1 4 2 11 11 12 23 22+ 13!2 32 1 4 2 2 2 23 22 where 11 11 22 22 12 13and 23arede nedin(7)and(7),and 11, 22, 33arefeedbackgainsthatcanbeselectedtoadjusttheperformanceof therotationcontrolsystem. Proposition7.1 :Providedthesu cientgainconditionsgivenin(76)are satis ed,thecontrollerin(7)ensuresasymptoticregulationoftherotationerror inthesensethatk( )k 0 as (7)


124 Proof :Let 1( 0)R denotethefollowingnon-negativefunction: 1, +(10)2 (7) Basedontheopen-looperrorsystemin(715),thetime-derivativeof 1( ) canbe determinedas 1=2 2(10) 0= = 1 1+ 2 2+ 3 3 (7) Aftersubstituting(7)for ( ) andsubstituting(7)for ( ) ,theexpression in(7)canbesimpli edas 1= 111 11 2 1+ 221 22 2 2+ 33 2 31 11 2 1 112 22 1 2+ 2111 22 2 2 (7)1 11 2 1+ 1 1223 2213 1 3+ 3111 2 31 22 2 2 223 2 3+ 3222 2 3 Aftercompletingthesquaresoneachofthebracketedtermsin(70),theexpressionin(7)canbewrittenas 1= 111 11 2 1+ 221 22 2 2+ 33 2 31 11" 1 112 2 22 22+ 11 22 211 4 2 12 12 1122 2 2#1 11" 1+ 1 2 1 1223 2213 32+ 11 311 4 2 11 11 1223 22132! 2 3#1 22" 21 2 223 32+ 22 321 4 2 22 23 22 2 3# (7)


125 Providedthesu cientgainconditionsgivenin(7)aresatis ed,then(7)can beupperboundedas 1 111 11 2 1+ 221 22 2 2+ 33 2 3 (7) Basedon(7),theinequalityin(7)canbefurtherupperboundedas 11 111 11 2 1+ 221 22 2 2+ 33 2 3 (7) TheLyapunovfunctiongivenin(7)anditstimederivativein(7)canbe usedtoconcludethat ( ) 0( ) Land ( ) L2(ofcourse, ( ) 0( ) Lbyde nitionalso).Theexpressionsin(718)and(720),andthefactthat ( ) L2,canbeusedtoconcludethat ( ) L2 Since ( ) 0( ) L, then ( ) ( ) ( ) and 0( ) L.Hence,(75)canbeusedtoconcludethat ( ) L.Basedontherotationerrorsystemin(7), ( ) 0( ) L;hence, ( ) 0( ) areuniformlycontinuous.Barbalatslemma[96]cannowbeusedto concludethatk( )k 0 as 7.4.2TranslationControlDevelopmentandStabilityAnalysis Forcompletenessoftheresult,thesametranslationcontrollerasinChenet al.[89]isprovided.Aftersomealgebraicmanipulation,thetranslationerrorsystem in(724)canberewrittenas 11 1= 1+ 1( 2 3) 1(7) 22 2= 2+ 2( 3) 2 3= 3+ 3( ) 3 where 1R1, 2R2,and 3R3arevectorsofconstantunknown parameters,andtheknownregressorvectors 1()R1 1, 2()R1 2,and


126 3()R1 3satisfythefollowingequations(seeAppendixG): 11=12 11 2( 13) 11 3+ 1( ) 1122=23 22 3+ 2( ) 2233= 3( ) Thecontrolstrategyistodesign 3( ) tostabilize 3( ) ,andthendesign 2( ) to stabilize 2( ) given 3( ) ,andthendesign 1( ) tostabilize 1( ) given 3( ) and 2( ) .Followingthisdesignstrategy,thetranslationcontroller ( ) isdesigned as[89] 3= 1 33+ 3( ) 3 (7) 2= 1 22+ 2( 3) 2 1= 1 11+ 1( 2 3) 1 wherethedepthratio ( ) 0 .In(7), 1( )R1 2( )R2 3( )R3denoteadaptiveestimatesthataredesignedaccordingtothefollowingadaptive updatelawstocanceltherespectivetermsinthesubsequentstabilityanalysis 1= 1 11 2= 2 22 3= 3 33 (7) where 1R1 1 2R2 2 3R3 3arediagonalmatricesofpositive constantadaptationgains.Basedon(7)and(7),theclosed-looptranslation errorsystemis 11 1= 11+ 1( 2 3) 1(7) 22 2= 22+ 2( 3) 2 3= 33+ 3( ) 3


127 where 1( )R1 2( )R2 3( )R3denotetheintrinsiccalibration parametermismatchde nedas 1( )= 1 1( ) 2( )= 2 2( ) 3( )= 3 3( ) Proposition7.2 :Thecontrollergivenin(735)alongwiththeadaptive updatelawin(736)ensuresasymptoticregulationofthetranslationerrorsystem inthesensethatk ( )k 0 as Proof :Let 2( 1 2 3)R denotethefollowingnon-negativefunction: 2= 1 2 112 1+ 1 2 222 2+ 1 2 2 3+ 1 2 1 1 1 1+ 1 2 2 1 2 2+ 1 2 3 1 3 3 (7) Aftertakingthetimederivativeof(7)andsubstitutingfortheclosed-looperror systemdevelopedin(7),thefollowingsimpli edexpressioncanbeobtained: 2= 12 1 22 2 32 3 (7) Basedon(7)and(7), 1( ) 2( ) 3( ) L L2,and ( ) ( ) = 1 2 3 L.Basedontheassumptionthat ( ) ,theexpression in(7)canbeusedtoconcludethat ( ) L.Basedontheprevious stabilityanalysisfortherotationcontroller, ( ) L;hence,(77)canbe usedtoconcludethat 1( ) 2( ) 3( ) L(i.e., 1( ) 2( ) 3( ) areuniformly continuous).Barbalatslemma[96]cannowbeusedtoshowthat 1( ) 2( ) 3( )0 as BasedonPropositions7.1and7.2,themainresultcanbestatedasfollows. Theorem7.1 :Thecontrollergivenin(7)and(7)alongwiththeadaptiveupdatelawin(7)ensuresasymptot ictranslationandrotationregulationin thesensethatk( )k 0 andk ( )k 0 as


128 providedthecontrolgainssatisfythesu cientconditionsgivenin(76). Proof :SeeproofsinPropositions7.1and7.2. 7.5SimulationResults Numericalsimulationswereperformedtoillustratetheperformanceofthe controllergivenin(7)and(7)andtheadaptivelawgivenin(7).The intrinsiccameracalibrationmatrixisgivenby = 122 53 77100 0122 56100 001 Thecameraisassumedtoviewanobjectwithfourcoplanarfeaturepointswith thefollowingEuclideancoordinates(in[m]): 1= 0 10 10 2= 0 10 10 (7) 3= 0 10 10 4= 0 10 10 Thenormalizedcoordinatesofthevanishingpointswereselectedas 0 010 011 0 010 011 0 010 011 0 010 011


129 Theinitial(i.e., (0) )anddesired(i.e., )image-spacecoordinatesofthefour featurepointsin(7)werecomputedas(inpixels) 1(0)= 128 49129 411 2(0)= 128 80119 611 3(0)= 119 00119 611 4(0)= 118 70129 411 1= 136 86138 681 2= 138 07132 171 3= 130 97131 331 4= 129 84137 821 Theinitial(i.e., (0) )anddesired(i.e., )image-spacecoordinatesofthefour vanishingpointsin(7)werecomputedas(inpixels) 1(0)= 91 84123 681 2(0)= 91 70121 161 3(0)= 89 20121 411 4(0)= 89 34123 941 1= 101 19101 231 2= 101 2698 771 3= 98 8198 771 4= 98 74101 231 Theactualvalueoftheunknownparametervectors 1, 2,and 3aregivenby 1= 0 00820 03080 81633 51100 10822 33860 0234 0 02340 00022 86480 02860 000200 0241 0 02340 00072 41170 000900 0910


130 2= 0 00820 81593 51102 33750 02340 00020 000200 0241 0 02870 00092 95440 02340 00072 4106 3= 2 86480 02860 02870 00092 9544 Inthesimulation,theinitialvalueoftheestimatedparametervectors 1(0) 2(0) and 3(0) wereselectedashalfoftheactualvalue. Thediagonalcontrolgainmatrices in(7)and in(7)were selectedas = {4 2 85}= {5 5 5} Thediagonalgainmatrices 1, 2,and 3intheadaptivelaw(736)wereselected as 1=0 03{0 01 1 1 1 1 1 0 01 0 01 0 0001 1 0 01 0 0001 0 0001 0 01 0 01 0 01 1 0 01 0 01 1}2=0 03{0 01 1 1 1 0 01 0 0001 0 01 0 01 1 0 0001 0 0001 0 01 0 01 0 01 1}3=0 5{0 1 0 1 10 0 1 10} TheresultingasymptotictranslationandrotationerrorsareplottedinFigure 71andFigure72,respectively.Thepixelcoordinateofthefourfeaturepoints isshowninFigure73.Theimage-spacepixelerror(i.e., ( ) ) isshownin Figure7.Theimage-spacetrajectoryofthefeaturepointsisshowninFigure 7,andalsoinFigure7inathree-dimensionalformat,wheretheverticalaxis istime.ThetranslationandrotationcontroloutputsareshowninFigure77and Figure7,respectively.


[Figure 7-1: Unitless translation error e(t).]

[Figure 7-2: Quaternion rotation error.]

[Figure 7-3: Pixel coordinates p(t) (in pixels) of the current pose of the four feature points in the simulation. The upper plot is the u(t) component and the bottom plot is the v(t) component.]

[Figure 7-4: Regulation error p(t) - p* (in pixels) of the four feature points in the simulation. The upper plot is the u(t) - u*(t) component and the bottom plot is the v(t) - v*(t) component.]

[Figure 7-5: Image-space error in pixels between p(t) and p*. "O" denotes the initial positions of the four feature points in the image, and "*" denotes the corresponding final positions of the feature points.]

[Figure 7-6: Image-space error in pixels between p(t) and p*, shown in a 3-D graph with time on the vertical axis.]

[Figure 7-7: Linear camera velocity control input v_c(t).]

[Figure 7-8: Angular camera velocity control input ω_c(t).]

CHAPTER8 CONCLUSIONS Inthisdissertation,visualservocontrolalgorithmsandarchitecturesare developedthatexploitthevisualfeedbackfromacamerasystemtoachievea trackingorregulationcontrolobjectiveforarigid-bodyobject(e.g.,theende ectorofarobotmanipulator,asatellite,anautonomousvehicle)identi edbya patchoffeaturepoints.Thesealgorithmsandarchitecturescanbeusedwidelyin thenavigationandcontrolapplicationsinroboticsandautonomoussystems.The visualservocontrolprobleminthisdissertationwereseparatedinto veparts:1) visualservotrackingcontrolviaaquaternionformulation;2)collaborativevisual servotrackingcontrolusingadaisy-chainingapproach;3)visualservotracking controlusingacentralcatadioptriccamera;4)robustvisualservocontrolin presenceofcameracalibrationuncertainty;and5)combinedrobustandadaptive visualservocontrolviaanuncalibratedcamera. Anadaptivevisualservotrackingcontrolmethodviaaquaternionformulation is rstdevelopedthatachievesasymptotictrackingofarigid-bodyobjecttoa desiredtrajectorydeterminedbyasequenceofimages.Bydevelopingtheerror systemsandcontrollersbasedonahomographydecomposition,thesingularity associatedwiththetypicalimage-Jacobianiseliminated.Byutilizingthequaternionformulation,asingularity-freeerro rsystemisobtained.Ahomography-based rotationandtranslationcontrollerisproventoyieldthetrackingresultthrough aLyapunov-basedstabilityanalysis.Basedontheresultforthecamera-in-hand con gurationproblem,acamera-to-handextensionisgiventoenablearigidbodyobjecttotrackadesiredtrajectory.Simulationandexperimentsresultsare providedtoshowtheperformanceoftheproposedvisualservocontrollers. 135


136 InordertoenablelargeareamotionandweakentheFOVrestriction,a collaborativevisualservomethodisthendevelopedtoenablethecontrolobjectto trackadesiredtrajectory.Incontrasttotypicalcamera-to-handandcamera-inhandvisualservocontrolcon gurations,theproposedcontrollerisdevelopedusing amovingon-boardcameraviewingamovingobjecttoobtainfeedbacksignals. Adaisy-chainingmethodisusedtodevelophomographyrelationshipbetween di erentcameracoordinateframesandfeaturepointpatchcoordinateframesto facilitatecollaborationbetweendi erentagents(e.g.,cameras,referenceobjects, andcontrolobject).Lyapunov-basedmethodsareusedtoprovetheasymptotic trackingresult.Simulationresultsareprovidedtoshowtheperformanceof theproposedvisualservocontroller.ToenlargetheFOV,analternativevisual servocontrollerisdevelopedthatyieldsanasymptotictrackingresultusinga centralcatadioptriccamera.ApanoramicFOVisobtainedbyusingthecentral catadioptriccamera. Tostudytherobustvisualservocontrolproblem,avisualservocontroller isdevelopedtoregulateacamera(attachedtoarigid-bodyobject)toadesired poseinpresenceofintrinsiccameracalibrationuncertainties.Aquaternion-based estimatefortherotationerrorsystemisdevelopedthatisrelatedtotheactual rotationerror.Thesimilarityrelationshipbetweentheestimatedandactual rotationmatricesisusedtoconstructtherelationshipbetweentheestimatedand actualquaternions.ALyapunov-basedstabilityanalysisisprovidedthatindicates auniquecontrollercanbedevelopedtoachievetheregulationresultdespiteasign ambiguityinthedevelopedquaternionestimate.Simulationresultsareprovidedto illustratetheperformanceofthedevelopedcontroller. Anewcombinedrobustandadaptivevisualservocontrolleristhendeveloped toasymptoticallyregulatethefeaturepointsinanimagetothedesiredfeature pointlocationswhilealsoregulatingthesixDOFposeofthecamera.Thesedual


137 objectivesareachievedbyusingahomography-basedapproachthatexploitsboth image-spaceandreconstructedEuclideaninformationinthefeedbackloop.In comparisontopureimage-basedfeedbackapproaches,someadvantagesofusing ahomography-basedmethodinclude:realizableEuclideancameratrajectories; anonsingularimage-Jacobian;andboththecameraposeandthefeaturepoint coordinatesareincludedintheerrorsystem.Sincesomeimage-spaceinformationis usedinthefeedback-loopofthedevelopedhomography-basedcontroller,theimage featuresarelesslikelytoleavetheFOVincomparisonwithpureposition-based approaches.Therobustrotationcontrollerthataccommodatesforthetimevaryinguncertainscalingfactorisdevelopedbyexploitingtheuppertriangular formoftherotationerrorsystemandthefactthatthediagonalelementsofthe cameracalibrationmatrixarepositive.Theadaptivetranslationcontrollerthat compensatesfortheconstantunknownparametersinthetranslationerrorsystem isdevelopedbyacertainty-equivalence-basedadaptivecontrolmethodanda nonlinearLyapunov-baseddesignapproach.Simulationresultsareprovidedto showtheperformanceoftheproposedvisualservocontroller.


APPENDIXA UNITNORMPROPERTYFORTHEQUATERNIONERROR Property :Thequaternionerror ( 0( ) ( ))de nedin(35)hasaunit normgiventhat ( ) and ( ) aretwounitquaternions. Proof :Thequaternioncomponentsin(3)canbeexpandedas 0= 00 + 1 2 3 1 2 3 = 1 1+ 2 2+ 3 3+ 00 (A1) and = 0 1 2 3 0 1 2 3 + 0 3 2 30 1 2 10 1 2 3 (A2) = 0 1+ 2 3 3 2 10 0 2 1 3+ 3 1 20 0 3+ 1 2 2 1 30 Basedon(A)and(A), 2 0+ =( 1 1+ 2 2+ 3 3+ 00 )2+ 0 1+ 2 3 3 2 10 0 2 1 3+ 3 1 20 0 3+ 1 2 2 1 30 0 1+ 2 3 3 2 10 0 2 1 3+ 3 1 20 0 3+ 1 2 2 1 30 138


139 Expandingtherightsideoftheaboveequationgives 2 0+ =( 1 1+ 2 2+ 3 3+ 00 )2+( 0 1+ 2 3 3 2 10 )2+( 0 2 1 3+ 3 1 20 )2+( 0 3+ 1 2 2 1 30 )2= 2 02 1+ 2 02 2+ 2 12 1+ 2 02 3+ 2 12 2+ 2 22 1+ 2 12 3+ 2 22 2+ 2 32 1+ 2 22 3+ 2 32 2+ 2 32 3+ 2 02 0 + 2 12 0 + 2 22 0 + 2 32 0 =( 2 0+ 2 1+ 2 2+ 2 3)( 2 0 + 2 1+ 2 2+ 2 3) Basedonthefactthat ( ) and ( ) aretwounitquaternions(i.e.,theirnormsare equalto 1 ), 2 0+ =1
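The algebra above can also be confirmed numerically in a few lines of Python; the composition below mirrors (A-1)-(A-2), and the randomly generated quaternions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
q, qd = (v / np.linalg.norm(v) for v in rng.standard_normal((2, 4)))  # two unit quaternions

q0, qv, qd0, qdv = q[0], q[1:], qd[0], qd[1:]
e0 = q0 * qd0 + qv @ qdv                          # scalar part, as in (A-1)
ev = qd0 * qv - q0 * qdv + np.cross(qv, qdv)      # vector part, as in (A-2)

print(np.isclose(e0**2 + ev @ ev, 1.0))           # True: the error quaternion has unit norm
```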


APPENDIXB ONEPROPERTYOFUNITQUATERNIONS Property : ( 3+ ) 1= Proof :Theterm 3 canbeexpandedas 3 = 100 010 001 0 3 2 30 1 2 10 = 1 3 2 31 1 2 11 (B) Takingtheinverseoftheexpandedmatrixin(B)gives 3 1= 1 3 2 31 1 2 11 1= 1 2 1+ 2 2+ 2 3+1 2 1+1 3+ 1 2 2+ 1 3 3+ 1 22 2+1 1+ 2 3 2+ 1 3 1+ 2 32 3+1 (B) Multiplying onbothsidesof(B)gives 3 1= 3( 2+ 1 3)+ 2( 3+ 1 2)+ 1( 2 1+1) 1( 3+ 1 2)+ 3( 1+ 2 3)+ 2( 2 2+1) 2( 1+ 2 3)+ 1( 2+ 1 3)+ 3( 2 3+1) 2 1+ 2 2+ 2 3+1 = 140


APPENDIXC OPEN-LOOPTRANSLATIONERRORSYSTEM Thetranslationerrorwasde nedas = where ( ) and ( ) aretheextendednormalizedcoordinatesof 0( ) and ( ) ,respectively,andwerede nedas = 0 00 0ln( 0 ) = ln( ) Di erentiating ( ) gives = 1 00 0 (C1) where 0= 100 0010 0001 Thederivativeof 0( ) isgivenby 0= 0 0 0 = 0 + (C2) where ( ) and ( ) arethelinearandangularvelocitiesof withrespecttoIexpressedinF.Basedon(C)and(C2), ( ) canbefurtherwrittenas = 1 000 + (C3) 141


142 From(C),thetranslationerrorsystemcanbeobtainedas = 1 000 + anditcanbefurtherwrittenas = 000 +


APPENDIXD PROPERTYONMATRIXNORM Property : [ ] 2=kk R3 Proof :Thespectralnormkk2ofarealmatrixisthesquarerootofthe largesteigenvalueofthematrixmultipliedbyitstranspose,i.e.,kk2= p max{} Foranygivenvector R3, [ ]= 032301210 TheEuclideannormof [ ]isgivenby [ ] 2= r maxn [ ][ ]o = v u u u u u u u t max 2 2+ 2 31213122 1+ 2 32313232 1+ 2 2 = q 2 1+ 2 2+ 2 3 Thenormof isgivenbykk= q 2 1+ 2 2+ 2 3 Hence, [ ] 2=kk. 143
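A quick numerical illustration of this property (the vector below is arbitrary):

```python
import numpy as np

u = np.array([0.3, -1.7, 2.2])                        # any vector in R^3
U = np.array([[0.0, -u[2], u[1]],
              [u[2], 0.0, -u[0]],
              [-u[1], u[0], 0.0]])                    # skew-symmetric matrix [u]_x

print(np.isclose(np.linalg.norm(U, 2), np.linalg.norm(u)))   # True: ||[u]_x||_2 = ||u||
```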


APPENDIXE COMPUTATIONOFDEPTHRATIOS Basedon(75)and(7),thefollowingexpressioncanbeobtained: = + (E1) where ( ) isde nedin(7),and ( )R3and ( )R3arede nedas = = 1 Usingthefourcorrespondingreferencefeaturepoints,theexpressionin(E)can bewrittenas = = 33 (E2) where 33( )R isthe(assumedw.l.o.g.)positivethirdrowthirdcolumnelement of ( ) ,and ( )R3 3isde nedas ( ) 33( ) .Basedon(E),twelvelinear equationscanbeobtainedforthefourcorrespondingreferencefeaturepoints.Aset oftwelvelinearequationscanthenbedevelopedtosolvefor ( ) 33( ) and ( ) Todeterminethescalar 33( ) ,thefollowingequationcanbeused: 33= + (E3) providedthat ( ) isobtainedusingthefourvanishingpoints,where = 11 12 13 21 22 23 31 32 33 = 11 12 13 21 22 23 31 321 = 1 2 3 = 1 2 3 144


145 Tothisend,let 1= 1 12= 1 23= 1 34= 2 15= 2 26= 2 37= 3 18= 3 29= 3 31= 2 12= 3 1 thenthefollowingrelationshipscanbeobtained: 2= 115= 148= 173= 216= 249= 27 Theelementsofthematrix 33( ) ( ) in(E)canberewrittenas 11+ 1= 33 11 12+ 11= 33 12(E4) 13+ 21= 33 13 21+ 4= 33 21 22+ 14= 33 22(E5) 23+ 24= 33 23 31+ 7= 33 31 32+ 17= 33 32(E6) 33+ 27= 33 Theexpressionsin(E4)canbecombinedas 12+ 11= 12 11( 11+ 1) (E7) 13+ 21= 13 11( 11+ 1)


146 Similarly,theexpressionsin(E5)canbecombinedas 22+ 14= 22 21( 21+ 4) (E8) 23+ 24= 23 21( 21+ 4) Since ( ) ( ) {1 2}and 13( ) 23( ) 13( ) and 23( ) are known,(E7)and(E)providefourequationsthatcanbesolvedtodeterminethe fourunknowns(i.e., 1, 2, 1( ) 4( ) ).Aftersolvingforthesefourunknowns, (E)and(E)canbeusedtosolvefor 33( ) ,andthen ( ) canbeobtained. Given ( ) ,(E2)canbeusedtosolvefor ( ) .


APPENDIXF INEQUALITYDEVELOPMENT Property :Thereexisttwopositiveconstants and suchthatthescaling factor ( ) satis estheinequality ( ) (F1) Proof :Since =kk kk thesquareof ( ) isgivenby 2= ( )= .(F2) Basedonthefactthat isoffullrank,thesymmetricmatrix ispositive de nite.Hence,theRayleigh-Ritztheoremcanbeusedtoconcludethat min( ) max( ) (F3) where min( ) and max( ) denotetheminimalandmaximaleigenvaluesof ,respectively.From(F)and(F),itcanconcludedthat 1 max( )2= 1 min( ) s 1 max( )s 1 min( ) 147
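The bound can likewise be spot-checked numerically. In the Python sketch below the calibration error matrix and the vector are assumed values; the scaling factor computed as in (F-2) always falls between the two Rayleigh-Ritz bounds.

```python
import numpy as np

A_tld = np.array([[1.225, -0.011, 0.204],    # an assumed full-rank calibration error matrix
                  [0.0,    1.226, -0.100],
                  [0.0,    0.0,    1.000]])
qv = np.array([0.7, -0.3, 0.5])              # an arbitrary nonzero vector

beta = np.linalg.norm(qv) / np.linalg.norm(A_tld @ qv)   # scaling factor, cf. (F-2)
eigs = np.linalg.eigvalsh(A_tld.T @ A_tld)               # eigenvalues of A~^T A~
print(1.0 / np.sqrt(eigs.max()) <= beta <= 1.0 / np.sqrt(eigs.min()))   # True
```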


APPENDIXG LINEARPARAMETERIZATIONOFTRANSLATIONERRORSYSTEM Theregressorvectors 1()R1 202()R1 153()R1 5andthe constantunknownparametervectors 1R20 12R15 13R5 1aregivenby 1= 3 2 3 2 1 1 1 1 1 3 32 2 2 2 3 3 3 2 2 2 1= 1 11 12 11 13 11 1 12 111323 112213 112223 11221 112223 221 221 2 1112 2 112212231322 2 112212 2 112 12 2 11222 1223121322 2 112213 2 111213 2 11221213232 1322 2 11222= 3 3 1 12 12 1 3 3 32 2 2 2 2 2 2 2= 1 22 23 22 1 2 23 2 2223 2 221 2 221 1112 112212231322 11221 112212 112 2212231322 112 2223 11221223 112 22122 23132223 112 223= 2 2 2 1 1 3= 1 1112 112212231322 11221 2223 22 148


REFERENCES [1]S.Hutchinson,G.Hager,andP.Corke,Atutorialonvisualservocontrol, IEEETrans.Robot.Automat. ,vol.12,no.5,pp.651,1996. [2]F.ChaumetteandS.Hutchinson,VisualservocontrolpartI:Basicapproaches, IEEERobot.Automat.Mag. ,vol.13,no.4,pp.82,2006. [3],VisualservocontrolpartII:Advancedapproaches, IEEERobot. Automat.Mag. ,vol.14,no.1,pp.109,2006. [4]G.Hu,N.Gans,andW.E.Dixon, ComplexityandNonlinearityinAutonomousRobotics,EncyclopediaofComplexityandSystemScience Springer,toappear2008,ch.AdaptiveVisualServoControl. [5]N.J.CowanandD.Koditschek,Planarimage-basedvisualservoingasa navigationproblem,in Proc.IEEEInt.Conf.Robot.Automat. ,2000,pp. 1720. [6]N.J.Cowan,J.D.Weingarten,andD.E.Koditschek,Visualservoingvia navigationfunctions, IEEETrans.Robot.Automat. ,vol.18,pp.521, 2002. [7]Y.MezouarandF.Chaumette,Pathplanningforrobustimage-based control, IEEETrans.Robot.Automat. ,vol.18,no.4,pp.534,2002. [8]E.RimonandD.E.Koditschek,Exactrobotnavigationusingarti cial potentialfunctions, IEEETrans.Robot.Automat. ,vol.8,pp.501,1992. [9]A.Ruf,M.Tonko,R.Horaud,andH.-H.Nagel,Visualtrackingofanende ectorbyadaptivekinematicprediction,in Proc.IEEE/RSJInt.Conf. Intell.RobotsSyst. ,1997,pp.893. [10]J.Chen,D.M.Dawson,W.E.Dixon,andA.Behal,Adaptivehomographybasedvisualservotrackingfora xedcameracon gurationwithacamera-inhandextension, IEEETrans.Contr.Syst.Technol. ,vol.13,no.5,pp.814 825,2005. [11]M.SpongandM.Vidyasagar, RobotDynamicsandControl .NewYork: JohnWiley&SonsInc.,1989. [12]O.Faugeras, Three-DimensionalComputerVision:AGeometricViewpoint Cambridge,Massachusetts:MITPress,1993. 149


150 [13]R.HartleyandA.Zisserman, MultipleViewGeometryinComputerVision NewYork:NY:CambridgeUniversityPress,2000. [14]G.Hu,W.E.Dixon,S.Gupta,andN.Fitz-coy,Aquaternionformulation forhomography-basedvisualservocontrol,in Proc.IEEEInt.Conf.Robot. Automat. ,2006,pp.2391. [15]M.Shuster,Asurveyofattituderepresentations, J.AstronauticalSciences vol.41,no.4,pp.439,1993. [16]G.Hu,S.Mehta,N.Gans,andW.E.Dixon,Daisychainingbasedvisual servocontrolpartI:Adaptivequaternion-basedtrackingcontrol,in IEEE Multi-Conf.Syst.Control ,2007,pp.1474. [17]G.Hu,N.Gans,S.Mehta,andW.E.Dixon,Daisychainingbasedvisual servocontrolpartII:Extensions,applicationsandopenproblems,in IEEE Multi-Conf.Syst.Control ,2007,pp.729. [18]S.Mehta,W.E.Dixon,D.MacArthur,andC.D.Crane,Visualservo controlofanunmannedgroundvehicleviaamovingairbornemonocular camera,in Proc.AmericanControlConf. ,2006,pp.5276. [19]S.Mehta,G.Hu,N.Gans,andW.E.Dixon,Adaptivevision-based collaborativetrackingcontrolofanugvviaamovingairbornecamera:A daisychainingapproach,in Proc.IEEEConf.DecisionControl ,2006,pp. 3867. [20]E.Malis,F.Chaumette,andS.Bodet,21/2Dvisualservoing, IEEE Trans.Robot.Automat. ,vol.15,no.2,pp.238250,1999. [21]E.MalisandF.Chaumette,Theoreticalimprovementsinthestability analysisofanewclassofmodel-freevisualservoingmethods, IEEETrans. Robot.Automat. ,vol.18,no.2,pp.176186,2002. [22]Y.Fang,W.E.Dixon,D.M.Dawson,andP.Chawda,Homography-based visualservoingofwheeledmobilerobots, IEEETrans.Syst.,Man,Cybern.PartB:Cybern. ,vol.35,no.5,pp.1041,2005. [23]W.E.Dixon,A.Behal,D.M.Dawson,andS.Nagarkatti, NonlinearControl ofEngineeringSystems:ALyapunov-BasedApproach .BirkhuserBoston, 2003. [24]G.Hu,S.Gupta,N.Fitz-coy,andW.E.Dixon,Lyapunov-basedvisual servotrackingcontrolviaaquaternionformulation,in Proc.IEEEConf. DecisionControl ,2006,pp.3861. [25]P.CorkeandS.Hutchinson,Anewpartitionedapproachtoimage-based visualservocontrol, IEEETrans.Robot.Automat. ,vol.17,no.4,pp. 507,2001.


151 [26]G.Chesi,K.Hashimoto,D.Prattichizzo,andA.Vicino,Keepingfeaturesin the eldofviewineye-in-handvisualservoing:Aswitchingapproach, IEEE Trans.Robot. ,vol.20,no.5,pp.908,2004. [27]Y.MezouarandF.Chaumette,Optimalcameratrajectorywithimagebased control, Int.J.Robot.Res. ,vol.22,no.10,pp.781,2003. [28]H.ZhangandJ.Ostrowski,Visualmotionplanningformobilerobots, IEEETrans.Robot.Automat. ,vol.18,no.2,pp.199,2002. [29]J.Chen,D.M.Dawson,W.E.Dixon,andV.Chitrakaran,Navigation functionbasedvisualservocontrol, Automatica ,vol.43,pp.1165, 2007,toappear. [30]S.BenhimaneandE.Malis,Vision-basedcontrolwithrespecttoplanar andnonplanarobjectsusingazoomingcamera,in Proc.IEEEInt.Conf. AdvancedRobotics ,2003,pp.991. [31]N.Garcka-Aracil,E.Malis,R.Aracil-Santonja,andC.Perez-Vidal,Continuousvisualservoingdespitethechangesofvisibilityinimagefeatures, IEEE Trans.Robot. ,vol.21,no.6,pp.1214,2005. [32]E.HechtandA.Zadac, Optics ,3rded.Addison-Wesley,1997. [33]C.GeyerandK.Daniilidis,Catadioptricprojectivegeometry, Int.J. ComputerVision ,vol.45,no.3,pp.223,2001. [34]S.BakerandS.Nayar,Atheoryofsingle-viewpointcatadioptricimage formation, Int.J.ComputerVision ,vol.35,no.2,pp.175,1999. [35]J.BarretoandH.Araujo,Geometricpropertiesofcentralcatadioptricline images,in Proc.EuropeanConf.ComputerVision ,2002,pp.237. [36]C.GeyerandK.Daniilidis,Aunifyingtheoryforcentralpanoramicsystems andpracticalimplications,in Proc.EuropeanConf.ComputerVision ,2000, pp.445. [37]R.Y.Tsai,Aversatilecameracalibrationtechniqueforhigh-accuracy3D machinevisionmetrologyusingo -the-shelfTVcamerasandlenses, IEEE J.Robot.Automat. ,vol.3,no.4,pp.323,1987. [38], Synopsisofrecentprogressoncameracalibrationfor3Dmachine vision .Cambridge,MA,USA:MITPress,1989. [39]L.Robert,Cameracalibrationwithoutfeatureextraction, ComputerVision andImageUnderstanding ,vol.63,no.2,pp.314,1996.


152 [40]J.HeikkilaandO.Silven,Afour-stepcameracalibrationprocedurewith implicitimagecorrection,in Proc.IEEEInt.Conf.ComputerVision PatternRecognition ,1997,pp.1106. [41]T.A.ClarkeandJ.G.Fryer,Thedevelopmentofcameracalibration methodsandmodels, PhotogrammetricRecord ,vol.16,no.91,pp.5166, 1998. [42]P.F.SturmandS.J.Maybank,Onplane-basedcameracalibration:A generalalgorithm,singularities,applications,in Proc.IEEEInt.Conf. ComputerVisionPatternRecognition ,1999,pp.432. [43]Z.Zhang,Flexiblecameracalibrationbyviewingaplanefromunknown orientations,in Proc.IEEEInt.Conf.ComputerVision ,1999,pp.666. [44]L.E.Weiss,A.C.Sanderson,andC.P.Neuman,Dynamicsensor-based controlofrobotswithvisualfeedback, IEEEJ.Robot.andAutomat. ,vol. RA-3,no.5,pp.404,1987. [45]J.FeddemaandO.Mitchell,Vision-guidedservoingwithfeature-based trajectorygeneration, IEEETrans.Robot.Automat. ,vol.5,no.5,pp. 691,1989. [46]K.Hashimoto,T.Kimoto,T.Ebine,andH.Kimura,Manipulatorcontrol withimage-basedvisualservo,in Proc.IEEEInt.Conf.Robot.Automat. 1991,pp.2267. [47]B.Espiau,F.Chaumette,andP.Rives,Anewapproachtovisualservoing inrobotics, IEEETrans.Robot.Automat. ,vol.8,no.3,pp.313,1992. [48]F.Chaumette,Potentialproblemsofstabilityandconvergenceinimagebasedandposition-basedvisualservoing,in TheCon uenceofVisionand Control ,ser.LNCISSeries,D.Kriegman,G.Hager,andA.Morse,Eds. Berlin,Germany:Springer-Ve rlag,1998,vol.237,pp.66. [49]B.Espiau,E ectofcameracalibrationerrorsonvisualservoinginrobotics, in The3rdInt.Symp.ExperimentalRobotics ,1993,pp.182. [50]W.J.Wilson,C.W.Hulls,andG.S.Bell,Relativeend-e ectorcontrolusingcartesianpositionbasedvisualservoing, IEEETrans.Robot.Automat. vol.12,no.5,pp.684,1996. [51]P.Martinet,J.Gallice,andD.Khadraoui,Visionbasedcontrollawusing 3Dvisualfeatures,in Proc.WorldAutomationCongress ,vol.3,1996,pp. 497. [52]N.Daucher,M.Dhome,J.Laprest,andG.Rives,Speedcommandofa roboticsystembymonocularposeestimate,in Proc.IEEE/RSJInt.Conf. Intell.RobotsSyst. ,1997,pp.55.

[53] K. Hashimoto, "A review on vision-based control of robot manipulators," Advanced Robotics, vol. 17, no. 10, pp. 969, 2003.
[54] K. Deguchi, "Optimal motion control for image-based visual servoing by decoupling translation and rotation," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 1998, pp. 705.
[55] E. Malis and F. Chaumette, "2 1/2 D visual servoing with respect to unknown objects through a new estimation scheme of camera displacement," Int. J. Computer Vision, vol. 37, no. 1, pp. 79, 2000.
[56] F. Chaumette and E. Malis, "2 1/2 D visual servoing: a possible solution to improve image-based and position-based visual servoings," in Proc. IEEE Int. Conf. Robot. Automat., 2000, pp. 630.
[57] J. Chen, W. E. Dixon, D. M. Dawson, and M. McIntyre, "Homography-based visual servo tracking control of a wheeled mobile robot," IEEE Trans. Robot., vol. 22, no. 2, pp. 406, 2006.
[58] N. Gans and S. Hutchinson, "Stable visual servoing through hybrid switched-system control," IEEE Trans. Robot., vol. 23, no. 3, pp. 530, 2007.
[59] E. Malis, "Visual servoing invariant to changes in camera intrinsic parameters," in Proc. IEEE Int. Conf. Computer Vision, 2001, pp. 704.
[60] Y. Y. Schechner and S. Nayar, "Generalized mosaicing: High dynamic range in a wide field of view," Int. J. Computer Vision, vol. 53, no. 3, pp. 245, 2003.
[61] S. Hsu, H. S. Sawhney, and R. Kumar, "Automated mosaics via topology inference," IEEE Computer Graphics and Application, vol. 22, no. 2, pp. 44, 2002.
[62] M. Irani, P. Anandan, J. Bergen, R. Kumar, and S. Hsu, "Efficient representations of video sequences and their application," Signal Processing: Image Communication, vol. 8, pp. 327–351, 1996.
[63] A. Smolic and T. Wiegand, "High-resolution image mosaicing," in Proc. IEEE Int. Conf. Image Processing, 2001, pp. 872.
[64] R. Swaminathan and S. Nayar, "Non-metric calibration of wide-angle lenses and polycameras," in Proc. IEEE Int. Conf. Computer Vision Pattern Recognition, 2000, pp. 413.
[65] D. Burschka and G. Hager, "Vision-based control of mobile robots," in Proc. IEEE Int. Conf. Robot. Automat., 2001, pp. 1707.

[66] Y. Mezouar, H. H. Abdelkader, P. Martinet, and F. Chaumette, "Central catadioptric visual servoing from 3D straight lines," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2004, pp. 343.
[67] H. Hadj-Abdelkader, Y. Mezouar, N. Andreff, and P. Martinet, "2 1/2 D visual servoing with central catadioptric cameras," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2005, pp. 3572.
[68] G. Mariottini, E. Alunno, J. Piazzi, and D. Prattichizzo, "Epipole-based visual servoing with central catadioptric camera," in Proc. IEEE Int. Conf. Robot. Automat., 2005, pp. 3516.
[69] G. Mariottini, D. Prattichizzo, and G. Oriolo, "Image-based visual servoing for nonholonomic mobile robots with central catadioptric camera," in Proc. IEEE Int. Conf. Robot. Automat., 2006, pp. 538.
[70] S. Benhimane and E. Malis, "A new approach to vision-based robot control with omni-directional cameras," in Proc. IEEE Int. Conf. Robot. Automat., 2006, pp. 526.
[71] R. Tatsambon and F. Chaumette, "Visual servoing from spheres using a spherical projection model," in Proc. IEEE Int. Conf. Robot. Automat., 2007, pp. 2080.
[72] R. Tatsambon and F. Chaumette, "Visual servoing from spheres with paracatadioptric cameras," in Int. Conf. Advanced Robotics, August 2007.
[73] J. P. Barreto, F. Martin, and R. Horaud, "Visual servoing/tracking using central catadioptric images," in Proc. Int. Symp. Experimental Robotics, 2002, pp. 863.
[74] K. Hosoda and M. Asada, "Versatile visual servoing without knowledge of true jacobian," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 1994, pp. 186.
[75] M. Jagersand, O. Fuentes, and R. Nelson, "Experimental evaluation of uncalibrated visual servoing for precision manipulation," in Proc. IEEE Int. Conf. Robot. Automat., 1997, pp. 2874.
[76] M. Shahamiri and M. Jagersand, "Uncalibrated visual servoing using a biased newton method for on-line singularity detection and avoidance," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2005, pp. 3953.
[77] J. A. Piepmeier and H. Lipkin, "Uncalibrated eye-in-hand visual servoing," Int. J. Robot. Res., vol. 22, pp. 805, 2003.
[78] J. A. Piepmeier, G. V. McMurray, and H. Lipkin, "Uncalibrated dynamic visual servoing," IEEE Trans. Robot. Automat., vol. 24, no. 3, pp. 143, 2004.

[79] R. Kelly, "Robust asymptotically stable visual servoing of planar manipulator," IEEE Trans. Robot. Automat., vol. 12, no. 5, pp. 759, 1996.
[80] B. Bishop and M. W. Spong, "Adaptive calibration and control of 2D monocular visual servo system," in Proc. IFAC Symp. Robot Control, 1997, pp. 525.
[81] L. Hsu and P. L. S. Aquino, "Adaptive visual tracking with uncertain manipulator dynamics and uncalibrated camera," in Proc. IEEE Conf. Decision Control, 1999, pp. 1248.
[82] C. J. Taylor and J. P. Ostrowski, "Robust vision-based pose control," in Proc. IEEE Int. Conf. Robot. Automat., 2000, pp. 2734.
[83] E. Zergeroglu, D. M. Dawson, M. de Queiroz, and A. Behal, "Vision-based nonlinear tracking controllers in the presence of parametric uncertainty," IEEE/ASME Trans. Mechatr., vol. 6, no. 3, pp. 322, 2001.
[84] W. E. Dixon, D. M. Dawson, E. Zergeroglu, and A. Behal, "Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system," IEEE Trans. Syst., Man, Cybern. - Part B: Cybern., vol. 31, no. 3, pp. 341, 2001.
[85] A. Astolfi, L. Hsu, M. Netto, and R. Ortega, "Two solutions to the adaptive visual servoing problem," IEEE Trans. Robot. Automat., vol. 18, no. 3, pp. 387, 2002.
[86] Y. Liu, H. Wang, C. Wang, and K. Lam, "Uncalibrated visual servoing of robots using a depth-independent interaction matrix," IEEE Trans. Robot., vol. 22, no. 4, pp. 804, 2006.
[87] Y. Fang, W. E. Dixon, D. M. Dawson, and J. Chen, "An exponential class of model-free visual servoing controllers in the presence of uncertain camera calibration," Int. J. Robot. Automat., vol. 21, 2006.
[88] G. Hu, N. Gans, and W. E. Dixon, "Quaternion-based visual servo control in the presence of camera calibration error," in IEEE Multi-Conf. Syst. Control, 2007, pp. 1492.
[89] J. Chen, A. Behal, D. M. Dawson, and W. E. Dixon, "Adaptive visual servoing in the presence of intrinsic calibration uncertainty," in Proc. IEEE Conf. Decision Control, 2003, pp. 5396.
[90] B. Boufama and R. Mohr, "Epipole and fundamental matrix estimation using virtual parallax," in Proc. IEEE Int. Conf. Computer Vision, 1995, pp. 1030.
[91] J. Shi and C. Tomasi, "Good features to track," in Proc. IEEE Int. Conf. Computer Vision Pattern Recognition, 1994, pp. 593.

[92] C. Tomasi and T. Kanade, "Detection and tracking of point features," Carnegie Mellon University, Tech. Rep., 1991.
[93] O. Faugeras and F. Lustman, "Motion and structure from motion in a piecewise planar environment," Int. J. Pattern Recognition and Artificial Intelligence, vol. 2, no. 3, pp. 485, 1988.
[94] Z. Zhang and A. R. Hanson, "Scaled euclidean 3D reconstruction based on externally uncalibrated cameras," in IEEE Symp. Computer Vision, 1995, pp. 37.
[95] Y. Fang, A. Behal, W. E. Dixon, and D. M. Dawson, "Adaptive 2.5D visual servoing of kinematically redundant robot manipulators," in Proc. IEEE Conf. Decision Control, 2002, pp. 2860.
[96] J. J. Slotine and W. Li, Applied Nonlinear Control. Englewood Cliffs, NJ: Prentice Hall, Inc., 1991.
[97] G. Bradski, "The OpenCV library," Dr. Dobb's Journal of Software Tools, vol. 25, pp. 120, 122, 2000.
[98] M. Galassi, J. Davies, J. Theiler, B. Gough, G. Jungman, M. Booth, and F. Rossi, GNU Scientific Library: Reference Manual, Network Theory Ltd., Bristol, UK, 2005.
[99] P. K. Allen, A. Timcenko, B. Yoshimi, and P. Michelman, "Automated tracking and grasping of a moving object with a robotic hand-eye system," IEEE Trans. Robot. Automat., vol. 9, no. 2, pp. 152, 1993.
[100] G. D. Hager, W.-C. Chang, and A. S. Morse, "Robot hand-eye coordination based on stereo vision," IEEE Contr. Syst. Mag., vol. 15, no. 1, pp. 30, 1995.
[101] S. Wijesoma, D. Wolfe, and R. Richards, "Eye-to-hand coordination for vision-guided robot control applications," Int. J. Robot. Res., vol. 12, no. 1, pp. 65, 1993.
[102] N. Gans, G. Hu, and W. E. Dixon, Complexity and Nonlinearity in Autonomous Robotics, Encyclopedia of Complexity and System Science. Springer, to appear 2008, ch. Image-Based State Estimation.
[103] A. Almansa, A. Desolneux, and S. Vamech, "Vanishing point detection without any a priori information," IEEE Trans. Pattern Anal. Machine Intell., vol. 25, no. 4, pp. 502, 2003.

BIOGRAPHICAL SKETCH
Guoqiang Hu received his bachelor's degree in automation engineering from the University of Science and Technology of China (USTC) in 2002, and he received his Master of Philosophy in Automation and Computer-Aided Engineering from The Chinese University of Hong Kong (CUHK) in 2004. Currently, he is pursuing his Ph.D. degree in the nonlinear control and robotics group at the University of Florida (UF) under the supervision of Dr. Warren Dixon.