An Augmented Error Criterion for Linear Adaptive Filtering

University of Florida Institutional Repository
4734fd10ddbfce309c8c26870194aa8c
043d6cc8a2acf46dda7a268cadedc4dd7c3d03b7
7867 F20110112_AAAXPX rao_y_Page_031thm.jpg
7e017db44be16655d46fae3f38a5b308
7ce33b9cbd3bc3ebc40d4ce776976edacfc1a8ad
87854 F20110112_AAAWMV rao_y_Page_016.jpg
ea52fc032b5b85ca23e9db06922b887d
af0f41a879e2b1dbbae6a85ef561554c0ef1c616
98063 F20110112_AAAWNJ rao_y_Page_033.jpg
a9b4b6ab74ace96f249f250ce15cba87
edcc52d31fb2f6d37db0de10272d5f5c9d5f12b0
6904 F20110112_AAAXQM rao_y_Page_040thm.jpg
4651fd0e5d07c0bf8b9d2f49146d90e0
d0d1f49517dcd4445565efc45a0765d756f4b570
28703 F20110112_AAAXPY rao_y_Page_031.QC.jpg
e41e355f19c4561d7cd76f90cd6aa2e8
313c8b970889b96f507d1cf825a493bd5502db03
113573 F20110112_AAAWMW rao_y_Page_017.jpg
1ca16190857f8b68add0d61ccf29889f
50ed79f939d599fda72bbd8a1e54415ed6e9084f
97346 F20110112_AAAWNK rao_y_Page_034.jpg
d9bb8c11ce2cc96b7ba823476911d832
222deee6119ea3f050ed467ee3610fd53aab97be
32089 F20110112_AAAXRB rao_y_Page_047.QC.jpg
9f9c68e96b4bd6642edb65eb861e6239
c5d6ee3fd7eefaeb6b6984b29c785a0a178ad25e
21540 F20110112_AAAXQN rao_y_Page_040.QC.jpg
f836d6423f234fbabfd9b69b08322967
3b8cec280c74421265f19a3c019ceba9e6bcd38c
8862 F20110112_AAAXPZ rao_y_Page_032thm.jpg
6df31852acabdf96cabcb14426d371c0
e532945eef7a58edd51fd7544bbb5d379e2e97d8
93164 F20110112_AAAWMX rao_y_Page_018.jpg
67fc177dfa55f018a55feb0a98e68f3d
499d0cecacb56869dc9ad2daba3a0cef96889ed5
92946 F20110112_AAAWNL rao_y_Page_035.jpg
31673a0ba30930da047c0ca19295e49e
d6478ad305ae6b93cabda01a45c1c89161cd8f33
8306 F20110112_AAAXRC rao_y_Page_048thm.jpg
f5463b50ece45e4ac60b13a4b3643f70
6a96806d05b2763a56fac30b99443ea54c1c1430
7151 F20110112_AAAXQO rao_y_Page_041thm.jpg
082270d05d2925764eecfbed152157fd
ebdb9babedeecc9068e37995523a4503275e21c0
78094 F20110112_AAAWMY rao_y_Page_019.jpg
ee9537ade04bf094259d9f62e279a5d1
6dcd6f08ce48fd5e808499238459458737c1e981
110519 F20110112_AAAWOA rao_y_Page_054.jpg
f4dce35a9c1712a698f3678cb1ecfd29
2cb03be072af3d2af336bdcbbefa5bee361663a6
90934 F20110112_AAAWNM rao_y_Page_037.jpg
f4939d146a7789904e836976b288513d
8ef5a925274381487cfdf45714564109bf87aa89
29978 F20110112_AAAXRD rao_y_Page_048.QC.jpg
3df0055124855a1cc927273db9e1f432
9a43c7447427d754c1409b615b63b3162e08be21
25938 F20110112_AAAXQP rao_y_Page_041.QC.jpg
f21699d6838a13fe21b1917d3bffa3fb
18ecabf1112f57d25126ea338024bb83a3da91e8
81772 F20110112_AAAWMZ rao_y_Page_020.jpg
9ff20941f2672b9ecf3427aea5d5ef73
36d2681ccf276112bcc40f15a5bd030a6746b3a5
97563 F20110112_AAAWOB rao_y_Page_055.jpg
f99d978e18b05d11aa5fdcc99a9d3074
e5f683568c09c82d1f5ea3886f1bec3bbda15076
91237 F20110112_AAAWNN rao_y_Page_038.jpg
5468eb034eb66f8479194f2f5624fe05
c6a99fc9a713d7276282439c1662168f597e7994
7002 F20110112_AAAXRE rao_y_Page_049thm.jpg
907e8ea0f6610cf3e01999915c1f9f9a
25a2a5b27b5c95e84acdae44b36c5fd2d6d00a98
8397 F20110112_AAAXQQ rao_y_Page_042thm.jpg
52ab8a7ab71d00fa7f6d92f031c80d57
dc2159335c5af1dc6dc4871ad05b963b8698a9dd
96091 F20110112_AAAWOC rao_y_Page_056.jpg
e3323c02fd3a0a3868e606d52c0f32b2
e147d6f9547819cbe6f42aade62d2bedec4cfdb6
91747 F20110112_AAAWNO rao_y_Page_039.jpg
db2e5e46543db3d6a451c25d9da7552e
276d8447c8bae54c85439e3432073f51f0e02764
25987 F20110112_AAAXRF rao_y_Page_049.QC.jpg
891c4fe5e66fda29b73d5e53072bb2a2
46bbeb09133fce126ead959afb7c7da57ad0c762
32680 F20110112_AAAXQR rao_y_Page_042.QC.jpg
464503a4aa45955a726ffeddf3eefde6
4d7dcfb4c8ec69136afa4aa1d22e5fefb14a47a5
90043 F20110112_AAAWOD rao_y_Page_057.jpg
8c54e3dd1ec01153557c15444e3441cd
013fcd79c3dd4d68ee5aa005e2b647310f03fe93
75128 F20110112_AAAWNP rao_y_Page_040.jpg
2b2ed268d888413c64018f528063f86b
d37b76a011c438a9e6b0437d1741bc582c845e26
6439 F20110112_AAAXRG rao_y_Page_050thm.jpg
c10afaac2e18625e5633adb06f1d50c5
5dcad5f56ea80122064acb45e579ce6693030515
8240 F20110112_AAAXQS rao_y_Page_043thm.jpg
908162162d36888b8e252582b167a077
c1c1693ff8af2f83bf70577579dec65711256ae6
77336 F20110112_AAAWOE rao_y_Page_058.jpg
172ff59e8ab56d2fb7014a0e7bdceba0
6ea84f6b839500622d2c481fb354b98ef1f3e93b
79274 F20110112_AAAWNQ rao_y_Page_041.jpg
508374ac32ae47d179a958be8fdc9bc8
d4b90df374599770b5b2329d7e2a11806a030e58
24421 F20110112_AAAXRH rao_y_Page_050.QC.jpg
976176d6549ccea18201654e57c93f95
77ecea16ce32c130a8ce59ef589f8c38a3c0c305
32665 F20110112_AAAXQT rao_y_Page_043.QC.jpg
f13ad109a8381586a26a5cc736a09a98
af8674daa437c1a40be281553cca6c780a32f364
78615 F20110112_AAAWOF rao_y_Page_059.jpg
979074dab4547d0c21bd18a41c5bbfbb
9324ea0b59b6895e6b7821c355bfb9220a621758
94023 F20110112_AAAWNR rao_y_Page_042.jpg
442c0ccda49a737f3691f3e41f9d6aa8
d06c5db01893607ce9dde0fec794d667ab6b7bc5
9188 F20110112_AAAXRI rao_y_Page_051thm.jpg
342f8381da4fb46093e0eb4d946e9eaa
b06eb5e1d7308b5113c175f082ae834c9a54bbea
8348 F20110112_AAAXQU rao_y_Page_044thm.jpg
e2769c9e3750627d2b00c4de1c07ec04
5797199737104f3669ff7332bce659ab016f6d6b
83500 F20110112_AAAWOG rao_y_Page_060.jpg
3d228b0af0baee08e6acc07621a0ea37
48865f5b33cc226e44cd91ee7f67fe8bbea1cf9e
93274 F20110112_AAAWNS rao_y_Page_044.jpg
adce856617c9241acf01635daf62bf33
166e35ee59bf0d08a140aa34b39440f77a836f52
7759 F20110112_AAAXRJ rao_y_Page_052thm.jpg
d42e90e78c7961f03501dbee0ad043ef
faa11645620c4a16420c6c6327f4273a40d328c0
31334 F20110112_AAAXQV rao_y_Page_044.QC.jpg
389ac4358a0986c27a87551d2df9a950
3edf3eec897044e9cffcc911ffe24f5f21679227
68410 F20110112_AAAWOH rao_y_Page_061.jpg
130a65f93f27f194401fe60be59da83e
20078b8aee48568256ce6899076a14b3111bf7d7
101350 F20110112_AAAWNT rao_y_Page_045.jpg
848c5e8894b7e8ad92f2d72fb47dc900
30c75f910cc67235d86284637eef0968bae49f48
7279 F20110112_AAAXRK rao_y_Page_053thm.jpg
eb619076b44e07cb0f565721bad0c384
981e05c96d920326c6c07b0ad42cc5f8378dc03b
8923 F20110112_AAAXQW rao_y_Page_045thm.jpg
9dcdae1464b2ee31091e5eea119e3611
2d60520204a375b9b81dbb66aaf3c31e35ffdb6b
94055 F20110112_AAAWOI rao_y_Page_064.jpg
74a5b7bfeb9c9eb9110d1b5b5364bb35
80c9c212d9424f0da841c003fd6af36de0239ddf
97165 F20110112_AAAWNU rao_y_Page_047.jpg
97f1a53d1bdf7221d7b5c238a7a85bdb
d0757462943f1d79ffa3ac444c2cbdcc8b8a3a74
8463 F20110112_AAAXSA rao_y_Page_065thm.jpg
5301976580a25dac56371c0cd5ab7f9d
97d49a5db0e07f359e012502241393383f60dbc2
27561 F20110112_AAAXRL rao_y_Page_053.QC.jpg
0eff8484f30450b9dfadc43582077904
89866023c1ef2d0e3c057688c11db494cdef9b6d
34788 F20110112_AAAXQX rao_y_Page_045.QC.jpg
5ba1d0ff68d4cb47294c9bee9e266968
f94006241761cf7707004be745331f0b067fba5a
97261 F20110112_AAAWOJ rao_y_Page_066.jpg
cae2bf9145d067b433c4c105f924a051
4fe244c99e1511559c06917059e9f55dbdfed52a
91207 F20110112_AAAWNV rao_y_Page_048.jpg
a12e0092ff10ab2fdccdaf14cc251244
2954f5317e98f1674f1ce97b4f9aff5a00cd707a
33723 F20110112_AAAXSB rao_y_Page_065.QC.jpg
a040d3da74091be7b3f90294811b93ec
6e59c67b7356f460510c7c913f0b8302e460a053
36136 F20110112_AAAXRM rao_y_Page_054.QC.jpg
836a13a42d01819077429309b927c38a
9f848b7fd2957ca8c88e7b5fd67e32256f8b47f7
8936 F20110112_AAAXQY rao_y_Page_046thm.jpg
19ace54e50a7a1a54de363f8ba5e5b0e
b672e9c369456d6dfb77c384a2b07699b87f19ea
96567 F20110112_AAAWOK rao_y_Page_067.jpg
e1bae2f9ba352408992ff998fc1a760c
4aa0cee91ec4287a1ff4b455e5980268e77f8c7b
72674 F20110112_AAAWNW rao_y_Page_050.jpg
b5b7f6225fa551c9080463f028bdf73a
22438fc627d1f371c603893f7dbe24ca83b5705e
8000 F20110112_AAAXRN rao_y_Page_056thm.jpg
881a6ce11db5d056e9f9e48df8c42a45
47b93117577d6327485db01bc8b07c1873ece66e
34988 F20110112_AAAXQZ rao_y_Page_046.QC.jpg
efa89342a75d457c215c7c53c5e28099
2c0611ccffc720efcf6f719b6192ee7635d3712b
89330 F20110112_AAAWOL rao_y_Page_068.jpg
21b699da2463206389b397297b948cd2
b0ecfa7333fd2c9811e428c6d3e1791a07d1123e
110116 F20110112_AAAWNX rao_y_Page_051.jpg
a31415d06dc96e061dd7b3e0ee95a481
e72cff57e96fb5d6ca590b59272b0d1e1f17c44d
8396 F20110112_AAAXSC rao_y_Page_066thm.jpg
fd565a3f872d5a7ab6e8743bd5a8c7fc
0899c0e86ad8ded1173701c4751034ff8dbc28b5
31616 F20110112_AAAXRO rao_y_Page_056.QC.jpg
fd8cabfb32042ef118a6dd26b58113de
eb1af18cfe2ec993680c13d96be725c517e73953
77206 F20110112_AAAWPA rao_y_Page_085.jpg
ce416110e6a494264b9abc8a728e6e87
315735fdcca045557b5ef58bccbae87751cf7f4d
78887 F20110112_AAAWOM rao_y_Page_069.jpg
ccb6e94399e0e86f1bbace020f0a11d5
d3256d2d077a0102fbe010241aa7e2f060362bef
72889 F20110112_AAAWNY rao_y_Page_052.jpg
666ced6970388b22436856ab0faa9b26
ef73735d3ef3b60ad68140db7077b7468223c8e6
32784 F20110112_AAAXSD rao_y_Page_066.QC.jpg
f82c4d91ef6a63e0c46a05aa0195bfc0
4284bc7c6f962500f6d632d2daa3e39106a59f39
8136 F20110112_AAAXRP rao_y_Page_057thm.jpg
c6bcd39dcb474eaeb32c24edb1de1e78
8b6aad0a443aa2b5d207de562d0a98a1a22393a0
88265 F20110112_AAAWPB rao_y_Page_087.jpg
e3cbf0bf4bbc64c169cfac86ece0689a
1d67029df6b776bac7e9dc6f0118d39de6d9d702
88000 F20110112_AAAWON rao_y_Page_071.jpg
3fd51279004ff9b04f24fe6f6d28347b
1ec9c9f70d6a1ffb7f77df6d4ea733d49d7e15fb
83773 F20110112_AAAWNZ rao_y_Page_053.jpg
408dc09ae847db0ce736ffa109740d73
844be15f192dbb6c072e2176bbda10cacd144df2
32542 F20110112_AAAXSE rao_y_Page_067.QC.jpg
74afa42cc231cde0447065d4abcc28e6
b263da76ab948cb0f907fa30dcb066436f7f1855
30977 F20110112_AAAXRQ rao_y_Page_057.QC.jpg
d08afc3fbb8305991764baa042bfd23d
c76cb2d87d07afae781efaa66290f5c2149c8531
102174 F20110112_AAAWPC rao_y_Page_089.jpg
23223f3fcb5bdb8eb3dc3c373fac0381
8a706037acc975e788c36d5dd1f8f160e24df64d
84244 F20110112_AAAWOO rao_y_Page_072.jpg
80f2cf9f5023ed41476955201d45d961
f2f448860d9f3b8b315a48ad5474b6291af7cf40
29468 F20110112_AAAXSF rao_y_Page_068.QC.jpg
f11efaf9b296eb26ee29d45c2691338f
77bd4a4d0a43ace946be255933a5cd1217dc519d
22140 F20110112_AAAXRR rao_y_Page_058.QC.jpg
d3dfc2e37bb6fcbbbd55401aa1c1e128
414d9ec1e8beb3bd969034e5b05681d447519a4e
88475 F20110112_AAAWPD rao_y_Page_090.jpg
eb7b0f08f46b3db406c57970ea878081
682f898e48c8c8e91de1dde1911e61f2e62e4516
79641 F20110112_AAAWOP rao_y_Page_073.jpg
1cc0e2e46e03906671eb2aef1726efd5
2d321cf6bc490271945599f7d7adb56d3a57c810
25489 F20110112_AAAXSG rao_y_Page_069.QC.jpg
30a4dfb4f836241be273fa83998b5ac1
f1677e70a95459f92ffbe508d02a08ef10b739d6
7179 F20110112_AAAXRS rao_y_Page_060thm.jpg
f239297f1e4f56b9b492405189d515ec
284d2a37354a95e96020da65b9945de3758e633e
77784 F20110112_AAAWPE rao_y_Page_092.jpg
7b875053d5351b8a02e0faeb1453c01b
8ac689c14c3be69b212eb34605641d8d26f19c90
83120 F20110112_AAAWOQ rao_y_Page_074.jpg
942d45e608640de227a4c913661272ca
2b31c72752130f00a6c9a7b0625437cfb4528d7e
7689 F20110112_AAAXSH rao_y_Page_070thm.jpg
7ab588912ecb9f28a19796cc7e8788cc
7e75f242db1826c4d5830e35f905d390d4d8dfaa
27877 F20110112_AAAXRT rao_y_Page_060.QC.jpg
849b04b665494aaf35a48bfb052f2ae8
467cced44bde2d911e94c51b388ff9e42d656d69
102857 F20110112_AAAWPF rao_y_Page_093.jpg
419e0d3312ea5c98b592d45d709a7c1b
110a49adf597479a7db5a66a9bbb599bc89c8651
86668 F20110112_AAAWOR rao_y_Page_076.jpg
d17a9136f40ff40146ef720c3750c5fd
dbefd9dd8b27da3678c1491d56af2dec8864226b
28365 F20110112_AAAXSI rao_y_Page_070.QC.jpg
8a6753e50840191cc8fbaecd0a443699
324350c8ccbf3ebb72a83c3368be9847cb0dcf8b
6391 F20110112_AAAXRU rao_y_Page_061thm.jpg
48100aa29745f9c0c1967aeb4d6ebe6e
898fc10f6aa22c0a726846571fba66b011764124
101391 F20110112_AAAWPG rao_y_Page_094.jpg
ae7c040b44a29ec7cde4e32a3f4cb4a4
4af6d61833ae25450dbbd24974a9c817ce094545
70490 F20110112_AAAWOS rao_y_Page_077.jpg
c5e2ee08f60aa04a7dab6b9f64def458
b2f6e61ee7a539ba3ac8d86ddd1ee29791e0d5c0
28662 F20110112_AAAXSJ rao_y_Page_071.QC.jpg
4012cb491900fdfc94d9b965c79d50c6
8a759ef13953edfaf69e66aa505d66bd5659f198
7147 F20110112_AAAXRV rao_y_Page_062thm.jpg
bd69c01587a5ad095675af54185c8548
92503a1450e8c660385c9f3fa108cb594501cde8
101180 F20110112_AAAWPH rao_y_Page_095.jpg
359df5e57f15a779f24e17896eab670c
a0f90a1451dce3a8a5392f1839dc71426f57266d
86478 F20110112_AAAWOT rao_y_Page_078.jpg
b0fab17f571676125733022bedc461c5
bed632e1920031cd42c2c99c24e2d273089e0ac5
7355 F20110112_AAAXSK rao_y_Page_072thm.jpg
4461a5a4ae16b6469c1f6dac7e296a78
050e98acab7d46004a0ffdf6a57b08e1cd7d56d0
8860 F20110112_AAAXRW rao_y_Page_063thm.jpg
8ce666a5b8f4623f1a852e56922b4b54
c4b2aa9aea535b803a01d503a991d3af0919da1d
106252 F20110112_AAAWPI rao_y_Page_098.jpg
0afeaa0de70c18f27877eef9468cac62
223c1621047a539a30fc88a8037ea0d9c8bd22a6
90835 F20110112_AAAWOU rao_y_Page_079.jpg
5f33e43cbaafe8222fdf645ab620c960
b329698a69a4739f7df70b35da35a89d8e046eda
7464 F20110112_AAAXTA rao_y_Page_081thm.jpg
d3bb286ad15dbb6d13c98d076ab62323
c1b93d40ea9d3968ad2ef66132a0740e5e46a125
6934 F20110112_AAAXSL rao_y_Page_073thm.jpg
b395e014412657f14e5013738e7d3c33
bc971960681234b2fc0e41dc749d7014b1538eb3
34176 F20110112_AAAXRX rao_y_Page_063.QC.jpg
3f07904e84a621a0ceeb90ae6adc084d
83fd3cda588a5e403ef005213305838f18dfee8d
39344 F20110112_AAAWPJ rao_y_Page_099.jpg
03a69bd692713de2aef4b7111045bf2c
84f7ad453300c75e094887767d97639e684ee3d0
79263 F20110112_AAAWOV rao_y_Page_080.jpg
33716d1df37e1de010e8630f7e4eedf9
3fdfa051a805505706ef9acf6995c0e6c8393287
24834 F20110112_AAAXTB rao_y_Page_081.QC.jpg
cabcb6e5240d12ea07e03c47ecc274d4
cec9532742ee67bca15511db239778f3ba7aa75c
26248 F20110112_AAAXSM rao_y_Page_073.QC.jpg
8afd621e4cebcdc240ad94d94c354c8f
b33d866aea97369596cbd7e25a2f5eaa07c2a61e
8407 F20110112_AAAXRY rao_y_Page_064thm.jpg
ad8d295da46da1005718d4c714dcd24e
be3fd27b14b3eaa595d6b5ab3bcf6688b94d11f4
91505 F20110112_AAAWPK rao_y_Page_100.jpg
537dbc2931dc03d62382d0cfba181064
932e7a9a1b52b6fd2f10fdd4d33d90d9a567cf65
84881 F20110112_AAAWOW rao_y_Page_081.jpg
a5ada873968f8d034b14901837373583
a71c1238665703f8e6ed3d309339b2236825cca5
29171 F20110112_AAAXTC rao_y_Page_082.QC.jpg
a3b6cfda30d1cb1633fbfa6783a070e9
41f328c047612819a5bad57dec452bca999c09f1
7273 F20110112_AAAXSN rao_y_Page_074thm.jpg
95d57c0b15c4303ab96d2f8aa0c0f3a4
0db0ea7e2846957e21c9b69cb1f45f52e2917c7d
31488 F20110112_AAAXRZ rao_y_Page_064.QC.jpg
100d10fc2ca503d49734a440ffa3b384
9219efd42d7152768c6541b3f60216b591a9114a
106984 F20110112_AAAWPL rao_y_Page_101.jpg
befd5d1a956d88bcc7d540f07e71ab03
6c2accebb05c39d6c686ad83b5c1ccdf0d141211
90689 F20110112_AAAWOX rao_y_Page_082.jpg
a3c412d3062d6b4add031d5fb6671051
9115c5c022b920ce7fa626905619167d1546b35c
101653 F20110112_AAAWQA rao_y_Page_122.jpg
c43bf0f48b8868ceda4627531f5ad63b
377296e99c6b4d5c8acd9230bb4dc732bdafe26e
25641 F20110112_AAAXSO rao_y_Page_074.QC.jpg
ebb2db909046497fa6f769fdd972f80b
5421b5046dce41f1eaf7d1850bedb7a5edb2ab0e
94411 F20110112_AAAWPM rao_y_Page_102.jpg
f3482df2a3a65348cfb135b80fe0d5fe
543d1af43116c16c5d1581a7d6bf3a04041d8568
96832 F20110112_AAAWOY rao_y_Page_083.jpg
4781839897e94d82eb960c820ec12a9a
2292b00f8f93b22d8ce9c70d485b87c75ba36bf5
8581 F20110112_AAAXTD rao_y_Page_083thm.jpg
a7f13b326d8939124e11c8a569f88e2e
a5c27a4b5270acd43c6fa19891bafc696edc199b
7751 F20110112_AAAXSP rao_y_Page_075thm.jpg
62e68bdb98c1cf2ce7f032f5652f9227
08d04122da61a49185b8d3bf1b0fc09b40b0c414
86089 F20110112_AAAWPN rao_y_Page_104.jpg
d0e475e8cdf5b407e9a62e8aee6d3fc8
281b568b4b2f8288d6e732553fde87f5e6fd2876
87383 F20110112_AAAWOZ rao_y_Page_084.jpg
82d0ced884e587ef02ea6f0d0abf0539
b1b2ab0829e58b9afd5e640b13dfc86e310344f9
100044 F20110112_AAAWQB rao_y_Page_123.jpg
8dbec491545a010475fb1122014ca451
e9889bde53ed95f51664c3ab1535753211f3256c
7911 F20110112_AAAXTE rao_y_Page_084thm.jpg
279e3ca1418d5894681435a273e3b60a
9e7caf084f012afc9816b239088ad17234a7029a
27618 F20110112_AAAXSQ rao_y_Page_075.QC.jpg
1b4db07907a81d0c33e0d3c91a7eba77
38f7364e189287596b7d2b10840a1f34612119cc
75027 F20110112_AAAWPO rao_y_Page_106.jpg
395653626cda8490c92877d19312156b
805d209e08cf6f81cb9577f75cd2a87690972488
75343 F20110112_AAAWQC rao_y_Page_124.jpg
c7463efabd1e66ef14faa6cfe484876b
fcb160cd80140522b56222349eb355f6d4e0260f
30559 F20110112_AAAXTF rao_y_Page_084.QC.jpg
9d14d4fc97c67a1e7b4152578b3c6af6
a7c18fb9e533a4b53117c9fb91c27fb461379157
8039 F20110112_AAAXSR rao_y_Page_076thm.jpg
942bb0dc6a4f109ca58785823c774807
8925a1f207a3d06b6335f3151d55cab85c11ea37
66913 F20110112_AAAWPP rao_y_Page_107.jpg
92a62ca2c62602b8ff18452a5b18f763
bc69a6c2d0db4bb8960af2fe196af6ee9296bbf5
68397 F20110112_AAAWQD rao_y_Page_125.jpg
6a737fa6869d4cd01dcc9c915ecb948b
e00805c99bef4cf373bd1eb0bb5dcf8a475255c8
7580 F20110112_AAAXTG rao_y_Page_085thm.jpg
daa3f2c9638c18e190b3abf60059b563
2bb1f0c4147e7ec3e0f1024ccb15f42cc26c579f
27810 F20110112_AAAXSS rao_y_Page_076.QC.jpg
9bb5e7d8f7fc9e093b614dc157557f25
4c7f2f48265f4dffff1ac10a3298e5fc4229c96b
93656 F20110112_AAAWPQ rao_y_Page_108.jpg
2b2f39dca2e1ab1c0f052c3863b4fa2f
58e334e5bc4a5616e1a8cd3ca1b38a9eb66ff2c2
52788 F20110112_AAAWQE rao_y_Page_126.jpg
9cf452a1eccca9a39b78dd9e7ff0084a
028366d7fb249e7f57848d503aae1b90de9f917e
26144 F20110112_AAAXTH rao_y_Page_085.QC.jpg
c809cb93abbbbaf147b2a77042090784
bbfd75a23017b377ea4424da666cf25d1c9d0305
6528 F20110112_AAAXST rao_y_Page_077thm.jpg
7f3c04a28ed3a50203118447116fa591
392f679f56c4a7c38a4eda3342b29cacadb48250
109739 F20110112_AAAWPR rao_y_Page_110.jpg
51729ed5c5f96f701ccd3c5b68f023db
506fd2085edc089a8c1f4c0f7dab1c7c38d9e079
74168 F20110112_AAAWQF rao_y_Page_127.jpg
0a8d848a0396a42e0647a3f9cdde1a0b
47a33f5c8b3fb94c23ecb71897580825e7450750
7783 F20110112_AAAXTI rao_y_Page_086thm.jpg
c2a8a0bc90ec7c9dd45588daaf165953
ba585865764c8cb99b9d74e003c778e38cfac2e5
23357 F20110112_AAAXSU rao_y_Page_077.QC.jpg
cdf81861c3d0f31542e81a3d48c80f9a
12f11bad009fd802f34d0ea4289d15efeab7f049
65536 F20110112_AAAWPS rao_y_Page_111.jpg
0a55f80e882603f78acd0b9fc254cc2a
db7e027288c3e75c014ee030093ecf396f507e9a
88275 F20110112_AAAWQG rao_y_Page_129.jpg
77527b29a75ad9f3bebda2985284821d
8a14eef086a75da30d23296f08523245c80913b9
29418 F20110112_AAAXTJ rao_y_Page_086.QC.jpg
8a435c2cf6f2761c45c5e32b780cb75a
dea531dcf1bd85d44818a04edad93c130233ec68
7966 F20110112_AAAXSV rao_y_Page_078thm.jpg
ba2af43ffbbe29ab101d4aa6551e519b
5ee7854397ae939331bd39233d834bea19f86c17
98421 F20110112_AAAWPT rao_y_Page_112.jpg
ce3fbb1184183731603036b968fc257e
d994b5fece27a78e1d6f981b9696b6b183f28e46
105353 F20110112_AAAWQH rao_y_Page_130.jpg
e5a7d4a49dffc54f25b0c8d43c257027
08c6bc4012f8fc3e314b347520d52d4008550ec4
7977 F20110112_AAAXTK rao_y_Page_087thm.jpg
7ff281f50eb5b57268a3e7215fa50b3d
08f39f80a9c73c53ef0591976c3e7ef582a439ee
28949 F20110112_AAAXSW rao_y_Page_078.QC.jpg
9b6f569c52e3557efa1c5816044ea74a
10bb13cddad5fae00b7e45ca12fb21477b2acae2
91920 F20110112_AAAWPU rao_y_Page_113.jpg
ab9f8c53fd5dde2a41b804e1f5471521
38c15ed5c8859164a9be567bba10c0ccf09c68ee
100098 F20110112_AAAWQI rao_y_Page_131.jpg
bcc98f4b128d2309d7c67afa5a1440cc
59402eb117a7b81646bd3a38e4d13fb9b492a3d1
8744 F20110112_AAAXUA rao_y_Page_098thm.jpg
569ef1f1294358e451c14853bf1a81a9
33ff011c0fbd76b1ae7e628f68d5b8196021cd92
8521 F20110112_AAAXTL rao_y_Page_088thm.jpg
b78fa9b61f25a77923236e0322d60e27
56851c7fa6e6b621de865d770be54e68d90cefb9
8282 F20110112_AAAXSX rao_y_Page_079thm.jpg
4be57b8c96a70241073bfaffdb1e50a6
1a9dc98dd1eb7ff8da3a7b90f447c8ee72d377bb
69030 F20110112_AAAWPV rao_y_Page_114.jpg
d6558da9b6b1d0469776e82f9e0ea438
77660d5377cb7fea9f11c0f0a8a31577aea5844d
30615 F20110112_AAAWQJ rao_y_Page_132.jpg
b0cf64c423180a3490d99a09a5b7a361
62a6a800dca1e220631fcb374ffb0b39c2dc6909
3348 F20110112_AAAXUB rao_y_Page_099thm.jpg
12d39ad7560bad493b834b835dac83e5
9bfa8807ed7b1ac9d7a8972ef020dfcfe524269c
32435 F20110112_AAAXTM rao_y_Page_088.QC.jpg
c2a26d7ae3575192257e9f49fec79215
4d9c3215f984023218f2414d776dbe2fb59fccbe
30352 F20110112_AAAXSY rao_y_Page_079.QC.jpg
44712ef6ca1c15dc5b7d9464b13fdf98
e2e136354a89d2362165ea3f646e99835fdcc8a8
97576 F20110112_AAAWQK rao_y_Page_133.jpg
296ae9624da12cc0605da0124be50464
7b02ae569967138313590af937369fe9931c8a25
92111 F20110112_AAAWPW rao_y_Page_115.jpg
938f5a16fa24fadf79900ebb6616586c
6a9e5ffdfecaa2c8b83a596274ec897166f311eb
7468 F20110112_AAAXUC rao_y_Page_100thm.jpg
ba0b79c9e5a59e710c6a4a96c8b86b12
929f170e505f15a1f4000d9d9e5adb1291c4fa08
8891 F20110112_AAAXTN rao_y_Page_089thm.jpg
7d5cff32549c948f1eb91f4725c21f30
ff2e668d85bef0a94001b4935abe24f91890a8d1
7531 F20110112_AAAXSZ rao_y_Page_080thm.jpg
00b5ccfd8c0a2558a55c52cf449258b3
1638060d516c078bab22d36312014edb7157242c
77607 F20110112_AAAWRA rao_y_Page_154.jpg
1df00c44014d4d7bc332f89790a06ea5
1350f602537404de29749d6bb6670693d8cae826
89669 F20110112_AAAWQL rao_y_Page_134.jpg
8821bace36d61428a150916941fb49ae
5f67c326124bbf85842e29084d2bbbae2e7edade
70123 F20110112_AAAWPX rao_y_Page_116.jpg
50e648d983a6472939856d66e02a1cac
c5535fbc2627caa086c6c6805e7f08a77d060ed3
29194 F20110112_AAAXUD rao_y_Page_100.QC.jpg
d9c3a3506119847f4d0f8ba93758a290
b65bdf9d9a24304c102e4dca1028ccb99993828e
33401 F20110112_AAAXTO rao_y_Page_089.QC.jpg
18ad0ae87619127e8dae2b875ba804cd
71d4fa58122ed088225b376b37ff36e5f792ccaa
72669 F20110112_AAAWRB rao_y_Page_155.jpg
0c2726bc2b097c5ea7a1421bdbadb2e3
d8c1b2af59b87b7b872ce3c634509c0031521d26
104911 F20110112_AAAWQM rao_y_Page_135.jpg
7b286f056b5690d59db39c5dea60346c
6981dee028b3997b7b3c6a9914c514a39629d863
65843 F20110112_AAAWPY rao_y_Page_117.jpg
3a3f354cde4acb2a27627ea522d6cd16
0b5ed400b784af7727ff1f10e2c207081c19cc8e
8115 F20110112_AAAXTP rao_y_Page_090thm.jpg
2f783af79f8c81b9b42885d67eb816b9
189858e146628788369419303e27bd2f055a0a81
99822 F20110112_AAAWQN rao_y_Page_136.jpg
84065f385865141f3311282ec08cfdaf
2bbef3e79744804e7c0740e42363cc0967d7d34e
93701 F20110112_AAAWPZ rao_y_Page_119.jpg
2085cf562f636bbae5227163a32aee28
c5dc6ffab96c41d38162863692418efefcc6ca0c
35232 F20110112_AAAXUE rao_y_Page_101.QC.jpg
a6d0223ae35ae6514187f31ceced9f12
1a8cec6f872da038ac27f1e1785d40eeebdd9257
28840 F20110112_AAAXTQ rao_y_Page_090.QC.jpg
2976ff6575886ef01ac913df5f718551
2636005031d8cb54983c75863fc4bc5d4c9557fe
35152 F20110112_AAAWRC rao_y_Page_157.jpg
70d0c119124801457f503542fff130c3
2a9d70faf6a84d2f1309b1450e6a981f3716f662
77734 F20110112_AAAWQO rao_y_Page_137.jpg
1a56fbd2abcd9aa33b2380380cd2f538
ab0cb58325f918629470d12034b5d3c54955ce55
6578 F20110112_AAAXUF rao_y_Page_103thm.jpg
2a135f58fafcafab7b37ad3ffd41d540
d95f35b51e7b394688e564279804e3bcd4c2a31b
7215 F20110112_AAAXTR rao_y_Page_091thm.jpg
14c089fa1af47701f8cccdee3644c62d
d06daf6c91797aeebac54a5f53cd08a65e4a177d
110480 F20110112_AAAWRD rao_y_Page_159.jpg
da733e0fc5b40616cd15f92fbe9c5fb3
8f0f0c2d6a89a083916b4c3831b12403b4288794
105386 F20110112_AAAWQP rao_y_Page_138.jpg
18f89046fcb1d49044e34b82c698870c
a2de6462ccb1e6e34c02adc71ca7aabad5c6a85d
20445 F20110112_AAAXUG rao_y_Page_103.QC.jpg
7e612001381fef584d63454975c2f017
aad0e297196bb9ef5a247f8d9af1cab4bc588a89
5831 F20110112_AAAXTS rao_y_Page_092thm.jpg
963426a156411f7197681ebd8e27d3fc
ce5af03b9c8bb3d9c66722924da832cec31c4ff3
105275 F20110112_AAAWRE rao_y_Page_160.jpg
d02c5276291437b8da610b5aef46cc43
d26a143a528a2ecec44008cc8c58d3187916ec14
81443 F20110112_AAAWQQ rao_y_Page_139.jpg
66cf9bbf2e36375a87f56cd5f324a3dd
4d2eb5adf9683bc1653b2abf672831f85503ad35
7992 F20110112_AAAXUH rao_y_Page_104thm.jpg
efa1dcb16c7e62d30b115fb4d2983889
b60b7ce442f1aeb117c5bf0c8c77506f174fcea3
6630 F20110112_AAAXTT rao_y_Page_093thm.jpg
52b2ee2868ee61999f193dc7804f5aa5
c90eaa17e81c6d5693581dcf1de64f8c8768d6b8
93689 F20110112_AAAWRF rao_y_Page_161.jpg
d2fb1d517da49b265d90623f6c75ac15
4b7a14276321a3fb071b2911aea4facc1f7a9e3d
91315 F20110112_AAAWQR rao_y_Page_141.jpg
9d0f22072f7de38232b4dcc60dcb9b56
be809c7d3b97d6b8a254c84d24f2b1b37ce3957f
30700 F20110112_AAAXUI rao_y_Page_104.QC.jpg
08eace0d7163a902ff6861cab35e50b2
2478fc16d92d805ee255a822572751c883e646df
27449 F20110112_AAAXTU rao_y_Page_093.QC.jpg
fd363a4342d9488f096f51afd00c7871
694b68056467c22ac447e5df6824f4dbb62996e6
89314 F20110112_AAAWRG rao_y_Page_162.jpg
003802bb9072e2ed185734c929879eac
5a4d9c435004d20c2a3cd3981d931368eae97175
78332 F20110112_AAAWQS rao_y_Page_142.jpg
47b343cc3e04ad5827a9c3f01769bcf7
68a9408a814877b5b2d8774a3ee179b28e9bee6d
7523 F20110112_AAAXUJ rao_y_Page_105thm.jpg
d4f215f67a4031cf4d238baa7b82a221
617e26083118524b0775d65b2697d3aa5fc2e000
7551 F20110112_AAAXTV rao_y_Page_094thm.jpg
2417ab752f51e353e8101a98bc67ff82
54c472a63ce7ba5f2a6069088b23ce41a4e90c49
82569 F20110112_AAAWRH rao_y_Page_163.jpg
3079b0c8e071376eb5d9e7e1ac163a3f
0b11817fb3527f02118c11d40a357ce5f45832a6
92381 F20110112_AAAWQT rao_y_Page_143.jpg
7236421878a77744b28dfd54ee47e6d1
cef0672efe39c9038cb77d15c8a54bf8fb036c29
29186 F20110112_AAAXUK rao_y_Page_105.QC.jpg
ffcbc97270d16cfaf529932b91697d9b
01cec69c08bff37014066c094a637c153a40f2a0
30792 F20110112_AAAXTW rao_y_Page_095.QC.jpg
5aae9624e381b8b11b972a92ff1f0058
cedb2dc82f7b9a12b35fe52db7b643735ffbacf3
85704 F20110112_AAAWRI rao_y_Page_164.jpg
d762f0db101fe5b0fb938acfc6b11edf
1d5981f767fb2affad7ce5f68c11072c47f77e9e
85293 F20110112_AAAWQU rao_y_Page_144.jpg
54de93cd27e368704da0ae727c6cfbcb
68759c2c1fc25bdd7c2948da5b92a941cd6df3e5
22902 F20110112_AAAXVA rao_y_Page_114.QC.jpg
d8bdc0d721d0757bda55ebcf391148a3
1d1361f289b51b3df925a90af33c708c607006bf
7485 F20110112_AAAXUL rao_y_Page_106thm.jpg
e7536088f0c494f60ba01c20eeb1e879
da2d7f92a776908ba253eadeb4ff9c8674d689f0
24333 F20110112_AAAXTX rao_y_Page_096.QC.jpg
3ca6f49ba799ba12cbc38f25851a57b5
c5a59af05fc397b4d54876445bff8cde90c784fa
96649 F20110112_AAAWRJ rao_y_Page_165.jpg
182fbc0b0569290e921441910b6e1a69
b22a70127741ee7bb8301f24e771cb5189728da5
103391 F20110112_AAAWQV rao_y_Page_148.jpg
e6d8ad6542a69c177583bc09a0548dd9
616b7e45babdf7fe3feabf6ad51847ef6cc817d9
30641 F20110112_AAAXVB rao_y_Page_115.QC.jpg
970acfb8050c37e9b29de0c73e1d604d
a2ceb77d2a09b12c924cf7cfc791d662c145262b
23866 F20110112_AAAXUM rao_y_Page_106.QC.jpg
4d385c1c1b0fcf8dfbb0ca9ffaa0c805
e8219a03759128f1483dea2dfd9ad7647ba4d773
6947 F20110112_AAAXTY rao_y_Page_097thm.jpg
20262540acd5242563ad7e0d10211bf8
28956e4d5100de3880875a3e2f822aa0a23f728c
105965 F20110112_AAAWRK rao_y_Page_168.jpg
147b1ee512fd50acd10cfff1ea7251dd
ad1b41e063862205a1c45f067c29f06d19fc28aa
70316 F20110112_AAAWQW rao_y_Page_150.jpg
7c3d422e6a856164203394cef23f350b
fc38d1d2087f7bbaf7b897a528ac2fa7699cc18b
6563 F20110112_AAAXVC rao_y_Page_116thm.jpg
57177694d0c195e6e69aae9d69cc708c
0cde69d6f09d09d64c56de812d7bf1c8f8404ba4
6506 F20110112_AAAXUN rao_y_Page_107thm.jpg
95a1d882dbf532e4c0761f8862de63a1
1c92815f2e7bf3318a430d2307ea9ab4ed247d94
26076 F20110112_AAAXTZ rao_y_Page_097.QC.jpg
e5cd3d0dc46e9ba5cf46491a040d5fc5
e5fc956dae515f790134625df7605b11826da8b1
49641 F20110112_AAAWRL rao_y_Page_169.jpg
e6d3efaaa317dba8e993ef681297673f
ea4fdcb56de19b17fe12a7019348b5354da813f4
99420 F20110112_AAAWQX rao_y_Page_151.jpg
4b179b518098f2a6b36821a9289cfd79
15feef1b3941025fc7aadfd443567c517c66d974
9989 F20110112_AAAWSA rao_y_Page_003.jp2
d2cdca9b0dbe9dbc2a7c6bf6974861b0
68d44d42bca783dd959bdd54a2f0814c9a455f69
6832 F20110112_AAAXVD rao_y_Page_117thm.jpg
7cb18d094297c302f694a43731941e89
c882f8a1fa74cd74f09944dff6a6e8aa1d1ddc01
22342 F20110112_AAAXUO rao_y_Page_107.QC.jpg
07abf74243ef3d949700c5aceff619b7
ee34bd3a834ba12e94dededa18fd6fd3706d0b12
70034 F20110112_AAAWRM rao_y_Page_170.jpg
3e38361dcb9b54b20b54a0a9097b39cc
30e501e997174c05aa99db064fb9144c9db750ec
83753 F20110112_AAAWQY rao_y_Page_152.jpg
e94d959dd2d4d7699f1294720ed18be0
f50c20ec3f6a4ae84bdd9770b97cd407ab20655a
93066 F20110112_AAAWSB rao_y_Page_004.jp2
b41fecad905906a6f593bbef9bd13020
04a0aed5901aff19c5c11f88e9e1dc3a860c11e5
21818 F20110112_AAAXVE rao_y_Page_117.QC.jpg
3ae8b1feb6972b6fd21a456bc6861de6
4cfa4c59e6cdb32a2b0802122e437cca14c0ea7a
8105 F20110112_AAAXUP rao_y_Page_108thm.jpg
0a4cd89c4290bee07021f4ffe9e3753c
4b8f1c1e7a3e4a8d662ff4aea19217cb2629065e
40607 F20110112_AAAWRN rao_y_Page_171.jpg
acf02870d3f05dcfea4582614a30a8e5
14bbb33bad33abd764a1b4548ae49a8725aef9d4
86906 F20110112_AAAWQZ rao_y_Page_153.jpg
942850700c88eb51e6b2172a448fff2a
9bbd9ea0ae711e2def299d4740b2ba1e61f46366
14086 F20110112_AAAWSC rao_y_Page_005.jp2
62914c00c9b8d6ec84e0a3218d8d35c7
3fefe9570f1efed32ce8169e9cb2e38fdffffe8e
30082 F20110112_AAAXUQ rao_y_Page_108.QC.jpg
fd64801578772c846ce8d38784474e9b
3249437021dda0921c7c8ee5c64cd9d3eb360cca
74766 F20110112_AAAWRO rao_y_Page_172.jpg
eccf6c79fa83a3fe1663af8a517583af
59bf0c7225d5a7f11633d1523a7fa07e7e6edae8
1978 F20110112_AAAXVF rao_y_Page_118thm.jpg
d00b39aaaae3ec1d3c875fb95edb2ec1
02aaf6ca08433f61cad5174aba0426e4c0b2350c
7654 F20110112_AAAXUR rao_y_Page_109thm.jpg
28b283cbb5940866ed92be3213602574
e44a5de124622daabe58a895f8580b1ca22da385
59970 F20110112_AAAWRP rao_y_Page_174.jpg
6896cbbb3206225e4ca57480175021fc
aeade8ba8fc6c141ab0756e9043158040207ea1b
552236 F20110112_AAAWSD rao_y_Page_009.jp2
36a1c5c1da4473ee43271364f40eabf7
ef34e26b7b86efdce3b460d3405f1fe1572a61d1
6796 F20110112_AAAXVG rao_y_Page_118.QC.jpg
2794acb3db128d96f1a0348b744096f7
f2a6e257a838ec8846b98f4137ea9c7c88895e05
28306 F20110112_AAAXUS rao_y_Page_109.QC.jpg
eb0f7a37cea03c25911055bb019caadb
0e555ce5eef1222ca7922bf342af8342ff723b06
93456 F20110112_AAAWRQ rao_y_Page_175.jpg
225fd62dda8de2cbf36c3b06e4f15c26
8189c37d737b32f3522a4b5ff4aca11742cf64b8
179979 F20110112_AAAWSE rao_y_Page_010.jp2
e638889b8353d5594a78581bc31b3558
72da612a078faa693b63f51cac076f6ef5373399
7914 F20110112_AAAXVH rao_y_Page_119thm.jpg
62510e02b2100eabd8923f4cb1e85c93
1ae28162ecbf0b825f86551496f24e5e167aa9fd
8823 F20110112_AAAXUT rao_y_Page_110thm.jpg
f01a698f718f6549e0fe9b11aa18559e
cb2a228fdfc260d4ced4f729b8017a19765e2227
127962 F20110112_AAAWRR rao_y_Page_178.jpg
87f7509d82fb6f8aaed6100b5912fedc
57dc5524921119d63e565f53da6ef805e4a0b5ff
1051955 F20110112_AAAWSF rao_y_Page_011.jp2
01292a6eb295e444c239e52e031c5e77
71f42dc31cd5fc6bd56f276e3ed8306f7b0f3947
30636 F20110112_AAAXVI rao_y_Page_119.QC.jpg
ab77532e9160223a1e4491a6926e9d64
71ac903543da3143b4cbbd33fdbdf920d698aa62
5444 F20110112_AAAXUU rao_y_Page_111thm.jpg
84a2abca4ab470b9a7cb53d9125c0512
555e2dfa955a36b18f1ab83143d4fe374606d77e
128898 F20110112_AAAWRS rao_y_Page_179.jpg
312cd6c9b3e148f471aaa103c3446ef8
00d9a6003fa3ea1bd8ab61ba0b428eeea0cab23c
669917 F20110112_AAAWSG rao_y_Page_013.jp2
ff4112739ac5a56176b9922311d9720f
7cb57ef6b52a8d9a8f5db4aae7bd7970633be4fd
8438 F20110112_AAAXVJ rao_y_Page_120thm.jpg
79ae9f2a992ecfb30adfc659f1628563
45cc57cd376417ddbb43db30999d44af1e8d7b45
20876 F20110112_AAAXUV rao_y_Page_111.QC.jpg
4b105d49c17715317184c7ad58e61dfe
2d8303f7c918471ed1691dcd3c8311f0f006a2ca
122234 F20110112_AAAWRT rao_y_Page_180.jpg
5fded1206c8b47ee934f5daed81cb799
988cf9a05a985c541b2285fe79562c74423f65f5
87480 F20110112_AAAWSH rao_y_Page_014.jp2
965dfc08bf71f7e8104eec73ee3cc255
4e93bfaa101d0d8f7305eb0f531f8fcadfbfc1ae
32640 F20110112_AAAXVK rao_y_Page_120.QC.jpg
e183900dbdec52698a521f57b8a79eff
f2ffacfb27b7df17205d38942e3feab5afbb229b
8418 F20110112_AAAXUW rao_y_Page_112thm.jpg
81e9c68beaac46b87acee1cd8ce28338
5399f63340a234b3b1726453f83e3b086c0bff79
114353 F20110112_AAAWRU rao_y_Page_181.jpg
ee8be4161c9bcc8ea158a9a2cd22d533
f9199b2437e12eb765f028700b5548950cd24318
118242 F20110112_AAAWSI rao_y_Page_017.jp2
e831fd0c4253add862912c12e8912ffa
e7c142a535d758c016f2e89257ff312fe9a8019b
28426 F20110112_AAAXWA rao_y_Page_129.QC.jpg
69cbc9b4b7f17cbce03e0ec18f016745
959ac51ee7549aa1c05ac8c8b138591d0a2373a5
8527 F20110112_AAAXVL rao_y_Page_121thm.jpg
efa26edf517ef8245783e908664ebe50
0e08d163fbbd1a93cbfa414d1070d7a712d80d87
32413 F20110112_AAAXUX rao_y_Page_112.QC.jpg
b734facff0e4edb50bdc0c7dca417a74
4ebc250479bc817a04fbfa05d687219cd18609cc
124358 F20110112_AAAWRV rao_y_Page_183.jpg
409533f2312a0b1b3f8cdc62f7ee6d1b
0691517cd22253d9c18f19a06cf2d500f02233c3
95159 F20110112_AAAWSJ rao_y_Page_018.jp2
026fc6da64106ae314923f296af94ab9
779fd95998c1a1f9ceba946aa30db5b5fbf3b4bc
8970 F20110112_AAAXWB rao_y_Page_130thm.jpg
0dff58adcfb5b2eeef4d176b81f6499b
bb15fdee431f56ed19ab67f3e43ea77ba381e886
32309 F20110112_AAAXVM rao_y_Page_121.QC.jpg
af21fc89d9adb85189519b4bc101ad1b
92d8dc9924f434e01b1e7e2272d070297b5159fe
31694 F20110112_AAAXUY rao_y_Page_113.QC.jpg
991cf92b729dd2d0721f669f682c804f
ad6d98de40d70517997a50b62ecb7c12ed0cc452
121278 F20110112_AAAWRW rao_y_Page_185.jpg
4fc07a45ab3d6cd69ea27ca100de3351
ab91334c2a53887fef45758a3d0039b4412fc4f6
808027 F20110112_AAAWSK rao_y_Page_019.jp2
af1495513dcac8263684a0acc8b7405d
21ee59afd4c175b3bde4d4c00a5a1553f2df035f
34909 F20110112_AAAXWC rao_y_Page_130.QC.jpg
bd7b9474c6ede202ed86aa4ca51e4eb6
67d5faea716e993b44a877b1e8ddf8cd1402a8ae
8756 F20110112_AAAXVN rao_y_Page_122thm.jpg
e1b1f33c6b9d75466e2e614d75cc79af
41ebc0c408001e803650361c36b6630df27f954c
6477 F20110112_AAAXUZ rao_y_Page_114thm.jpg
bb011d6d774983b52fd052a7ffc2508b
1731310a5d116ab5c44994fc0a7aa3886d227d89
31292 F20110112_AAAWRX rao_y_Page_186.jpg
7993136800e47f1e092a0c7b84d49055
33a011c34165d2a53fc6b89a931a5e7225cfab22
96054 F20110112_AAAWTA rao_y_Page_038.jp2
ebe32a12a6aa05b377a05c568e46385d
ac9099e824d1c2a6dcf8f535a3465034ee9e80dc
88514 F20110112_AAAWSL rao_y_Page_020.jp2
d64bc2c1528daf4c7e811f65e091b1d7
60093157abf9ba87f9f81fd08552ebebc29feb87
8384 F20110112_AAAXWD rao_y_Page_131thm.jpg
8b2bcd19594562496eb324c7eaf20b29
c4aa86621f485fad8938d6e0479e34874f6e491f
33589 F20110112_AAAXVO rao_y_Page_122.QC.jpg
4075e4ec85df8892346a9cd99ceca382
4c5f144ab9d403bb662f7fb7fc25540d8b61788f
70829 F20110112_AAAWRY rao_y_Page_187.jpg
c860b4ef8290abb8875aeecbf0d08527
6be72acb0064a2e0d94a2a1ff3fff47f36aa8761
96904 F20110112_AAAWTB rao_y_Page_039.jp2
93b9cc9388123c84210896f9c7725d8e
8bcb0dea4508d4fa21e880c0f3eb476737b074c3
89563 F20110112_AAAWSM rao_y_Page_021.jp2
6661e8db556b082d4184699c274bbd1e
8db9ad3c5480a3446ff1bd336ea750248b5d11dd
33602 F20110112_AAAXWE rao_y_Page_131.QC.jpg
e7ddc5f25f7bd9f171835c653664a859
801794b767eec2eb775769dab78b8fb5db4e638b
8636 F20110112_AAAXVP rao_y_Page_123thm.jpg
d4ac819012bc799892792011ad8d6642
94b72253ebb1449fdf35bd41214e3f5e96b0c228
26165 F20110112_AAAWRZ rao_y_Page_001.jp2
a544adedf63b536aa6cd5cf3ddb7f79a
be5999b0345a87f3c9df40e72e4427b39d0f400d
F20110112_AAAWTC rao_y_Page_040.jp2
37082d227c45fb9c0e2fca8980a06004
a88bf2c544333db47480669b1821a3c7e9759297
113840 F20110112_AAAWSN rao_y_Page_023.jp2
e85f71a6992e4c616a5a905689b82a57
aa95d7778d71daf657d5d3e4475c6a60e3317565
2684 F20110112_AAAXWF rao_y_Page_132thm.jpg
587e1a91a5a1c18426e6962a234e66f8
df3ead1f9a327bd79f07d6e5d919f2dd669e3465
32168 F20110112_AAAXVQ rao_y_Page_123.QC.jpg
dbdf24f166b792ffa518eb3c40d54856
4522a2f4593ab528a6711e55ff740339353e872c
79644 F20110112_AAAWTD rao_y_Page_041.jp2
68976c441467aa169e77eb93f89c962e
d8efc6a1290f8fa910f8ca28f56de9a11d03b24d
F20110112_AAAWSO rao_y_Page_024.jp2
4b32874e85ea39dcaab962510c0a5750
b2e1fb95879aef5f924e85faba885fbfceaebdd3
6770 F20110112_AAAXVR rao_y_Page_124thm.jpg
0d53af8e3583979b3b61c2ca2661e9ee
0e463a09d5f6014fe5523a5dc15e803926ddee58
100092 F20110112_AAAWSP rao_y_Page_025.jp2
fd091f70de1a5ffb44207e3995bbd0d5
eb9de5259b4658853876b509b1589ed5f955cfca
10670 F20110112_AAAXWG rao_y_Page_132.QC.jpg
0a53b54346d1f37cf98b727c18e005d1
48c58ee8a7e57cb7061c52f2b217be15d715e7ea
24663 F20110112_AAAXVS rao_y_Page_124.QC.jpg
375189b3df3fdfb20899c79abf34857c
f2db6ebd0f7b550faa0491a8e60d35205fc3b795
98265 F20110112_AAAWTE rao_y_Page_042.jp2
bb48e022ad7bb9c2cdfad0111680ecb9
441fc280f6df7440a80eacc8a011460681a753ee
102904 F20110112_AAAWSQ rao_y_Page_026.jp2
f41e450940420afa078c64fcd07a996f
d5f5d5fd512ea4722f3cb6681e565c639af92fa2
31409 F20110112_AAAXWH rao_y_Page_133.QC.jpg
64df72c3ce03977e5d331e9331da8c80
84644aad71610339edaef8799080fe22d424af48
6257 F20110112_AAAXVT rao_y_Page_125thm.jpg
49ddc9c547ad0dd27cbf69c69e037258
c76775f4a3c86780441ead4983bf80ca57386170
95939 F20110112_AAAWTF rao_y_Page_043.jp2
b3c7aca5c7ff14b3a7063327b64b56a2
1f4e78a88049bed02e0f34cea547855d49dec8f4
41103 F20110112_AAAWSR rao_y_Page_029.jp2
670b96a72c2563f51470d2d288cd8dbb
30f9bcf3090b9f424df49ecb7359b63cc9e83981
8127 F20110112_AAAXWI rao_y_Page_134thm.jpg
738e2860501f547d98a133cfa7c094c0
9e93eef21772d6f0c775cc72c945ca8001fbf1b3
23029 F20110112_AAAXVU rao_y_Page_125.QC.jpg
734805e5dbe9088a2da7e75f1e663290
1081993fc74021d0b5eb8e630d376337922a0836
103414 F20110112_AAAWTG rao_y_Page_045.jp2
cca6c978a9a4b45258b225d3eb7bbf9b
42de53037f356837c0e3be3feca6a29ce07eabee
98646 F20110112_AAAWSS rao_y_Page_030.jp2
4146560ccc5744b63c372eef4011c12c
bd607eaedd07d214c844a5f3d5fe31cc1546cfbc
29589 F20110112_AAAXWJ rao_y_Page_134.QC.jpg
d6d72daa0b80d3dfae0b4f4ae3bfab4c
8e193ea8c2dac244f7565738bcf94da646b679dc
5400 F20110112_AAAXVV rao_y_Page_126thm.jpg
8942a22240c90624b7161efe59c76b34
2663548704d06541d82af89c308af998ffeed445
100804 F20110112_AAAWTH rao_y_Page_047.jp2
325470ea3d0f46431a376c478d599ae8
59d63ac284cc072c036649e241d72a9701d066ca
872368 F20110112_AAAWST rao_y_Page_031.jp2
8d09e859c31337c27e4ffbdf02256848
dc2221fa3c9f33afdc5c5b2f74c8f36414aa66cf
8715 F20110112_AAAXWK rao_y_Page_135thm.jpg
c5047a4035662504f3808f2441d8c3b8
c9f770c473755aed9dd192d7ae45ba81c2ac2020
6678 F20110112_AAAXVW rao_y_Page_127thm.jpg
1ba5bc11ccbc83996d8932c361fa92d1
38bc73b5c6443a1aaa92eb87f7b038926f015c2a
93838 F20110112_AAAWTI rao_y_Page_048.jp2
f8d9cd1d139fb173de1ec2161fe5aff2
3b8d6e51ffaa92a8fdcf75e9f2f1a2e5bcb76dd4
106812 F20110112_AAAWSU rao_y_Page_032.jp2
9c76975ce397e8fdcfb0a85ba01a92d0
6727fbe2285ae1c6a4afdf7b1b981af874f23039
20315 F20110112_AAAXXA rao_y_Page_145.QC.jpg
02aef6993fc2afcf16065a2574d1d6a1
bc699d0fc67fb52b86ed489502d5b4b764c9ddfb
8793 F20110112_AAAXWL rao_y_Page_136thm.jpg
fa1b106b7530dbba2b9b6d390a134177
b07abf10b21aaefe8223a170e6f5709f96c62865
24655 F20110112_AAAXVX rao_y_Page_127.QC.jpg
913785eb0a93377437caccd995cf7ed2
725ae78f63c10ac0be26d882106bc14a1b0f3c77
735460 F20110112_AAAWTJ rao_y_Page_050.jp2
93175c5794a9d6352fca19c90ba26c71
98333c4112727d75dad60f63b56403433f1e112e
105313 F20110112_AAAWSV rao_y_Page_033.jp2
a43a492c957f35f6b94f63c0b925556e
7b60c0ebd56d1cee8b84c41cb2babf1d9f3b613b
7924 F20110112_AAAXXB rao_y_Page_146thm.jpg
a37c673e311b72c605765c29e79a8d58
f3e7a5205591c589536d3b4c390c87c52326ecf3
8611 F20110112_AAAXWM rao_y_Page_138thm.jpg
fc4d2204358a38b28a8aa53e8cbaff7f
c7c1b861810f8e84c6d8feb1c38f0152bf15588d
3647 F20110112_AAAXVY rao_y_Page_128thm.jpg
5e50c619ee35dd79694e964afd7c3ef9
cc4ac6dc85ec2f53774c15e2818777f6c704e971
112492 F20110112_AAAWTK rao_y_Page_051.jp2
57d0b9f0a4188e84b4fd73e4f3e9b97e
e6a19e5b1a947d5badefa0e6b400dc9a3ac5da98
100976 F20110112_AAAWSW rao_y_Page_034.jp2
054ef50fb4b23298a49dd40af8b48992
193e5aa1221a5030c3190f7c3ddbe90df9363422
28116 F20110112_AAAXXC rao_y_Page_146.QC.jpg
cf0a344925cdf074caa8d31c1301da1c
741ad89fcff3f992f45260cf2ef323fe5527a43a
32428 F20110112_AAAXWN rao_y_Page_138.QC.jpg
38af7c25936028477939d958edb687a3
e76a7bc12657268cde290925518a4beeca0c65c6
7447 F20110112_AAAXVZ rao_y_Page_129thm.jpg
c1169f584d08d66c23bc59438beb841c
33c17afb00f13bd0c6739974671a15152b9638ca
826487 F20110112_AAAWUA rao_y_Page_069.jp2
98ccd445fe9be127e8011e5520e2b8a9
0961f2e6ef74187aa6d66502deb890c5ea53893e
706390 F20110112_AAAWTL rao_y_Page_052.jp2
e78b725de502c4a3803b3f74561b04b0
ada29bcf8165e4f11ab6a26536e69f9e6876686f
95451 F20110112_AAAWSX rao_y_Page_035.jp2
2f03004a7c4b8e0ff896deb4ec3ee792
ad70f41c138dfcce1c8d81dba38ed17e9ecde58b
9068 F20110112_AAAXXD rao_y_Page_147thm.jpg
414ac6114070059f1b3a41f69c40ea0d
ac85b9559a48cf76935a4b294daa1efef27f4744
7923 F20110112_AAAXWO rao_y_Page_139thm.jpg
9dce92628e09d82baf2c6912f1a2015d
5bfc2131b2f3f0d915d489831424f98e49f03d90
869194 F20110112_AAAWUB rao_y_Page_071.jp2
a9afd443ef86b05338450120958545da
5e3243cdef3602bd08046d51d911c317061f28e4
854911 F20110112_AAAWTM rao_y_Page_053.jp2
04f1b70b8676f386d21748280be87ac2
aed008bf5859ff9cb9e375d81c86743a151735f3
101042 F20110112_AAAWSY rao_y_Page_036.jp2
5aee26ccb82d4c444edf28906c26432e
3ac44a788e9c13bf3c82cbbd2c8c685835c2bdce
8864 F20110112_AAAXXE rao_y_Page_148thm.jpg
b865c52d2d40aaea34c33dfa8f2173da
3a962ca85fe1f41546438e829daddd80c8b8fa57
28058 F20110112_AAAXWP rao_y_Page_139.QC.jpg
a2d0f6501bc7b7303394771921b66550
fb2d83e0fec6203e26b6b89717103ba0b741f499
877503 F20110112_AAAWUC rao_y_Page_072.jp2
d75a012e9e5a8b4373944ded6fde4914
4cb3b24da13ba0e2c6bb16a458adcd5742b6388b
112794 F20110112_AAAWTN rao_y_Page_054.jp2
80822c231ac954d8e4ad5859f9ed37e5
817062cb4e1e761c6d08569188885faee635798f
94916 F20110112_AAAWSZ rao_y_Page_037.jp2
fbc8ababcb3fbe56526fb2288c098238
0e831ebceee6838ca5652d479c0690dc6b376906
34975 F20110112_AAAXXF rao_y_Page_148.QC.jpg
9ea140fac0dae2407965c96845eb99da
b600432fb1baf587e4906c641dc4855f667bc8f9
8707 F20110112_AAAXWQ rao_y_Page_140thm.jpg
67fbb5f3e76dfe2c4943ae240f056e45
e87c9c0d95e488399c715329cd5d938ade8ac8a0
84982 F20110112_AAAWUD rao_y_Page_074.jp2
a87cddf57c10f7651c7f8556a6486d76
db3e4709a51acbc41d7a78b1141c4157a7a1553f
102576 F20110112_AAAWTO rao_y_Page_055.jp2
555a1a38512f5cedf64655397bd6a771
1e0a59b4d6128872589fdda38593e5637e2be845
5703 F20110112_AAAXXG rao_y_Page_149thm.jpg
8ec4b21f5a58688ce350de56a6e8f49e
52e1c4566c721d5095a18f38ed34e5ac02b7e37d
31557 F20110112_AAAXWR rao_y_Page_140.QC.jpg
26e05c4bb5a91ebe76a19bbf4fc4a38e
7bbd9b9184ab3e542e6f0945c81dcb1bdd84a0fb
87968 F20110112_AAAWUE rao_y_Page_075.jp2
a5cfdfcedc1fdb4beb291a524c1066a6
10820ef79955b1d959e5697651304eba9bfef4d4
97613 F20110112_AAAWTP rao_y_Page_056.jp2
1a1535b3efc4b04748c30c3d051e9987
ed6e5e1c36f448135a79db6860fec8bbcb2b8223
8349 F20110112_AAAXWS rao_y_Page_141thm.jpg
099d5174b9d30e8a5f16dee0a867a7bf
47065e57ebee2a8e03a62037715fc49422bbb090
94147 F20110112_AAAWTQ rao_y_Page_057.jp2
b4be5bfcc104e505b3beb2d06f1a96d3
fed2d7d00c292f6c86e7c2221cbf529893c857ba
6878 F20110112_AAAXXH rao_y_Page_150thm.jpg
dba2bf3b5d4b01f726a277f1c6a8985a
36900e2fd13edcd6c2e98c1f3abf96632b978716
31744 F20110112_AAAXWT rao_y_Page_141.QC.jpg
e2ce0d60c3b7114fb037818a7733fd84
4d67b046240d259ae42353a87ad5d45f49cd4cbc
93378 F20110112_AAAWUF rao_y_Page_076.jp2
da165728bf1c7ca786b88c17df3635d7
b85eee5b8e90592fe901614c0b39bcfe79cdf60e
77730 F20110112_AAAWTR rao_y_Page_058.jp2
311844f1b232c42aedbdde5b907d21bb
e2871ca0e58b2eb9bb893c712ac1dc6882c36da8
F20110112_AAAXAA rao_y_Page_090.tif
e4dbaa9ec3e12788afe1e760c5f247dc
1f830d8f1262f92e1c9c26fdac0463766278ad52
21729 F20110112_AAAXXI rao_y_Page_150.QC.jpg
855ae09edb307bbf680f5bb7b57baa98
c164ee70650368d4f784b86c96f999fca7d82a4e
7815 F20110112_AAAXWU rao_y_Page_142thm.jpg
cb36e29f6d95c44ffacf7166e2b64e85
7b0289cb32ed0499330f73d7542c8abe88e7870e
74110 F20110112_AAAWUG rao_y_Page_077.jp2
05eb0b97393d8619f99b9be45c4d5f93
c7f5f54aafb734765b95cf3d22832d89e4022e66
81345 F20110112_AAAWTS rao_y_Page_059.jp2
e7067171b2caf3ce46d69aed51bec250
fbdb75de478045c1cff50ee3d4859ccab39f4f39
F20110112_AAAXAB rao_y_Page_091.tif
731ebef3369bb7d9d495a84513efa07e
92c66d0a88b3cef9073ac2f9802d8a6cb779d4a1
31228 F20110112_AAAXXJ rao_y_Page_151.QC.jpg
cb96cb4ce4f6ed5dfcef11a137e7d6af
be45be48b51deb9ef18325274d197f761dda4ff4
27241 F20110112_AAAXWV rao_y_Page_142.QC.jpg
deb23bb6f2f5d006dc027cbf334f5850
d0dad7fa31707fcfae358c997637383cd6dfb92e
88008 F20110112_AAAWUH rao_y_Page_078.jp2
70888903321371f647a2a2a01ff511a6
72239f5b1180fd518974e3b7a39542aa5db5e7f8
84193 F20110112_AAAWTT rao_y_Page_060.jp2
ba84124b4ea61b1841876f01658f2b85
c38db8b815ae72bdc8fe8c6c1769b5b21cad1ca2
F20110112_AAAXAC rao_y_Page_092.tif
3ebf2b01b83c644481a507f2d82f05f9
c5d718ac8a0bdbeb6dd26e367131c77c55267f8b
8027 F20110112_AAAXXK rao_y_Page_152thm.jpg
62d35ac9814c19b1d9db2a3c0e7eb928
5207db55b4bad507a8e163d0a1e1dd3ed9071e8e
8271 F20110112_AAAXWW rao_y_Page_143thm.jpg
4b34fcd0ff95f5a6c3d28f2cadba5f09
837a69b79cf289c89a9e8da22f706a3e549b0ac0
985623 F20110112_AAAWUI rao_y_Page_079.jp2
885b4acf0d0dc3a83a1cde6562d6a736
c4ec5ec592cb0f4f0e402ba36e1a76ee6541f3dd
79624 F20110112_AAAWTU rao_y_Page_062.jp2
8bdd9370bec16671523bd8beda2ce912
79c7201a83c86fb6a5f25d1874721a98ca3a3980
F20110112_AAAXAD rao_y_Page_093.tif
9c090c957b3c93c46f2086b807d4a618
3072c940ac9bc6b3f0aa8270ee421d2a02b7e0b1
8411 F20110112_AAAXYA rao_y_Page_161thm.jpg
203c34c9de5c67bd69e95977bfdb0ec6
29fb74dbb30412309e836cc5b54fcec0177e6365
29866 F20110112_AAAXXL rao_y_Page_152.QC.jpg
b32d7ca45f48ef6f12297af2c5f0e2c5
87e36d63863fd324410ea2b3612b4cda1526662e
7660 F20110112_AAAXWX rao_y_Page_144thm.jpg
ca221e7d075127e1213c2fa4d96c5a0a
0d4e9174e9f8535582974b4c53c540a91ccc3487
82864 F20110112_AAAWUJ rao_y_Page_080.jp2
f5b81a1c363db8bcefa1f1e042232171
a260cf6f791a7ac407607b18f931fa564f87ff6c
97458 F20110112_AAAWTV rao_y_Page_064.jp2
949ad91b20d9dbf87b69df0d29b60f11
f384725b9858853a5c0a5aebba0b65587f16268f
F20110112_AAAXAE rao_y_Page_094.tif
0fe2168b83e4382045688a94efe94197
0c054f89899787778a5cf39193f9145e3d284594
32608 F20110112_AAAXYB rao_y_Page_161.QC.jpg
711adaa12ce2cc2dda99638c6b9cd9f0
48014e6d7936c66c49143f255c79b8cde73b704b
7656 F20110112_AAAXXM rao_y_Page_153thm.jpg
082d88287df91422d9ae6e27333d2579
7b803cfe8bd1755c9178e9cf391566dfbc184b98
29089 F20110112_AAAXWY rao_y_Page_144.QC.jpg
bd07c0632bc97bf9f97d544714c37b7a
724a08fe9b92e5b37d61464e379e57be57ac8ced
95644 F20110112_AAAWUK rao_y_Page_082.jp2
20d29b5ec9a68fe7b5b65f9b5bbab4db
834b1d1d627f8aa11b05a3bb1c12afecfd12df52
103874 F20110112_AAAWTW rao_y_Page_065.jp2
ed5ddbcf15a78308741afd50e3fa8e94
5971b2bd19b8cfa304037f2b5595106133039ab1
F20110112_AAAXAF rao_y_Page_096.tif
ec6a21d183412fc4d58e9f422235f34b
b6a9f50ef3c9c1399d4376123d706482ab0fbac2
8318 F20110112_AAAXYC rao_y_Page_162thm.jpg
8a7ca744a8e89505d1a9aebcae0c9be1
c80da2fedc77dfb2dd9f6c9e6c6e391122f1a98d
30589 F20110112_AAAXXN rao_y_Page_153.QC.jpg
aa0e198bb1b8697d68c1d25a8afdbe7a
8c019762124a650ccf6bfbf1130d28665646e9e3
6608 F20110112_AAAXWZ rao_y_Page_145thm.jpg
3f3f431070e417f7432da7c9b976c1a6
7c8955a2510d67db87e2231b034076290b9c8524
97941 F20110112_AAAWUL rao_y_Page_083.jp2
db83991e679f50a700cbff4a1f281474
301ca39dc45d050b7be4895b90e9e0d2daad2884
101958 F20110112_AAAWTX rao_y_Page_066.jp2
478f520b90482ac3c3e081d570080fd2
5b44cf4a2dee5b77b05f2779eb18a46a21f5626f
F20110112_AAAXAG rao_y_Page_097.tif
22e5fa7e2240f3b0d0a173820eb2f992
1c42a288be95053723165acd19fca910c4bf78bc
96748 F20110112_AAAWVA rao_y_Page_108.jp2
08b3eb9a1415c38d733a3c4314602a15
1627e91f2d8871725c3edc58621bc7bc4f7eac44
30116 F20110112_AAAXYD rao_y_Page_162.QC.jpg
408424e7265b658ce8d6d776d5e836e8
c463816a28b9db4641612836f6d12ae88e36818c
25948 F20110112_AAAXXO rao_y_Page_154.QC.jpg
67673caa22900c7399cf928f14dff87d
27f09fdc127d8e6bf76c0ebffd2c49d0f7be7d7d
89441 F20110112_AAAWUM rao_y_Page_084.jp2
464c153c98d9e191815556e113a5989a
ee0b6ebe224c8fd7dc341340aaa994018a24c643
99735 F20110112_AAAWTY rao_y_Page_067.jp2
6d5cdf80f8b2270d62a160a253471953
39efe49d601cc94ed02ab4b776b47bd09a72f412
F20110112_AAAXAH rao_y_Page_098.tif
885550cd241343169adf26a532f22b9a
1d08d67a793f50cefcd01a3ea01417accc1f61c9
819087 F20110112_AAAWVB rao_y_Page_109.jp2
780a4a0d12a3f4ec2773d5d51fb2c47e
0dab1fd91973908be4c2ac43ad7ae80438ff5948
7116 F20110112_AAAXYE rao_y_Page_163thm.jpg
5450bceb33e244c60386bf95015d3820
57c26f52124ef728ee03d39ebb0a6f046afc9a86
6426 F20110112_AAAXXP rao_y_Page_155thm.jpg
0538fdf9c0af9c2785534633535cc2b3
720eb1569c8fc3fe32b3a0ca063e43894765b66c
81766 F20110112_AAAWUN rao_y_Page_085.jp2
e3561ed10128390766294a64cfa9d1e9
0977e735f0959bd5fbb89bbda58e48dfe184dddf
907827 F20110112_AAAWTZ rao_y_Page_068.jp2
29410d67130209861aa2f572f9884324
94cefb053a900d99e7842ac99171aa30d76205c4
F20110112_AAAXAI rao_y_Page_099.tif
54fd91917aa985049e76067b7ee0258b
7fbee406751bd92d9a675e996d025a73aab8a151
113753 F20110112_AAAWVC rao_y_Page_110.jp2
2ebc4fa78ecc34d90fc6f4f95c76fa61
88513064842dc3edd826300ab8200c241b29b6f1
27268 F20110112_AAAXYF rao_y_Page_163.QC.jpg
c81147863cff44cef9f8676e4f9f83ee
87acdacf9feade3b15636c970760d9c5d144337a
5506 F20110112_AAAXXQ rao_y_Page_156thm.jpg
cde15182ae91d4ad0bd18cbe9538a676
924e67a25544d252e55ac2b94abe186c37000a30
91271 F20110112_AAAWUO rao_y_Page_086.jp2
5b9255141f64944ca382b203e5323812
e183aef2fe0b73e727e1371939b02457a5ab52c5
F20110112_AAAXAJ rao_y_Page_100.tif
ef40bcc8770a87a60d6b6fd7e31ed878
5cfabf07b7e0d6d41cd2fc05ee6f50c13555c119
101293 F20110112_AAAWVD rao_y_Page_112.jp2
651f84af4cea7f7b4d19143ac8a5c8c4
1a4a5ee218bd5326bd5d54c8a6973b377f32e173
29410 F20110112_AAAXYG rao_y_Page_164.QC.jpg
bdc8e38e5815ce7d8a69ba0a06551893
09a34001a465b2827131d991d9cd99b2496b1366
20350 F20110112_AAAXXR rao_y_Page_156.QC.jpg
b34e8325439c34efdf40c956bf57581b
f8645e04a451b147b692890d814cd50bcd7ee3cc
90491 F20110112_AAAWUP rao_y_Page_087.jp2
f81c049c75890b14390394549aa2826d
3e1b5bb4bacecca427f97900183cd24a4e409fe6
F20110112_AAAXAK rao_y_Page_102.tif
b5d8111848000b2c2ac5132ffc0a1559
f504313c8c3756395cdced7e85175c1bd08d0d30
72372 F20110112_AAAWVE rao_y_Page_114.jp2
eeb9154b93eece4cbf350460f1383132
859a6ef7ea3e0a47c272c258fb414363ed736e0b
8087 F20110112_AAAXYH rao_y_Page_165thm.jpg
8e6fecf5eb65c58ca55f941b1fb2bc96
4781f1ea086421820e29fc3c29e3bb75ed10d025
3090 F20110112_AAAXXS rao_y_Page_157thm.jpg
f5915cab411f977873004b9899810170
61bf99608870b373806bbd4a950f6ab9ceda78f1
983415 F20110112_AAAWUQ rao_y_Page_092.jp2
2b9cbc27eb274de9dd55f7e207daf491
f5892ffe264e317776f45617a4ec9852f9fcb488
F20110112_AAAXAL rao_y_Page_103.tif
f6410d50c05842b8ab0babdd641c6531
fc2ae4d5d1b374bdc01c534aa878a38fd1777f81
97708 F20110112_AAAWVF rao_y_Page_115.jp2
329c87f0810a1b12bd1d0590eaa70b18
8f501cf8b2e1a56d6165e47273dab20789afa41c
12083 F20110112_AAAXXT rao_y_Page_157.QC.jpg
434ad7d85610011b616cab84994b416b
baaaa302113fd4f18a84012b9fe7df820b107415
1051929 F20110112_AAAWUR rao_y_Page_093.jp2
803986b38447adbe924c7c1553469a9f
74e790afdf4dbed3a0c40f5cb4156bc4547522e2
F20110112_AAAXBA rao_y_Page_123.tif
c76ad03982d06b691e8aa856ad716f8b
7357cbbdc35c8a89cee5eacb98154b063770a858
F20110112_AAAXAM rao_y_Page_104.tif
24f4f6a4961281e4ccb3e6c524079627
92620ad6ae3a642fd9c9381c4c7303b503233468
7625 F20110112_AAAXYI rao_y_Page_166thm.jpg
94327c671d17dd1f347a47d0f4a7bff9
b8898fbe76cd9f88b4191e77045858dface51f6c
7697 F20110112_AAAXXU rao_y_Page_158thm.jpg
04f41da8b93f9d7b67ce0b28eea6aa17
875cfcb7c9be6bbf409c7b3245821474df2246f0
1051851 F20110112_AAAWUS rao_y_Page_094.jp2
bab8085bfc088fea4d265d9aeee3abe8
22e76768128195761f6551076b60bdbde3d779ce
F20110112_AAAXBB rao_y_Page_124.tif
e9a06fbb5ea056552c655ad637942db8
91990e7c95e8271b68fc4a295dcd38ab57a5a196
F20110112_AAAXAN rao_y_Page_105.tif
da051148a943cbbb9b24e10d1359b56f
e739968e54ba63319866e7118cca3c3e3894062b
673514 F20110112_AAAWVG rao_y_Page_116.jp2
1af10144baf04c68de28c5752477f3bc
3135267792c14e2ba7ac3d94eefcb425af8f2186
8393 F20110112_AAAXYJ rao_y_Page_167thm.jpg
f70a7f50718ef7d93979d00005a6e827
f345fe671fcebcb5e597841de15e7be39a6b8089
30135 F20110112_AAAXXV rao_y_Page_158.QC.jpg
dc564d2e8b22c74987ab5d7aa2a2e436
5652403d2811969b4eac224b1af8fddab447b514
F20110112_AAAXBC rao_y_Page_125.tif
PAGE 1

AN AUGMENTED ERROR CRITERION FOR LINEAR ADAPTIVE FILTERING: THEORY, ALGORITHMS AND APPLICATIONS

By

YADUNANDANA NAGARAJA RAO

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2004

PAGE 2

Copyright 2004 by YADUNANDANA NAGARAJA RAO

PAGE 3

This dissertation is dedicated to my family, teachers and friends for their enduring love, support and friendship.

PAGE 4

ACKNOWLEDGMENTS

First of all, I would like to thank Dr. Jose Principe for his constant guidance, encouragement, patience and continuous support over the past five years. His enthusiasm for research and quest for excellence have left an everlasting impression in my mind. To me, he has been more than an advisor, and this research would not have been possible without him. Secondly, I would like to thank Dr. John Harris for being on my committee and offering me guidance not only in research but in many other aspects of life. I would also like to thank Dr. Michael Nechyba and Dr. Mark Yang for being on my committee.

I would like to thank Dr. Deniz Erdogmus, my friend and colleague at CNEL, whose contributions to this research have been tremendous. I deeply benefited from all the long hours of fruitful discussions with him on a multitude of topics. His drive for research and enormous ability to motivate others have been quite inspirational. I also wish to extend my acknowledgements to all the members of CNEL, who have been primarily responsible for my fruitful stay in the lab.

I would like to extend my gratitude to the always cheerful Ellie Goodwin for her golden words of wisdom. Her ability to get things done was truly remarkable. I would also like to acknowledge Linda Kahila for the extensive support and assistance she provided during my stay at UFL.

I would like to thank my family and friends for their constant love and encouragement. They have allowed me to pursue whatever I wanted in life. Without their guidance and affection, it would have been impossible for me to advance my education.

PAGE 5

Lastly, I would like to thank my life partner Geetha for making my life beautiful and for being by my side whenever I needed her. Her everlasting love has made me a better individual.

PAGE 6

TABLE OF CONTENTS

ACKNOWLEDGMENTS ... iv
LIST OF TABLES ... x
LIST OF FIGURES ... xi
ABSTRACT ... xiv

CHAPTER

1 MEAN SQUARED ERROR BASED ADAPTIVE SIGNAL PROCESSING SYSTEMS: A BRIEF REVIEW ... 1
    Introduction ... 1
    Why Do We Need Adaptive Systems? ... 2
    Design of Adaptive Systems ... 3
        Least Mean Squares (LMS) Algorithm ... 5
        Recursive Least Squares (RLS) Algorithm ... 6
        Other Algorithms ... 7
    Limitations of MSE Criterion Based Linear Adaptive Systems ... 8
    Total Least Squares (TLS) and Other Methods ... 10
        Limitations of TLS ... 11
        Extended TLS for Correlated Noise ... 11
        Other Methods ... 13
    Summary ... 13

2 AUGMENTED ERROR CRITERION FOR LINEAR ADAPTIVE SYSTEMS ... 15
    Introduction ... 15
    Error Whitening Criterion (EWC) ... 16
        Motivation for Error Whitening Criterion ... 17
        Analysis of the Autocorrelation of the Error Signal ... 17
    Augmented Error Criterion (AEC) ... 22
    Properties of Augmented Error Criterion ... 24
        Shape of the Performance Surface ... 24
        Analysis of the Noise-free Input Case ... 25
        Analysis of the Noisy Input Case ... 27
        Orthogonality of Error to Input ... 29

PAGE 7

        Relationship to Error Entropy Maximization ... 30
        Note on Model-Order Selection ... 31
        The Effect of β on the Weight Error Vector ... 32
    Numerical Case Studies of AEC with the Theoretical Solution ... 33
    Summary ... 40

3 FAST RECURSIVE NEWTON TYPE ALGORITHMS FOR AEC ... 41
    Introduction ... 41
    Derivation of the Newton Type Recursive Error Whitening Algorithm ... 41
        Extension of the REW Algorithm for Multiple Lags ... 45
        Relationship to the Recursive Instrumental Variables Method ... 48
    Recursive EWC Algorithm Based on Minor Components Analysis ... 49
    Experimental Results ... 51
        Estimation of System Parameters in White Noise Using REW ... 51
        Effect of β and Weight Tracks of REW Algorithm ... 53
        Performance Comparisons between REW, EWC-TLS and IV methods ... 55
    Summary ... 57

4 STOCHASTIC GRADIENT ALGORITHMS FOR AEC ... 59
    Introduction ... 59
    Derivation of the Stochastic Gradient AEC-LMS Algorithm ... 59
    Convergence Analysis of AEC-LMS Algorithm ... 61
        Proof of AEC-LMS Convergence for β > 0 ... 61
        Proof of AEC-LMS Convergence for β < 0 ... 63
        On-line Implementations of AEC-LMS for β < 0 ... 67
        Excess Error Correlation Bound for EWC-LMS ... 69
        Other Variants of the AEC-LMS Algorithms ... 72
        AEC-LMS Algorithm with Multiple Lags ... 73
    Simulation Results ... 74
        Estimation of System Parameters in White Noise ... 74
        Weight Tracks and Convergence ... 76
        Inverse Modeling and Controller Design Using EWC ... 80
    Summary ... 83

5 LINEAR PARAMETER ESTIMATION IN CORRELATED NOISE ... 85
    Introduction ... 85
    Existing Solutions ... 86
    Criterion for Estimating the Parameters in Correlated Noise ... 87
    Stochastic Gradient Algorithm and Analysis ... 90
    Simulation Results ... 93
        System Identification with the Analytical Solution ... 93
        System Identification with Stochastic Gradient Algorithm ... 95
        Verification of the Local Stability of the Gradient Algorithm ... 95
    Extensions to Correlated Noise in the Desired Data ... 97

PAGE 8

    Experimental Results ... 100
        System Identification ... 100
        Stochastic Algorithm Performance ... 100
    Summary ... 101

6 ON UNDERMODELING AND OVERESTIMATION ISSUES IN LINEAR SYSTEM ADAPTATION ... 104
    Introduction ... 104
    Undermodeling Effects ... 105
    Overestimation Effects ... 108
    Experimental Results ... 109
    Summary ... 113

7 CONCLUSIONS AND FUTURE DIRECTIONS ... 114
    Conclusions ... 114
    Future Research Directions ... 116

APPENDIX

A FAST PRINCIPAL COMPONENTS ANALYSIS (PCA) ALGORITHMS ... 118
    Introduction ... 118
    Brief Review of Existing Methods ... 119
    Derivation of the Fixed-Point PCA Algorithm ... 121
    Mathematical Analysis of the Fixed-Point PCA Algorithm ... 123
    Self-Stabilizing Fixed-Point PCA Algorithm ... 128
    Mathematical Analysis of the Self-Stabilizing Fixed-Point PCA Algorithm ... 129
    Minor Components Extraction: Self-Stabilizing Fixed-Point PCA Algorithm ... 132

B FAST TOTAL LEAST-SQUARES ALGORITHM USING MINOR COMPONENTS ANALYSIS ... 135
    Introduction ... 135
    Fast TLS Algorithms ... 136
    Simulation Results with TLS ... 139
        Simulation 1: Noise Free FIR Filter Modeling ... 139
        Simulation 2: FIR Filter Modeling with Noise ... 140

C ALGORITHMS FOR GENERALIZED EIGENDECOMPOSITION ... 143
    Introduction ... 143
    Review of Existing Learning Algorithms ... 143
    Fixed-Point Learning Algorithm for GED ... 145
    Mathematical Analysis ... 150

PAGE 9

D SOME DERIVATIONS FOR THE NOISY INPUT CASE ... 155

E ORTHOGONALITY OF ERROR TO INPUT ... 156

F AEC AND ERROR ENTROPY MAXIMIZATION ... 157

G PROOF OF CONVERGENCE OF ERROR VECTOR NORM IN AEC-LMS ... 159

LIST OF REFERENCES ... 160

BIOGRAPHICAL SKETCH ... 172

PAGE 10

LIST OF TABLES

1-1. Outline of the RLS Algorithm ... 7
3-1. Outline of the REW Algorithm ... 45

PAGE 11

LIST OF FIGURES

1-1. Block diagram of an Adaptive System ... 4
1-2. Parameter estimates using RLS algorithm with noisy data ... 9
2-1. Schematic diagram of EWC adaptation ... 16
2-2. The MSE performance surfaces, the AEC contour plot, and the AEC performance surface for three different training data sets and 2-tap adaptive FIR filters ... 25
2-3. Demonstration scheme with coloring filter h, true mapping filter w, and the uncorrelated white signals ... 34
2-4. The average squared error-norm of the optimal weight vector as a function of autocorrelation lag L for various β values and SNR levels ... 35
2-5. The average squared error-norm of the optimal weight vector as a function of filter length m for various β values and SNR levels ... 35
2-6. Histograms of the weight error norms (dB) obtained in 50 Monte Carlo simulations using 10000 samples of noisy data using MSE (empty bars) and EWC with β = -0.5 (filled bars). The subfigures in each row use filters with 4, 8, and 12 taps, respectively. The subfigures in each column use noisy samples at -10, 0, and 10 dB SNR, respectively ... 37
2-7. Error autocorrelation function for MSE (dotted) and EWC (solid) solutions ... 38
3-1. Histogram plots showing the error vector norm for EWC-LMS, LMS algorithms and the numerical TLS solution ... 53
3-2. Performance of REW algorithm (a) SNR = 0 dB and (b) SNR = -10 dB over various β values ... 54
3-3. Weight tracks for REW and RLS algorithms ... 55
3-4. Histogram plots showing the error vector norms for all the methods ... 56
3-5. Convergence of the minor eigenvector of G with (a) noisy data and (b) clean data ... 57

PAGE 12

4-1. Histogram plots showing the error vector norm for EWC-LMS, LMS algorithms and the numerical TLS solution ... 75
4-2. Comparison of stochastic versus recursive algorithms ... 76
4-3. Contour plots with the weight tracks showing convergence to saddle point ... 77
4-4. Weight tracks for the stochastic algorithm ... 77
4-5. Contour plot with weight tracks for different initial values for the weights ... 78
4-6. Contour plot with weight tracks for EWC-LMS algorithm with sign information (left) and without sign information (right) ... 79
4-7. EWC performance surface (left) and weight tracks for the noise-free case with and without sign information (right) ... 80
4-8. Block diagram for model reference inverse control ... 81
4-9. Block diagram for inverse modeling ... 81
4-10. Plot of tracking results and error histograms ... 82
4-11. Magnitude and phase responses of the reference model and designed model-controller pairs ... 82
5-1. System identification block diagram showing data signals and noise ... 88
5-2. Histogram plots showing the error vector norm in dB for the proposed and MSE criteria ... 94
5-3. Weight tracks for LMS and the stochastic gradient algorithm in the system identification example ... 96
5-4. Weight tracks for LMS and the stochastic gradient algorithm showing stability around the optimal solution ... 96
5-5. Histogram plots of the error norms for the proposed method and MSE ... 101
5-6. Weight tracks showing the convergence of the stochastic gradient algorithm ... 102
6-1. Undermodeling effects with input SNR = 0 dB (left) and input SNR = 5 dB (right) ... 109
6-2. Crosscorrelation plots for EWC and MSE for undermodeling ... 110
6-3. Crosscorrelation plots for EWC and MSE for overestimation ... 111
6-4. Power normalized error crosscorrelation for EWC and MSE with overestimation ... 111


6-5. Weight tracks for LMS and the stochastic gradient algorithm in the case of undermodeling ... 112
A-1. Representative network architecture showing lateral connections ... 134
B-1. Estimation of minor eigenvector ... 140
B-2. Minimum eigenvalue estimation ... 141
B-3. Comparison between the estimated and true filter coefficients using TLS ... 141
B-4. Comparison between the estimated and true filter coefficients using RLS ... 142


Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

AN AUGMENTED ERROR CRITERION FOR LINEAR ADAPTIVE FILTERING: THEORY, ALGORITHMS AND APPLICATIONS

By Yadunandana Nagaraja Rao

May 2004

Chair: Jose C. Principe
Cochair: John G. Harris
Major Department: Electrical and Computer Engineering

Ever since its conception, the mean-squared error (MSE) criterion has been the workhorse of optimal linear adaptive filtering. However, it is well known that the MSE criterion is no longer optimal in situations where the data are corrupted by noise. Noise, being omnipresent in most engineering applications, can result in severe errors in the solutions produced by the MSE criterion. In this dissertation, we propose novel error criteria and the associated learning algorithms, followed by a detailed mathematical analysis of these algorithms. Specifically, these criteria are designed to solve the problem of optimal filtering with noisy data. First, we discuss a new criterion, called the augmented error criterion (AEC), that can provide unbiased parameter estimates even in the presence of additive white noise. Then, we derive novel, online, sample-by-sample learning algorithms with varying degrees of complexity and performance that are tailored for real-world applications. Rigorous mathematical analysis of the new algorithms is presented.


In the second half of this dissertation, we extend the AEC to handle correlated noise in the data. The modifications introduced enable us to obtain optimal, unbiased parameter estimates of a linear system when the data are corrupted by correlated noise. Further, we achieve this without explicitly assuming any prior information about the noise statistics. The analytical solution is derived, and an iterative stochastic algorithm is presented to estimate this optimal solution.

The proposed criteria and learning algorithms can be applied to many engineering problems. System identification and controller design are obvious areas where the proposed criteria can be used efficiently. Other applications include model-order estimation in the presence of noise and the design of multiple local linear filters to characterize complicated nonlinear systems.


CHAPTER 1
MEAN SQUARED ERROR BASED ADAPTIVE SIGNAL PROCESSING SYSTEMS: A BRIEF REVIEW

Introduction

Conventional signal processing techniques can typically be formulated as linear or nonlinear operations on the input data. For example, a finite impulse response (FIR) filter is a linear combination of time-delayed versions of the input signal. A linear combiner is nothing but a linear projector in the input space. Mathematically speaking, a projection can be defined as a linear transformation between two vector spaces [1]. These linear transformations can be vectors in $\mathbb{R}^{n\times 1}$ or matrices in $\mathbb{R}^{n\times n}$. For vector transformations the projections are given by inner products, and in the case of matrix transformations the projections become rotations. Most design tasks in signal processing involve finding appropriate projections that perform the desired operation on the input. For instance, the filtering task is basically finding the projection that preserves only a specified part of the input information [2]. Another example is data compression, wherein we estimate an optimal projection matrix or rotation matrix that preserves most of the information in the input space. The first step in finding these projections is to understand the specifications of the problem. Then, the specifications are translated into mathematical criteria and equations that can be solved using various mathematical and statistical tools. The solutions thus obtained are often optimal with respect to the criterion used.


Why Do We Need Adaptive Systems?

Depending on the problem at hand, estimating the optimal projections can be a daunting task. Complexities can arise from the non-availability of a closed-form solution, or even the non-existence of a feasible analytical solution; in the latter case, we may have to be content with sub-optimal solutions. On the other hand, scenarios exist where we have to synthesize projections that are not based on user specifications. For instance, suppose we are given two signals, an input and a desired signal, and the goal is to find the optimal projection (filter) that generates the desired signal from the input. The specifications do not convey any explicit information about the type of filter we have to design, and the conventional filter-synthesis cookbook contains no recipes for these types of problems. Such problems can be solved by learning mechanisms that intelligently deduce the optimal projections using only the input and desired signals, or at times the input signal alone. These learning mechanisms form the foundation of adaptive systems and neural networks. All learning mechanisms have at least two major pieces: the criterion and the search algorithm. The search algorithm finds the best possible solution in the space of the inputs under some constraints. Optimization theory has provided us with a variety of search techniques possessing different degrees of complexity and robustness [3]. These learning-based adaptive systems provide a powerful methodology that can go beyond conventional signal processing; the projections derived by these adaptive systems are called optimal adaptive projections. Another very desirable feature of adaptive systems is their innate ability to automatically adjust and track the changing statistical properties of signals. This can be vital in many engineering applications, viz., wireless data transmission, biomedical monitoring and control, echo cancellation over wired


telephone lines, etc., wherein the underlying physical sources that generate the information change over time. In the next section, we briefly review the theory behind the design of linear adaptive systems.

Design of Adaptive Systems

A block diagram of an adaptive system is shown in Figure 1-1. Assume that we are given a zero-mean input signal $x_n$ and a zero-mean desired signal $d_n$. Further, these signals are assumed to be corrupted by noise terms $v_n$ and $u_n$, respectively. Let the parameters of the adaptive system be denoted by the weight vector $\mathbf{w}$. Note that we have not put any constraints on the topology of the adaptive filter; for convenience, we will assume an FIR topology in this chapter. The goal then is to generate an output $y_n$ that best approximates the desired signal. To achieve this, a criterion (often referred to as the cost $J(\mathbf{w})$) is devised, which is typically a function of the error $e_n$, defined as the difference between the desired signal and the output, i.e., $e_n = d_n - y_n$. The most widely used criterion in the literature is the mean-squared error (MSE), which is defined as

$$J(\mathbf{w}) = E[e_n^2] \qquad (1.1)$$

The MSE cost function has some nice properties, namely:

- Physical relevance to energy.
- The performance surface (the shape of $J(\mathbf{w})$) is smooth and has continuous derivatives.
- The performance surface is a convex paraboloid with a single global minimum.
- The weight vector $\mathbf{w}^*$ corresponding to the global minimum is the best linear unbiased estimate in the absence of noise [4].
- If the desired signal is a future sample of the input (prediction), then the filter with coefficients $\mathbf{w}^*$ is guaranteed to be minimum phase [5].


Figure 1-1. Block diagram of an adaptive system.

Once the criterion is fixed, the next step is to design an algorithm to optimize the cost function; this forms the other important element of an adaptive system. Optimization is a well-researched topic, and there is a plethora of search methods for convex cost functions. Specifically, we minimize the MSE cost function, and since the performance surface is quadratic with a single global minimum, an analytical closed-form optimal solution $\mathbf{w}^*$ can easily be determined. The optimal solution is called the Wiener solution for MSE [6] (Wiener filter), which is given by

$$\mathbf{w}^* = \mathbf{R}^{-1}\mathbf{P} \qquad (1.2)$$

In equation (1.2), $\mathbf{R}$ denotes the covariance matrix of the input, defined as $\mathbf{R} = E[\mathbf{x}_k\mathbf{x}_k^T]$, and the vector $\mathbf{P}$ denotes the cross-correlation between the desired signal and the lagged input, defined as $\mathbf{P} = E[\mathbf{x}_k d_k]$. Computing the Wiener solution requires inverting the matrix $\mathbf{R}$, which takes $O(N^3)$ operations [7]. However, owing to the time-delay embedding of the input, the matrix $\mathbf{R}$ can easily be shown to be symmetric and Toeplitz, which facilitates a computationally efficient inverse operation with complexity $O(N^2)$ [8]. From the point of view of an adaptive system, the Wiener solution is still not elegant,


because one requires knowledge of all the data samples to compute equation (1.2). A sample-by-sample (iterative) algorithm is more desirable, as it suits the framework of an adaptive system. The most commonly used algorithms for iteratively estimating the optimal Wiener solution $\mathbf{w}^*$ are the stochastic-gradient-based least mean squares (LMS) and the fixed-point-type recursive least squares (RLS).

Least Mean Squares (LMS) Algorithm

The gradient of the cost function in (1.1) is given by

$$\nabla_{\mathbf{w}} J(\mathbf{w}) = -2E[e_k \mathbf{x}_k] \qquad (1.3)$$

Notice that the output of the adaptive filter, $y_n$, is simply the inner product between the weight vector $\mathbf{w}$ and the vector $\mathbf{x}_n$, which comprises the delayed versions of the input signal $x_n$. Instead of computing the exact gradient, Widrow and fellow researchers [9,10] proposed the instantaneous gradient, which considers only the most recent data samples (both input and desired). This led to the development of the stochastic gradient algorithm for MSE minimization that is popularly known as the least mean squares (LMS) algorithm. The stochastic gradient is given by

$$\nabla_{\mathbf{w}} J(\mathbf{w}) \approx -2\,e_k \mathbf{x}_k \qquad (1.4)$$

Once the instantaneous gradient is known, the search proceeds in the direction opposite to the gradient, which gives us the stochastic LMS algorithm in (1.5):

$$\mathbf{w}(k+1) = \mathbf{w}(k) + \eta(k)\,e_k \mathbf{x}_k \qquad (1.5)$$

The term $\eta(k)$ denotes a time-varying step size that is typically chosen from a set of small positive numbers. Under mild conditions, it is possible to show that the LMS algorithm converges in the mean to the Wiener solution [10-14].
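As a concrete illustration, here is a minimal sketch of the LMS update (1.5) for an FIR filter with a fixed step size; the function and variable names are illustrative, not taken from the dissertation:

```python
import numpy as np

def lms_filter(x, d, num_taps, step_size):
    """Adapt an FIR filter with the LMS rule w(k+1) = w(k) + eta e_k x_k, eq. (1.5)."""
    w = np.zeros(num_taps)                     # weight vector, initialized to zero
    for k in range(num_taps - 1, len(x)):
        x_k = x[k - num_taps + 1:k + 1][::-1]  # tapped delay line, most recent sample first
        e_k = d[k] - w @ x_k                   # instantaneous error e_k = d_k - y_k
        w += step_size * e_k * x_k             # step against the instantaneous gradient (1.4)
    return w
```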


The stochastic LMS algorithm is linear in complexity, i.e., $O(N)$, and allows online, local computations. These nice features facilitate efficient hardware implementation for real-world adaptive systems. Being a stochastic gradient algorithm, LMS suffers from problems related to slow convergence and excessive misadjustment in the presence of noise [14,15]. Higher-order methods have been proposed to mitigate these effects; mainly they are variants of the quasi-Newton, Levenberg-Marquardt (LM) and conjugate-gradient (CG) methods popular in optimization [16-17]. Alternatively, we can derive a recursive fixed-point algorithm to iteratively estimate the optimal Wiener solution. This is the well-known recursive least squares (RLS) algorithm [18,19].

Recursive Least Squares (RLS) Algorithm

The derivation of the RLS algorithm utilizes the fact that the input covariance matrix $\mathbf{R}$ can be iteratively estimated from its past values using the recursive relation

$$\mathbf{R}(k) = \mathbf{R}(k-1) + \mathbf{x}_k\mathbf{x}_k^T \qquad (1.6)$$

The above equation can also be viewed as a rank-1 update of the input covariance matrix $\mathbf{R}$. Further, the cross-correlation vector $\mathbf{P}$ satisfies the recursion

$$\mathbf{P}(k) = \mathbf{P}(k-1) + \mathbf{x}_k d_k \qquad (1.7)$$

We know that the optimal Wiener solution at time instant $k$ is simply

$$\mathbf{w}^*(k) = \mathbf{R}^{-1}(k)\mathbf{P}(k) \qquad (1.8)$$

Recall at this point the matrix inversion lemma [7,8], which allows us to recursively update the inverse of a matrix:

$$\mathbf{R}^{-1}(k) = \mathbf{R}^{-1}(k-1) - \frac{\mathbf{R}^{-1}(k-1)\,\mathbf{x}_k\mathbf{x}_k^T\,\mathbf{R}^{-1}(k-1)}{1 + \mathbf{x}_k^T\mathbf{R}^{-1}(k-1)\,\mathbf{x}_k} \qquad (1.9)$$


It is important to note that the inversion lemma is useful only when the matrix itself can be expressed using reduced-rank updates, as in equation (1.6). By plugging equation (1.9) into the Wiener solution in (1.8) and using the recursive update for $\mathbf{P}(k)$ in (1.7), we can derive the RLS algorithm outlined in Table 1-1 below.

Table 1-1. Outline of the RLS algorithm.

Initialize $\mathbf{R}^{-1}(0) = c\mathbf{I}$, where $c$ is a large positive constant, and $\mathbf{w}(0) = \mathbf{0}$ (an all-zero weight vector). At every iteration, compute

$$\boldsymbol{\kappa}(k) = \frac{\mathbf{R}^{-1}(k-1)\,\mathbf{x}_k}{1 + \mathbf{x}_k^T\mathbf{R}^{-1}(k-1)\,\mathbf{x}_k}$$
$$e(k) = d(k) - \mathbf{w}^T(k-1)\,\mathbf{x}_k$$
$$\mathbf{w}(k) = \mathbf{w}(k-1) + e(k)\,\boldsymbol{\kappa}(k)$$
$$\mathbf{R}^{-1}(k) = \mathbf{R}^{-1}(k-1) - \boldsymbol{\kappa}(k)\,\mathbf{x}_k^T\,\mathbf{R}^{-1}(k-1)$$

The RLS algorithm is a truly fixed-point method, as it tracks the exact Wiener solution at every iteration. Also observe that the complexity of the algorithm is $O(N^2)$, compared with the linear complexity of the LMS algorithm. This additional increase in complexity is compensated by the fast convergence and zero misadjustment of the RLS algorithm.
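The recursion in Table 1-1 translates directly into code. A minimal sketch follows, with illustrative names (the gain vector is written out explicitly); it is a sketch of the standard recursion, not the dissertation's specific implementation:

```python
import numpy as np

def rls_filter(x, d, num_taps, c=1e4):
    """RLS recursion of Table 1-1; tracks the exact Wiener solution at every step."""
    R_inv = c * np.eye(num_taps)               # R^{-1}(0) = c I, c a large positive constant
    w = np.zeros(num_taps)                     # w(0) = 0
    for k in range(num_taps - 1, len(x)):
        x_k = x[k - num_taps + 1:k + 1][::-1]  # tapped delay line
        gain = R_inv @ x_k / (1.0 + x_k @ R_inv @ x_k)   # gain vector
        e_k = d[k] - w @ x_k                   # a priori error
        w += e_k * gain                        # weight update
        R_inv -= np.outer(gain, x_k @ R_inv)   # rank-1 update of the inverse, eq. (1.9)
    return w
```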


Other Algorithms

Although LMS and RLS form the core of adaptive signal processing algorithms, researchers have proposed many other variants possessing varying degrees of complexity and performance. Important among them are the sign LMS algorithms, introduced for reduced-complexity hardware implementations [20,21]. Historically, the sign-error algorithm has been utilized in the design of channel equalizers [20] and also in the 32 kbps ADPCM digital coding scheme [22]. To improve the speed of convergence with minimum misadjustment, variable step-size LMS and normalized LMS algorithms have been proposed [23-27]. Leaky LMS algorithms [28] have been explored to mitigate finite word-length effects, at the expense of introducing some bias in the optimal solution. Several extensions to the RLS algorithm have also been studied; some of these algorithms show improved robustness against round-off errors and superior numerical stability [29,30]. The conventional RLS algorithm works well when the data statistics do not change over time (stationarity assumption). The tracking abilities of RLS in non-stationary conditions have been analyzed by Eleftheriou and Falconer [31], and many solutions have been proposed [14].

Limitations of MSE Criterion Based Linear Adaptive Systems

Although MSE-based adaptive systems have been very popular, the criterion may not be the optimal choice for many engineering applications. For instance, consider the problem of system identification [32], stated as follows: given a set of noisy input and output measurements, where the outputs are the responses of an unknown system, obtain a parametric model estimate of the unknown system. If the unknown system is nonlinear, then it is obvious that MSE minimization would not result in the best possible representation of the system (plant). Criteria that utilize higher-order statistics, such as the error entropy, can potentially provide a better model [33,34]. Let us restrict ourselves to the class of linear parametric models. Although the Wiener solution is optimal in the least-squares sense, the biased input covariance matrix $\mathbf{R}$ in the presence of additive white input noise yields a bias in the optimal solution compared with what would have been obtained with noise-free data. (The Wiener solution with noise-free data gives unbiased estimates; we refer to the mismatch between the estimates obtained with and without noise as the bias introduced by noise.) This is a major drawback, since noise is omnipresent in practical scenarios. In order to illustrate the degradation in the quality of the parameter estimate, we created a random input time


series with arbitrary coloring and passed it through an FIR filter with 50 taps. The filtered data were used as the desired signal. Uncorrelated white noise was added to the colored input signal, and the input signal-to-noise ratio (SNR) was fixed at 0 dB. The RLS algorithm was then used to estimate the weight vector. Ideally, if the SNR were infinite, RLS would have produced a weight vector exactly matching the FIR filter. However, because of the noisy input, the RLS estimates were biased, as can be seen in Figure 1-2. This is a very serious drawback of the MSE criterion, further accentuated by the fact that the optimal Wiener MSE solution varies with changing noise power.

Figure 1-2. Parameter estimates using the RLS algorithm with noisy data (filter coefficients estimated using RLS vs. true values).
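A small sketch reproducing this experiment under the stated setup (50-tap FIR plant, colored input, 0 dB input SNR) is shown below. It reuses the illustrative `rls_filter` routine sketched earlier; the coloring filter and plant are random stand-ins rather than the ones used for Figure 1-2:

```python
import numpy as np

rng = np.random.default_rng(0)
n, num_taps = 20000, 50
w_true = rng.standard_normal(num_taps)                   # unknown 50-tap FIR plant

x_clean = np.convolve(rng.standard_normal(n), [1.0, 0.7, 0.2])[:n]  # colored input
d = np.convolve(x_clean, w_true)[:n]                     # noise-free desired signal

noise_std = np.sqrt(np.var(x_clean))                     # 0 dB input SNR
x_noisy = x_clean + noise_std * rng.standard_normal(n)

w_rls = rls_filter(x_noisy, d, num_taps)                 # biased estimate, cf. Figure 1-2
print("normalized weight error:",
      np.linalg.norm(w_rls - w_true) / np.linalg.norm(w_true))
```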


Researchers have dwelt on this problem for many years, and several modifications have been proposed to mitigate the effect of noise on the estimate. Total least squares (TLS) is one method that is quite powerful in eliminating the bias due to noise [35-42]. The instrumental variables (IV) method, proposed as an extension of least squares (LS), has previously been applied to parameter estimation in white noise [32]. This method requires choosing a set of instruments that are uncorrelated with the noise in the input [32,43]. Yet another classical approach is subspace Wiener filtering [14,44]. This approach tries to suppress the bias by performing an optimal subspace projection (onto the principal component space) and then training a filter in the reduced input space. In the next few sections, we will briefly cover some of these methods and discuss their benefits and limitations.

Total Least Squares (TLS) and Other Methods

Mathematically speaking, TLS solves an over-determined set of linear equations of the form $\mathbf{Ax} = \mathbf{b}$, where $\mathbf{A} \in \mathbb{R}^{m\times n}$ is the data matrix, $\mathbf{b} \in \mathbb{R}^{m}$ is the desired vector, $\mathbf{x} \in \mathbb{R}^{n}$ is the parameter vector, and $m$ denotes the number of different observation vectors, each of dimension $n$ [41]. Alternatively, the linear equations can be written in the form $[\mathbf{A};\mathbf{b}]\,[\mathbf{x}^T\;\; -1]^T = \mathbf{0}$, where $[\mathbf{A};\mathbf{b}]$ denotes an augmented data matrix. Let the SVD [8] of the augmented data matrix be $[\mathbf{A};\mathbf{b}] = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T$, where $\mathbf{U}^T\mathbf{U} = \mathbf{I}_m$, $\mathbf{V}^T\mathbf{V} = \mathbf{I}_{n+1}$, and $\boldsymbol{\Sigma} = [\mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_{n+1});\; \mathbf{0}]$ with all singular values $\sigma_k \ge 0$. If $[\mathbf{A};\mathbf{b}]\,[\mathbf{x}^T\;\; -1]^T = \mathbf{0}$, the smallest singular value must be zero. This is possible only if $[\mathbf{x}^T\;\; -1]^T$ is a singular vector of $[\mathbf{A};\mathbf{b}]$ (corresponding to the zero singular value), normalized such that its $(n+1)$th element is $-1$. When $[\mathbf{A};\mathbf{b}]$ is a symmetric square matrix, the solution reduces to finding the eigenvector corresponding to the smallest eigenvalue of $[\mathbf{A};\mathbf{b}]$. The TLS solution in this special case is then

$$\begin{bmatrix}\mathbf{x} \\ -1\end{bmatrix} = -\frac{\mathbf{v}_{n+1}}{v_{n+1}^{n+1}} \qquad (1.10)$$

where $v_{n+1}^{n+1}$ is the last element of the minor eigenvector $\mathbf{v}_{n+1}$. The total least-squares technique can easily be applied to estimate the optimal solution using minor-components estimation algorithms [45-51].
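As a concrete (if simplified) illustration of (1.10), the sketch below computes the batch TLS solution from the SVD of the augmented matrix; it assumes the generic formulation above, not any specific algorithm from the later chapters:

```python
import numpy as np

def tls_solve(A, b):
    """Batch TLS: x from the right singular vector of [A; b] with the smallest singular value."""
    H = np.column_stack([A, b])   # augmented data matrix [A; b]
    _, _, Vt = np.linalg.svd(H)
    v = Vt[-1]                    # right singular vector for the smallest singular value
    return -v[:-1] / v[-1]        # normalize so the last element equals -1, eq. (1.10)
```

For comparison, the ordinary LS solution is `np.linalg.lstsq(A, b, rcond=None)[0]`; with i.i.d. noise of equal variance on both `A` and `b`, TLS removes the bias that LS exhibits.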


The computation of the TLS solution requires efficient algorithms for extracting the principal components [52] or the eigenvectors of the data covariance matrix. Eigendecomposition is a well-studied problem, and many algorithms have been proposed for online estimation of eigenvectors and eigenvalues directly from data samples [53-77]. We have proposed robust, sample-efficient algorithms for principal components analysis (PCA) that have outperformed most of the available methods. A brief review of PCA theory and the proposed algorithms is given in Appendix A. Brief mathematical analyses of the proposed algorithms, following the principles of stochastic approximation theory [78-85], are also included. A fast minor components analysis (MCA) based TLS algorithm [86] is discussed in Appendix B.

Limitations of TLS

Total least squares gives unbiased estimates only when the noise in the input and the desired data are independent and identically distributed (i.i.d.) with the same variance. Further, when the noise is truly i.i.d. Gaussian-distributed, the TLS solution is also the maximum likelihood solution. However, the assumption of equal noise variances is very restrictive, as measurement noises seldom have similar variances. The generalized TLS (GTLS) problem [87] specifically deals with cases where the (still i.i.d.) noise variances are different. However, the caveat is that the ratio of the noise variances is assumed known, which is, once again, not a practical assumption.

Extended TLS for Correlated Noise

In order to overcome the i.i.d. assumption, Mathews and Cichocki have proposed the extended TLS (ETLS) [88], which allows the noise to have non-zero correlations. We will briefly describe the approach they adopted. Let the augmented input matrix $[\mathbf{A};\mathbf{b}]$ be


represented as $\mathbf{H} = [\mathbf{A};\mathbf{b}]$. Then the square matrix $\mathbf{H}^T\mathbf{H}$ can be written as a combination of the clean data matrix $\bar{\mathbf{H}}^T\bar{\mathbf{H}}$ and the noise covariance matrix $\mathbf{R}_N$:

$$\mathbf{H}^T\mathbf{H} = \bar{\mathbf{H}}^T\bar{\mathbf{H}} + \mathbf{R}_N \qquad (1.11)$$

The above equation is true when the noise is uncorrelated with the clean data. This assumption is reasonable, as noise processes are in general unrelated (hence independent) to the physical sources that produced the data. Assume that there exists a matrix transformation $\tilde{\mathbf{H}}$ such that

$$\tilde{\mathbf{H}} = \mathbf{H}\,\mathbf{R}_N^{-1/2} \qquad (1.12)$$

The transformed data correlation matrix of $\tilde{\mathbf{H}}$ is simply

$$\tilde{\mathbf{H}}^T\tilde{\mathbf{H}} = \mathbf{R}_N^{-1/2}\,\mathbf{H}^T\mathbf{H}\,\mathbf{R}_N^{-1/2} = \mathbf{R}_N^{-1/2}\,\bar{\mathbf{H}}^T\bar{\mathbf{H}}\,\mathbf{R}_N^{-1/2} + \mathbf{I} \qquad (1.13)$$

Equation (1.13) basically tells us that the transformed data are now corrupted by an i.i.d. noise process. Hence, we can now find the regular TLS solution with the transformed data by estimating the minor eigenvector of the matrix $\tilde{\mathbf{H}}^T\tilde{\mathbf{H}}$. In other words, the optimal ETLS solution for correlated noise signals is given by estimating the generalized eigenvector corresponding to the smallest generalized eigenvalue of the matrix pencil $(\mathbf{H}^T\mathbf{H}, \mathbf{R}_N)$. Solving the generalized eigenvalue problem [8] is a non-trivial task, and there are only a handful of algorithms that can provide online solutions. Our research in the area of PCA provided us the tools to develop a novel generalized eigenvalue decomposition (GED) algorithm. A short summary of the GED problem, existing learning algorithms, and the proposed algorithm is listed in Appendix C.
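Under the assumptions above (known, symmetric positive-definite noise covariance), a sketch of the ETLS whitening step (1.12)-(1.13) on top of the earlier illustrative `tls_solve` might look like this; the eigendecomposition-based matrix square root is one possible choice, and all names are hypothetical:

```python
import numpy as np

def etls_solve(A, b, R_N):
    """ETLS sketch: whiten [A; b] by R_N^{-1/2}, take the TLS minor vector, un-whiten."""
    H = np.column_stack([A, b])
    evals, evecs = np.linalg.eigh(R_N)                     # R_N symmetric positive definite
    R_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T  # R_N^{-1/2}, eq. (1.12)
    _, _, Vt = np.linalg.svd(H @ R_inv_sqrt)               # TLS on whitened data, eq. (1.13)
    z = R_inv_sqrt @ Vt[-1]                                # minor vector back in original coordinates
    return -z[:-1] / z[-1]
```

The un-whitening step reflects the generalized-eigenvector interpretation: `z` is (proportional to) the generalized eigenvector of the pencil (HᵀH, R_N) for the smallest generalized eigenvalue.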


Although ETLS seems to solve the general problem of linear parameter estimation, there is an inherent drawback: ETLS requires full knowledge of the noise correlation matrix $\mathbf{R}_N$. This assumption potentially leaves the problem of linear parameter estimation with noisy data wide open.

Other Methods

Infinite impulse response (IIR) system identification methods [89-92] deal with the problem of measurement noise in the output (desired) data. The instrumental variables (IV) method [93] for IIR system identification, on the other hand, does not guarantee stability. It has been known for quite a while that the unit-norm constraint for equation-error (EE) based system identification is much better than the conventional monic constraint [90-92]. However, imposing the unit-norm constraint appears too restrictive and hence limits applicability.

Summary

In this chapter, we started by describing linear adaptive systems, their criteria, and the associated algorithms. Most often, adaptive solutions are derived using the MSE criterion. We showed that the MSE criterion produces biased solutions in the presence of additive noise; the optimal Wiener MSE solution varies with changing noise variances, which is highly undesirable. Alternative approaches to combat the effect of noise on the parameter estimation have been explored. The most popular approaches are based on total least-squares principles. Generalized TLS and extended TLS improve upon the ability of TLS to provide bias-free estimates in the presence of additive noise. However, these methods rely on assumptions that can be very restrictive for real-world applications. Further, they require SVD and generalized SVD computations [94-105], which increase the complexity. Another method, subspace Wiener filtering, relies on accurate estimation of the signal subspace from the noisy data. This technique


reduces the effect of the bias when the signals are distinguishable from noise (the high-SNR scenario). Otherwise it fails, since the noise and signal subspaces cannot be separated.

Thus, it would not be fallacious to say that linear parameter estimation with noisy data is a hard problem that does not yet have a satisfactory solution in the existing literature. One of the major contributions of this dissertation is the development of an elegant solution to this problem without making any unreasonable assumptions about the noise statistics. Towards this end, we will present a new criterion based on the error signal and derive new learning algorithms.


CHAPTER 2
AUGMENTED ERROR CRITERION FOR LINEAR ADAPTIVE SYSTEMS

Introduction

In the previous chapter, we discussed the mean-squared error (MSE) criterion, which has been the workhorse of linear optimization theory due to the simple and analytically tractable structure of linear least squares. In adaptive filter theory, the classical Wiener-Hopf equations [6,10] are more commonly used, owing to the extension of least squares to functional spaces (Hilbert spaces [106]) proposed by Wiener [6]. For finite impulse response (FIR) filters (vector spaces), however, the two solutions coincide. There are also a number of important properties that help us understand the statistics of the Wiener solution, namely the orthogonality of the error signal to the input vector space, as well as the whiteness of the predictor error signal for stationary inputs, provided the filter is long enough [5,14]. However, in a number of applications of practical importance, the error sequence produced by the Wiener filter is not white. One of the most important is the case of inputs corrupted by white noise, where the Wiener solution is biased by the noise variance, as we saw in Chapter 1.

In this chapter, we will develop a new criterion that augments the MSE criterion. In fact, MSE becomes a special case of this new criterion, which we call the augmented error criterion (AEC). Further, we will show that, under some conditions, this new criterion can produce a partially white error sequence at the output of an adaptive system even with noisy data. This special case of the AEC is called the error whitening criterion (EWC). Our approach in this chapter will be as follows. We will first focus on


the problem of parameter estimation with noisy data and motivate the derivation of the error whitening criterion. Then, we will deduce the more generic augmented error criterion.

Error Whitening Criterion (EWC)

Consider the problem of parameter estimation with noisy data. Instead of minimizing the MSE, we will tackle the problem by introducing a new adaptation criterion that enforces zero autocorrelation of the error signal beyond a certain lag; hence the name error whitening criterion (EWC). Since we want to preserve the online properties of the adaptation algorithms, we propose to expand the error autocorrelation around a lag larger than the filter length using a Taylor series. Thus, instead of an error signal, we will end up with an error vector containing as many components as the terms kept in the Taylor series expansion. A schematic diagram of the proposed adaptation structure is depicted in Figure 2-1. The properties of this solution are very interesting, and it contains the Wiener solution as a special case. Additionally, for the case of two error terms, the same analytical tools developed for the Wiener filter can be applied with minor modifications. Moreover, when the input signal is contaminated with additive white

Figure 2-1. Schematic diagram of EWC adaptation.


noise, EWC produces the same optimal solution that would be obtained with the noise-free data, at the same computational complexity as the Wiener solution.

Motivation for Error Whitening Criterion

The classical Wiener solution yields a biased estimate of the reference filter weight vector in the presence of input noise. This problem arises due to the contamination of the input signal autocorrelation matrix by that of the additive noise. If a signal is contaminated with additive white noise, only the zero-lag autocorrelation is biased, by the amount of the noise power; autocorrelation values at all other lags remain at their original values. This observation rules out MSE as a good optimization criterion for this case. In fact, since the error power is the value of the error autocorrelation function at zero lag, the optimal weights will be biased because they depend on the input autocorrelation values at zero lag. The fact that the autocorrelation values at non-zero lags are unaffected by the presence of noise will prove useful in determining an unbiased estimate of the filter weights.

Analysis of the Autocorrelation of the Error Signal

The question that arises is what lag should be used to obtain the true weight vector in the presence of white input noise. Let us consider the autocorrelation of the training error at non-zero lags. Suppose noisy training data of the form $(\mathbf{x}(t), d(t))$ are provided, where $\mathbf{x}(t) = \tilde{\mathbf{x}}(t) + \mathbf{v}(t)$ and $d(t) = \tilde{d}(t) + u(t)$, with $\tilde{\mathbf{x}}(t)$ being the sample of the noise-free input vector at time $t$ (time is assumed to be continuous), $\mathbf{v}(t)$ the additive white noise vector on the input vector, $\tilde{d}(t)$ the noise-free desired output, and $u(t)$ the additive white noise on the desired output. Suppose that the true weight vector of the reference filter that generated the data is $\mathbf{w}_T$ (moving-average model). Then the


error at time $t$ is $e(t) = (\tilde{d}(t) + u(t)) - (\tilde{\mathbf{x}}(t) + \mathbf{v}(t))^T\mathbf{w}$, where $\mathbf{w}$ is the estimated weight vector. Equivalently, when the desired response belongs to the subspace of the input, i.e., $\tilde{d}(t) = \tilde{\mathbf{x}}^T(t)\mathbf{w}_T$, the error can be written as

$$e(t) = \tilde{\mathbf{x}}^T(t)(\mathbf{w}_T - \mathbf{w}) + u(t) - \mathbf{v}^T(t)\mathbf{w} \qquad (2.1)$$

Given this noisy training data, the MSE-based Wiener solution will not yield a residual training error that has zero autocorrelation for a number of consecutive lags, even when the contaminating noise signals are white. From (2.1) it is easy to see that the error will have a zero autocorrelation function if and only if

- the weight vector is equal to the true weights of the reference model, and
- the lag is beyond the Wiener filter length.

During adaptation, the issue is that the filter weights are not set at $\mathbf{w}_T$, so the error autocorrelation function will generally be nonzero. Therefore, a criterion to determine the true weight vector when the data are contaminated with white noise should force the long lags (beyond the filter length) of the error autocorrelation function to zero. This is exactly what the error-whitening criterion (EWC) that we propose here will do. There are two interesting situations that we should consider:

- What happens when the selected autocorrelation lag is smaller than the filter length?
- What happens when the selected autocorrelation lag is larger than the lag at which the autocorrelation function of the input signal vanishes?

The answer to the first question is simply that the solution will still be biased, since it will be obtained by inverting a biased input autocorrelation matrix. If the selected lag is $L$

smaller than the filter length, the noise autocorrelation still contaminates the corresponding sub-diagonal of the lagged input correlation matrix; in the special case of MSE, the selected lag is zero, and the zeroth sub-diagonal becomes the main diagonal, so the solution is biased by the noise power.

The answer to the second question is equally important. The MSE solution is quite stable because it is determined by the inverse of a diagonally dominant Toeplitz matrix. The diagonal dominance is guaranteed by the fact that the autocorrelation function of a real-valued signal peaks at zero lag. If other lags are used in the criterion, it is important to select the lag such that the corresponding autocorrelation matrix (which will be inverted) is not ill-conditioned. If the selected lag is larger than the length of the input autocorrelation function, then the autocorrelation matrix becomes singular and a solution cannot be obtained. Therefore, lags beyond the input signal correlation time should also be avoided in practice.

The observation that constraining the higher lags of the error autocorrelation function to zero yields unbiased weight solutions is quite significant. Moreover, the algorithmic structure of this new solution and the lag-zero MSE solution are still very similar. The noise-free case helps us understand why this similarity occurs. Suppose the desired signal is generated by $\tilde{d}(t) = \tilde{\mathbf{x}}^T(t)\mathbf{w}_T$, where $\mathbf{w}_T$ is the true weight vector. Now multiply both sides by $\tilde{\mathbf{x}}(t-\Delta)$ from the left and take the expected value of both sides to yield $E[\tilde{\mathbf{x}}(t-\Delta)\tilde{d}(t)] = E[\tilde{\mathbf{x}}(t-\Delta)\tilde{\mathbf{x}}^T(t)]\mathbf{w}_T$. Similarly, we can obtain $E[\tilde{\mathbf{x}}(t)\tilde{d}(t-\Delta)] = E[\tilde{\mathbf{x}}(t)\tilde{\mathbf{x}}^T(t-\Delta)]\mathbf{w}_T$. Adding the corresponding sides of these two equations yields

$$E[\tilde{\mathbf{x}}(t-\Delta)\tilde{d}(t) + \tilde{\mathbf{x}}(t)\tilde{d}(t-\Delta)] = E[\tilde{\mathbf{x}}(t-\Delta)\tilde{\mathbf{x}}^T(t) + \tilde{\mathbf{x}}(t)\tilde{\mathbf{x}}^T(t-\Delta)]\,\mathbf{w}_T \qquad (2.2)$$

This equation is similar to the standard Wiener-Hopf equation [9,10],


$E[\tilde{\mathbf{x}}(t)\tilde{\mathbf{x}}^T(t)]\,\mathbf{w}_T = E[\tilde{\mathbf{x}}(t)\tilde{d}(t)]$. Yet it is different, due to the correlations being evaluated at a lag other than zero, which means that the weight vector can be determined by constraining higher-order lags of the error autocorrelation. Now that we have described the structure of the solution, let us address the issue of training linear systems using error correlations.

Adaptation exploits the sensitivity of the error autocorrelation with respect to the weight vector of the adaptive filter. We will formulate the solution in continuous time first, for the sake of simplicity. If the support of the impulse response of the adaptive filter is of length $m$, we evaluate the error autocorrelation function and its sensitivity at a lag $\Delta \ge m$ ($\Delta$ and $m$ both real numbers). Assuming that the noises in the input and desired are uncorrelated with each other and with the input signal, we get

$$\rho_e(\Delta) = E[e(t)e(t-\Delta)] = (\mathbf{w}_T-\mathbf{w})^T\,E[\tilde{\mathbf{x}}(t)\tilde{\mathbf{x}}^T(t-\Delta)]\,(\mathbf{w}_T-\mathbf{w}), \quad \frac{\partial\rho_e(\Delta)}{\partial\mathbf{w}} = -2\,E[\tilde{\mathbf{x}}(t)\tilde{\mathbf{x}}^T(t-\Delta)]\,(\mathbf{w}_T-\mathbf{w}) \qquad (2.3)$$

The identity in equation (2.3) immediately tells us that the sensitivity of the error autocorrelation with respect to the weight vector becomes zero, i.e., $\partial\rho_e(\Delta)/\partial\mathbf{w} = \mathbf{0}$, if $\mathbf{w}_T - \mathbf{w} = \mathbf{0}$. This observation emphasizes the following important conclusion: when given training data that are generated by a linear filter but contaminated with white noise, it is possible to derive simple adaptive algorithms that can determine the underlying filter weights without bias. Furthermore, if $(\mathbf{w}_T - \mathbf{w})$ is not in the null space of


$E[\tilde{\mathbf{x}}(t)\tilde{\mathbf{x}}^T(t-\Delta)]$, then only $\mathbf{w}_T - \mathbf{w} = \mathbf{0}$ makes $\rho_e(\Delta) = 0$ and $\partial\rho_e(\Delta)/\partial\mathbf{w} = \mathbf{0}$. But looking at (2.3), we conclude that a proper delay $\Delta$ depends on the autocorrelation of the input signal, which is in general unknown. Therefore, the selection of the delay is important. One possibility is to evaluate the error autocorrelation function at different lags $\Delta \ge m$ and check for a nonzero input autocorrelation function at that delay, which would be very time-consuming and inappropriate for online algorithms. Instead of searching for a good lag $\Delta$, consider the Taylor series approximation of the autocorrelation function around a fixed lag $L$, where $L \ge m$:

$$\rho_e(\Delta) \approx E[e(t)e(t-L)] + E[\dot{e}(t)e(t-L)]\,(\Delta-L) + \tfrac{1}{2}\,E[\ddot{e}(t)e(t-L)]\,(\Delta-L)^2 \qquad (2.4)$$

In (2.4), $\dot{e}(t)$ and $\ddot{e}(t)$ (see Figure 2-1) represent the derivatives of the error signal with respect to the time index. Notice that we do not take the Taylor series expansion around zero lag, for the reasons indicated above. Moreover, $L$ should be less than the correlation time of the input, so that the Taylor expansion has a chance of being accurate. But since we bring more lags into the expansion, the choice of the lag becomes less critical than in (2.3). In principle, the more terms we keep in the Taylor expansion, the more constraints we impose on the autocorrelation of the error during adaptation. Therefore, instead of finding the weight vector that makes the actual gradient in (2.3) zero, we find the weight vector that makes the derivative of the approximation in (2.4) with respect to the weight vector zero.

If the adaptive filter operates in discrete time instead of continuous time, the differentiation with respect to time can be replaced by a first-order forward difference,


$\dot{e}(n) \approx e(n) - e(n-L)$. Higher-order derivatives can also be approximated by their corresponding forward-difference estimates, e.g., $\ddot{e}(n) \approx e(n) - 2e(n-L) + e(n-2L)$, etc. Although the forward difference normally uses two consecutive samples, for reasons that will become clear in the following sections of the chapter, we utilize two samples separated by $L$ samples in time. The first-order truncated Taylor series expansion of the error autocorrelation function at lag $\Delta$, evaluated around $L$, becomes

$$\rho_e(\Delta) = E[e(n)e(n-\Delta)] \approx E[e(n)e(n-L)] + \frac{\Delta-L}{L}\,E[(e(n)-e(n-L))\,e(n-L)] = \left(1-\frac{\Delta}{L}\right)E[e^2(n-L)] + \frac{\Delta}{L}\,E[e(n)e(n-L)] \qquad (2.5)$$

Analyzing (2.5), we remark another advantage of the Taylor series expansion: the familiar MSE is part of the expansion. Notice also that as one forces $\Delta \to L$, the MSE term disappears and only the lag-$L$ error autocorrelation remains; on the other hand, as $\Delta \to 0$, only the MSE term prevails in the autocorrelation function approximation. Introducing more terms into the Taylor expansion will bring in error autocorrelation constraints from lags $iL$.

Augmented Error Criterion (AEC)

We are now in a position to formulate the augmented error criterion (AEC). To the regular MSE term, we add another function, $\beta E[\dot{e}^2(n)]$, to obtain the augmented error criterion shown in equation (2.6):

$$J(\mathbf{w}) = E[e^2(n)] + \beta\,E[\dot{e}^2(n)] \qquad (2.6)$$

where $\beta$ is a real scalar parameter. Equivalently, (2.6) can also be written as

$$J(\mathbf{w}) = (1+2\beta)\,E[e^2(n)] - 2\beta\,E[e(n)e(n-L)] \qquad (2.7)$$

which has the same form as (2.5). Notice that when $\beta = 0$ we recover the MSE in (2.6) and (2.7). Similarly, we would have to select $L$ in order to make the first-order


expansion identical to the exact value of the error autocorrelation function. Substituting the identity $(1+2\beta) = (1-\Delta/L)$ and using $\Delta = L$, we observe that $\beta = -1/2$ eliminates the MSE term from the criterion. Interestingly, this value will appear in a later discussion, when we optimize $\beta$ in order to reduce the bias in the solution introduced by input noise. If $\beta$ is positive, then minimizing the cost function $J(\mathbf{w})$ is equivalent to minimizing the MSE with a constraint that the error signal must be smooth. Thus, the weight vector corresponding to the minimum of $J(\mathbf{w})$ will result in a higher MSE than the Wiener solution. The same criterion can also be obtained by considering performance functions of the form

$$J(\mathbf{w}) = E\left[\left\|\left[\,e(n)\;\;\gamma\,\dot{e}(n)\;\;\delta\,\ddot{e}(n)\;\cdots\right]\right\|^2\right] = E[e^2(n)] + \gamma^2 E[\dot{e}^2(n)] + \delta^2 E[\ddot{e}^2(n)] + \cdots \qquad (2.8)$$

where the coefficients $\gamma$, $\delta$, etc., are assumed to be positive. Notice that (2.8) is the $L_2$ norm of a vector of different objective functions, whose components consist of $e(n)$, $\dot{e}(n)$, $\ddot{e}(n)$, etc. Due to the equivalence provided by the difference approximations for the derivatives, these terms constrain the error autocorrelation at lags $iL$, as well as the error power, as seen in (2.8).

In summary, the AEC defined by equation (2.6) can take many forms and hence results in different optimal solutions:

- If $\beta = 0$, then the AEC exactly becomes the MSE criterion.
- If $\beta = -0.5$, then the AEC becomes the EWC, which will result in an unbiased estimate of the parameters even in the presence of noise.
- If $\beta$ is positive, then the cost function minimizes a combination of the MSE and a smoothness constraint.

In the following sections, we will further elaborate on the properties of the AEC.
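A small sketch of how the AEC cost in its two equivalent forms (2.6)-(2.7) can be estimated from samples of the error signal; the lag `L` and the function names are illustrative assumptions:

```python
import numpy as np

def aec_cost(e, beta, L):
    """Sample estimate of J(w) = E[e^2] + beta E[(e(n) - e(n-L))^2], eq. (2.6)."""
    e_dot = e[L:] - e[:-L]                   # forward difference at separation L
    return np.mean(e[L:] ** 2) + beta * np.mean(e_dot ** 2)

def aec_cost_lagged(e, beta, L):
    """Equivalent form (2.7): (1 + 2 beta) E[e^2] - 2 beta E[e(n) e(n-L)]."""
    return (1 + 2 * beta) * np.mean(e[L:] ** 2) - 2 * beta * np.mean(e[L:] * e[:-L])
```

With `beta = -0.5`, both forms reduce to the lag-L error autocorrelation, which is exactly the quantity the EWC drives to zero.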


Properties of the Augmented Error Criterion

Shape of the Performance Surface

Suppose that noise-free training data of the form $(\tilde{\mathbf{x}}(n), \tilde{d}(n))$, generated by a linear system with weight vector $\mathbf{w}_T$ through $\tilde{d}(n) = \tilde{\mathbf{x}}^T(n)\mathbf{w}_T$, are provided. Assume, without loss of generality, that the adaptive filter and the reference filter are of the same length; this is possible since $\mathbf{w}_T$ can be padded with zeros if it is shorter than the adaptive filter. Therefore, the input vector $\tilde{\mathbf{x}}(n) \in \mathbb{R}^m$, the weight vector $\mathbf{w}_T \in \mathbb{R}^m$, and the desired output $\tilde{d}(n) \in \mathbb{R}$. Equation (2.6) has a quadratic form and a unique stationary point. If $\beta \ge 0$, this stationary point is a minimum; otherwise, the Hessian of (2.6) might have mixed-sign eigenvalues. We demonstrate this fact with sample performance surfaces obtained for 2-tap FIR filters using $\beta = -1/2$. For three differently colored training data sets, we obtain the AEC performance surfaces shown in Figure 2-2. In each row, the MSE performance surface, the AEC cost contour plot, and the AEC performance surface are shown for the corresponding training data. The eigenvalue pairs of the Hessian matrix of (2.6) are (2.35, 20.30), (-6.13, 5.21), and (-4.08, -4.14) for these representative cases in Figure 2-2. Clearly, it is possible for (2.6) to have a stationary point that is a minimum, a saddle point, or a maximum, and we start to see the differences brought about by the AEC. The performance surface is a weighted sum of paraboloids, which will complicate gradient-based adaptation but will not affect search algorithms utilizing curvature information. We will discuss more on the search techniques later in this chapter and also in Chapter 4.


Figure 2-2. The MSE performance surfaces, the AEC contour plots, and the AEC performance surfaces for three different training data sets and 2-tap adaptive FIR filters (the stationary point is a minimum, a saddle point, and a maximum, respectively).

Analysis of the Noise-Free Input Case

Theorem 2.1: The stationary point of the quadratic form in (2.6) is given by

$$\mathbf{w}^* = (\tilde{\mathbf{R}} + \beta\tilde{\mathbf{S}})^{-1}(\tilde{\mathbf{P}} + \beta\tilde{\mathbf{Q}}) \qquad (2.9)$$

where we define $\tilde{\mathbf{R}} = E[\tilde{\mathbf{x}}(n)\tilde{\mathbf{x}}^T(n)]$, $\tilde{\mathbf{S}} = E[\dot{\tilde{\mathbf{x}}}(n)\dot{\tilde{\mathbf{x}}}^T(n)]$, $\tilde{\mathbf{P}} = E[\tilde{\mathbf{x}}(n)\tilde{d}(n)]$, and $\tilde{\mathbf{Q}} = E[\dot{\tilde{\mathbf{x}}}(n)\dot{\tilde{d}}(n)]$.


Proof: Substituting the proper variables in (2.6), we obtain the following explicit expression for $J(\mathbf{w})$:

$$J(\mathbf{w}) = E[\tilde{d}^2(n)] + \beta E[\dot{\tilde{d}}^2(n)] + \mathbf{w}^T(\tilde{\mathbf{R}} + \beta\tilde{\mathbf{S}})\mathbf{w} - 2(\tilde{\mathbf{P}} + \beta\tilde{\mathbf{Q}})^T\mathbf{w} \qquad (2.10)$$

Taking the gradient with respect to $\mathbf{w}$ and equating it to zero yields

$$\frac{\partial J(\mathbf{w})}{\partial\mathbf{w}} = 2(\tilde{\mathbf{R}} + \beta\tilde{\mathbf{S}})\mathbf{w} - 2(\tilde{\mathbf{P}} + \beta\tilde{\mathbf{Q}}) = \mathbf{0} \;\;\Rightarrow\;\; \mathbf{w}^* = (\tilde{\mathbf{R}} + \beta\tilde{\mathbf{S}})^{-1}(\tilde{\mathbf{P}} + \beta\tilde{\mathbf{Q}}) \qquad (2.11)$$

Notice that selecting $\beta = 0$ in (2.6) reduces the criterion to MSE, and the optimal solution given in (2.9) reduces to the Wiener solution. Thus, the Wiener filter is a special case of the AEC solution (though not optimal for noisy inputs, as we will show later).

Corollary 1: An equivalent expression for the stationary point of (2.6) is given by

$$\mathbf{w}^* = \left[(1+2\beta)\tilde{\mathbf{R}} - \beta\tilde{\mathbf{R}}_L\right]^{-1}\left[(1+2\beta)\tilde{\mathbf{P}} - \beta\tilde{\mathbf{P}}_L\right] \qquad (2.12)$$

where we define the matrix $\tilde{\mathbf{R}}_L = E[\tilde{\mathbf{x}}(n-L)\tilde{\mathbf{x}}^T(n) + \tilde{\mathbf{x}}(n)\tilde{\mathbf{x}}^T(n-L)]$ and the vector $\tilde{\mathbf{P}}_L = E[\tilde{\mathbf{x}}(n-L)\tilde{d}(n) + \tilde{\mathbf{x}}(n)\tilde{d}(n-L)]$. Notice that the interesting choice $\beta = -1/2$ yields $\mathbf{w}^* = \tilde{\mathbf{R}}_L^{-1}\tilde{\mathbf{P}}_L$.

Proof: Substituting the definitions of $\tilde{\mathbf{R}}$, $\tilde{\mathbf{S}}$, $\tilde{\mathbf{P}}$, $\tilde{\mathbf{Q}}$ and recollecting terms to obtain $\tilde{\mathbf{R}}_L$ and $\tilde{\mathbf{P}}_L$ yields the desired result. Since $\dot{\tilde{\mathbf{x}}}(n) = \tilde{\mathbf{x}}(n) - \tilde{\mathbf{x}}(n-L)$,

$$\tilde{\mathbf{S}} = 2\tilde{\mathbf{R}} - \tilde{\mathbf{R}}_L, \qquad \tilde{\mathbf{Q}} = 2\tilde{\mathbf{P}} - \tilde{\mathbf{P}}_L,$$

so that $\tilde{\mathbf{R}} + \beta\tilde{\mathbf{S}} = (1+2\beta)\tilde{\mathbf{R}} - \beta\tilde{\mathbf{R}}_L$ and $\tilde{\mathbf{P}} + \beta\tilde{\mathbf{Q}} = (1+2\beta)\tilde{\mathbf{P}} - \beta\tilde{\mathbf{P}}_L$. (2.13)
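A minimal sketch of the batch EWC solution of Corollary 1 for β = -1/2, estimating R_L and P_L from samples; the helper name and the tapped-delay-line embedding are illustrative assumptions:

```python
import numpy as np

def ewc_solution(x, d, num_taps, L):
    """Batch EWC (beta = -1/2): solve R_L w = P_L with sample lagged correlations."""
    R_L = np.zeros((num_taps, num_taps))
    P_L = np.zeros(num_taps)
    for n in range(num_taps - 1 + L, len(x)):
        x_n = x[n - num_taps + 1:n + 1][::-1]           # current input vector
        x_nL = x[n - L - num_taps + 1:n - L + 1][::-1]  # input vector L samples earlier
        R_L += np.outer(x_nL, x_n) + np.outer(x_n, x_nL)
        P_L += x_nL * d[n] + x_n * d[n - L]
    return np.linalg.solve(R_L, P_L)    # the sample-count scaling cancels between R_L and P_L
```

For L at least the filter length and white input noise, these lagged correlations are unbiased by the noise (Theorem 2.2 below), so the estimate converges to w_T even at low SNR.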


From these results, we deduce two extremely interesting conclusions:

Lemma 1 (Generalized Wiener-Hopf equations): In the noise-free case, the true weight vector is given by $\tilde{\mathbf{R}}_L\mathbf{w}_T = \tilde{\mathbf{P}}_L$. (This result is also true for noisy data.)

Proof: This result follows immediately from the substitution of $\tilde{d}(n) = \tilde{\mathbf{x}}^T(n)\mathbf{w}_T$ and $\tilde{d}(n-L) = \tilde{\mathbf{x}}^T(n-L)\mathbf{w}_T$ in the definitions of $\tilde{\mathbf{R}}_L$ and $\tilde{\mathbf{P}}_L$.

Lemma 2: In the noise-free case, regardless of the specific value of $\beta$, the optimal solution is equal to the true weight vector, i.e., $\mathbf{w}^* = \mathbf{w}_T$.

Proof: This result follows immediately from the substitution of the result in Lemma 1 into the optimal solution expression given in (2.9).

The result in Lemma 1 is especially significant, since it provides a generalization of the Wiener-Hopf equations to autocorrelation and cross-correlation matrices evaluated at different lags of the signals. In these equations, $L$ represents the specific correlation lag selected, and the choice $L = 0$ corresponds to the traditional Wiener-Hopf equations. The generalized Wiener-Hopf equations essentially state that the true weight vector can be determined by exploiting correlations evaluated at different lags of the signals; we are not restricted to the zero-lag correlations as in the Wiener solution.

Analysis of the Noisy Input Case

Now, suppose that we are given noisy training data $(\mathbf{x}(n), d(n))$, where $\mathbf{x}(n) = \tilde{\mathbf{x}}(n) + \mathbf{v}(n)$ and $d(n) = \tilde{d}(n) + u(n)$. The additive noises on both signals are zero-mean and uncorrelated with each other and with the input and desired signals. Assume that the additive noise $u(n)$ on the desired signal is white (in time), and let the autocorrelation matrices of $\mathbf{v}(n)$ be $\mathbf{V} = E[\mathbf{v}(n)\mathbf{v}^T(n)]$ and $\mathbf{V}_L = E[\mathbf{v}(n-L)\mathbf{v}^T(n) + \mathbf{v}(n)\mathbf{v}^T(n-L)]$.


Under these circumstances, we have to estimate the necessary matrices to evaluate (2.9) using noisy data. These matrices evaluated using noisy data, $\mathbf{R}$, $\mathbf{S}$, $\mathbf{P}$, and $\mathbf{Q}$, become (see Appendix D for details)

$$\mathbf{R} = E[\mathbf{x}(n)\mathbf{x}^T(n)] = \tilde{\mathbf{R}} + \mathbf{V}, \qquad \mathbf{S} = E[\dot{\mathbf{x}}(n)\dot{\mathbf{x}}^T(n)] = 2(\tilde{\mathbf{R}} + \mathbf{V}) - (\tilde{\mathbf{R}}_L + \mathbf{V}_L),$$
$$\mathbf{P} = E[\mathbf{x}(n)d(n)] = \tilde{\mathbf{P}}, \qquad \mathbf{Q} = E[\dot{\mathbf{x}}(n)\dot{d}(n)] = 2\tilde{\mathbf{P}} - \tilde{\mathbf{P}}_L \qquad (2.14)$$

Finally, the optimal solution estimate of the AEC, when presented with noisy input and desired output data, will be

$$\hat{\mathbf{w}}_* = (\mathbf{R} + \beta\mathbf{S})^{-1}(\mathbf{P} + \beta\mathbf{Q}) = \left[(1+2\beta)(\tilde{\mathbf{R}} + \mathbf{V}) - \beta(\tilde{\mathbf{R}}_L + \mathbf{V}_L)\right]^{-1}\left[(1+2\beta)\tilde{\mathbf{P}} - \beta\tilde{\mathbf{P}}_L\right] \qquad (2.15)$$

Theorem 2.2 (EWC noise-rejection theorem): In the noisy-input data case, the optimal solution obtained using the AEC will be identically equal to the true weight vector if and only if $\beta = -1/2$, $\tilde{\mathbf{R}}_L \ne \mathbf{0}$, and $\mathbf{V}_L = \mathbf{0}$. There are two situations to consider:

- When the adaptive linear system is an FIR filter, the input noise vector $\mathbf{v}_k$ consists of delayed versions of a single-dimensional noise process. In that case, $\mathbf{V}_L = \mathbf{0}$ if and only if $L \ge m$, where $m$ is the filter length, and the single-dimensional noise process is white.
- When the adaptive linear system is an ADALINE, the input noise is a vector process. In that case, $\mathbf{V}_L = \mathbf{0}$ if and only if the input noise vector process is white (in time) and $L \ge 1$. The input noise vector may be spatially correlated.

Proof: Sufficiency of the first statement is immediately observed by substituting the provided values of $\beta$ and $\mathbf{V}_L$. Necessity is obtained by equating (2.15) to $\mathbf{w}_T$ and substituting the generalized Wiener-Hopf equations provided in Lemma 1. Clearly, if $\tilde{\mathbf{R}}_L = \mathbf{0}$, then there is no equation to solve; thus the weights cannot be uniquely


determined using this value of $L$. The statement regarding the FIR filter case is easily proved by noticing that the temporal correlations in the noise vector diminish once the autocorrelation lag becomes greater than or equal to the filter length. The statement regarding the ADALINE structure is immediately obtained from the definition of a temporally white vector process.

Orthogonality of Error to Input

An important question regarding the behavior of the optimal solution obtained using the AEC is the relationship between the residual error signal and the input vector. In the case of MSE, we know that the Wiener solution results in the error being orthogonal to the input signal, i.e., $E[e(n)\mathbf{x}(n)] = \mathbf{0}$ [10,14,15]. However, this result is true only when there is no noise, and also when the estimated filter length is greater than the actual system impulse response. Similarly, we can determine what the AEC will achieve.

Lemma 3: At the optimal solution of the AEC, the error and the input random processes satisfy $E[e(n)\mathbf{x}(n-L) + e(n-L)\mathbf{x}(n)] = \frac{(1+2\beta)}{\beta}\,E[e(n)\mathbf{x}(n)]$ for any $L$, provided $\beta \ne 0$.

Proof: We know that the optimal solution of the AEC for any $\beta$ and $L$ is obtained when the gradient of the cost function with respect to the weights is zero. Therefore,

$$\frac{\partial J}{\partial\mathbf{w}} = -2E[e(n)\mathbf{x}(n)] - 2\beta E[(e(n)-e(n-L))(\mathbf{x}(n)-\mathbf{x}(n-L))] = -2(1+2\beta)E[e(n)\mathbf{x}(n)] + 2\beta E[e(n)\mathbf{x}(n-L) + e(n-L)\mathbf{x}(n)] = \mathbf{0} \qquad (2.16)$$

It is interesting to note that if $\beta = -1/2$, then we obtain $E[e(n)\mathbf{x}(n-L) + e(n-L)\mathbf{x}(n)] = \mathbf{0}$ for all $L$. On the other hand, since the criterion reduces to MSE for $\beta = 0$, we then obtain $E[e(n)\mathbf{x}(n)] = \mathbf{0}$.
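A quick numerical check of Lemma 3 for the EWC case can be sketched as follows; the function name and the FIR embedding are illustrative, and at the EWC optimum the returned vector should be near zero for any lag L:

```python
import numpy as np

def lagged_error_input_corr(x, d, w, L):
    """Estimate E[e(n) x(n-L) + e(n-L) x(n)] for an FIR filter w (Lemma 3)."""
    m = len(w)
    acc, count = np.zeros(m), 0
    for n in range(m - 1 + L, len(x)):
        x_n = x[n - m + 1:n + 1][::-1]
        x_nL = x[n - L - m + 1:n - L + 1][::-1]
        e_n, e_nL = d[n] - w @ x_n, d[n - L] - w @ x_nL
        acc += e_n * x_nL + e_nL * x_n
        count += 1
    return acc / count   # near zero at the EWC optimum (beta = -1/2), for any L
```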


The result shown in (2.16), if interpreted in terms of Newtonian physics, reveals an interesting insight into the behavior of the EWC criterion ($\beta = -1/2$) at its optimal solution (regardless of the length of the reference filter that created the desired signal). In a simplistic manner, this behavior could be summarized by the following statement: the optimal solution of the EWC tries to decorrelate the residual error from the estimated future value of the input vector (see Appendix E for details).

The case where $\beta = -1/2$ is especially interesting, because it results in complete noise rejection. Notice that, in this case, since the optimal solution is equal to the true weight vector, the residual error is given by $e(n) = u(n) - \mathbf{v}^T(n)\mathbf{w}_T$, which is composed purely of the noise in the training data. Certainly, this is the only way that the adaptive filter can achieve $E[e(n)\mathbf{x}(n-L) + e(n-L)\mathbf{x}(n)] = \mathbf{0}$ for all $L$ values, since $E[e(n)\mathbf{x}(n-L)] = E[e(n-L)\mathbf{x}(n)] = \mathbf{0}$ for this error signal. Thus, the EWC not only orthogonalizes the instantaneous error and input signals, but orthogonalizes all lags of the error from the input.

Relationship to Error Entropy Maximization

Another interesting property that the AEC solution exhibits is its relationship with entropy [107]. Notice that when $\beta < 0$, the optimization rule tries to minimize the MSE, yet it simultaneously tries to maximize the separation between samples of errors. We could regard the sample separation as an estimate of the error entropy; in fact, the entropy estimation literature is full of methods based on sample separations [108-113]. Specifically, the EWC case with $\beta = -1/2$ finds the perfect balance between entropy and MSE that allows us to eliminate the effect of noise on the solution. Recall that the Gaussian density displays maximum entropy among distributions of fixed variance [114]. In light of this fact, the aim of the EWC could be understood as finding the minimum


error-variance solution while keeping the error close to Gaussian. Notice that, due to the central limit theorem [114], the error signal will be closely approximated by a Gaussian density when there are a large number of taps. A brief description of the relationship between entropy (using estimators) [115-117] and sample differences is provided in Appendix F.

Note on Model-Order Selection

Model-order selection is another important issue in adaptive filter theory. The actual desired behavior of an adaptive filter is to find the right balance between approximating the training data as accurately as possible and generalizing to unseen data with precision [118]. One major cause of poor generalization is known to be excessive model complexity [118]. Under these circumstances, the designer's aim is to determine the least complex adaptive system (which translates to a smaller number of weights in the case of linear systems) that minimizes the approximation error. Akaike's information criterion (AIC) [119] and Rissanen's minimum description length (MDL) [120] are two important theoretical results regarding model-order selection. Such methods require the designer to evaluate an objective function, which is a combination of the MSE and the filter length or the filter weights, using different lengths of adaptive filters.

Consider the case of overmodeling in the problem of linear FIR filter (assume $N$ taps) estimation. If we use the MSE criterion, and assume that there is no noise in the data, then the estimated Wiener solution will have exactly $N$ non-zero elements that exactly match the true FIR filter. This is a very nice property of the MSE criterion. However, when there is noise in the data, this property of MSE no longer holds. Therefore, increasing the length of the adaptive filter will only result in more parameter bias in the Wiener solution. On the other hand, the EWC successfully determines the length


of the true filter, even in the presence of additive noise. In the overmodeling case, the additional taps will decay to zero, indicating that a smaller filter is sufficient to model the data. This is exactly what we would like an automated regularization algorithm to achieve: determining the proper length of the filter without requiring external discrete modifications of this parameter. Therefore, the EWC extends the regularization capability of MSE to the case of noisy training data. Alternatively, the EWC could be used as a criterion for determining the model order, in a fashion similar to standard model-order selection methods. Given a set of training samples, one could start solving for the optimal EWC solution for various lengths of the adaptive filter. As the length of the adaptive filter is increased past the length of the true filter, the error power with the EWC solution will become constant. Observing this point of transition from variable to constant error power, we can determine the exact model order of the original filter.

The Effect of β on the Weight Error Vector

The effect of the cost function's free parameter $\beta$ on the accuracy of the solution (compared with the true weight vector that generated the training data) is another crucial issue. In fact, it is possible to determine the dynamics of the weight error as a function of $\beta$. This result is provided in the following lemma.

Lemma 4 (The effect of $\beta$ on the AEC solution): In the noisy training data case, the derivative of the error vector between the optimal AEC solution and the true weight vector, i.e., $\hat{\boldsymbol{\varepsilon}}_* = \hat{\mathbf{w}}_* - \mathbf{w}_T$, with respect to $\beta$ is given by

$$\frac{\partial\hat{\boldsymbol{\varepsilon}}_*}{\partial\beta} = -\left[(1+2\beta)\mathbf{R} - \beta\mathbf{R}_L\right]^{-1}\left[(2\mathbf{R} - \mathbf{R}_L)\,\hat{\boldsymbol{\varepsilon}}_* + (2\mathbf{V} - \mathbf{V}_L)\,\mathbf{w}_T\right] \qquad (2.17)$$

where $\mathbf{R} = \tilde{\mathbf{R}} + \mathbf{V}$ and $\mathbf{R}_L = \tilde{\mathbf{R}}_L + \mathbf{V}_L$.

Proof: Recall from (2.15) that in the noisy data case, the optimal AEC solution is given


by $\hat{\mathbf{w}}_* = [(1+2\beta)\mathbf{R} - \beta\mathbf{R}_L]^{-1}[(1+2\beta)\tilde{\mathbf{P}} - \beta\tilde{\mathbf{P}}_L]$. Using the chain rule for the derivative and the fact that, for any nonsingular matrix $\mathbf{A}(\beta)$, $\partial\mathbf{A}^{-1}/\partial\beta = -\mathbf{A}^{-1}(\partial\mathbf{A}/\partial\beta)\mathbf{A}^{-1}$, the result in (2.17) follows from straightforward derivation. To get the derivative as $\beta \to -1/2$, we substitute this value and $\hat{\boldsymbol{\varepsilon}}_* = \mathbf{0}$.

The significance of Lemma 4 is that it shows that no finite $\beta$ value will make this error derivative zero. The matrix inverse, on the other hand, approaches zero for unboundedly growing $\beta$. In addition, the lemma could be used to determine the derivative of the Euclidean error norm, $\partial\|\hat{\boldsymbol{\varepsilon}}_*\|_2^2/\partial\beta$.

Numerical Case Studies of AEC with the Theoretical Solution

In the preceding sections, we built the theory of the augmented error criterion and its special case, the error whitening criterion, for linear adaptive filter optimization. We investigated the behavior of the optimal solution as a function of the cost function parameters, as well as determining the optimal value of this parameter in the noisy training data case. This section is designed to demonstrate these theoretical results in numerical case studies with Monte Carlo simulations. In these simulations, the following scheme will be used to generate the required autocorrelation and cross-correlation matrices. Given the scheme depicted in Figure 2-3, it is possible to determine the true analytic auto-/cross-correlations of all signals of interest in terms of the filter coefficients and the noise powers. Suppose the white source signal, $\tilde{v}$, and $u$ are zero-mean white noise signals with powers $\sigma_x^2$, $\sigma_v^2$, and $\sigma_u^2$, respectively. Suppose that the coloring filter $\mathbf{h}$ and the


Figure 2-3. Demonstration scheme with the coloring filter h, the true mapping filter w, and the uncorrelated white signals.

Under these conditions, we obtain, for a generic lag Δ,

E[x̃(n)x̃(n−Δ)] = σ_x² Σ_{j=0}^{M} h_j h_{j−Δ}   (2.18)

E[(x̃(n) + ṽ(n))(x̃(n−Δ) + ṽ(n−Δ))] = E[x̃(n)x̃(n−Δ)] + σ_v² δ(Δ)   (2.19)

E[(x̃(n) + ṽ(n)) d̂(n−Δ)] = Σ_{l=0}^{N} w_l E[x̃(n)x̃(n−Δ−l)]   (2.20)

For each combination of SNR from {−10 dB, 0 dB, 10 dB}, β from {−0.5, −0.3, 0, 0.1}, m from {2, …, 10}, and L from {m, …, 20}, we performed 100 Monte Carlo simulations using randomly selected 30-tap FIR coloring filters and m-tap mapping filters. The lengths of the mapping filters and of the adaptive filters were selected to be equal in every case. In all simulations, we used an input signal power σ_x² = 1, and the noise powers σ_v² = σ_u² were determined from the given SNR using SNR = 10 log₁₀(σ_x²/σ_v²). The matrices R, S, P, and Q, which are necessary to evaluate the optimal solution given by (2.15), are then evaluated analytically using (2.18), (2.19), and (2.20).
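To make the scheme concrete, the following sketch (our own Python/NumPy illustration — the function name and interface are hypothetical, not part of the original experiments) assembles the analytic correlations of (2.18)–(2.20) and solves the noisy AEC normal equations:

```python
import numpy as np

def analytic_aec_solution(h, w, beta, L, snr_db, m):
    """Theoretical AEC solution for the scheme of Figure 2-3 (a sketch).

    h: coloring filter, w: true mapping filter (both unit-norm),
    beta: AEC parameter, L: correlation lag, m: adaptive filter length.
    Assumes sigma_x^2 = 1, so SNR = 10*log10(1 / sigma_v^2).
    """
    sigma_v2 = 10.0 ** (-snr_db / 10.0)
    # Autocorrelation of the colored signal: rho(k) = sum_j h_j h_{j-k}, cf. (2.18)
    rho = np.correlate(h, h, mode="full")
    M = len(h) - 1
    def r(k):                                  # rho_x(k), zero outside support
        return rho[M + k] if abs(k) <= M else 0.0

    # Clean correlation matrix and cross-correlation with d(n) = w^T x~(n)
    R = np.array([[r(i - j) for j in range(m)] for i in range(m)])
    P = np.array([sum(w[l] * r(i - l) for l in range(len(w))) for i in range(m)])
    # Lag-L quantities entering S = 2R - R_L and Q = 2P - P_L
    RL = np.array([[r(i - j + L) + r(i - j - L) for j in range(m)] for i in range(m)])
    PL = np.array([sum(w[l] * (r(i - l + L) + r(i - l - L)) for l in range(len(w)))
                   for i in range(m)])
    S, Q = 2 * R - RL, 2 * P - PL
    # White input noise adds sigma_v^2 I to R and 2 sigma_v^2 I to S (for L >= m)
    Rn = R + sigma_v2 * np.eye(m)
    Sn = S + 2 * sigma_v2 * np.eye(m)
    return np.linalg.solve(Rn + beta * Sn, P + beta * Q)
```

With beta = −0.5, the combination Rn + beta·Sn collapses to R_L/2, so the σ_v² terms cancel and the routine returns the true weights exactly — the behavior summarized next in Figures 2-4 and 2-5.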


The results are summarized in Figure 2-4 and Figure 2-5, where, for the three SNR levels selected, the average squared error norm of the optimal solutions with respect to the true weights, E[‖w_* − w_T‖²/‖w_T‖²], is given as a function of L and m for different β values. In Figure 2-4, we present the average normalized weight-vector error norm obtained using the AEC at different SNR levels and different β values, as a function of the correlation lag L used in the criterion. The filter length was fixed at 10 in these simulations.

Figure 2-4. The average squared error norm of the optimal weight vector as a function of the autocorrelation lag L for various β values and SNR levels.

Figure 2-5. The average squared error norm of the optimal weight vector as a function of the filter length m for various β values and SNR levels.


From the theoretical analysis, we know that if the input autocorrelation matrix is invertible, then the solution accuracy should be independent of the autocorrelation lag L. The results of the Monte Carlo simulations presented in Figure 2-4 conform to this fact. As expected, the optimal choice β = −1/2 determined the correct filter weights exactly. Another set of results, presented in Figure 2-5, shows the effect of the filter length on the accuracy of the solutions provided by the AEC. The optimal value β = −1/2 always yields the perfect solution, whereas the accuracy of the optimal weights degrades as this parameter is increased towards zero (i.e., as the weights approach the Wiener solution). An interesting observation from Figure 2-5 is that, as the filter length is increased, the accuracy of the solutions using sub-optimal β values increases for SNR levels below zero, whereas it decreases for SNR levels above zero. For zero SNR, on the other hand, the accuracy seems to be roughly unaffected by the filter length.

The Monte Carlo simulations performed in the preceding examples utilized the exact coloring filter and the true filter coefficients to obtain the analytical solutions. In our final case study, we demonstrate the performance of the batch solution of the AEC criterion obtained from sample estimates of all the relevant auto- and cross-correlation matrices. In these Monte Carlo simulations, we utilize 10,000 samples corrupted with white noise at various SNR levels. The results are summarized in the histograms shown in Figure 2-6. Each subplot of Figure 2-6 corresponds to experiments performed using SNR levels of −10 dB, 0 dB, and 10 dB for each column, and adaptive filter lengths of 4, 8, and 12 taps for each row, respectively. For each combination of SNR and filter length, we performed 50 Monte Carlo simulations using the MSE (β = 0) and EWC (β = −1/2) criteria.


The correlation lag is selected to be equal to the filter length in all simulations, due to Theorem 2.2. Clearly, Figure 2-6 demonstrates the superiority of the AEC in rejecting noise that is present in the training data. Notice that in all subplots (for all combinations of filter length and SNR), the AEC achieves a smaller average error norm than the MSE.

Figure 2-6. Histograms of the weight error norms (dB) obtained in 50 Monte Carlo simulations using 10,000 samples of noisy data with MSE (empty bars) and EWC with β = −0.5 (filled bars). The subfigures in each row use filters with 4, 8, and 12 taps, respectively. The subfigures in each column use noisy samples at −10, 0, and 10 dB SNR, respectively.


The discrepancy between the performances of the two solutions intensifies with increasing filter length. Next, we demonstrate the error-whitening property of the EWC solution. From equation (2.1) we can expect that the error autocorrelation function will vanish at lags greater than or equal to the length of the reference filter if the weight vector is identical to the true weight vector. For any other value of the weight vector, the error autocorrelation fluctuates at non-zero values. A 4-tap reference filter is identified with a 4-tap adaptive filter using noisy training data at an SNR level of 0 dB. The autocorrelation functions of the error signals corresponding to the MSE solution and the EWC solution are shown in Figure 2-7. Clearly, the EWC criterion determines a solution that forces the error autocorrelation function to zero at lags greater than or equal to the filter length (partial whitening of the error).

Figure 2-7. Error autocorrelation function for the MSE (dotted) and EWC (solid) solutions.
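The whitening test of Figure 2-7 can be reproduced directly from data. Below is a minimal sketch (a hypothetical helper of our own; sliding_window_view assumes NumPy 1.20+) that estimates the error autocorrelation for any candidate weight vector:

```python
import numpy as np

def error_autocorrelation(x, d, w, max_lag=30):
    """Estimate the error autocorrelation rho_e(lag) for a weight vector w.

    x, d: input and desired sequences of equal length; the error is
    e(n) = d(n) - w^T [x(n), ..., x(n-m+1)]. For the true weights and white
    additive noise, rho_e should vanish for lags >= len(w), cf. Figure 2-7.
    """
    m = len(w)
    # Tapped-delay-line regressors, newest sample first in each row
    X = np.lib.stride_tricks.sliding_window_view(x, m)[:, ::-1]
    e = d[m - 1:] - X @ w
    e = e - e.mean()
    return np.array([np.mean(e[lag:] * e[:len(e) - lag])
                     for lag in range(max_lag + 1)])
```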


Finally, we address the order-selection capability and demonstrate how the AEC (specifically the EWC) can be used as a tool for determining the correct filter order, even with noisy data, provided that the given input–desired output pair is a moving-average process. For this purpose, we determine the theoretical Wiener and EWC solutions (with β = −1/2 and L = m, where m is the length of the adaptive filter) for a randomly selected pair of coloring filter h and mapping filter w, at different adaptive filter lengths. The noise level is selected to be 20 dB, and the length of the true mapping filter is 5. We know from our theoretical analysis that if the adaptive filter is longer than the reference filter, the EWC will yield the true weight vector padded with zeros. This will not change the MSE of the solution. Thus, if we plot the MSE of the EWC solution versus the length of the adaptive filter, the MSE curve will remain flat starting from the length of the actual filter, whereas the Wiener solution will keep decreasing the MSE, contaminating the solution by learning the noise in the data. Figure 2-8(a) shows the MSE obtained with the Wiener solution as well as the EWC solution for different lengths of the adaptive filter using the training data described above. Notice (in the zoomed-in portion) that the MSE with EWC remains constant starting from 5, which is the filter order that generated the data. On the other hand, if we were to decide on the filter order by looking at the MSE of the Wiener solution, we would select a model order of 4, since the gain in MSE from this point on is insignificantly small compared to the previous steps. Figure 2-8(b) shows the norm of the weight-vector error for the solutions obtained using the EWC and MSE criteria, which confirms that the true weight vector is indeed attained with the EWC criterion once the proper model order is reached.
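The order-selection sweep just described can be summarized in a few lines. The sketch below (our own illustration; it solves the EWC normal equations R_L w = P_L from sample averages with β = −1/2 and L = m) returns the MSE curve whose plateau marks the model order:

```python
import numpy as np

def ewc_order_sweep(x, d, max_order=12):
    """MSE of the sample-based EWC solution versus adaptive filter length.

    For each length m, estimate R_L and P_L (with L = m) from the data,
    solve R_L w = P_L, and record the resulting error power. The curve
    flattens once m reaches the true model order (cf. Figure 2-8).
    """
    mse = []
    for m in range(1, max_order + 1):
        X = np.lib.stride_tricks.sliding_window_view(x, m)[:, ::-1]
        dm = d[m - 1:]
        L, N = m, len(dm)
        Xl, dl = X[:-L], dm[:-L]          # lagged x(n-L), d(n-L)
        Xc, dc = X[L:], dm[L:]            # current x(n), d(n)
        RL = (Xc.T @ Xl + Xl.T @ Xc) / (N - L)    # E[x(n)x^T(n-L) + x(n-L)x^T(n)]
        PL = (Xc.T @ dl + Xl.T @ dc) / (N - L)    # E[x(n)d(n-L) + x(n-L)d(n)]
        w = np.linalg.solve(RL, PL)
        mse.append(np.mean((dm - X @ w) ** 2))
    return np.array(mse)                  # look for the plateau in this curve
```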


This section aimed at experimentally demonstrating the theoretical concepts set forth in the preceding sections of the chapter. We have demonstrated with numerous Monte Carlo simulations that the analytical solution of the EWC criterion eliminates the effect of noise completely if the proper value is used for β. We have also demonstrated that the batch solution of EWC (estimated from a finite number of samples) outperforms MSE in the presence of noise, provided that a sufficient number of samples is given so that the noise autocorrelation matrices diminish as required by the theory.

Summary

In this chapter, we derived the augmented error criterion (AEC) and discussed a special case of the AEC called the error whitening criterion (EWC). The proposed AEC includes MSE as a special case. We discussed some of the interesting properties of the AEC cost function and worked out the analytical optimal solution. Further, we discussed the reasoning behind naming the special case of AEC with the parameter β = −0.5 as EWC: this criterion partially whitens the error signal even in the presence of noise, which cannot be achieved by the MSE criterion. Thus, the error whitening criterion is very useful for estimating the parameters of a linear unknown system in the presence of additive white noise. AEC with other values of β can be used as a constrained MSE criterion, where the constraint is the smoothness of the error signal. Most of the material presented in this chapter can be found in [121].

Although we have presented a complete theoretical investigation of the proposed criterion and its analytical solution, in practice, on-line algorithms that operate on a sample-by-sample basis to determine the desired solution are equally valuable. Therefore, in the following chapters, we focus on designing computationally efficient on-line algorithms to solve for the optimal AEC solution in a fashion similar to the well-known RLS and LMS algorithms. In fact, we aim to come up with algorithms that have the same computational complexity as these two widely used algorithms.


CHAPTER 3
FAST RECURSIVE NEWTON TYPE ALGORITHMS FOR AEC

Introduction

In Chapter 2, we derived the analytical solution for the AEC. We also showed simulation results using block methods. In this chapter, the focus will be on deriving on-line, sample-by-sample Newton-type algorithms to estimate the optimal AEC solution. First, we derive a Newton-type algorithm with a structure similar to the well-known RLS algorithm, which estimates the optimal Wiener solution for the MSE criterion. The complexity of the proposed algorithm is O(N²), which is comparable with that of the RLS algorithm. Then, we propose another Newton-type algorithm derived from the principles of TLS using minor components analysis. This algorithm, in its current form, estimates the optimal EWC solution, which is a special case of the AEC with β = −0.5.

Derivation of the Newton Type Recursive Error Whitening Algorithm

Given the estimate of the filter tap weights at time instant (n−1), the goal is to determine the best set of tap weights at the next iteration n that would track the optimal solution. We call this algorithm the Recursive Error Whitening (REW) algorithm, although the error-whitening property is applicable only when the parameter β is set to −0.5; the algorithm itself can be applied with any value of β. Recall that the RLS algorithm belongs to the class of fixed-point algorithms, in the sense that it tracks the optimal Wiener solution at every time step. The REW algorithm falls in the same category: it tracks the optimal AEC solution at every iteration. The noteworthy feature of fixed-point algorithms is their exponential convergence rate, as they utilize higher-order information such as the curvature of the performance surface.


Although the complexity of the fixed-point Newton-type algorithms is higher than that of conventional gradient methods, the superior convergence and robustness to the eigenspread of the data can be vital gains in many applications.

For convenience, we drop the tilde convention used in the previous chapter to differentiate between noise-corrupted and noise-free matrices and vectors. Recall that the optimal AEC solution is given by

w_* = (R + βS)^{-1}(P + βQ)   (3.1)

Letting T(n) = R(n) + βS(n) and V(n) = P(n) + βQ(n), we obtain the following recursion:

T(n) = T(n−1) + (1+2β)x(n)xᵀ(n) − β[x(n)xᵀ(n−L) + x(n−L)xᵀ(n)]
     = T(n−1) + [2βx(n) − βx(n−L)]xᵀ(n) + x(n)[x(n) − βx(n−L)]ᵀ   (3.2)

Equation (3.2) basically tells us that the matrix T(n) can be obtained recursively using a rank-2 update. In comparison (see Chapter 1), the RLS algorithm utilizes a rank-1 update for the covariance matrix. At this point, we invoke the matrix inversion lemma² (Sherman–Morrison–Woodbury identity) [7,8], given by

(A + BCDᵀ)^{-1} = A^{-1} − A^{-1}B(C^{-1} + DᵀA^{-1}B)^{-1}DᵀA^{-1}   (3.3)

Substituting A = T(n−1), B = [2βx(n) − βx(n−L)   x(n)], C = I_{2×2} (a 2×2 identity matrix), and D = [x(n)   x(n) − βx(n−L)], we get equation (3.2) in the same form as the LHS of equation (3.3).

² Notice that the matrix inversion lemma simplifies the computation of the matrix inverse only when the original matrix can be written using reduced-rank updates.


Therefore, the recursion for the inverse of T(n) becomes

T^{-1}(n) = T^{-1}(n−1) − T^{-1}(n−1)B[I_{2×2} + DᵀT^{-1}(n−1)B]^{-1}DᵀT^{-1}(n−1)   (3.4)

Note that the computation of the above inverse differs from the conventional RLS algorithm: it requires the inversion of a 2×2 matrix, [I_{2×2} + DᵀT^{-1}(n−1)B], owing to the rank-2 update of T(n). The recursive estimator for V(n) is a simple correlation estimator given by

V(n) = V(n−1) + (1+2β)x(n)d(n) − β[x(n)d(n−L) + x(n−L)d(n)]   (3.5)

Using T^{-1}(n) and V(n), an estimate of the filter weight vector at iteration index n is

w(n) = T^{-1}(n)V(n)   (3.6)

We define a gain matrix, analogous to the gain vector in the RLS case [14], as

κ(n) = T^{-1}(n−1)B[I_{2×2} + DᵀT^{-1}(n−1)B]^{-1}   (3.7)

Using the above definition, the recursive estimate for the inverse of T(n) becomes

T^{-1}(n) = T^{-1}(n−1) − κ(n)DᵀT^{-1}(n−1)   (3.8)

Once again, the above equation is analogous to the Riccati equation for the RLS algorithm. Multiplying (3.7) from the right by [I_{2×2} + DᵀT^{-1}(n−1)B], we obtain

κ(n) = T^{-1}(n−1)B − κ(n)DᵀT^{-1}(n−1)B = T^{-1}(n)B   (3.9)

In order to derive an update equation for the filter weights, we substitute the recursive estimate for V(n) in (3.6):

w(n) = T^{-1}(n)V(n−1) + T^{-1}(n)[(1+2β)x(n)d(n) − β(x(n)d(n−L) + x(n−L)d(n))]   (3.10)


Using (3.8) and recognizing the fact that w(n−1) = T^{-1}(n−1)V(n−1), the above equation can be reduced to

w(n) = w(n−1) − κ(n)Dᵀw(n−1) + T^{-1}(n)[(1+2β)x(n)d(n) − β(x(n)d(n−L) + x(n−L)d(n))]   (3.11)

Using the definition B = [2βx(n) − βx(n−L)   x(n)], we can easily see that

(1+2β)x(n)d(n) − βx(n)d(n−L) − βx(n−L)d(n) = B[d(n), d(n) − βd(n−L)]ᵀ   (3.12)

From (3.9) and (3.12), the weight update equation simplifies to

w(n) = w(n−1) + κ(n){[d(n), d(n) − βd(n−L)]ᵀ − Dᵀw(n−1)}   (3.13)

Note that the product Dᵀw(n−1) is nothing but the vector of filter outputs [y(n), y(n) − βy(n−L)]ᵀ, where y(n) = xᵀ(n)w(n−1) and y(n−L) = xᵀ(n−L)w(n−1). The a priori error vector is defined as

e(n) = [d(n) − y(n), (d(n) − y(n)) − β(d(n−L) − y(n−L))]ᵀ = [e(n), e(n) − βe(n−L)]ᵀ   (3.14)

Using all the above definitions, we can formally state the weight update equation for the REW algorithm as

w(n) = w(n−1) + κ(n)e(n)   (3.15)

The overall complexity of (3.15) is O(N²), which is comparable to the complexity of the RLS algorithm (this was achieved by using the matrix inversion lemma). Unlike the stochastic gradient algorithms, which are easily affected by the eigenspread of the input data and by the type of the stationary-point solution (minimum, maximum, or saddle), the REW algorithm is immune to these problems. This is because it inherently makes use of more information about the performance surface by computing the inverse of the Hessian matrix R + βS.


A summary of the REW algorithm is given in Table 3-1.

Table 3-1. Outline of the REW algorithm.
Initialize T^{-1}(0) = cI, with c a large positive constant, and w(0) = 0.
At every iteration, compute
  B = [2βx(n) − βx(n−L)   x(n)] and D = [x(n)   x(n) − βx(n−L)]
  κ(n) = T^{-1}(n−1)B[I_{2×2} + DᵀT^{-1}(n−1)B]^{-1}
  y(n) = xᵀ(n)w(n−1) and y(n−L) = xᵀ(n−L)w(n−1)
  e(n) = [d(n) − y(n), (d(n) − y(n)) − β(d(n−L) − y(n−L))]ᵀ
  w(n) = w(n−1) + κ(n)e(n)
  T^{-1}(n) = T^{-1}(n−1) − κ(n)DᵀT^{-1}(n−1)

The above derivation assumes stationary signals. For non-stationary signals, a forgetting factor is required for tracking; the inclusion of this factor in the derivation is trivial and is left out of this chapter. Also, note that the REW algorithm can be applied with any value of β. When β = −0.5, the AEC reduces to the EWC, and hence the REW algorithm can be used for estimating the parameters in the presence of input white noise.
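A direct transcription of Table 3-1 follows (a Python/NumPy sketch under the stationary, single-lag assumptions above; the function name and buffer handling are our own):

```python
import numpy as np

def rew(x, d, m, L, beta=-0.5, c=1e4):
    """Recursive Error Whitening (REW) per the outline of Table 3-1 (a sketch).

    x, d: input/desired sequences, m: filter length, L: lag (L >= m for the
    white-noise-rejection property), beta: AEC parameter.
    """
    Tinv = c * np.eye(m)                   # T^{-1}(0) = cI
    w = np.zeros(m)
    xbuf = np.zeros(m + L)                 # holds x(n) ... x(n-m-L+1), newest first
    for n, xn in enumerate(x):
        xbuf = np.roll(xbuf, 1)
        xbuf[0] = xn
        if n < m + L:                      # wait for the delay lines to fill
            continue
        xc, xl = xbuf[:m], xbuf[L:L + m]   # regressors x(n) and x(n-L)
        dn, dl = d[n], d[n - L]
        B = np.column_stack((2 * beta * xc - beta * xl, xc))
        D = np.column_stack((xc, xc - beta * xl))
        K = Tinv @ B @ np.linalg.inv(np.eye(2) + D.T @ Tinv @ B)   # gain (3.7)
        y, yl = xc @ w, xl @ w
        e = np.array([dn - y, (dn - y) - beta * (dl - yl)])        # (3.14)
        w = w + K @ e                                              # (3.15)
        Tinv = Tinv - K @ D.T @ Tinv                               # (3.8)
    return w
```

Each iteration costs O(m²) — the 2×2 inverse replaces the full matrix inversion, exactly as the lemma promises.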


Extension of the REW Algorithm for Multiple Lags

In Chapter 2, we briefly mentioned that the AEC can be extended by including multiple lags in the cost function. It is easy to see that the extended AEC is given by

J(w) = E[e²(n)] + β Σ_{L=1}^{Lmax} E[(e(n) − e(n−L))²]   (3.16)

where Lmax denotes the maximum number of lags utilized in the AEC cost function. It is not mandatory to use the same β constant for all the error-lag terms; however, for the sake of simplicity, we assume a single β value. The gradient of (3.16) with respect to the weight vector w is

∂J(w)/∂w = −E[e(n)x(n)] − β Σ_{L=1}^{Lmax} E[(e(n) − e(n−L))(x(n) − x(n−L))]   (3.17)

Recall the following matrix definitions (restated here for clarity):

R = E[x(n)xᵀ(n)]
S_L = E[(x(n) − x(n−L))(x(n) − x(n−L))ᵀ] = 2R − R_L,  R_L = E[x(n)xᵀ(n−L) + x(n−L)xᵀ(n)]
P = E[x(n)d(n)]
Q_L = E[(x(n) − x(n−L))(d(n) − d(n−L))] = 2P − P_L,  P_L = E[x(n)d(n−L) + x(n−L)d(n)]   (3.18)

Using the above definitions in (3.17) and equating the gradient to zero, we get the optimal extended AEC solution:

w_* = (R + β Σ_{L=1}^{Lmax} S_L)^{-1}(P + β Σ_{L=1}^{Lmax} Q_L)   (3.19)

At first glance, the computational complexity of (3.19) seems to be O(N³), but the symmetric structure of the matrices involved can be exploited to lower the complexity. Once again, we resort to the matrix inversion lemma as before and deduce a lower, O(N²)-complexity algorithm. Realize that the optimal extended AEC solution at any time instant n will be

w(n) = T^{-1}(n)V(n)   (3.20)

where T(n) = R(n) + β Σ_{L=1}^{Lmax} S_L(n) and V(n) = P(n) + β Σ_{L=1}^{Lmax} Q_L(n), as before. The estimator for the vector V(n) is a simple recursive correlator:

V(n) = V(n−1) + (1 + 2βLmax)x(n)d(n) − β Σ_{L=1}^{Lmax} [x(n)d(n−L) + x(n−L)d(n)]   (3.21)


The matrix T(n) can be estimated recursively in the same rank-2 form:

T(n) = T(n−1) + (1 + 2βLmax)x(n)xᵀ(n) − β Σ_{L=1}^{Lmax} [x(n)xᵀ(n−L) + x(n−L)xᵀ(n)]
     = T(n−1) + [2βLmax·x(n) − β Σ_{L=1}^{Lmax} x(n−L)]xᵀ(n) + x(n)[x(n) − β Σ_{L=1}^{Lmax} x(n−L)]ᵀ

Now, the matrices A, B, C, and D used in the inversion lemma of equation (3.3) are defined as follows:

A = T(n−1)
B = [2βLmax·x(n) − β Σ_{L=1}^{Lmax} x(n−L)   x(n)]
C = I_{2×2}
D = [x(n)   x(n) − β Σ_{L=1}^{Lmax} x(n−L)]   (3.22)

The only differences from the previous definitions lie in the expressions for the B and D matrices, which now require an inner loop running up to Lmax. The rest of the procedure remains the same as before. Once again, by proper application of the matrix inversion lemma, we were able to reduce the complexity of the matrix inversion to O(N²) by recursively computing the inverse in a way that requires only the inversion of a simple 2×2 matrix. This measure of complexity does not include the computations involved in building the B and D matrices. However, typically, the maximum number of lags will be smaller than the length of the adaptive filter; therefore, the additional overhead incurred in the estimation of the B and D matrices will not result in a significant change in the overall complexity.
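The inner loop that assembles B and D is the only structural change relative to the single-lag REW sketch; a minimal illustration of it (a hypothetical helper following (3.22)) is given below:

```python
import numpy as np

def multilag_bd(xc, x_lagged, beta):
    """Rank-2 factors B and D of (3.22) for the multi-lag REW recursion (a sketch).

    xc: the current regressor x(n) (length m);
    x_lagged: an (Lmax, m) array stacking x(n-1), ..., x(n-Lmax).
    """
    Lmax = x_lagged.shape[0]
    s = x_lagged.sum(axis=0)               # sum_{L=1}^{Lmax} x(n-L)
    B = np.column_stack((2 * beta * Lmax * xc - beta * s, xc))
    D = np.column_stack((xc, xc - beta * s))
    return B, D
```

One can verify that B·Dᵀ reproduces the rank-2 increment of T(n) above, so the remainder of the single-lag recursion carries over unchanged.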


Relationship to the Recursive Instrumental Variables Method

The previously derived REW algorithm for the single-lag case has a structure similar to that of the Instrumental Variables (IV) method. The IV method has its origins in statistics and was apparently proposed by Reiersøl [122]. Over a period of time, it has been adapted to model dynamical systems in control engineering. A lot of work on the applications of IV to control engineering problems has been done by Wong and Polak [123] and by Young [124-126]. Recent advances in IV methods for system identification and control have been mainly due to Söderström and Stoica [32,93]. It is beyond the scope of this dissertation to summarize the applications and impacts of IV in various engineering problems; for more details, refer to [32].

Basically, IV can be viewed as an extension of standard least-squares regression and can be used to estimate the parameters in white noise once the model order is known. The fundamental principle is to choose delayed regression vectors, known as instruments, that are uncorrelated with the additive white noise. IV can also be extended to handle colored-noise situations; this will be exclusively handled in Chapter 5. For now, the discussion is strictly limited to the white-noise scenario. Mathematically speaking, the IV method computes the solution

w_IV = (E[x(n−Δ)xᵀ(n)])^{-1} E[x(n−Δ)d(n)]   (3.23)

where the lag Δ is chosen such that the outer product of the regression vector x(n) with the lagged regression vector x(n−Δ) results in a matrix that is independent of the additive white-noise components v(n). In comparison, the REW solution (with β = −0.5) is given by w_* = R_L^{-1}P_L. Notice that in the REW solution the matrix R_L is symmetric and Toeplitz [8], which is very desirable; we exploit this fact to derive an elegant minor-components-based algorithm in the next section. Thus, in effect, the IV method can be considered a special case of the REW algorithm, obtained by removing the symmetric terms in R_L and P_L. We compare the performances of the REW and IV methods later in this chapter.
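For comparison with the REW sketch above, the batch IV estimate of (3.23) amounts to solving one non-symmetric normal equation from sample averages (a hypothetical illustration of our own, assuming a tapped-delay-line model):

```python
import numpy as np

def iv_solution(x, d, m, delta):
    """Batch instrumental-variables estimate of (3.23) (a sketch).

    The instrument is the delayed regressor x(n - delta), chosen so that its
    outer product with x(n) is free of the white input noise on average.
    """
    X = np.lib.stride_tricks.sliding_window_view(x, m)[:, ::-1]
    dm = d[m - 1:]
    Xi, Xc, dc = X[:-delta], X[delta:], dm[delta:]   # x(n-delta), x(n), d(n)
    Riv = Xi.T @ Xc / len(dc)                        # E[x(n-delta) x^T(n)]
    piv = Xi.T @ dc / len(dc)                        # E[x(n-delta) d(n)]
    return np.linalg.solve(Riv, piv)
```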


Recursive EWC Algorithm Based on Minor Components Analysis

Until now, we have focused on a Newton-type algorithm to compute the optimal AEC solution. Although the algorithm is fast converging, its convergence can be sensitive to ill-conditioning of the Hessian matrix R(n) + βS(n), which can happen in the first few iterations. Alternatively, we can explore the idea of using minor components analysis (MCA) to derive a recursive algorithm similar to the TLS algorithm for MSE. We call this the EWC-TLS algorithm. As the name suggests, this algorithm can be used only for the case β = −0.5, which reduces the augmented error criterion to the error whitening criterion. Recall that the TLS problem, in general, solves an over-determined set of linear equations of the form Ax = b, where A ∈ R^{m×n} is the data matrix, b ∈ R^m is the desired vector, x ∈ R^n is the parameter vector, and m denotes the number of different observation vectors, each of dimension n [41]. Alternatively, the linear equations can be written in the form [A; b][xᵀ, −1]ᵀ = 0, where [A; b] denotes an augmented data matrix. When [A; b] is a symmetric square matrix, it can be shown that the TLS solution is simply given by

x = −[v₁, …, v_n]ᵀ / v_{n+1}   (3.24)

where v_{n+1} is the last element of the minor eigenvector v_{n+1-dimensional} of the augmented data matrix. In the case of EWC, it is easy to show that the augmented data matrix [127,128] (analogous to [A; b]) is

G = [ R_L   P_L ;  P_Lᵀ   2ρ_d(L) ]   (3.25)


The term ρ_d(L) in (3.25) denotes the autocorrelation of the desired signal at lag L. It is important to note that the matrix in (3.25) is square and symmetric, owing to the symmetry of R_L; hence the eigenvectors of G are all real, which is highly desirable. Also, (3.25) still holds even with noisy data, as the entries of G are unaffected by the noise terms. In the infinite-sample case, the matrix G is not full rank, and we can immediately see that one of the eigenvalues of (3.25) is zero. In the finite-sample case, the goal is to find the eigenvector corresponding to the minimum absolute eigenvalue (finite samples also imply that G^{-1} exists). Since the eigenvalues of G can be both positive and negative, typical iterative gradient and even some fixed-point type algorithms tend to become unstable. A workaround would be to use the matrix G² instead of G: this obviates the problem of mixed eigenvalues while preserving the eigenvectors. However, the squaring operation is good only if the eigenvalues of G are well separated; otherwise, the smaller eigenvalues blend together, making the estimation of the minor component of G² more difficult. Also, the squaring operation creates additional overhead, thereby negating any computational benefits offered by the fixed-point type PCA solutions discussed in Appendix A.

So, we propose to use the inverse iteration method for estimating the minor eigenvector of G [8]. If w(n) ∈ R^{N+1} denotes the estimate of the minor eigenvector corresponding to the smallest absolute eigenvalue at time instant n, then the estimate at the (n+1)th instant is given by

w̃(n+1) = G^{-1}(n+1)w(n),   w(n+1) = w̃(n+1) / ‖w̃(n+1)‖   (3.26)

The term G(n+1) denotes the estimate of the augmented data matrix G of equation (3.25) at the (n+1)th instant.
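The inverse-iteration loop and the TLS extraction step (3.24) can be sketched as follows (our own illustration; a practical implementation would update G^{-1} recursively, as discussed next, instead of inverting G once):

```python
import numpy as np

def ewc_tls(G, n_iter=50):
    """Minor-eigenvector extraction by inverse iteration, then the TLS step (3.24).

    G: the (m+1) x (m+1) augmented matrix of (3.25), assumed invertible for
    finite samples. Each pass multiplies by G^{-1} and renormalizes, converging
    to the eigenvector of the smallest-magnitude eigenvalue.
    """
    rng = np.random.default_rng(0)
    v = rng.standard_normal(G.shape[0])
    Ginv = np.linalg.inv(G)
    for _ in range(n_iter):
        v = Ginv @ v
        v /= np.linalg.norm(v)             # normalized inverse iteration (3.26)
    return -v[:-1] / v[-1]                 # EWC-TLS weights from the minor eigenvector
```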


It is easy to see that G(n) can be recursively estimated as

G(n) = G(n−1) + φ(n)φᵀ(n−L) + φ(n−L)φᵀ(n)

where φ(n) = [x(n); d(n)] is the concatenated vector of the input and the desired response. Now we can invoke the inversion lemma as before and obtain a recursive O(N²) estimate of the matrix inverse in (3.26); the details of this derivation are trivial and omitted here. Once the minor-component estimate converges, i.e., w(n) → v, the EWC-TLS solution is simply given by equation (3.24). Thus, the overall complexity of the EWC-TLS algorithm is still O(N²), which is the same as that of the REW algorithm. However, we have observed through simulations that the EWC-TLS method converges faster than EWC-REW while preserving the accuracy of the parameter estimates.

Experimental Results

We now show the simulation results for the Newton-type algorithms for AEC. Specifically, our objective is to show the superior performance of the proposed criterion and the associated algorithms on the problem of system identification with noisy input data.

Estimation of System Parameters in White Noise Using REW

The REW algorithm can be used effectively to solve the system identification problem in noisy environments. As we have seen before, by setting β = −0.5, noise immunity can be gained for parameter estimation. We generated a purely white Gaussian random noise sequence of length 50,000 samples and added it to a colored input signal; the white noise is uncorrelated with the input signal. The noise-free, colored input signal was filtered by the unknown reference filter, and this formed the desired signal for the adaptive filter.


Since the noise in the desired signal would be averaged out for both the RLS and REW algorithms, we decided to use the clean desired signal itself; this brings out only the effects of input noise on the filter estimates. Also, the noise added to the clean input is uncorrelated with the desired signal. In the experiment, we varied the Signal-to-Noise Ratio (SNR) in the range −10 dB to +10 dB. The number of desired filter coefficients was also varied from 4 to 12. We then performed 100 Monte Carlo runs and computed the normalized error vector norm given by

error = 20 log₁₀( ‖w_* − w_T‖ / ‖w_T‖ )   (3.27)

where w_* is the weight vector estimated by the REW algorithm with β = −0.5 after 50,000 iterations (one complete presentation of the input data), and w_T is the true weight vector. In order to show the effectiveness of the REW algorithm, we performed Monte Carlo runs using the RLS algorithm on the same data to estimate the filter coefficients. Further, we also evaluated the analytical TLS solution for each case. Figure 3-1 shows a histogram plot of the normalized error vector norm given in (3.27) for all three methods. It is clear that the REW algorithm was able to perform better than the RLS at the various SNR and tap-length settings. In the high-SNR cases, there is not much of a difference between the RLS and REW results. However, under noisy circumstances, the reduction in the parameter estimation error with REW is orders of magnitude greater than with RLS. Also, the RLS algorithm results in a rather useless zero weight vector, i.e., w → 0, when the SNR is lower than −10 dB. On the other hand, TLS performs well only in the cases when the noise variances in the input and desired signals are the same. This is in conformance with the well-known theoretical limitations of the TLS algorithm.


Figure 3-1. Histogram plots showing the error vector norm for the REW and RLS algorithms and the numerical TLS solution.

Effect of β and Weight Tracks of the REW Algorithm

Since we have a free parameter β to choose, it is worthwhile to explore the effect of β on the AEC parameter estimates. The SNR of the input signal was fixed at 0 dB and −10 dB, the number of filter taps was set to 4, and the desired signal was noise-free as before. We performed 100 Monte Carlo experiments and analyzed the average error vector norm values for −1 ≤ β ≤ 1. The results of the experiment are shown in Figure 3-2. Notice that there is a dip at β = −0.5 (indicated by an asterisk in the figure), and this clearly gives us the minimum estimation error; this corresponds to the EWC solution.


Figure 3-2. Performance of the REW algorithm at (a) SNR = 0 dB and (b) SNR = −10 dB over various β values.

For β = 0 (indicated by a circle in the figure), the REW algorithm reduces to the regular RLS, giving a fairly significant estimation error. Next, the parameter β is set to −0.5 and the SNR to 0 dB, and the weight tracks are estimated for the REW and RLS algorithms. Figure 3-3 shows the weight tracks for both the REW and RLS algorithms, averaged over 50 Monte Carlo trials. Asterisks on the plots indicate the true parameters. The tracks for the RLS algorithm are smoother, but they converge to wrong values, which we have observed quite consistently. The weight tracks for the REW algorithm are noisier than those of the RLS, but they eventually converge to values very close to the true weights. This brings us to an important issue for estimators, viz., bias and variance. The RLS algorithm has a reduced variance because of the positive definiteness of the covariance matrix R(n); however, the RLS solution remains asymptotically biased in the presence of noisy input.


Figure 3-3. Weight tracks for the REW and RLS algorithms.

On the other hand, the REW algorithm produces zero bias, but the variance can be high owing to the conditioning of the Hessian matrix. However, this variance diminishes with an increasing number of samples. The noisy initial weight tracks of the REW algorithm may be attributed to ill-conditioning that is mainly caused by the smallest eigenvalue of the estimated Hessian matrix, R(n) + βS(n). The same holds true for the RLS algorithm, where the minimum eigenvalue of R(n) affects the sensitivity [14]. The instability issues of the RLS algorithm during the initial stages of adaptation have been well studied in the literature; the effects of round-off error have been analyzed, and many solutions have been proposed to make the RLS algorithm robust to such effects [129]. A similar analysis of the REW algorithm is yet to be done and will be addressed in future work on the topic.

Performance Comparisons between REW, EWC-TLS and IV Methods

In this example, we contrast the performances of the REW, EWC-TLS, and Instrumental Variables (IV) methods on a 4-tap system identification problem with noisy data. The input signal is colored and corrupted with white noise (the input SNR was set at 5 dB), and the desired signal SNR is 10 dB. For the IV method, we chose the delayed input vector x(n−Δ) as the instrument, and the lag Δ was chosen to be four, which is the length of the true filter.


Figure 3-4. Histogram plots showing the error vector norms for all the methods.

Once again, the performance metric was chosen as the error vector norm in dB given by equation (3.27). Figure 3-4 shows the error histograms for the REW, EWC-TLS, and IV methods and for the optimal Wiener solution. The EWC-TLS and REW algorithms outperform the Wiener MSE solution. The IV method also produces better results than the Wiener solution. Amongst the EWC solutions, we obtained better results with the EWC-TLS algorithm (equations (3.24) and (3.26)) than with REW. However, both EWC-TLS and REW outperformed the IV method. This may be partially attributed to the conditioning of the matrices involved in the estimation for the REW and IV methods; further theoretical analysis is required to quantify the effects of the conditioning and of the symmetric Toeplitz structure of R_L. In Figure 3-5, we show the angle between the estimated minor eigenvector and the true eigenvector of the augmented data matrix G for a random single trial in scenarios with and without noise.


Notice that the rates of convergence are very different. It is well known that the rate of convergence for the inverse iteration method is governed by the ratio λ₂/λ₁, where λ₁ is the largest and λ₂ the second-largest eigenvalue of G^{-1} [8]. The faster convergence seen in the noiseless case owes to the huge λ₁/λ₂ ratio.

Figure 3-5. Convergence of the minor eigenvector of G with (a) noisy data and (b) clean data.

Summary

In this chapter, we derived recursive Newton-type algorithms to estimate the optimal AEC solution. First, the Recursive Error Whitening (REW) algorithm was derived using the analytical AEC solution and the matrix inversion lemma; the well-known RLS algorithm for MSE becomes a special case of the REW algorithm. Further, a Total-Least-Squares-based EWC algorithm called EWC-TLS was proposed. This algorithm works with β = −0.5 and can be easily applied to estimate parameters in the presence of white noise. A fixed-point minor-components extraction algorithm was developed using the inverse iteration method. Other fixed-point or gradient-based methods cannot be used because of the indefiniteness of the matrix involved in the EWC-TLS formulation (a matrix with mixed eigenvalues makes those algorithms locally unstable).


The computational complexity of the above-mentioned algorithms is O(N²). We briefly explored an extension of the Newton-type algorithm for the extended AEC with multiple lags; effective usage of the matrix inversion lemma keeps the complexity of the extended REW algorithm at O(N²). In the latter half of the chapter, we discussed the performance of the algorithms on the problem of system identification in the presence of additive white noise. The proposed recursive algorithms outperform the RLS and the analytical MSE and TLS solutions. We also showed the simulation results with the EWC-TLS algorithm and quantitatively compared its performance with the well-known IV method.

Although the recursive EWC algorithms presented in this chapter are fast converging and sample efficient, the complexity of O(N²) can be too high for many applications involving low-power designs. Additionally, the recursive algorithms can exhibit limited performance in non-stationary conditions if the forgetting factors are not chosen properly. This motivates us to explore stochastic gradient algorithms (and their variants) for estimating the optimal AEC solution. Chapter 4 describes these algorithms and also highlights other benefits of the stochastic algorithms over their Newton-type counterparts.


CHAPTER 4
STOCHASTIC GRADIENT ALGORITHMS FOR AEC

Introduction

Stochastic gradient algorithms have been at the forefront in optimizing quadratic cost functions like the MSE. Owing to the presence of a global minimum in quadratic performance surfaces, gradient algorithms can elegantly accomplish the task of reaching the optimal solution at minimal computational cost. In this chapter, we derive the stochastic gradient algorithms for the AEC. Since the AEC performance surface is a weighted sum of quadratics, we can expect that difficulties will arise; however, we will show that, using some simple optimization tricks, we can overcome these difficulties in an elegant manner.

Derivation of the Stochastic Gradient AEC-LMS Algorithm

Assume that we have a noisy training data set of the form (x(n), d(n)), where x(n) ∈ R^m is the input and d(n) is the output of a linear system with coefficient vector w_T. The goal is to estimate the parameter vector w_T using the augmented error criterion. We know that the AEC cost function is given by

J(w) = E[e²(n)] + β E[ė²(n)]   (4.1)

where ė(n) = e(n) − e(n−L), w is the estimate of the parameter vector, and L ≥ m, the size of the input vector. For convenience, we restate the following definitions: ẋ(n) = x(n) − x(n−L), ḋ(n) = d(n) − d(n−L), R = E[x(n)xᵀ(n)], S = E[ẋ(n)ẋᵀ(n)], P = E[x(n)d(n)], and Q = E[ẋ(n)ḋ(n)].


Using these definitions, we can rewrite the cost function in (4.1) as

J(w) = E[d²(n)] + β E[ḋ²(n)] + wᵀ(R + βS)w − 2wᵀ(P + βQ)   (4.2)

It is easy to see that both E[e²(n)] and E[ė²(n)] have parabolic performance surfaces, as their Hessians have positive eigenvalues. However, the value of β can invert the performance surface of βE[ė²(n)]. For β > 0, the stationary point is always a global minimum, and the gradient of (4.2) can be written as the sum of the individual gradients:

∂J(w)/∂w = 2(Rw − P) + 2β(Sw − Q)   (4.3)

The above gradient can be approximated by the stochastic instantaneous gradient, obtained by removing the expectation operators (the factor of 2 is absorbed into the step-size):

∂J(n)/∂w(n) ≈ −[e(n)x(n) + βė(n)ẋ(n)]   (4.4)

The goal is to minimize the cost function, and hence, using steepest descent, we can write the weight update of the stochastic AEC-LMS algorithm for β > 0 as

w(n+1) = w(n) + μ(n)[e(n)x(n) + βė(n)ẋ(n)]   (4.5)

where μ(n) > 0 is a finite step-size parameter that controls convergence. For β < 0, the stationary point is still unique, but it can be a saddle point, a global maximum, or a global minimum depending on the value of β. Evaluating the gradient as before and using the instantaneous gradient, we get the AEC-LMS algorithm for β < 0:

w(n+1) = w(n) ± μ(n)[e(n)x(n) + βė(n)ẋ(n)]   (4.6)

where μ(n) is again a small step-size. However, there is no guarantee that the above update rules will be stable for all choices of step-sizes.
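One iteration of the β > 0 update (4.5) is shown below (a minimal sketch; the function name and interface are our own):

```python
import numpy as np

def aec_lms_step(w, xc, xl, dn, dl, beta, mu):
    """One stochastic AEC-LMS update, equation (4.5), for beta > 0 (a sketch).

    xc = x(n), xl = x(n-L), dn = d(n), dl = d(n-L); the update direction is
    the instantaneous gradient of (4.4).
    """
    e = dn - w @ xc                     # e(n)
    edot = e - (dl - w @ xl)            # e(n) - e(n-L)
    xdot = xc - xl                      # x(n) - x(n-L)
    return w + mu * (e * xc + beta * edot * xdot)
```

For β < 0, the ± ambiguity of (4.6) applies, which is the subject of the analysis that follows.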


Although equations (4.5) and (4.6) are otherwise identical, we use the ± sign in (4.6) to analyze the convergence of the algorithm specifically for β < 0. The reason for the separate analysis is that the convergence characteristics of (4.5) and (4.6) are very different.

Convergence Analysis of the AEC-LMS Algorithm

Theorem 4.1. The stochastic AEC algorithms asymptotically converge in the mean to the optimal solution given by

w_* = (R + βS)^{-1}(P + βQ), for both β > 0 and β < 0   (4.7)

We make the following mild assumptions, typically applied to stochastic approximation algorithms [79-81,84], which can be easily satisfied.

1. The input vectors x(n) are derived from at least a wide-sense-stationary (WSS) colored random signal with positive definite autocorrelation matrix R = E[x(n)xᵀ(n)].
2. The matrix R_L = E[x(n)xᵀ(n−L) + x(n−L)xᵀ(n)] exists and has full rank.
3. The sequence of weight vectors w(n) is bounded with probability 1.
4. The update functions h(w(n)) = e(n)x(n) + βė(n)ẋ(n) for β > 0, and h(w(n)) = ±[e(n)x(n) + βė(n)ẋ(n)] for β < 0, exist and are continuously differentiable with respect to w(n), and their derivatives are bounded in time.
5. Even if h(w(n)) has some discontinuities, a mean update vector h̄(w) = lim_{n→∞} E[h(w(n))] exists.

Assumption A.1 is easily satisfied. A.2 requires that the input signal have sufficient correlation with itself for at least L lags.

Proof of AEC-LMS Convergence for β > 0

We first consider the update equation in (4.5), which is the stochastic AEC-LMS algorithm for β > 0.


Without loss of generality, we assume that the input vectors x(n) and their corresponding desired responses d(n) are noise-free. The mean update vector h̄(w(n)) is given by

h̄(w(n)) = dw(t)/dt = E[e(n)x(n) + βė(n)ẋ(n)] = (P − Rw(n)) + β(Q − Sw(n))   (4.8)

The stationary point of the ordinary differential equation (ODE) in (4.8) is given by

w_* = (R + βS)^{-1}(P + βQ)   (4.9)

We define the error vector at time instant n as ε(n) = w_* − w(n). Therefore,

ε(n+1) = ε(n) − μ(n)[e(n)x(n) + βė(n)ẋ(n)]   (4.10)

and the norm of the error vector at time n+1 is simply

‖ε(n+1)‖² = ‖ε(n)‖² − 2μ(n)εᵀ(n)[e(n)x(n) + βė(n)ẋ(n)] + μ²(n)‖e(n)x(n) + βė(n)ẋ(n)‖²   (4.11)

Imposing the condition ‖ε(n+1)‖² < ‖ε(n)‖² for all n, we get an upper bound on the time-varying step-size parameter μ(n):

μ(n) < 2εᵀ(n)[e(n)x(n) + βė(n)ẋ(n)] / ‖e(n)x(n) + βė(n)ẋ(n)‖²   (4.12)

Simplifying the above equation using the facts that εᵀ(n)x(n) = e(n) and εᵀ(n)ẋ(n) = ė(n), we get

μ(n) ≤ 2[e²(n) + βė²(n)] / ‖e(n)x(n) + βė(n)ẋ(n)‖²   (4.13)

which is a more practical upper bound on the step-size, as it can be directly estimated from the input and desired data.


As an observation, notice that if β = 0, the bound in (4.13) reduces to

μ(n) ≤ 2 / ‖x(n)‖²   (4.14)

which, when included in the update equation, yields a variant of the Normalized LMS (NLMS) algorithm [14]. In general, if the step-size parameter is chosen according to the bound given by (4.13), then the norm of the error vector ε(n) is a monotonically decreasing sequence converging asymptotically to zero, i.e., lim_{n→∞} ‖ε(n)‖² = 0, which implies that lim_{n→∞} w(n) = w_* (see Appendix G for details). In addition, the upper bound on the step-size ensures that the weights are always bounded with probability one, satisfying assumption A.3 made before. Thus the weight vector w(n) converges asymptotically to w_*, which is the only stable stationary point of the ODE in (4.8). Note that (4.5) is an O(m) algorithm.

Proof of AEC-LMS Convergence for β < 0

We analyze the convergence of the stochastic gradient algorithm for β < 0 in the presence of white noise, because this is the relevant case (β = −0.5 eliminates the bias due to noise added to the input). From (4.6), the mean update vector h̄(w(n)) is given by

h̄(w(n)) = dw(t)/dt = ±E[e(n)x(n) + βė(n)ẋ(n)] = ±[(P + βQ) − (R + βS)w(n)]   (4.15)

As before, the stationary point of this ODE is

w_* = (R + βS)^{-1}(P + βQ)   (4.16)

The eigenvalues of R + βS decide the nature of the stationary point. If they are all positive, we have a global minimum; if they are all negative, we have a global maximum.


In these two cases, the stochastic gradient algorithm in (4.6) with a proper fixed-sign step-size would converge to the stationary point, which would be stable. However, we know that the eigenvalues of R + βS can also take both positive and negative values, resulting in a saddle stationary point. Thus, the underlying dynamical system would have both stable and unstable modes, making it impossible for the algorithm in (4.6) with a fixed-sign step-size to converge. This is well known in the literature [3,14]. However, as will be shown next, this difficulty can be removed in our case by appropriately utilizing the sign of the update equation (remember that this saddle point is the only stationary point of the quadratic performance surface). The general idea is to use a vector step-size (one step-size per weight) having both positive and negative values. One unrealistic way (for an on-line algorithm) to achieve this goal is to estimate the eigenvalues of R + βS. Alternatively, we can derive conditions on the step-size for guaranteed convergence. As before, we define the error vector at time instant n as ε(n) = w_* − w(n). The norm of the error vector at time instant n+1, with a step-size μ(n) that may now take either sign, is given by

‖ε(n+1)‖² = ‖ε(n)‖² − 2μ(n)εᵀ(n)[e(n)x(n) + βė(n)ẋ(n)] + μ²(n)‖e(n)x(n) + βė(n)ẋ(n)‖²   (4.17)

Taking expectations on both sides, we get

E‖ε(n+1)‖² = E‖ε(n)‖² − 2μ(n)E{εᵀ(n)[e(n)x(n) + βė(n)ẋ(n)]} + μ²(n)E‖e(n)x(n) + βė(n)ẋ(n)‖²   (4.18)

The mean of the error vector norm will monotonically decay to zero over time, i.e., E‖ε(n+1)‖² < E‖ε(n)‖², if and only if the step-size lies between zero and the following (possibly negative) bound:

μ(n) < 2E{εᵀ(n)[e(n)x(n) + βė(n)ẋ(n)]} / E‖e(n)x(n) + βė(n)ẋ(n)‖²   (4.19)


Let x(n) = x̃(n) + v(n) and d(n) = d̃(n) + u(n), where x̃(n) and d̃(n) are the clean input and desired data, respectively. We further assume that the input noise vector v(n) and the noise u(n) in the desired signal are uncorrelated, and that the noise signals are independent of the clean input and desired signals. Furthermore, the lag L is chosen to be greater than m, the length of the filter under consideration. Since the noise is assumed to be purely white, E[v(n)vᵀ(n−L)] = E[v(n−L)vᵀ(n)] = 0 and E[v(n)vᵀ(n)] = V. We have

e(n) = d̃(n) + u(n) − wᵀ(n)x̃(n) − wᵀ(n)v(n)   (4.20)

Simplifying this further and taking expectations, we get

E[e(n)εᵀ(n)x(n)] = J_MSE(n) − w_TᵀVw(n)   (4.21)

where

J_MSE(n) = wᵀ(n)(R̃ + V)w(n) − 2P̃ᵀw(n) + var[d̃(n)]   (4.22)

with R̃ = E[x̃(n)x̃ᵀ(n)] and P̃ = E[x̃(n)d̃(n)]. Similarly, we have

ė(n) = [d̃(n) − d̃(n−L)] + [u(n) − u(n−L)] − wᵀ(n)[x̃(n) − x̃(n−L)] − wᵀ(n)[v(n) − v(n−L)]   (4.23)

Evaluating the expectations on both sides of (4.23) and simplifying, we obtain

E[ė(n)εᵀ(n)ẋ(n)] = J_ENT(n) − 2w_TᵀVw(n)   (4.24)


where we have used the definitions S̃ = E[(x̃(n) − x̃(n−L))(x̃(n) − x̃(n−L))ᵀ], Q̃ = E[(x̃(n) − x̃(n−L))(d̃(n) − d̃(n−L))], and

J_ENT(n) = wᵀ(n)(S̃ + 2V)w(n) − 2Q̃ᵀw(n) + var[d̃(n) − d̃(n−L)]   (4.25)

Using (4.21) and (4.24) in equation (4.19), we get an expression for the upper bound on the step-size:

μ(n) ≤ 2[J_MSE(n) + βJ_ENT(n) − (1+2β)w_TᵀVw(n)] / E‖e(n)x(n) + βė(n)ẋ(n)‖²   (4.26)

This expression is not usable in practice as an upper bound, because it depends on the optimal weight vector. However, for β = −0.5, the upper bound on the step-size reduces to

μ(n) ≤ 2[J_MSE(n) − 0.5·J_ENT(n)] / E‖e(n)x(n) − 0.5·ė(n)ẋ(n)‖²   (4.27)

From (4.22) and (4.25), we know that J_MSE and J_ENT are positive quantities; however, J_MSE − 0.5·J_ENT can be negative. Also note that this upper bound is computed by evaluating the right-hand side of (4.27) with the current weight vector w(n). Thus, as expected, it is very clear that the step-size at the nth iteration can take either positive or negative values based on J_MSE − 0.5·J_ENT; therefore, sgn(μ(n)) must be the same as sgn(J_MSE − 0.5·J_ENT) evaluated at w(n). Intuitively speaking, the term J_MSE − 0.5·J_ENT is the EWC cost computed with the current weights w(n) and β = −0.5, which tells us where we are on the performance surface, and its sign tells which way to go to reach the stationary point. It also means that the lower bound on the step-size is not positive, as in traditional gradient algorithms. In general, if the step-size we choose satisfies (4.27), then the mean error vector norm decreases asymptotically, i.e., E‖ε(n+1)‖² ≤ E‖ε(n)‖², and eventually becomes zero, which implies that lim_{n→∞} E[w(n)] = w_*.


Thus the weight vector E[w(n)] converges asymptotically to w_*, which is the only stationary point of the ODE in (4.15). We conclude that knowledge of the eigenvalues is not needed to implement gradient descent on the EWC performance surface, but (4.27) is still not appropriate for a simple LMS-type algorithm.

On-line Implementations of AEC-LMS for β < 0

As mentioned before, computing J_MSE − 0.5·J_ENT at the current weight vector would require reusing the entire past data at every iteration. As an alternative, we can extract the curvature at the operating point and include that information in the gradient algorithm. Doing so, we obtain the following stochastic algorithm:

w(n+1) = w(n) + μ(n)·sgn[wᵀ(n)(R(n) + βS(n))w(n)]·[e(n)x(n) + βė(n)ẋ(n)]   (4.28)

where R(n) and S(n) are the estimates of R and S, respectively, at the nth time instant.

Corollary: Given any quadratic surface J(w), the following gradient algorithm converges to its stationary point:

w(n+1) = w(n) − μ(n)·sgn[wᵀ(n)Hw(n)]·∂J(w)/∂w|_{w(n)}   (4.29)

Proof: Without loss of generality, suppose we are given a quadratic surface of the form J(w) = wᵀHw, where H ∈ R^{m×m} and w ∈ R^{m×1}. H is restricted to be symmetric; therefore, it is the Hessian matrix of this quadratic surface. The gradient of the performance surface with respect to the weights, evaluated at a point w₀, is 2Hw₀, and the stationary point of J(w) is the origin. Since the performance surface is quadratic, any cross-section passing through the stationary point is a parabola. Consider the cross-section of J(w) along the line defined by the local gradient that passes through the point w₀.


In general, the Hessian matrix of this surface can be positive or negative definite, and it might as well have mixed eigenvalues. The unique stationary point of J(w), which makes its gradient zero, can be reached by moving along the direction of the local gradient. The important issue is the selection of the sign, i.e., whether to move along or against the gradient direction to reach the stationary point. The decision can be made by observing the local curvature of the cross-section of J(w) along the gradient direction. The performance-surface cross-section along the gradient direction at w₀ is

J(w₀ − 2ηHw₀) = (w₀ − 2ηHw₀)ᵀH(w₀ − 2ηHw₀) = w₀ᵀ(H − 4ηH² + 4η²H³)w₀   (4.30)

From this, we deduce that the local curvature of the parabolic cross-section at w₀ (the coefficient of η²) is 4w₀ᵀH³w₀. If the performance surface is locally convex, this curvature is positive; if the performance surface is locally concave, it is negative. Also note that sgn(4w₀ᵀH³w₀) = sgn(w₀ᵀHw₀). Thus, the update equation with the curvature information in (4.29) converges to the stationary point of the quadratic cost function J(w), irrespective of the nature of that stationary point.

From the above corollary, and utilizing the fact that the matrix R + βS is symmetric, we conclude that the update equation in (4.28) asymptotically converges to the stationary point w_* = (R + βS)^{-1}(P + βQ). On the down side, however, the update equation in (4.28) requires O(m²) computations, which makes the algorithm unwieldy for real-world applications; besides, we could use the REW algorithm instead, which has a similar complexity.


For an O(m) algorithm, we have to go back to the update rule in (4.6). We discuss only the simple case β = −0.5, which turns out to be also the most useful. We propose to use an instantaneous estimate of the sign with the current weights, which gives

w(n+1) = w(n) + μ(n)·sgn[e²(n) − 0.5·ė²(n)]·[e(n)x(n) − 0.5·ė(n)ẋ(n)]   (4.31)

where μ(n) > 0 is bounded by (4.27). It is possible to make mistakes in the sign estimation when (4.31) is utilized, which will not affect the convergence in the mean, but will penalize the misadjustment.
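A complete sketch of this O(m) algorithm follows (our own illustration with a fixed small step-size in place of the bound (4.27); sgn(0) = 0 simply skips the update):

```python
import numpy as np

def ewc_lms(x, d, m, L, mu=1e-3):
    """EWC-LMS with the instantaneous sign estimate of (4.31), beta = -0.5 (a sketch)."""
    w = np.zeros(m)
    X = np.lib.stride_tricks.sliding_window_view(x, m)[:, ::-1]
    dm = d[m - 1:]
    for n in range(L, len(dm)):
        e = dm[n] - w @ X[n]                       # e(n)
        edot = e - (dm[n - L] - w @ X[n - L])      # e(n) - e(n-L)
        xdot = X[n] - X[n - L]                     # x(n) - x(n-L)
        s = np.sign(e * e - 0.5 * edot * edot)     # instantaneous EWC-cost sign
        w = w + mu * s * (e * X[n] - 0.5 * edot * xdot)
    return w
```

Each iteration costs O(m), in contrast with the O(m²) of (4.28) and of the REW algorithm.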


Excess Error Correlation Bound for EWC-LMS

In the next theorem, we show that the asymptotic excess error correlation at lag L ≥ m is always bounded from above and can be arbitrarily reduced by controlling the step-size.

Theorem 4.2: With β = −1/2, the steady-state excess error autocorrelation at lag L ≥ m, i.e., ρ_ê(L), is always bounded by

ρ_ê(L) ≤ (μ/2)·Tr(R + σ_v²I)·[E(e_a²(k)) + σ_u² + σ_v²‖w‖²]   (4.32)

where R = E[x_k x_kᵀ] and Tr(·) denotes the matrix trace. The term E(e_a²(k)) denotes the excess MSE, which is (w_T − w)ᵀR(w_T − w). The noise variances in the input and desired signals are represented by σ_v² and σ_u², respectively. Note that the term ‖w‖² is always bounded because of the step-size bound.

Proof: For convenience, we adopt the subscript k to denote the time (iteration) index. With this convention, the weight vector at the kth iteration is denoted by w_k, and the error vector (the difference between the true vector w_T and the adaptive estimate at time k) is denoted by ε_k = w_T − w_k. Throughout the rest of the proof, we use the following notation: the noisy input vector x̂_k, the noise-free input vector x_k, and the input noise vector v_k obey x̂_k = x_k + v_k; the noisy desired signal d̂_k, the noise-free desired signal d_k, and the noise u_k are related by d̂_k = d_k + u_k. We also write Δê_k = ê_k − ê_{k−L} and Δx̂_k = x̂_k − x̂_{k−L} for the lag-L differences. We start from the equation describing the dynamics of the error vector norm:

‖ε_{k+1}‖² = ‖ε_k‖² − 2μ·sgn(ê_k² − 0.5·Δê_k²)·ε_kᵀ(ê_k x̂_k − 0.5·Δê_k Δx̂_k) + μ²‖ê_k x̂_k − 0.5·Δê_k Δx̂_k‖²   (4.33)

In (4.33), we have assumed a constant step-size μ that satisfies the upper bound in (4.27). Letting E‖ε_{k+1}‖² = E‖ε_k‖² as k → ∞, we see that

2E[sgn(ê_k² − 0.5·Δê_k²)·ε_kᵀ(ê_k x̂_k − 0.5·Δê_k Δx̂_k)] = μ·E‖ê_k x̂_k − 0.5·Δê_k Δx̂_k‖²   (4.34)

We now invoke Jensen's inequality for convex functions [130] to reduce (4.34) further, yielding

2E[ε_kᵀ(ê_k x̂_k − 0.5·Δê_k Δx̂_k)] ≤ μ·E‖ê_k x̂_k − 0.5·Δê_k Δx̂_k‖²   (4.35)

The noisy error term is given by ê_k = e_a(k) + u_k − w_kᵀv_k, where the excess error is e_a(k) = ε_kᵀx_k. Using the expressions E[ê_k ε_kᵀx̂_k] = E[e_a²(k)] − w_kᵀVw_T + w_kᵀVw_k and E[Δê_k ε_kᵀΔx̂_k] = E[Δe_a²(k)] − 2w_kᵀVw_T + 2w_kᵀVw_k, together with β = −0.5, we can immediately recognize that the left-hand side of (4.35) is simply twice the steady-state excess error autocorrelation at lag L ≥ m, i.e., 2ρ_ê(L). In order to bound the right-hand side of (4.35), we assume that the terms ‖x̂_k‖² and ê_k² are uncorrelated in the steady state.


Using this assumption, we can write

E‖ê_k x̂_k − 0.5·Δê_k Δx̂_k‖² ≤ 2E(ê_k²)·Tr(R + σ_v²I)   (4.36)

where E(ê_k²) = E(e_a²(k)) + σ_u² + σ_v²‖w‖². Using (4.36) in equation (4.35), we get the inequality in (4.32).

This assumption (more relaxed than the independence assumptions [11,14]) is used in computing the steady-state excess MSE for the stochastic LMS algorithm [131,132] and becomes more realistic for long filters. In the estimation of the excess MSE for the LMS algorithm, Price's theorem [133] for Gaussian random variables can be invoked to derive closed-form expressions. However, even the Gaussianity assumption is questionable, as discussed by Eweda [134], who proposed additional reasonable constraints on the noise pdf to overcome the Gaussianity and independence assumptions, leading to a more generic treatment for the sign-LMS algorithm. It is important to realize that in the analysis presented here, no explicit Gaussianity assumptions have been made.

As a special case, consider L = 0 and a noise-free input. Then (4.32) holds with the equality sign, and ρ_ê(L) is the same as E(e_a²(k)), which is nothing but the excess MSE (as k → ∞) of the LMS algorithm. In other words, (4.32) reduces to

E(e_a²(k)) = (μ/2)·Tr(R)·[E(e_a²(k)) + σ_u²]   (4.37)

From (4.37), the excess MSE of the LMS algorithm [14] can be deduced as

E(e_a²(k)) = μ·σ_u²·Tr(R) / (2 − μ·Tr(R))   (4.38)

which reduces to μσ_u²Tr(R)/2 for very small step-sizes. If the adaptive filter is long enough, the excess error e_a(k) will be Gaussian, and we can easily show that the excess MSE is bounded by μ·Tr(R)·E[e₀²]/4, where e₀ denotes the error due to the initial condition [131].


If the adaptive filter is long enough, the excess error $e_a(k)$ will be Gaussian, and we can easily show that the excess MSE is bounded by $\eta\,\mathrm{Tr}(\mathbf R)\,E[\varepsilon_0^2]/4$, where $\varepsilon_0$ denotes the error due to the initial condition [131].

Other Variants of the AEC-LMS Algorithms

It is easy to see that the condition for convergence of the mean is $|1 - \eta\lambda_k(\mathbf R + \beta\mathbf S)| < 1$ for all $k$, where $\lambda_k(\mathbf R + \beta\mathbf S)$ denotes the $k$th eigenvalue of the matrix $\mathbf R + \beta\mathbf S$. This gives an upper bound on the step-size as $\eta < 2/\lambda_{\max}(\mathbf R + \beta\mathbf S)$. From the triangle inequality [8], $\lambda_{\max}(\mathbf R + \beta\mathbf S) \le \|\mathbf R\|_2 + \beta\|\mathbf S\|_2$, where $\|\cdot\|_2$ denotes the matrix norm. Since both $\mathbf R$ and $\mathbf S$ are positive-definite matrices, we can write

$$\lambda_{\max}(\mathbf R + \beta\mathbf S) \le \mathrm{Tr}(\mathbf R) + \beta\,\mathrm{Tr}(\mathbf S) = E\|\mathbf x(n)\|^2 + \beta\,E\|\dot{\mathbf x}(n)\|^2 \qquad (4.39)$$

In a stochastic framework, we can include this in the AEC-LMS update equation, which results in a step-size-normalized AEC-LMS update rule given by

$$\mathbf w(n+1) = \mathbf w(n) + \frac{\eta\,\mathrm{sgn}\big(e^2(n) + \beta\dot e^2(n)\big)\big[e(n)\mathbf x(n) + \beta\,\dot e(n)\dot{\mathbf x}(n)\big]}{\|\mathbf x(n)\|^2 + \beta\|\dot{\mathbf x}(n)\|^2} \qquad (4.40)$$

Note that when $\beta = 0$, (4.40) reduces to the well-known normalized LMS (NLMS) algorithm [14]. Alternatively, we can normalize by the norm squared of the gradient, which gives the following modified update rule:

$$\mathbf w(n+1) = \mathbf w(n) + \frac{\eta\,\mathrm{sgn}\big(e^2(n) + \beta\dot e^2(n)\big)\big[e(n)\mathbf x(n) + \beta\,\dot e(n)\dot{\mathbf x}(n)\big]}{\varsigma + \|e(n)\mathbf x(n) + \beta\,\dot e(n)\dot{\mathbf x}(n)\|^2} \qquad (4.41)$$

The term $\varsigma$, a small positive constant, compensates for numerical instabilities when the signal has zero power or when the error goes to zero, which can happen in the noiseless case even with a finite number of samples. Once again, we would like to state that with $\beta = 0$, (4.41) defaults to the NLMS algorithm.
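The two normalized variants can be sketched as follows. This is a hedged illustration rather than a reference implementation: it assumes $\beta \ge 0$ so that the denominator of (4.40) stays positive (consistent with the positive-definite assumption behind (4.39)), a small regularizer eps is added to both denominators for numerical safety, and all names and default values are placeholders.

```python
import numpy as np

def aec_lms_normalized(x, d, m, L, beta=0.5, eta=0.5, eps=1e-8,
                       grad_norm=False):
    """Normalized AEC-LMS variants (sketch): (4.40) when grad_norm is
    False, (4.41) when grad_norm is True."""
    w = np.zeros(m)
    for n in range(m + L, len(x)):
        xn = x[n - m + 1:n + 1][::-1]
        xl = x[n - L - m + 1:n - L + 1][::-1]
        e_n, e_l = d[n] - w @ xn, d[n - L] - w @ xl
        de, dx = e_n - e_l, xn - xl
        g = e_n * xn + beta * de * dx         # instantaneous gradient direction
        s = np.sign(e_n**2 + beta * de**2)    # sign of the instantaneous cost
        if grad_norm:
            w = w + eta * s * g / (eps + g @ g)                       # (4.41)
        else:
            w = w + eta * s * g / (eps + xn @ xn + beta * (dx @ dx))  # (4.40)
    return w
```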


However, the caveat is that both (4.40) and (4.41) do not satisfy the principle of minimum disturbance, unlike the NLMS³ [14]. Nevertheless, the algorithms in (4.40) and (4.41) can be used to provide faster convergence at the expense of increased misadjustment (in the error-correlation sense) in the final solution.

AEC-LMS Algorithm with Multiple Lags

In the previous chapter, we discussed a recursive Newton-type algorithm that included more than one lag in the cost function. With decreasing SNR at the input, the Hessian matrix $\mathbf H = \mathbf R + \beta\mathbf S$ is mostly determined by the noise covariance matrix. This can degrade the performance, and we might be forced to use very small step-sizes (slow convergence) to achieve good results. One way of alleviating this problem is to incorporate multiple lags in the AEC cost function. The stochastic gradient AEC-LMS algorithm for the multiple-lag case is simply given by

$$\mathbf w(n+1) = \mathbf w(n) + \eta\sum_{L=1}^{L_{\max}}\mathrm{sgn}\big(e^2(n) + \beta\dot e_L^2(n)\big)\big[e(n)\mathbf x(n) + \beta\,\dot e_L(n)\dot{\mathbf x}_L(n)\big] \qquad (4.42)$$

where $L_{\max}$ is the total number of lags (constraints) used in the AEC cost function, $\dot e_L(n) = e(n) - e(n-L)$, and $\dot{\mathbf x}_L(n) = \mathbf x(n) - \mathbf x(n-L)$. The additional robustness of using multiple lags comes at an increase in the computational cost, and when the number of lags becomes equal to the length of the adaptive filter, the complexity approaches that of the recursive Newton-type algorithms.

³ The NLMS algorithm is also called the minimum-norm update algorithm. It can be formulated as a constrained minimization problem wherein the actual cost function is the norm of the update, viz., $\|\mathbf w(n) - \mathbf w(n-1)\|^2$, and the constraint is that the error $e(n)$ with the weights $\mathbf w(n)$ must be zero.
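Returning to the multiple-lag update (4.42), a minimal sketch is given below; the per-lag differences $\dot e_L(n)$ and $\dot{\mathbf x}_L(n)$ are recomputed for each lag, which is what makes the cost grow with $L_{\max}$. Names and parameter values are illustrative placeholders.

```python
import numpy as np

def aec_lms_multilag(x, d, m, L_max, beta=-0.5, eta=1e-3):
    """Multiple-lag AEC-LMS update (4.42), one sign-corrected gradient
    term accumulated per lag L = 1..L_max (sketch)."""
    w = np.zeros(m)
    for n in range(m + L_max, len(x)):
        xn = x[n - m + 1:n + 1][::-1]
        e_n = d[n] - w @ xn
        update = np.zeros(m)
        for L in range(1, L_max + 1):
            xl = x[n - L - m + 1:n - L + 1][::-1]
            e_l = d[n - L] - w @ xl
            de, dx = e_n - e_l, xn - xl          # ė_L(n) and ẋ_L(n)
            update += np.sign(e_n**2 + beta * de**2) * (e_n * xn + beta * de * dx)
        w = w + eta * update
    return w
```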


The stochastic AEC algorithms have linear complexity, in comparison with the $O(N^2)$ complexity of the recursive Newton-type algorithms discussed in the previous chapter. At the same time, since the algorithms are all based on instantaneous gradients, they have better tracking abilities when compared with their Newton counterparts. Hence these algorithms can be expected to perform better in nonstationary conditions.

Simulation Results

Estimation of System Parameters in White Noise

The experimental setup is the same as the one used to test the REW algorithm. We varied the Signal-to-Noise Ratio (SNR) between −10dB and +10dB and changed the number of filter parameters from 4 to 12. We set $\beta = -0.5$ and used the update equation in (4.31) for the EWC-LMS algorithm. A time-varying step-size magnitude was chosen in accordance with the upper bound given by (4.27), without the expectation operators. This greatly reduces the computational burden but makes the algorithm noisier. However, since we are using 50,000 samples for estimating the parameters, we can expect the errors to average out over the iterations. For the LMS algorithm, we chose the step-size that gave the least error in each trial. In total, 100 Monte Carlo trials were performed, and histograms of the normalized error vector norms were plotted. It is possible to use other statistical measures instead of the error norm, but this is sufficient to demonstrate the bias-removal ability of EWC-LMS. For comparison purposes, we computed the solutions with LMS as well as the numerical TLS (regular TLS) method. Figure 4-1 shows the error histograms for all three methods. The inset plots in Figure 4-1 summarize the histograms for each method. EWC-LMS performs significantly better than LMS at low SNR values (−10dB and 0dB), while performing equally well at 10dB SNR. The input noise variances for the −10dB, 0dB, and 10dB SNR values are 10, 1, and 0.1, respectively. Thus, we expect (and observe) the TLS results to be worst for −10dB and best


for 10dB. As per the theory, we observe that the TLS performance drops when the noise variances are not the same in the input and desired signals.

Figure 4-1. Histogram plots showing the error vector norm for the EWC-LMS and LMS algorithms and the numerical TLS solution.

Figure 4-2 shows a sample comparison between the stochastic and the recursive algorithms for 0dB SNR and 4 filter taps. Interestingly, the performance of the EWC-LMS algorithm is better than that of the REW algorithm in the presence of noise. Similarly, the LMS algorithm is much better than the RLS algorithm. This tells us that the stochastic


algorithms presumably reject more noise than the fixed-point algorithms. Researchers have made this observation before, although no concrete arguments exist to account for this "smartness" of the adaptive algorithms [135]. Similar conclusions can be drawn in our case for EWC-LMS and REW.

Figure 4-2. Comparison of stochastic versus recursive algorithms.

Weight Tracks and Convergence

The steady-state performance of a stochastic gradient algorithm is a matter of great importance. We will now experimentally verify the steady-state behavior of the EWC-LMS algorithm. The SNR of the input signal is set to 10dB, and the number of filter taps is fixed at two for display convenience. Figure 4-3 shows the contour plot of the EWC cost function with noisy input data. Clearly, the Hessian of this performance surface has both positive and negative eigenvalues, thus making the stationary point an undesirable saddle point. On the same plot, we have shown the weight tracks of the EWC-LMS algorithm with $\beta = -0.5$. Also, we used a fixed value of 0.001 for the step-size. From


the figure, it is clear that the EWC-LMS algorithm converges stably to the saddle-point solution, which is theoretically unstable when a single-sign step-size is used. Notice that, due to the constant step-size, there is misadjustment in the final solution. In Figure 4-4, we show the individual weight tracks for the EWC-LMS algorithm. The weights converge to the vicinity of the true filter parameters, which are −0.2 and 0.5 respectively, within 1000 samples.

Figure 4-3. Contour plots with the weight tracks showing convergence to the saddle point.

Figure 4-4. Weight tracks for the stochastic algorithm.


In order to see whether the algorithm converges to the saddle-point solution in a robust manner, we ran the same experiment using different initial conditions on the contours. Figure 4-5 shows a few plots of the weight tracks originating from different initial values over the contours of the performance surface. In every case, the algorithm converged to the saddle point in a stable manner. Note that the misadjustment in each case is almost the same.

Figure 4-5. Contour plot with weight tracks for different initial values for the weights.

Finally, to quantify the effect of reducing the SNR, we repeated the experiment with 0dB SNR. Figure 4-6 (left) shows the weight tracks over the contour, and we can see that the misadjustment has increased owing to the decrease in the SNR. This is a typical


phenomenon observed with most stochastic gradient algorithms. However, the misadjustment is proportional to the step-size. Therefore, by using smaller step-sizes, the misadjustment can be controlled to be within acceptable values. The drawback is slow convergence to the optimal solution. Figure 4-6 (right) shows the weight tracks when the algorithm is used without the sign information for the step-size. Note that convergence is not achieved in this case, which substantiates our previous argument that a fixed-sign step-size will never converge to a saddle point.

Figure 4-6. Contour plot with weight tracks for the EWC-LMS algorithm with sign information (left) and without sign information (right).

To further substantiate this fact, we removed the noise from the input and ran the EWC-LMS algorithm with and without the sign term. Figure 4-7 (left) shows the noise-free EWC performance surface, and Figure 4-7 (right) shows the weight tracks (with and without the sign information) on the contours. Clearly, the weights do not converge to the desired saddle point even in the absence of noise. On


the other hand, using the sign information leads the weights to the saddle point in a stable manner. Since this is the noise-free case, the final misadjustment becomes zero.

Figure 4-7. EWC performance surface (left) and weight tracks for the noise-free case with and without sign information (right).

Inverse Modeling and Controller Design Using EWC

System identification is the first step in the design of an inverse controller. Specifically, we wish to design a system that controls the plant to produce a predefined output. Figure 4-8 shows a block diagram of model-reference inverse control [136]. In this case, the adaptive controller is designed so that the controller-plant pair tracks the response generated by the reference model for any given input (command). Clearly, we require the plant parameters (which are typically unknown) to devise the controller. Once we have a model for the plant, the controller can be easily designed using conventional MSE minimization techniques. In this example, we will assume that the plant is an all-pole system with transfer function $P(z) = 1/(1 - 0.8z^{-1} + 0.5z^{-2} - 0.3z^{-3})$. The reference model is chosen to be an FIR filter with 5 taps. The block diagram for the plant identification is shown in Figure 4-9. Notice that the output of the plant is corrupted


with additive white noise due to measurement errors. The SNR at the plant output was set to 0dB. We then ran the EWC-LMS and LMS algorithms to estimate the model parameters given the noisy input and desired signals. The model parameters thus obtained were used to derive the controller (see Figure 4-8) using standard backpropagation of error. We then tested the adaptive controller-plant pair for trajectory tracking by feeding it a nonlinear time series and observing the responses. Ideally, the controller-plant pair must follow the trajectory generated by the reference model.

Figure 4-8. Block diagram for model-reference inverse control.

Figure 4-9. Block diagram for inverse modeling.

Figure 4-10 (top) shows the tracking results for both controller-plant pairs along with the reference output. Figure 4-10 (bottom) shows a histogram of the tracking errors. Note that the errors with the EWC-LMS controller are all concentrated around zero, which is desirable. In contrast, the errors produced with the MSE-based controller are significant,


and this can be worse if the SNR levels drop further.

Figure 4-10. Plot of tracking results and error histograms.

Figure 4-11. Magnitude and phase responses of the reference model and the designed model-controller pairs.

Figure 4-11 shows the magnitude and phase responses of the reference model along with the generated controller-model


pairs. Note that the EWC controller-model pair matches the desired transfer function very closely, whereas the MSE controller-model pair produces a significantly different transfer function. This clearly demonstrates the advantages offered by EWC. More details on the applications of EWC-LMS to system identification and controller design problems can be found in [137-139].

Summary

In this chapter, we proposed online, sample-by-sample stochastic gradient algorithms for estimating the optimal AEC solution. The detailed derivations of the update rules were presented, and the convergence was proved rigorously using stochastic approximation theory. We also derived the step-size upper bounds for convergence with probability one. Further, the theoretical upper bound on the excess error correlation in the case of EWC-LMS was derived. The AEC stochastic algorithms include the LMS algorithm for MSE as a special case. Owing to the complexities of the EWC performance surface (see Chapter 2), additional information, such as the sign of the instantaneous cost, is required for guaranteed convergence to the unique optimal AEC solution. In this context, the AEC optimization problem can be pursued as a root-finding problem, and the popular Robbins-Monro method [140] can be adopted to solve for the optimal solution. We have not yet explored this method for the AEC criterion.

We also presented several variants of the AEC-LMS algorithm. As a special case, the normalized AEC-LMS algorithm in equation (4.40) reduces to the well-known NLMS algorithm for MSE. The gradient-normalized AEC-LMS algorithm in equation (4.41) has shown better performance than the simple AEC-LMS algorithm in our simulation studies.

We then presented simulation results to show the noise-rejection capability of the EWC-LMS algorithm. Experiments were also conducted to verify some of the properties


of the proposed gradient algorithms. In particular, we observed the weight tracks and verified that the algorithm converges in a stable manner even to saddle stationary points. This is achieved mainly by utilizing the sign information in the gradient update. We also showed that the amount of misadjustment can be controlled by the step-size parameter. This is in conformance with the general theory behind stochastic gradient algorithms.

Lastly, we demonstrated the application of EWC in the design of a model-reference inverse controller. We compared the performance of the EWC controller with the MSE-derived controller and verified the superiority of the former.


CHAPTER 5
LINEAR PARAMETER ESTIMATION IN CORRELATED NOISE

Introduction

In the previous chapters, we discussed a new criterion, titled the augmented error criterion (AEC), that can potentially replace the popular MSE criterion. In fact, we showed that a special case of the AEC, called the error whitening criterion (EWC), can solve the problem of estimating the parameters of a linear system in the presence of input noise. We showed extensive simulation results with different EWC adaptation algorithms that proved beyond doubt the usefulness of this criterion in solving system identification and controller design problems.

Two crucial assumptions were made in the theory behind the error whitening criterion. First, we assumed that the input noise is uncorrelated with itself, i.e., white. Although we assume white noise in most problems, this assumption can certainly be restrictive in many applications. From the theory discussed in the previous chapters, it is easy to conclude that EWC fails to remove the bias in the parameter estimates when the noise is correlated, or colored. Second, we assumed full knowledge of the model order of the unknown system. This is not unique to the proposed method, as most of the competing methods, including Total Least-Squares (TLS), assume an exact model order. To the best of our knowledge, there is no existing solution to the problem of system identification in the presence of input noise when the model order is unknown. However, up to this point, we have not dealt with the implications of using the proposed EWC when the


model order is not known. We will address this important issue in the next chapter. In this chapter, we will focus on solving the problem of linear parameter estimation in the presence of correlated noise in the input and desired data. Most of the material covered in this chapter can be found in [141].

Existing Solutions

The problem of parameter estimation with noisy data has been a well-researched topic, although the solutions are not satisfactory. First and foremost, the MSE criterion does not provide accurate estimates in the presence of correlated noise. The regular TLS method assumes i.i.d. noise and hence fails when the noise is correlated. Further, there is the additional restriction of having equal noise variances in the input and desired data. The extension of TLS called Generalized TLS (GTLS), discussed in Chapter 1, can be used for the correlated-noise case. However, a major drawback is that we require exact knowledge of the noise covariance matrix. Regalia gave a conceptual treatment of IIR filter estimation based on equation-error techniques, with the monic constraint replaced by a unit-norm constraint [92]. Douglas et al. extended this work to the colored-noise case in [142]. However, these methods require estimation of the noise covariances from the data, which is again not feasible. The Instrumental Variables (IV) method is traditionally limited to white noise. Extensions to the colored-noise case are usually accomplished by introducing whitening filters [93]. The usual approach is to assume that the correlated noise is produced by filtering a white process through an AR model with known order. The problem then reduces to finding more parameters than usual (the AR model parameters plus the actual system parameters), assuming white noise in the input. However, there are many loopholes in this technique. Most importantly, it is impossible to determine the exact order of the noise AR process.


Criterion for Estimating the Parameters in Correlated Noise

Our goal is to propose a method to estimate the unknown system parameters without computing the input noise covariance matrices. We will approach this problem in two steps. In the first step, we will allow the noise in the input to be correlated, but we will restrict the noise on the desired signal to be white. The reasoning behind this step will become clear later. Once the algorithm is presented, we will introduce modifications that enable us to remove the whiteness restriction on the noise in the desired signal. Further, in this research, we have restricted the unknown linear systems to be FIR filters. Generalizations to IIR filter estimation and the associated stability issues are topics for further research.

A traditional setting of the system identification problem is shown in Figure 5-1. Suppose a noisy training data pair $(\hat{\mathbf x}_k, \hat d_k)$ is provided, where $\hat{\mathbf x}_k = \mathbf x_k + \mathbf v_k$ and $\hat d_k = d_k + u_k$, with $\mathbf x_k$ the noise-free input vector at discrete time index $k$, $\mathbf v_k$ the additive noise vector with arbitrary covariance $\mathbf V = E[\mathbf v_k\mathbf v_k^T]$ on the input, $d_k$ the noise-free desired signal, and $u_k$ the additive white noise on the desired signal. We further assume that the noises $\mathbf v_k$ and $u_k$ are independent of the data pair and also independent of each other. Let the weight vector (filter) that generated the noise-free data pair $(\mathbf x_k, d_k)$ be $\mathbf w_T$, of dimension $N$. We will assume that the length of $\mathbf w$, the estimated weight vector, is also $N$ (the sufficient-order case). Then, the error sample $\hat e_k$ is simply given by $\hat e_k = \hat d_k - \mathbf w^T\hat{\mathbf x}_k$. Consider the cost function in equation (5.1):

$$J(\mathbf w) = \sum_{\Delta=1}^{N}\Big|E\big[\hat e_k\hat d_{k-\Delta} + \hat e_{k-\Delta}\hat d_k\big]\Big| \qquad (5.1)$$


Figure 5-1. System identification block diagram showing data signals and noise.

Consider a single term in the above equation. It is easy to see that the cross products $E[\hat e_k\hat d_{k-\Delta}]$ and $E[\hat e_{k-\Delta}\hat d_k]$ are given by

$$E[\hat e_k\hat d_{k-\Delta}] = \mathbf w_T^T E[\mathbf x_k\mathbf x_{k-\Delta}^T]\,\mathbf w_T - \mathbf w^T E[\mathbf x_k\mathbf x_{k-\Delta}^T]\,\mathbf w_T + E[u_ku_{k-\Delta}]$$
$$E[\hat e_{k-\Delta}\hat d_k] = \mathbf w_T^T E[\mathbf x_{k-\Delta}\mathbf x_k^T]\,\mathbf w_T - \mathbf w^T E[\mathbf x_{k-\Delta}\mathbf x_k^T]\,\mathbf w_T + E[u_ku_{k-\Delta}] \qquad (5.2)$$

If we assume that the noise $u_k$ is white, then $E[u_ku_{k-\Delta}] = 0$, and (5.2) reduces to functions of only the clean input and the weights. The input noise never multiplies itself; hence it gets eliminated. Further, the cost function in (5.1) simplifies to

$$J(\mathbf w) = \sum_{\Delta=1}^{N}\big|\mathbf w_T^T\mathbf R_\Delta\mathbf w_T - \mathbf w^T\mathbf R_\Delta\mathbf w_T\big| \qquad (5.3)$$

where the matrix $\mathbf R_\Delta$ is

$$\mathbf R_\Delta = E\big[\mathbf x_k\mathbf x_{k-\Delta}^T + \mathbf x_{k-\Delta}\mathbf x_k^T\big] \qquad (5.4)$$

The matrix $\mathbf R_\Delta$ is symmetric but indefinite, and hence it can have mixed eigenvalues. Also, observe that the cost function in (5.3) is linear in the weights $\mathbf w$. If, for instance, we had a single term in the summation and we forced $J(\mathbf w) = 0$, then it is easy to see that one of the solutions for $\mathbf w$ would be the true parameter vector $\mathbf w_T$.
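To make the criterion concrete, the following snippet estimates the cost (5.1) from finite noisy data by replacing the expectations with sample averages. The helper name and the array layout are assumptions of this sketch.

```python
import numpy as np

def aec_corr_cost(w, x_hat, d_hat, N):
    """Sample estimate of the cost (5.1): sum over lags Delta of
    |E[e_k d_{k-Delta} + e_{k-Delta} d_k]|, using noisy data only.

    w : candidate weight vector (length N)
    x_hat, d_hat : noisy input and desired sequences (1-D arrays)
    """
    K = len(d_hat)
    # error sequence e_k = d_k - w^T x_k from tapped delay lines
    X = np.array([x_hat[k - N + 1:k + 1][::-1] for k in range(N - 1, K)])
    e = d_hat[N - 1:] - X @ w
    d = d_hat[N - 1:]
    J = 0.0
    for delta in range(1, N + 1):
        cross = np.mean(e[delta:] * d[:-delta] + e[:-delta] * d[delta:])
        J += abs(cross)
    return J
```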


However, when the number of terms in the summation becomes equal to the length of our estimated filter, there is always a unique solution for $\mathbf w$, which is the true vector $\mathbf w_T$.

Lemma 1: For suitable choices of lags, there is a unique solution $\mathbf w^*$ of the equation $J(\mathbf w^*) = 0$, and $\mathbf w^* = \mathbf w_T$.

Proof: For $J(\mathbf w) = 0$, the term $\mathbf w_T^T\mathbf R_\Delta\mathbf w_T - \mathbf w^T\mathbf R_\Delta\mathbf w_T$ must be zero for all selected $\Delta$. For simplicity, assume $\Delta = 1,\dots,N$. Therefore, we have $N$ linear equations in $\mathbf w$, given by $\mathbf w^T\mathbf R_\Delta\mathbf w_T = \mathbf w_T^T\mathbf R_\Delta\mathbf w_T$. This system of equations can be compactly written as

$$\begin{bmatrix}\mathbf w_T^T\mathbf R_1\\ \mathbf w_T^T\mathbf R_2\\ \vdots\\ \mathbf w_T^T\mathbf R_N\end{bmatrix}\mathbf w = \begin{bmatrix}2E[d_kd_{k-1}]\\ 2E[d_kd_{k-2}]\\ \vdots\\ 2E[d_kd_{k-N}]\end{bmatrix} \qquad (5.5)$$

If the rows of the composite matrix on the left of $\mathbf w$ in (5.5) are linearly independent (a full-rank matrix), then there is a unique inverse, and hence $J(\mathbf w) = 0$ has a unique solution. We will prove by contradiction that this unique solution has to be $\mathbf w_T$. Let the true solution be $\mathbf w^* = \mathbf w_T + \boldsymbol\epsilon$. Then $J(\mathbf w^*) = 0$ implies $\boldsymbol\epsilon^T\mathbf R_\Delta\mathbf w_T = 0$ for all $\Delta$, which is possible only when $\boldsymbol\epsilon = \mathbf 0$, and this completes the proof.

Note that each term inside the summation in equation (5.1) can be perceived as a constraint on the cross-correlation between the desired response and the error signal. By forcing these sums of cross-correlations at $N$ different lags to simultaneously approach zero, we can obtain an unbiased estimate of the true filter.

The optimal solution for the proposed criterion, in terms of the noisy input and desired responses, is given in (5.6); each row of the composite matrix can be estimated using simple correlators having linear complexity.


$$\mathbf w^* = \begin{bmatrix} E[\hat d_k\hat{\mathbf x}_{k-1}^T + \hat d_{k-1}\hat{\mathbf x}_k^T]\\ E[\hat d_k\hat{\mathbf x}_{k-2}^T + \hat d_{k-2}\hat{\mathbf x}_k^T]\\ \vdots\\ E[\hat d_k\hat{\mathbf x}_{k-N}^T + \hat d_{k-N}\hat{\mathbf x}_k^T]\end{bmatrix}^{-1}\begin{bmatrix}2E[\hat d_k\hat d_{k-1}]\\ 2E[\hat d_k\hat d_{k-2}]\\ \vdots\\ 2E[\hat d_k\hat d_{k-N}]\end{bmatrix} \qquad (5.6)$$

Also, a recursive relationship for the evolution of this matrix over the iterations can easily be derived. However, this recursion does not involve simple reduced-rank updates, and hence it is not possible to use the convenient matrix inversion lemma [8] to efficiently reduce the complexity of the matrix inversion. The overall complexity of the recursive solution in equation (5.6) is $O(N^3)$. This motivates the development of a low-cost stochastic algorithm to compute and track the optimal solution given by equation (5.6). The derivation of the stochastic gradient algorithm is similar to that of the AEC-LMS algorithm.

Stochastic Gradient Algorithm and Analysis

Taking the expectation operator out of the cost function in (5.1), we obtain an instantaneous cost given by

$$J(\mathbf w) = \sum_{\Delta=1}^{N}\big|\hat e_k\hat d_{k-\Delta} + \hat e_{k-\Delta}\hat d_k\big| \qquad (5.7)$$

Again, we want to find the minimum of this cost function. Notice that the direction of the stochastic gradient of (5.7) depends on the instantaneous cost itself. The resulting weight update equation is given by

$$\mathbf w_{k+1} = \mathbf w_k + \eta\sum_{\Delta=1}^{N}\mathrm{sgn}\big(\hat e_k\hat d_{k-\Delta} + \hat e_{k-\Delta}\hat d_k\big)\big(\hat d_{k-\Delta}\hat{\mathbf x}_k + \hat d_k\hat{\mathbf x}_{k-\Delta}\big) \qquad (5.8)$$

where $\eta > 0$ is a small step-size. The step-size has been chosen to be a constant in the above update equation; however, we can also use a time-varying step-size, as before.
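A minimal sketch of the update (5.8) is given below, with the $N$ lag constraints accumulated inside the inner loop; the $O(N^2)$ cost per sample is visible in the nested loops. All names and the step-size value are placeholders.

```python
import numpy as np

def corr_lms(x_hat, d_hat, N, eta=1e-5):
    """Stochastic gradient update (5.8) for correlated input noise and
    white noise on the desired signal (sketch)."""
    w = np.zeros(N)
    for k in range(2 * N, len(x_hat)):
        xk = x_hat[k - N + 1:k + 1][::-1]
        ek = d_hat[k] - w @ xk
        step = np.zeros(N)
        for delta in range(1, N + 1):
            xd = x_hat[k - delta - N + 1:k - delta + 1][::-1]
            ed = d_hat[k - delta] - w @ xd
            z = ek * d_hat[k - delta] + ed * d_hat[k]   # instantaneous constraint
            step += np.sign(z) * (d_hat[k - delta] * xk + d_hat[k] * xd)
        w = w + eta * step
    return w
```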


Owing to the presence of multiple terms (constraints) in the gradient, the complexity of the update is $O(N^2)$, which is higher than that of regular LMS-type stochastic updates. However, the complexity is lower than that of the recursive solution given by (5.6). We will now briefly discuss the convergence of this stochastic gradient algorithm to the optimal solution, in both the noisy and the noise-free scenarios.

Theorem 5.1: In the noise-free case, (5.8) converges to the stationary point $\mathbf w^* = \mathbf w_T$, provided that the step-size satisfies the following inequality at every iteration:

$$0 < \eta \le \frac{2\,J(\mathbf w_k)}{\|\nabla J(\mathbf w_k)\|^2} \qquad (5.9)$$

Proof: It is obvious from the previous discussion that the cost function in (5.7) has a single stationary point $\mathbf w^* = \mathbf w_T$. The weight update becomes zero only when the cost goes to zero, thereby zeroing the gradient. Consider the weight error vector defined as $\boldsymbol\epsilon_k = \mathbf w^* - \mathbf w_k$. From (5.8), we get

$$\boldsymbol\epsilon_{k+1} = \boldsymbol\epsilon_k - \eta\sum_{\Delta=1}^{N}\mathrm{sgn}\big(e_kd_{k-\Delta} + e_{k-\Delta}d_k\big)\big(d_{k-\Delta}\mathbf x_k + d_k\mathbf x_{k-\Delta}\big) \qquad (5.10)$$

Taking the norm of this error vector on both sides gives

$$\|\boldsymbol\epsilon_{k+1}\|^2 = \|\boldsymbol\epsilon_k\|^2 - 2\eta\sum_{\Delta=1}^{N}\mathrm{sgn}\big(e_kd_{k-\Delta}+e_{k-\Delta}d_k\big)\,\boldsymbol\epsilon_k^T\big(d_{k-\Delta}\mathbf x_k + d_k\mathbf x_{k-\Delta}\big) + \eta^2\|\nabla J(\mathbf w_k)\|^2 \qquad (5.11)$$

Observe that in the noiseless case, $\boldsymbol\epsilon_k^T\mathbf x_k = e_k$ and $\boldsymbol\epsilon_k^T\mathbf x_{k-\Delta} = e_{k-\Delta}$. Hence (5.11) can be simplified to

$$\|\boldsymbol\epsilon_{k+1}\|^2 = \|\boldsymbol\epsilon_k\|^2 - 2\eta\,J(\mathbf w_k) + \eta^2\|\nabla J(\mathbf w_k)\|^2 \qquad (5.12)$$

If we let the error vector norm decay asymptotically by forcing $\|\boldsymbol\epsilon_{k+1}\|^2 \le \|\boldsymbol\epsilon_k\|^2$, we obtain the bound in (5.9). The error vector will eventually converge to zero by design, and since the gradient becomes null at the true solution, $\lim_{k\to\infty}\|\boldsymbol\epsilon_k\|^2 = 0$ and hence $\lim_{k\to\infty}\mathbf w_k = \mathbf w^* = \mathbf w_T$. This completes the proof.
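Since both the instantaneous cost and its gradient are available at each iteration, the bound (5.9) can be monitored online. A sketch of such a helper (hypothetical name) is shown below; it could be combined with the update loop of the earlier sketch to clip the step-size at every iteration.

```python
import numpy as np

def step_size_bound(J_inst, grad):
    """Per-iteration step-size bound of Theorem 5.1, eq. (5.9):
    0 < eta <= 2 J(w_k) / ||grad J(w_k)||^2.

    J_inst : instantaneous cost value, grad : its gradient vector.
    """
    g2 = float(grad @ grad)
    return 2.0 * J_inst / g2 if g2 > 0 else np.inf

# usage sketch: eta_k = min(eta, step_size_bound(J_inst, grad))
```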


Theorem 5.2: In the noisy-data case, the stochastic algorithm in (5.8) converges to the stationary point $\mathbf w^* = \mathbf w_T$ in the mean, provided that the step-size is bounded by the inequality

$$0 < \eta \le \frac{2\sum_{\Delta=1}^{N}\big|E\big[\hat e_k\hat d_{k-\Delta} + \hat e_{k-\Delta}\hat d_k\big]\big|}{E\|\nabla J(\mathbf w_k)\|^2} \qquad (5.13)$$

Proof: Again, the facts about the uniqueness of the stationary point and its equality to the true filter hold in the noisy-data case as well. This theorem proves the convergence to this stationary point in a stable manner. Following the same steps as in the proof of the previous theorem, the dynamics of the error vector norm can be determined from the difference equation

$$\|\boldsymbol\epsilon_{k+1}\|^2 = \|\boldsymbol\epsilon_k\|^2 - 2\eta\sum_{\Delta=1}^{N}\mathrm{sgn}\big(\hat z_k(\Delta)\big)\,\boldsymbol\epsilon_k^T\big(\hat d_{k-\Delta}\hat{\mathbf x}_k + \hat d_k\hat{\mathbf x}_{k-\Delta}\big) + \eta^2\|\nabla J(\mathbf w_k)\|^2 \qquad (5.14)$$

where $\hat z_k(\Delta) = \hat e_k\hat d_{k-\Delta} + \hat e_{k-\Delta}\hat d_k$. Applying the expectation operator to both sides of (5.14), and letting $E\|\boldsymbol\epsilon_{k+1}\|^2 \le E\|\boldsymbol\epsilon_k\|^2$ as in the previous case, results in the following inequality:

$$\eta\,E\|\nabla J(\mathbf w_k)\|^2 \le 2\,E\Big[\sum_{\Delta=1}^{N}\mathrm{sgn}\big(\hat z_k(\Delta)\big)\,\boldsymbol\epsilon_k^T\big(\hat d_{k-\Delta}\hat{\mathbf x}_k + \hat d_k\hat{\mathbf x}_{k-\Delta}\big)\Big] \qquad (5.15)$$

Simplifying further, we get

$$\eta\,E\|\nabla J(\mathbf w_k)\|^2 \le 2\,E\Big[\sum_{\Delta=1}^{N}\big|\hat e_k\hat d_{k-\Delta} + \hat e_{k-\Delta}\hat d_k\big|\Big] \qquad (5.16)$$

Using Jensen's inequality, (5.16) can be reduced further to yield a more conservative (sufficient) upper bound on the step-size:


$$\eta\,E\|\nabla J(\mathbf w_k)\|^2 \le 2\sum_{\Delta=1}^{N}\big|E\big[\hat e_k\hat d_{k-\Delta} + \hat e_{k-\Delta}\hat d_k\big]\big| \qquad (5.17)$$

Notice that the RHS of (5.17) now resembles the cost function in (5.1). Rearranging the terms, we get the upper bound in (5.13).

The important point is that the bound is practical, as it can be numerically computed without any knowledge of the actual filter or the noise statistics. Further, the upper bound itself can be included in the update equation, resulting in a normalized stochastic gradient algorithm. In general, normalization can improve the speed of convergence.

Simulation Results

We will show simulation results with correlated input noise and white noise in the desired data. The framework for the simulation study is the same as the one used for AEC.

System Identification with the Analytical Solution

The experimental setup is similar to the block diagram shown in Figure 5-1. We generated 50,000 samples of a correlated, clean input signal and passed it through an unknown random FIR filter to create a clean desired signal. Gaussian random noise was passed through a random coloring filter (an FIR filter with 400 taps) and then added to the clean input signal. Three different input SNR values (5, 0, and −10dB) and three different true filter lengths (5, 10, and 15 taps) were used in the experiment. For each combination of SNR value and number of taps, 100 Monte Carlo runs were performed. In each trial, a different random coloring filter as well as different input/desired data were generated. We computed the Wiener solution for MSE as well as the optimal solution given by equation (5.6). The performance measure chosen for the comparison was the error vector norm given by

$$\text{error norm} = 20\log_{10}\big\|\mathbf w_T - \mathbf w^*\big\| \qquad (5.18)$$

where $\mathbf w^*$ is the optimal solution estimated from the samples and $\mathbf w_T$ is the true weight vector.
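The analytical solution (5.6) and the performance measure (5.18) can be computed directly from data by replacing expectations with sample means. The function names and the array layout below are assumptions of this sketch.

```python
import numpy as np

def corr_analytic_solution(x_hat, d_hat, N):
    """Sample-based version of the analytical solution (5.6).

    Row Delta of the composite matrix estimates
    E[d_k x_{k-Delta}^T + d_{k-Delta} x_k^T]; the right-hand side stacks
    2 E[d_k d_{k-Delta}].
    """
    K = len(d_hat)
    X = np.array([x_hat[k - N + 1:k + 1][::-1] for k in range(N - 1, K)])
    d = d_hat[N - 1:]
    A = np.zeros((N, N))
    b = np.zeros(N)
    for delta in range(1, N + 1):
        A[delta - 1] = np.mean(d[delta:, None] * X[:-delta]
                               + d[:-delta, None] * X[delta:], axis=0)
        b[delta - 1] = 2.0 * np.mean(d[delta:] * d[:-delta])
    return np.linalg.solve(A, b)

def error_norm_db(w_true, w_est):
    """Error vector norm (5.18) in dB."""
    return 20.0 * np.log10(np.linalg.norm(w_true - w_est))
```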


Figure 5-2. Histogram plots showing the error vector norm in dB for the proposed and MSE criteria.

Figure 5-2 shows the histograms of the error vector norms for the proposed method as well as MSE. The inset plots in the figure summarize the histograms for each method. Clearly, the performance of the new criterion is superior in every experiment, given the fact that the criterion neither requires any knowledge of the noise statistics nor tries to estimate them from the data.


System Identification with the Stochastic Gradient Algorithm

We will use the stochastic gradient algorithm given by equation (5.8) to identify the parameters of an FIR filter in the presence of correlated input noise. A random four-tap FIR filter was chosen as the true system. The input SNR (colored noise) was fixed at 5dB, and the output SNR (white noise) was chosen to be 10dB. The step-sizes for the proposed method and the classical LMS algorithm were fixed at 1e-5 and 8e-4, respectively. One hundred Monte Carlo runs were performed, and the averaged weight tracks over the iterations are plotted for both algorithms in Figure 5-3. Note that our method gives a better estimate of the true parameters (shown by the square markers) than the LMS algorithm. The weight tracks of the proposed gradient method are noisier compared with those of LMS. One of the difficulties with the stochastic gradient method is the right selection of the step-size. We have observed that in cases where the noise levels are very high, we require a very small step-size, and hence the convergence time can be long. Additional gradient normalization can be done to speed up the convergence. Also, the shape of the performance surface depends on the correlations of the input and desired signals at different lags. If the performance surface is relatively flat around the optimal solution, we have observed that including a trivial momentum term in the update equation increases the speed of convergence.

Verification of the Local Stability of the Gradient Algorithm

In order to verify the local stability of the stochastic algorithm, we performed another experiment. This time, the four taps of the true FIR system were [0.5, −0.5, 1, −1]. The initial weights for both LMS and the gradient algorithm in (5.8) were set to the true weights. Both the input and output SNR levels were kept at 10dB, and the step-sizes were the same as in the previous experiment. Ideally, the algorithm in (5.8) should not move away


from this solution, as it is the global optimum.

Figure 5-3. Weight tracks for LMS and the stochastic gradient algorithm in the system identification example.

Figure 5-4 shows the weight tracks for LMS and the proposed gradient algorithm. Notice that LMS diverges from this point immediately and converges to a wrong (biased) solution.

Figure 5-4. Weight tracks for LMS and the stochastic gradient algorithm showing stability around the optimal solution.

In comparison, the proposed algorithm shows very little displacement from the optimal solution (a stable stationary


point). This proves that the algorithm is stable around the optimal solution and, in effect, does not diverge from this point.

So far, we have dealt with the problem of parameter estimation with correlated input noise and white noise in the desired signal. We will now go beyond this limitation and propose an extension of the stochastic algorithm in (5.8) that handles correlated noise in both the input and desired data.

Extensions to Correlated Noise in the Desired Data

There are a couple of approaches to solving this problem. We can assume that the noise in the desired data is generated by filtering a white process with an AR model (whose order is assumed to be known). This approach is similar to the IV method for colored noise. Once this assumption holds, the problem reduces to the case with correlated input noise and white noise in the desired signal. The algorithm in (5.8) can then be adopted to estimate the parameters. Notice that, in the process, we would be computing the AR coefficients that model the noise in the desired data as well. However, it is intuitive enough to foresee failures with this approach, as the AR modeling assumption is too strong. For this reason, we will not pursue this approach further.

In order to motivate the second approach, we will first show why the algorithm in (5.8) cannot be used when the noise in the desired data, $u_k$, is correlated. Adding the two terms $E[\hat e_k\hat d_{k-\Delta}]$ and $E[\hat e_{k-\Delta}\hat d_k]$, we get

$$E[\hat e_k\hat d_{k-\Delta}] + E[\hat e_{k-\Delta}\hat d_k] = \mathbf w_T^T\mathbf R_\Delta\mathbf w_T - \mathbf w^T\mathbf R_\Delta\mathbf w_T + 2E[u_ku_{k-\Delta}] \qquad (5.19)$$

Observe that the last term in (5.19), which is the noise correlation at lag $\Delta$, is not zero for colored $u_k$. Therefore, the previous algorithms (both recursive and stochastic) cannot be used unless the noise correlations are known. Since the correlation structure of the noise


is not known a priori, we will include an estimate of it in our solution. Realize that at the optimal solution $\mathbf w_T$, we can constrain the cost function such that the individual terms $E[\hat e_k\hat d_{k-\Delta}]$ and $E[\hat e_{k-\Delta}\hat d_k]$ converge to the estimated value of $2E[u_ku_{k-\Delta}]$. In order to do so, we have to modify the cost function to include additional penalty terms. Define

$$z_k(\Delta) = \hat e_k\hat d_{k-\Delta} + \hat e_{k-\Delta}\hat d_k \qquad (5.20)$$

The modified cost function is then given by

$$J(\mathbf w, \lambda_1,\dots,\lambda_N, \gamma_1,\dots,\gamma_N) = \sum_{\Delta=1}^{N} z_k^2(\Delta) + \sum_{\Delta=1}^{N}\lambda_\Delta\big[z_k(\Delta) - \gamma_\Delta\big]^2 - \frac{\rho}{2}\sum_{\Delta=1}^{N}\lambda_\Delta^2 + \kappa\sum_{\Delta=1}^{N}\gamma_\Delta^2 \qquad (5.21)$$

The first term, $\sum_\Delta z_k^2(\Delta)$, is similar to the original cost function in (5.1), except that the individual $z_k(\Delta)$ are squared instead of passed through the absolute-value operator. The second term is the constraint, with Lagrange multipliers $\lambda_\Delta$ defined for all lags from 1 to $N$. The variable $\gamma_\Delta$ is an estimate of the noise correlation $2E[u_ku_{k-\Delta}]$; ideally, it would be replaced by the true noise correlation if we had a priori knowledge of it. Notice that it is impossible to estimate the Lagrange multipliers directly by using the constraints and the gradient of (5.21). Therefore, we have to estimate these Lagrange multipliers adaptively as well. So, the problem now becomes one of estimating a larger set of parameters, $\{\mathbf w, \lambda_1,\dots,\lambda_N, \gamma_1,\dots,\gamma_N\}$, given the input and output data. Further, we have to explicitly constrain the values of $\lambda_\Delta$, $\Delta = 1,\dots,N$, so that they remain bounded from above. This can be achieved by including additional stabilization terms in the cost function. In (5.21), the third and fourth terms stabilize the Lagrange multipliers and the noise correlation estimates. The constants $\rho$ and $\kappa$ are positive real numbers that control the


stability. In vector-space optimization theory, this method of imposing penalty constraints is studied under the heading of Augmented Lagrangian techniques [16].

Now that we fully understand the structure of the cost function and the principles behind it, the next step is to derive the update rules to estimate the parameter set $\{\mathbf w, \lambda_1,\dots,\lambda_N, \gamma_1,\dots,\gamma_N\}$. For compactness, we will refer to the parameter set as $\{\mathbf w, \boldsymbol\lambda, \boldsymbol\gamma\}$, where $\boldsymbol\lambda = \{\lambda_1,\dots,\lambda_N\}$ and, similarly, $\boldsymbol\gamma = \{\gamma_1,\dots,\gamma_N\}$. We compute the following three gradients:

$$\frac{\partial J(\mathbf w,\boldsymbol\lambda,\boldsymbol\gamma)}{\partial\mathbf w} = -2\sum_{\Delta=1}^{N} z_k(\Delta)\big(\hat d_{k-\Delta}\hat{\mathbf x}_k + \hat d_k\hat{\mathbf x}_{k-\Delta}\big) - 2\sum_{\Delta=1}^{N}\lambda_\Delta\big[z_k(\Delta) - \gamma_\Delta\big]\big(\hat d_{k-\Delta}\hat{\mathbf x}_k + \hat d_k\hat{\mathbf x}_{k-\Delta}\big) \qquad (5.22)$$

$$\frac{\partial J(\mathbf w,\boldsymbol\lambda,\boldsymbol\gamma)}{\partial\lambda_\Delta} = \big[z_k(\Delta) - \gamma_\Delta\big]^2 - \rho\,\lambda_\Delta \qquad (5.23)$$

$$\frac{\partial J(\mathbf w,\boldsymbol\lambda,\boldsymbol\gamma)}{\partial\gamma_\Delta} = -2\lambda_\Delta\big[z_k(\Delta) - \gamma_\Delta\big] + 2\kappa\,\gamma_\Delta \qquad (5.24)$$

The optimum parameter set $\{\mathbf w^*, \boldsymbol\lambda^*, \boldsymbol\gamma^*\}$ can be obtained by minimization with respect to $\mathbf w$, maximization with respect to $\boldsymbol\lambda$, and minimization with respect to $\boldsymbol\gamma$. The resulting update equations are given below:

$$\mathbf w_{k+1} = \mathbf w_k - \eta_w\,\frac{\partial J}{\partial\mathbf w},\qquad \boldsymbol\lambda_{k+1} = \boldsymbol\lambda_k + \eta_\lambda\,\frac{\partial J}{\partial\boldsymbol\lambda},\qquad \boldsymbol\gamma_{k+1} = \boldsymbol\gamma_k - \eta_\gamma\,\frac{\partial J}{\partial\boldsymbol\gamma} \qquad (5.25)$$

In the above equations, the terms $\eta_w$, $\eta_\lambda$, $\eta_\gamma$ are small positive step-sizes. At the optimal solution, ideally, we would have $\gamma_\Delta^* = 2E[u_ku_{k-\Delta}]$, $\boldsymbol\lambda^* = \mathbf 0$, and $\mathbf w^* = \mathbf w_T$.
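A sketch of one per-lag iteration of (5.25) is given below, using the gradients (5.22)-(5.24) as reconstructed above. Since the stabilizer terms and constants are only partially specified in the text, their precise form here is an assumption of this sketch, as are all names and default values.

```python
import numpy as np

def aug_lagrangian_step(w, lam, gam, xk, xd, dk, dd,
                        eta_w=1e-4, eta_l=1e-3, eta_g=1e-3, rho=1.0, kap=1.0):
    """One per-lag step of the update (5.25) (sketch).

    xk, xd : current and lag-delta input vectors
    dk, dd : current and lag-delta desired samples
    lam, gam : multiplier and noise-correlation estimate for this lag
    """
    ek, ed = dk - w @ xk, dd - w @ xd
    z = ek * dd + ed * dk                                  # z_k(delta), eq. (5.20)
    u = dd * xk + dk * xd                                  # -dz/dw
    grad_w = -2.0 * z * u - 2.0 * lam * (z - gam) * u      # (5.22)
    w = w - eta_w * grad_w                                 # descent in w
    lam = lam + eta_l * ((z - gam) ** 2 - rho * lam)       # ascent in lambda, (5.23)
    gam = gam - eta_g * (-2.0 * lam * (z - gam) + 2.0 * kap * gam)  # descent in gamma, (5.24)
    return w, lam, gam
```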


A rigorous proof of convergence does not exist at this time, owing to the complexity of the cost function and the interactions between the various parameters. However, we have shown through Monte Carlo simulations that the proposed algorithm approximates the actual system parameters much better than the Wiener solution for MSE or other methods.

Experimental Results

System Identification

As we have done in the past, we will verify the working of the method on the problem of system identification. Consider an FIR system with 15 taps that needs to be estimated from a noisy input (SNR = −10dB) and a noisy output time series (SNR = 10dB) of length 20,000 samples. The noise terms are sufficiently colored and are assumed to be uncorrelated with each other as well as with the data. One hundred Monte Carlo runs were performed, and each time the optimal Wiener MSE solution was computed for the sake of comparison. Figure 5-5 shows the error histograms, computed as before, for the proposed method as well as the Wiener MSE solution. Clearly, the proposed method was able to better approximate the true parameters of the FIR system.

Stochastic Algorithm Performance

We will now show the performance of the stochastic gradient algorithm given by (5.25) in the presence of correlated input and output noise. Correlated noise signals were generated by randomly filtering a white process with a 1000-tap FIR filter. These were added to the clean input (sufficiently colored) and to the clean desired signal, which was generated by a 4-tap random FIR filter. The input and output SNR values were set to 0dB. The parameters $\rho$ and $\kappa$ were both set to unity. The step-sizes $\eta_w$, $\eta_\lambda$, $\eta_\gamma$ were 1e-4, 1e-3,


Figure 5-5. Histogram plots of the error norms for the proposed method and MSE.

and 1e-3, respectively. Figure 5-6 shows the averaged weight tracks over 20,000 samples. The actual FIR weights are indicated in the figure by asterisks (*). Notice that the weights estimated using the proposed stochastic algorithm converged to values very close to the true weights.

Summary

A serious limitation of the error whitening criterion discussed in the previous chapters was the assumption that the noise terms must be white. In this chapter, we presented an alternative criterion (extending the ideas of the error whitening criterion) that overcomes this limitation. In principle, the new criterion exploits the cross-correlation structure of the error and desired signals in a novel way that results in the noise terms dropping out of the optimal solution. We solved the


Figure 5-6. Weight tracks showing the convergence of the stochastic gradient algorithm.

problem of parameter estimation in the presence of colored noise in two steps. In the first step, we allowed only the input noise to be correlated. A new cost function was formulated, its analytical solution was derived, and we also proposed a stochastic gradient algorithm. Convergence to the optimal analytical solution was mathematically established.

To extend the criterion to handle colored noise in the desired data, we introduced penalty functions into the original cost. Correlators for estimating the noise correlations (of the noise in the desired signal) were embedded in the cost function. Principles from Augmented Lagrangian methods were utilized to derive the stochastic gradient algorithms.


We showed simulation results in a system identification framework and verified the superiority of the proposed algorithms over other methods. In the next chapter, we will address the important issue of the performance of these new criteria when the model order of the unknown system is not known a priori.


CHAPTER 6
ON UNDERMODELING AND OVERESTIMATION ISSUES IN LINEAR SYSTEM ADAPTATION

Introduction

Until now, the objective of this dissertation has been to propose new criteria for training linear adaptive systems with noisy data. Specifically, the error whitening criterion received the major focus, owing to its ability to compute unbiased parameter estimates of an unknown linear system from noisy data. In the development of the theory behind this criterion, we assumed that our estimated filter is longer (has more parameters) than the actual system. In a strict sense, we always assumed a sufficient order for the adaptive filter. Conventionally, system identification and model-order selection have been treated as two separate problems. In a majority of the existing work, identification has been based on the assumption that an appropriate model order has been obtained using approaches like the Akaike Information Criterion (AIC), Minimum Description Length (MDL), and many others [119,120]. However, these model-order determination methods require the data to be noise-free. In general, the problem of system identification with noisy data without a priori information about either the unknown system (model order, linear or nonlinear) or the noise statistics is ill-posed. In fact, even if we restrict the class of models to linear FIR filters, there are no methods that can accurately predict the model order from noisy data. Therefore, the sufficient-order assumption may be costly at times, depending on the application. Hence, it becomes imperative to study the performance of the criteria and the associated algorithms in


situations where the model order is not exactly known. That is precisely our goal in this chapter: to quantify the behavior of the new criteria proposed in the previous chapters in situations where the adaptive filter has fewer parameters (undermodeling) and also in cases where it has more parameters (overestimation).

Undermodeling Effects

We will focus on the issues with undermodeling first. Let us consider an unknown linear system with $N$ taps. Suppose we want to estimate the parameters of this system with an adaptive filter whose length is $M < N$, given noisy input and output data. The first obvious observation is that the adaptive system will never be able to approximate the true system exactly, as it does not have enough degrees of freedom. However, the interesting questions are as follows.

1. Will the reduced-order adaptive filter coefficients exactly match the first $M$ taps of the true system? If so, under what conditions will this happen?

2. In what sense will the reduced set of coefficients describe the true system?

3. As $M$ approaches $N$, will the adaptive system response get closer to the true system response?

4. How do the answers to the above questions change if there is noise in the data?

We will now answer the above questions with respect to the solutions obtained using both the MSE and EWC criteria.

Consider the noise-free data case with the MSE criterion for filter estimation. The classical Wiener equation gives the minimum-MSE solution. It is well known that exact coefficient matching will only occur when the input data is a white process [14]. It is very simple to prove this assertion mathematically. However, in practice, the input data is seldom white, and therefore exact coefficient matching will never occur. The coefficients of the reduced-order model will try to best approximate the actual system in


the mean-squared-error sense. As the length of the adaptive filter is increased, the match between the responses of the actual and estimated systems also improves. Further, the model mismatch between the actual system's response and the adaptive filter's response decreases monotonically with increasing filter length. This is a very desirable property of the Wiener solution. Another interesting aspect of the Wiener solution is the principle of orthogonality [14], which states that the error signal is orthogonal to the input. Mathematically, this means

$$E[e_k\mathbf x_k] = \mathbf 0 \qquad (6.1)$$

Assuming zero-mean data, the above equation also implies that the error signal and the input are uncorrelated. In the sufficient-model-order case, (6.1) is true for all lags of the cross-correlation between the error and the input. In the case of undermodeling, (6.1) holds only for lags smaller than the filter length $M$.

Now let us consider the noisy-data case. Once again, there is no exact matching of coefficients between the actual system and the estimated filter. Further, we know that the Wiener solution produces a biased estimate with noisy data. Moreover, this bias will not decrease with increasing filter length; in fact, it is possible for the bias to increase as the number of coefficients increases. Although the mean-squared error decreases monotonically with increasing $M$, the model mismatch does not. Thus, this nice property of the Wiener-MSE solution no longer holds in the presence of noise. Yet another downfall of the Wiener solution is the fact that it changes with changing noise variance. This can lead to potentially severe consequences, especially in the design of inverse controllers.

We will now address the questions put forth earlier for the case when the parameter estimates are obtained with EWC. As in the case of the Wiener solution, the optimal EWC solution


does not show exact parameter matching for real-world data. Further, even in the case of undermodeling, the optimal EWC solution will tend to partially whiten the error signal. Thus, the error-whitening property also extends to the undermodeling scenario. However, the caveat is that the error correlation is zero (or very close to zero) only at the specified lag. The uncorrelatedness of the error signal holds for all lags $L > M$ only when $M > N$. Therefore, as the length $M$ approaches the true filter length, the higher-order error correlations tend to get smaller. However, there is no guarantee of a monotonic reduction in the error correlations with increasing $M$. Recall that, in comparison with the orthogonality principle of the Wiener solution, the EWC solution satisfies the generalized orthogonality principle stated below:

$$E\big[(e_k - e_{k-L})(\mathbf x_k - \mathbf x_{k-L})\big] = \mathbf 0 \qquad (6.2)$$

Just like the Wiener solution, this property holds only for the specified lag $L$ in the undermodeling scenario. Further, the result becomes true for all lags when the filter order is increased beyond $N$.

Now consider the noisy-data case. We have shown before that the optimal EWC solution will be unbiased only when the length of the estimated filter is greater than or equal to that of the true filter. However, EWC will still try to decorrelate the error at the specified lag, while the error correlations at the higher lags remain non-zero. When the length of the filter is increased, the values of the error correlations at the higher lags decay to zero, and the model mismatch also decreases. This is a very nice property of the EWC solution, one that can perhaps be exploited to determine the correct model order of an unknown system. As the filter order is increased, EWC makes the error orthogonal to the input at all lags greater than the chosen lag $L$. This is again another property of the EWC solution that is not matched by the Wiener solution.
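The generalized orthogonality condition (6.2) is easy to test empirically: for a trained weight vector, the sample estimate of $E[(e_k - e_{k-L})(\mathbf x_k - \mathbf x_{k-L})]$ should be close to the zero vector at the chosen lag. A sketch with illustrative names follows.

```python
import numpy as np

def generalized_orthogonality(w, x, d, L, m):
    """Sample check of the generalized orthogonality condition (6.2).

    Returns the norm of the estimated cross-correlation vector, which
    should be near zero at the EWC solution for the chosen lag L.
    """
    K = len(d)
    X = np.array([x[k - m + 1:k + 1][::-1] for k in range(m - 1, K)])
    e = d[m - 1:] - X @ w
    de = e[L:] - e[:-L]                 # ė_k = e_k - e_{k-L}
    dX = X[L:] - X[:-L]                 # ẋ_k = x_k - x_{k-L}
    return np.linalg.norm(np.mean(de[:, None] * dX, axis=0))
```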


Perhaps the most important aspect of the EWC solution is the fact that it does not change with changes in the noise variance. Thus, we conclude that most of the nice properties of the Wiener MSE solution are carried over by the optimal EWC solution (in a slightly different framework), even in the presence of input noise.

Overestimation Effects

The discussion of overestimation effects will be brief, as most of the details are similar to those of undermodeling. Overestimation refers to the case when the adaptive filter length is greater than that of the actual filter, i.e., $M > N$. One of the major ill effects of overestimation is poor generalization, even if we restrict the class of systems to simple linear FIR filters.

Consider the Wiener solution with noise-free data. The weights of the Wiener solution will contain exactly $M - N$ zeros, which is very good from the point of view of generalization. However, with noise in the input, the Wiener MSE solution will produce non-zero coefficients with a noise-induced bias in every coefficient. This is because the additional weights (in fact, the overall weight vector) try to learn the noise in the data. Thus, generalization suffers, as the estimated parameters bear no physical relation to the actual system coefficients.

In the case of EWC, we have proved that the optimal EWC solution will be unbiased in the case of overestimation. Further, we even showed that the unwanted taps are automatically zeroed out by EWC. This is a remarkable feature of the optimal EWC solution. Further, as we said before, the solution never depends on the variance of the noise. We will show some simulation results highlighting some of the interesting aspects discussed for the undermodeling and overestimation scenarios.


Experimental Results

Consider an undermodeling system identification scenario. The unknown system was an FIR filter with 4 taps (model order 3). We used both the Wiener solution and the optimal EWC solution to estimate the parameters of this system. The input (white-noise) SNR was set to 5dB and 0dB, respectively. The desired signal was noise-free. Figure 6-1 (left) and Figure 6-1 (right) show the averaged error norms of the estimates produced by the EWC and MSE criteria for 0dB and 5dB SNR, respectively. We can clearly see that both the EWC and MSE solutions are biased when the number of filter taps is less than four. For four or more taps, EWC produces a much better estimate than the Wiener solution (the minor variations in the error norms are due to the different input/output pairs used in the Monte Carlo trials), whereas the bias in the Wiener solution does not decrease with increasing order (unlike in the noiseless case).

Figure 6-1. Undermodeling effects with input SNR = 0dB (left) and input SNR = 5dB (right).


We then considered another example of an unknown system with 6 taps and tried to model this system using only 2 taps. The input SNR was fixed at 0dB. Figure 6-2 shows the plot of the cross-correlation between the input and the error. Note that the cross-correlation is zero at only two lags in the case of MSE, whereas with EWC the error and the input are orthogonal only at the specified lag $L = 5$ (arbitrarily chosen in this example).

Figure 6-2. Cross-correlation plots for EWC and MSE for undermodeling.

Figure 6-3 shows the same plot in an overestimation scenario. The key observation is that, with the MSE criterion, the error is uncorrelated with the input only for a few lags, whereas in the case of EWC, the error and the input are uncorrelated for all lags greater than the filter length. Figure 6-4 shows the normalized error autocorrelation at higher lags in the overestimation case for both EWC and MSE. Notice that the error autocorrelations for EWC are very small at the higher lags.


Figure 6-3. Cross-correlation plots for EWC and MSE for overestimation.

Figure 6-4. Power-normalized error cross-correlation for EWC and MSE with overestimation.

We will now verify the performance of the modified criterion discussed in Chapter 5 in the case of undermodeling. With overestimation, it is trivial to show that the additional


weights approach zero even in the presence of correlated noise. In order to understand the behavior of the proposed method in the undermodeling case, we performed a simple experiment. We chose a 4-tap FIR system and tried to model it with a 2-tap adaptive filter. Figure 6-5 shows the weight tracks for both LMS and the stochastic gradient algorithm in equation (5.8). The gradient algorithm converged to a solution that closely matched the first two coefficients of the actual system (denoted by the markers in the figure). The LMS algorithm converged to an arbitrary solution that produced the minimum MSE with the noisy data. This encourages us to state (speculatively) that the criterion will try to find a solution that matches the actual system in a meaningful sense. However, there is still not enough evidence to claim that the proposed method can provide exact "coefficient matching."

Figure 6-5. Weight tracks for LMS and the stochastic gradient algorithm in the case of undermodeling.


Summary

In this chapter, we summarized the effects of undermodeling and overestimation with the EWC and MSE criteria. In essence, the error whitening criterion consistently shows good properties with and without white noise in the data, whereas the niceties of the MSE criterion are lost once noise is added to the data. One of the major drawbacks of the Wiener MSE solution is its dependence on the variance of the noise, whereas the same is not true for the optimal EWC solution. We showed simulation results that quantified the observations made in this chapter. Further work is required to verify the undermodeling and overestimation performance of the modified criterion in the presence of correlated noise in the input and desired signals.


CHAPTER 7
CONCLUSIONS AND FUTURE DIRECTIONS

Conclusions

The mean-squared error criterion is by far the most widely used criterion for training adaptive systems. The existence of simple learning algorithms like LMS and RLS has promoted the applicability of this criterion to many adaptive engineering solutions. Alternatives and enhancements to MSE have been proposed in order to improve the robustness of learning algorithms in the presence of noisy training data. In FIR filter adaptation, noise present in the input signal is especially problematic, since MSE cannot eliminate this factor. A powerful enhancement technique, total least squares, on one hand, fails to work if the noise levels in the input and output signals are not identically equal. The alternative method of subspace Wiener filtering, on the other hand, requires the noise power to be strictly smaller than the signal power in order to improve the SNR.

We have proposed in this dissertation an extension of the traditional MSE criterion in filter adaptation, which we have named the augmented error criterion (AEC). The AEC includes MSE as a special case. Another interesting special case of the AEC is the error whitening criterion (EWC). This new criterion is inspired by observations made on the properties of the error autocorrelation function. Specifically, we have shown that, using non-zero lags of the error autocorrelation function, it is possible to obtain unbiased estimates of the model parameters even in the presence of white noise on the training data.


The AEC criterion offers a parametric family of optimal solutions. The classical Wiener solution remains a special case corresponding to the choice $\beta = 0$, whereas total noise rejection is achieved for the special choice of $\beta = -1/2$ (EWC). We have shown that the optimal solution yields an error signal uncorrelated with the predicted next value of the input vector, based on analogies with the Newtonian mechanics of motion. On the other hand, the relationship with entropy through the stochastic approximation reveals a clearer understanding of the behavior of this optimal solution: the true weight vector that generated the training data marks the lags at which the error autocorrelation becomes zero. We have exploited this fact to optimize the adaptive filter weights without being affected by noise.

The theoretical analysis has also been complemented by online algorithms that search, on a sample-by-sample basis, for the optimum of the AEC. We have shown that the AEC may have a maximum, a minimum, or a saddle-point solution for the more interesting case of $\beta < 0$. Searching such surfaces brings difficulties for gradient descent, but search methods that use curvature information work without difficulty. We have presented a recursive algorithm to find the optimum of the AEC, called recursive error whitening (REW). REW has the same structure and complexity as the RLS algorithm. We also presented gradient-based algorithms to search the EWC cost function, called EWC-LMS (and its variants), which have linear complexity $O(m)$ and require the estimation of the sign of the update for the case $\beta = -0.5$. Theoretical conditions, including a step-size upper bound, were derived for guaranteed convergence. Further, we showed that the gradient algorithm produces an excess error correlation that is bounded from above, where the limit can be reduced by decreasing the step-size.
The optimal EWC solution is unbiased only when the input noise is white. We presented modified cost functions to handle arbitrarily correlated noise in the input and desired data. The theoretical foundations were laid, and stochastic gradient algorithms were derived using Augmented Lagrangian methods. Convergence to the desired optimal solution was mathematically proven for a special case when only the input is allowed to have correlated noise. Finally, we briefly discussed the effects of undermodeling and overestimation with the proposed criteria.

Future Research Directions

Accurate parameter estimation with noisy data is a hard problem that has been tackled by many researchers in the past, but the resulting solutions are far from satisfactory. In this research, we proposed new criteria and algorithms to derive optimal parameter estimates. However, the methods can be effectively applied to linear feedforward systems only. Extension to nonlinear systems is not a trivial task and might require further modifications to the cost functions. It would be worthwhile to explore the advantages of using error-correlation-based cost functions in other engineering problems like prediction and unsupervised learning. A key part of the parameter estimation problem is the accurate determination of the model order (linear systems only). This is a tough problem, especially with correlated noise in the data. The proposed criterion, along with some sparseness constraints, can probably be utilized to determine the model order for linear systems [143,144]. For nonlinear systems, explicit regularization must be incorporated [144,145].

Instead of directly trying to derive global nonlinear models, emerging trends utilize the concept of divide and conquer to design multiple local linear filters that model the
nonlinear system in a piecewise manner [146-150]. The proposed criteria can be utilized to design these local models in cases when the data are noisy.

The present line of research still has open theoretical problems. A rigorous proof of convergence for the stochastic gradient algorithm outlined in equation (5.25) is yet to be provided. Further mathematical quantification of undermodeling and overestimation effects with the criteria discussed in Chapter 5 is required for a better theoretical understanding.

APPENDIX A
FAST PRINCIPAL COMPONENTS ANALYSIS (PCA) ALGORITHMS

Introduction

Principal component analysis (PCA) is a widely used statistical technique in various signal-processing applications like feature extraction, signal estimation, and detection [53-55]. There are several analytical techniques for solving the eigenvalue problem that leads to PCA [8]. These analytical techniques are block-based, computationally intensive, and not appropriate for real-time applications. Moreover, for many applications such as tracking, where the signal statistics change over time, online solutions are more desirable. Recent research in neural networks has produced numerous iterative algorithms to solve PCA. Sanger's rule or the generalized Hebbian algorithm (GHA) [56], the Rubner-Tavan model [57,58], and the Adaptive Principal Component Extraction (APEX) model [59], which is a variation of the Rubner-Tavan model, are a few of them. Most of these algorithms are based on either gradient search methods or Hebbian and anti-Hebbian learning. Some lead to local implementations (APEX), which enhance their biological plausibility. The signal processing community has also been interested in iterative procedures to solve PCA. The power method, a subspace analysis technique, has received a lot of attention because it estimates the principal eigencomponent accurately and with fast convergence [8,60]. Although the convergence characteristics of these methods are excellent, the update rules are non-local and computationally intensive. The PASTd [61] is another algorithm for PCA based on gradient subspace search. This algorithm is on-line and comparatively
faster than Oja's rule [62], as it uses a normalized step size. The estimation of eigenvectors and eigenvalues is therefore a well-established and researched area, with many powerful results.

Brief Review of Existing Methods

A large number of existing PCA algorithms fall into one of three categories: gradient-based methods, Hebbian and anti-Hebbian learning, and subspace decompositions. Numerous cost functions are formulated, and optimization techniques are applied to minimize or maximize the cost functions. The classical Oja's rule [62] is one of the first on-line rules for PCA based on Hebbian and anti-Hebbian learning. If x(n) denotes the input data and y(n) the output after a linear transformation by the synaptic weights w, Oja's rule for the first principal component is given by

\mathbf{w}(n+1) = \mathbf{w}(n) + \eta\left( y(n)\,\mathbf{x}(n) - y^2(n)\,\mathbf{w}(n) \right) \qquad (A.1)

However, Oja's rule can produce only the maximum eigencomponent. Sanger [56] proposed the usage of deflation along with Oja's rule to estimate all the principal components. For a fully connected neural network with synaptic weights w_{ji}, where i is the input node and j is the output node, the update rule is

\Delta w_{ji}(n) = \eta\, y_j(n)\left( x_i(n) - \sum_{k=1}^{j} y_k(n)\, w_{ki}(n) \right) \qquad (A.2)

Rubner and Tavan [57,58] proposed an asymmetrical single-layer model with a lateral network among the output units. The feedforward weights are trained using the normalized Hebbian rule, and the lateral weights are trained using the anti-Hebbian rule. This asymmetrical network performs implicit deflation. A short code sketch of rules (A.1) and (A.2) is given below.
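The following is a minimal NumPy sketch of Oja's rule (A.1) and Sanger's GHA (A.2). It is not the dissertation's own code; the step size eta and the function names are assumptions chosen for the example.

```python
import numpy as np

def oja_first_pc(X, eta=0.01):
    """Oja's rule (A.1): track the first principal component of the rows of X."""
    w = np.random.randn(X.shape[1])
    w /= np.linalg.norm(w)
    for x in X:
        y = w @ x                        # output y(n) = w^T x(n)
        w += eta * (y * x - y**2 * w)    # Hebbian term with self-normalizing decay
    return w

def sanger_gha(X, n_components=3, eta=0.01):
    """Sanger's GHA (A.2): deflation built into the update extracts several PCs."""
    W = 0.1 * np.random.randn(n_components, X.shape[1])   # row j holds w_j
    for x in X:
        y = W @ x
        for j in range(n_components):
            x_hat = W[:j + 1].T @ y[:j + 1]    # sum_{k<=j} y_k(n) w_k(n)
            W[j] += eta * y[j] * (x - x_hat)   # rule (A.2)
    return W
```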
A variation of the Rubner-Tavan algorithm is the APEX algorithm proposed by Kung and Diamantaras [59]. Another widely used cost function is the reconstruction error at the output of a two-layered neural network [63], given by

J_k = \sum_{i=1}^{k} \alpha^{k-i}\left\| \mathbf{x}_i - \mathbf{W}\mathbf{W}^T\mathbf{x}_i \right\|^2 \qquad (A.3)

The scalar factor α is called the forgetting factor, with 0 < α ≤ 1; it is used to handle non-stationary data. Xu [64] has described an adaptive Principal Subspace Analysis (PSA) algorithm based on the above cost function. Xu also presents a technique to convert PSA into PCA using a symmetrical network without deflation, via a scalar amplification matrix [64]. Yang [61] proposes an RLS version of the PSA technique, but uses the deflation technique instead of the scalar gain matrix. Chatterjee et al. [65] proposed a cost function similar to the one proposed by Xu, but they adopt advanced optimization techniques to solve the problem. Recently, we proposed a gradient-based algorithm (and some variants) for simultaneous extraction of principal components, called SIPEX [66-68]. The algorithm uses Givens rotations [8] and reduces the search space to orthonormal matrices only. Although the algorithm is fast converging, the complexity is too high.

In most of the above-mentioned gradient algorithms, there is a time-varying step size involved in the update equation. The convergence and the accuracy of these algorithms depend heavily on the step sizes, which are in turn dependent on the eigenvalues of the data. Usually, there is an upper limit on the value of the step size, as shown in [65] and [69]. It is a non-trivial task to choose a proper step size that is less than a data-dependent upper bound. Subspace methods have also been used to solve PCA. Miao and Hua [70] proposed a cost function based on an information-theoretic criterion. They present a PSA algorithm, which can be used to solve the PCA problem using the standard
deflation technique. Similarly, the power method has been adopted to solve both PCA and PSA [60,71]. The power method is known to converge faster and does not involve a step size. However, the computational burden of the power method increases with the dimensionality of the data [8].

In this appendix, we present a family of algorithms that are as computationally tractable as the simple gradient algorithms and, at the same time, have the convergence rate of the subspace-based algorithms. These belong to a class of fixed-point algorithms. We will first derive a new set of rules to extract the principal component and then present a rigorous convergence analysis using stochastic approximation theory. The minor components are at first estimated using the conventional deflation technique. At a later stage, we will formulate an alternative approach to estimate the minor components using robust fixed-point algorithms. Currently, the proof of convergence for the combined algorithm is under investigation. We will not provide simulation results or the applications where these algorithms have been utilized. Effectively, the material in this appendix is a condensed version of our algorithmic contributions in the field of PCA [71-77].

Derivation of the Fixed-Point PCA Algorithm

Mathematically speaking, an eigendecomposition is the solution of the equation RW = WΛ, where R is any real square matrix [8]. From the signal processing perspective, R is the full covariance matrix of a zero-mean stationary random signal, W is the eigenvector matrix, and Λ is the diagonal eigenvalue matrix. Without loss of generality, we will assume a zero-mean stationary signal x(n) with a covariance matrix R = E(x_k x_k^T). From the Rayleigh-Ritz theorem [8], the maximum eigenvalue is a stationary point of the Rayleigh quotient.
r(\mathbf{w}) = \frac{\mathbf{w}^T\mathbf{R}\mathbf{w}}{\mathbf{w}^T\mathbf{w}} \qquad (A.4)

where w is the first principal eigenvector. Indeed, w is a stationary point if and only if

\frac{\partial r(\mathbf{w})}{\partial \mathbf{w}} = \frac{2}{\mathbf{w}^T\mathbf{w}}\left(\mathbf{R}\mathbf{w} - \frac{\mathbf{w}^T\mathbf{R}\mathbf{w}}{\mathbf{w}^T\mathbf{w}}\,\mathbf{w}\right) = \mathbf{0}

which implies (w^T w) Rw = (w^T R w) w. Assuming w^T w = 1, which is a property of any eigenvector, we can write Rw = (w^T R w) w, or equivalently

\mathbf{w} = \frac{\mathbf{R}\mathbf{w}}{\mathbf{w}^T\mathbf{R}\mathbf{w}} \qquad (A.5)

Note that w^T R w is a scalar. Equation (A.5) basically states that there is a scalar relationship between w and its rotated version by R. Both the numerator and the denominator can be computed with a vector-matrix multiply, which is of complexity O(N). Let the weight vector at iteration n, w(n), be the estimate of the maximum eigenvector. Then the estimate of the new weight vector at iteration (n+1), according to (A.5), is

\mathbf{w}(n+1) = \frac{\mathbf{R}(n)\,\mathbf{w}(n)}{\mathbf{w}^T(n)\,\mathbf{R}(n)\,\mathbf{w}(n)} \qquad (A.6)

where \mathbf{R}(n) = \frac{1}{n}\sum_{k=1}^{n}\mathbf{x}(k)\mathbf{x}^T(k) is an estimate of the covariance matrix at time step n. As a drawback, the update rule for w(n+1) in (A.6) tracks the eigenvalue equation assuming w^T(n+1)w(n+1) = 1 at every time step. However, note that we do not explicitly enforce this condition in (A.6). Experimental results shown in [44,74] have proved that if we directly use (A.6) to estimate the principal component, we obtain convergence to a limit cycle: the norm of w(n) starts off with bounded random values, and when the weight vector approaches the eigenvector, i.e., when w(n) ≈ α v_max where α is a scalar, the norm oscillates between α and 1/α. A numerical sketch of (A.6) is given below.
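A small numerical sketch of iteration (A.6), written to exhibit the limit-cycle behavior just described; the synthetic data and the iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 5)) @ np.diag([2.5, 1.5, 1.0, 0.7, 0.3])
R = X.T @ X / len(X)                 # sample covariance estimate R(n)

w = rng.standard_normal(5)
norms = []
for n in range(60):
    w = R @ w / (w @ R @ w)          # fixed-point update (A.6)
    norms.append(np.linalg.norm(w))

# The direction converges to v_max, but the norm alternates around 1
# between alpha and 1/alpha, exactly the limit cycle analyzed below.
v_max = np.linalg.eigh(R)[1][:, -1]
print(abs(w @ v_max) / np.linalg.norm(w))   # direction cosine, close to 1
print(norms[-2:])                           # norms of the last two iterates
```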
A brief mathematical analysis of the update equation in (A.6) is presented next to get a better grasp of its behavior.

Mathematical Analysis of the Fixed-Point PCA Algorithm

In order to analyze the behavior of (A.6), we resort to the well-known stochastic approximation tools proposed by Ljung [78] and also by Kushner and Clark [79]. The idea is to associate the discrete-time adaptation rule with an ordinary differential equation (ODE). The behavior of the discrete-time algorithm is strongly or weakly tied to the stability of the ODE. Equation (A.6) is a special case of the generic stochastic approximation algorithm w(n+1) = w(n) + η(n) h(w(n), x(n)). In order to apply the approximation theory, some assumptions need to be made [78-81]. We would also like to point out that (A.6) does not belong to the vanishing-gain type of algorithms, in which the gain η(n) is a monotonically decreasing sequence that eventually goes to zero. Equation (A.6) can be considered a constant-gain algorithm. Benveniste et al. [80] have discussed the analysis of constant-gain algorithms. Accordingly, the ODE analysis can still be applied to constant-gain algorithms with further restrictions.

A.1. The input is at least a wide-sense stationary (WSS) random process with a positive definite autocorrelation matrix R whose eigenvalues are distinct, positive, and arranged in descending order of magnitude.
A.2. The sequence of weight vectors w(n) is bounded with probability 1.
A.3. The update function h(w(n), x(n)) is continuously differentiable with respect to w(n) and x(n), and its derivatives are bounded in time.
A.4. Even if h(w(n), x(n)) has some discontinuities, a mean vector field h̄(w) = lim_{n→∞} E[h(w(n), x(n))] exists and is regular.
A.5. There is a locally stable solution, in the Lyapunov sense, to the ODE. In other words, the ODE has an attractor w* whose domain of attraction is D(w*).
A.6. The weight vector w(n) enters a compact subset M of the basin of attraction D(w*) infinitely often, with probability 1.

The ODE corresponding to the update equation in (A.6) is

\frac{d\mathbf{w}(t)}{dt} = \lim_{T\to 0}\frac{\mathbf{w}(n+1)-\mathbf{w}(n)}{T} = \frac{1}{T}\left(\frac{\mathbf{R}\mathbf{w}(t)}{\mathbf{w}^T(t)\mathbf{R}\mathbf{w}(t)} - \mathbf{w}(t)\right) = \mathbf{h}(\mathbf{w}(t)) \qquad (A.7)

Note that the factor T appears as a sampling interval for the forward-difference approximation of the continuous-time derivative. This plays a crucial role in the behavior of this update equation.

Theorem 1: Consider the ODE in (A.7) and let assumptions A.1, A.3, and A.4 hold. Then w → v_max asymptotically, where v_max is the eigenvector associated with the largest eigenvalue λ_max.

Proof: Refer to [44] for a detailed proof.

Theorem 2: The norm of the weight vector is always bounded with probability 1.

Proof: With little effort, we can see that the derivative of the norm of w(t) is given by

\frac{d\|\mathbf{w}(t)\|^2}{dt} = 2\,\mathbf{w}^T(t)\frac{d\mathbf{w}(t)}{dt} = \frac{2}{T}\left(1 - \|\mathbf{w}(t)\|^2\right) \qquad (A.8)

We can easily solve this first-order differential equation to get ||w(t)||² = 1 + (||w(0)||² − 1) e^{−2t/T}. Therefore, when ||w(0)||² > 1, ||w(t)||² is a monotonically decreasing function that reaches unity as t → ∞. If ||w(0)||² < 1, then ||w(t)||² will increase and stabilize when the norm is one. Thus ||w(t)||² → 1 as t → ∞, as long as ||w(0)||² is bounded. It is very interesting to note that if ||w(0)||² = 1, then ||w(t)||² = 1 for all t, and hence the norm becomes invariant. This might be a desirable
property for hardware implementation of the algorithm.

Theorem 3: The weight vector w(t) enters a compact subset M of the basin of attraction D(w*) infinitely often, with probability 1.

Proof: From Theorem 1, we know that the ODE converges to the stationary point w* = v_max. Also, it is easy to show that all the other stationary points (the other eigenvectors) are unstable [44]. But this only tells us about the local behavior around the stationary points. To complete our analysis, we have to identify the domain of attraction for the stable stationary point w*. It is impossible in most cases to find a domain of attraction that spans the whole weight space [82]. In order to find a domain of attraction, we resort to the Lyapunov function method [82]. Let L(w(t)) = 0.5(w^T(t)w(t) − 1) be a Lyapunov function defined on a region M consisting of all vectors such that L(w(t)) ≤ c, where c > 0. It is easy to see that dL(w(t))/dt = (1/T)(1 − ||w(t)||²), and hence dL(w(t))/dt = 0 at w = w*. Also, if we put the constraint ||w(0)||² ≥ 1, then for all t ≥ 0, L(w(t)) ≥ 0, since the minimum value of ||w(t)||² is 1. So, for all w(t) ≠ w* in M, dL(w(t))/dt < 0. Thus, the stationary point w* = v_max is globally asymptotically stable with a domain of attraction M, which includes all vectors such that 1 ≤ ||w(t)||² ≤ 2c + 1, c > 0. Obviously the stable attractor has a norm of unity and is enclosed inside M. Hence, as the number of iterations increases, w(t) will be within the set M and will remain inside with probability 1.

From Theorems 1-3, assumptions A.2, A.5, and A.6 are satisfied. We can hence deduce from the theory of stochastic approximation for adaptive algorithms with constant gains
[80] that lim_{t→∞} P( sup ||w(t) − v_max|| ≥ ε ) ≤ C_T, where ε > 0 and C_T is a small constant that becomes zero when T → 0. See Chapter 2 in [80] for the actual theorem statement and proof.

Earlier, we mentioned that the update equation in (A.6) enters a limit cycle when w(n) ≈ α v_max (near convergence). However, from all the above theorems, it seems we can conclude that the update equation reaches the domain of attraction D(w*) with probability 1. The contradiction arises from the fact that we have used continuous-time ODE analysis to understand the behavior of a discrete-time update equation. If the sampling interval T in (A.7) is not chosen to be sufficiently small, the ODE does not accurately represent equation (A.6). Therefore, discrepancies arise and have to be mathematically understood before being rectified.

Theorem 4: The discrete-time update equation in (A.6) enters a limit cycle when w(n) = α v_max, where α is any scalar constant.

Proof: As w(n) = α v_max,

\dot{\mathbf{w}}(n) = \frac{1-\alpha^2}{\alpha}\,\mathbf{v}_{\max}

where ẇ(n) denotes the discrete-time derivative (increment) of w(n). Therefore, we can easily see that the next value of the weights will be

\mathbf{w}(n+1) = \alpha\,\mathbf{v}_{\max} + \frac{1-\alpha^2}{\alpha}\,\mathbf{v}_{\max} = \frac{1}{\alpha}\,\mathbf{v}_{\max}

which is nothing but another scaled version of v_max. The derivative at this instant in time is

\dot{\mathbf{w}}(n+1) = \frac{1 - 1/\alpha^2}{1/\alpha}\,\mathbf{v}_{\max} = -\frac{1-\alpha^2}{\alpha}\,\mathbf{v}_{\max}

Notice that the derivatives at instants n and (n+1) are the same in magnitude and opposite in sign. Therefore w(n+2) = α v_max = w(n). Thus, the weight vector oscillates between two values, which is clearly a case of limit-cycle oscillations. To prove our point further, consider that the
non-linear function h(w(t)) in (A.7) is smooth enough to be linearized in the neighborhood of the stationary point w = v_max, where v_max is the eigenvector corresponding to the maximum eigenvalue of R. Thus w(t) = w* + Δw(t), where Δw(t) is a small perturbation. Then, using a Taylor series expansion and retaining only the first two terms, we get h(w(t)) ≈ A Δw(t) = dΔw(t)/dt. The matrix A is the Jacobian of the non-linear function h(w(t)) computed at the stationary point, A = ∂h(w(t))/∂w|_{w=w*}. Therefore dΔw(t)/dt = A Δw(t), where A is given by

\mathbf{A} = \left.\left(\frac{\mathbf{R}}{\mathbf{w}^T\mathbf{R}\mathbf{w}} - \frac{2\,\mathbf{R}\mathbf{w}\mathbf{w}^T\mathbf{R}}{(\mathbf{w}^T\mathbf{R}\mathbf{w})^2} - \mathbf{I}\right)\right|_{\mathbf{w}=\mathbf{v}_{\max}} = \frac{\mathbf{R}}{\lambda_{\max}} - 2\,\mathbf{v}_{\max}\mathbf{v}_{\max}^T - \mathbf{I} \qquad (A.9)

The nature of the equilibrium point is essentially determined by the eigenvalues of A:

\mathbf{A}\mathbf{q}_k = \frac{\lambda_k}{\lambda_{\max}}\,\mathbf{q}_k - 2\left(\mathbf{v}_{\max}^T\mathbf{q}_k\right)\mathbf{v}_{\max} - \mathbf{q}_k \qquad (A.10)

Obviously, q_k should be an eigenvector of R. Hence, the eigenvalues of A are given by

\Lambda_{\mathbf{A}} = \mathrm{diag}\left(-2,\; \frac{\lambda_2}{\lambda_{\max}}-1,\; \frac{\lambda_3}{\lambda_{\max}}-1,\; \frac{\lambda_4}{\lambda_{\max}}-1,\; \ldots,\; \frac{\lambda_p}{\lambda_{\max}}-1\right) \qquad (A.11)

Note that all the eigenvalues are less than zero, so the equilibrium point w = v_max is locally stable. However, the corresponding z-domain poles, obtained with T = 1 through the transformation z = sT + 1 = s + 1, are

z = \left(-1,\; \frac{\lambda_2}{\lambda_{\max}},\; \frac{\lambda_3}{\lambda_{\max}},\; \frac{\lambda_4}{\lambda_{\max}},\; \ldots,\; \frac{\lambda_m}{\lambda_{\max}}\right) \qquad (A.12)

Clearly, a stable pole in the s-domain is mapped onto the unit circle at z = −1. All other poles are inside the unit circle. Thus, all other modes except the mode
corresponding to the eigenvector v_max converge asymptotically to their stable stationary points. The pole at z = −1 takes the discrete-time update equation in (A.6) into a limit cycle. If sampled at a higher rate, then T < 1 and all the z-poles are mapped inside the unit circle, removing the limit-cycling behavior. Observe from (A.7) that, when the sampling interval T < 1, the update equation in (A.6) generalizes to

\mathbf{w}(n+1) = (1-T)\,\mathbf{w}(n) + T\,\frac{\mathbf{R}\mathbf{w}(n)}{\mathbf{w}^T(n)\,\mathbf{R}\mathbf{w}(n)} \qquad (A.13)

It is to be noted that any value of T less than unity will work. Equation (A.13) is essentially a fast, fixed-point type PCA algorithm that successfully estimates the first principal component. However, the convergence speed of the algorithm is affected by the value of T: as the value of T decreases, so does the convergence speed. It will suffice to say here that the parameter T determines how well the difference equation approximates the ODE. Hence this parameter creates a trade-off between tracking and convergence with sufficient accuracy.

Self-Stabilizing Fixed-Point PCA Algorithm

The rate of convergence of (A.13) is affected by the factor T, which creates an undesirable tradeoff between the speed of convergence and the accuracy of the result. In this section, we explore a variation of the algorithm given by (A.13) and present an update rule that is self-stabilizing without hurting the rate of convergence. We propose the modified update rule for the extraction of the first principal component as

\mathbf{w}(n+1) = \frac{\mathbf{w}(n) + \mathbf{R}(n)\,\mathbf{w}(n)}{1 + \mathbf{w}^T(n)\,\mathbf{R}(n)\,\mathbf{w}(n)} \qquad (A.14)

Comparing equations (A.13) and (A.14), we can say that both are fixed-point type algorithms that track the eigenvalue equation at every time step. However, (A.14) does
not involve any external parameter. Typically, R is unknown and has to be estimated from the data. If y(n) = w^T(n)x(n), then the rule in (A.14) can be further simplified, resulting in an on-line implementation (assuming stationarity):

\mathbf{w}(n+1) = \frac{\mathbf{w}(n) + \mathbf{P}(n)}{1 + Q(n)} \qquad (A.15)

where \mathbf{P}(n) = \frac{n-1}{n}\,\mathbf{P}(n-1) + \frac{1}{n}\,\mathbf{x}(n)\,y(n) and Q(n) = \frac{n-1}{n}\,Q(n-1) + \frac{1}{n}\,y^2(n). With these recursive estimators, (A.15) can be easily implemented locally. For handling non-stationary cases, a forgetting factor can be incorporated in the above recursive estimators at no additional computational cost. The overall computational complexity will still be linear in the weights, i.e., O(N). The self-stabilizing feature of this algorithm can be understood by analyzing its convergence; we can adopt the same techniques that we used for the analysis of (A.6). An on-line sketch of (A.15) is given below.
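The rule (A.15) translates directly into a one-pass, O(N)-per-sample routine. The sketch below is an illustration under the stationarity assumption stated above; the variable names are ours.

```python
import numpy as np

def self_stabilizing_pca(X):
    """Self-stabilizing fixed-point rule (A.14)/(A.15): one pass over the rows of X."""
    _, N = X.shape
    w = np.random.randn(N)
    P = np.zeros(N)     # running estimate of E[x(n) y(n)] = R w
    Q = 0.0             # running estimate of E[y^2(n)]  = w^T R w
    for n, x in enumerate(X, start=1):
        y = w @ x
        P = (n - 1) / n * P + x * y / n
        Q = (n - 1) / n * Q + y * y / n
        w = (w + P) / (1.0 + Q)      # update (A.15); no step size needed
    return w / np.linalg.norm(w)
```

Note that the denominator 1 + Q(n) plays the role of an automatic, data-driven step size, which is what makes the rule self-stabilizing.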

Mathematical Analysis of the Self-Stabilizing Fixed-Point PCA Algorithm

We will make the same set of assumptions we had before (A.1 to A.6). Again, we will let assumptions A.1, A.3, and A.4 hold without further arguments. We will now state and prove the following theorems.

Theorem 5: The only stable stationary point of the update equation in (A.14) is the principal eigenvector.

Proof: The ODE corresponding to the update equation (A.14) is given by

\mathbf{h}(\mathbf{w}(t)) = \frac{d\mathbf{w}(t)}{dt} = \frac{\mathbf{w}(t) + \mathbf{R}\mathbf{w}(t)}{1 + \mathbf{w}^T(t)\mathbf{R}\mathbf{w}(t)} - \mathbf{w}(t) = \frac{\mathbf{R}\mathbf{w}(t) - \left(\mathbf{w}^T(t)\mathbf{R}\mathbf{w}(t)\right)\mathbf{w}(t)}{1 + \mathbf{w}^T(t)\mathbf{R}\mathbf{w}(t)} \qquad (A.16)

w(t) can be expanded in terms of the complete set of orthonormal basis vectors as \mathbf{w}(t) = \sum_{k=1}^{n}\theta_k(t)\,\mathbf{q}_k. Substituting this in (A.16) and simplifying,

\frac{d\theta_k(t)}{dt} = \frac{\lambda_k\,\theta_k(t) - \theta_k(t)\sum_{l=1}^{n}\lambda_l\,\theta_l^2(t)}{1 + \sum_{l=1}^{n}\lambda_l\,\theta_l^2(t)} \qquad (A.17)

In (A.17), λ_k and q_k denote the k-th eigenvalue and eigenvector of R, respectively, and θ_k(t) is the k-th time-varying projection. The dynamics of the ODE in (A.17) can be analyzed in two separate cases. In the first case, we consider k ≠ 1. Let ε_k(t) = θ_k(t)/θ_1(t), assuming that θ_1(t) ≠ 0. Differentiating this with respect to t, we get

\frac{d\varepsilon_k(t)}{dt} = \frac{1}{\theta_1(t)}\frac{d\theta_k(t)}{dt} - \frac{\theta_k(t)}{\theta_1^2(t)}\frac{d\theta_1(t)}{dt}

Using (A.17),

\frac{d\varepsilon_k(t)}{dt} = -\frac{(\lambda_1-\lambda_k)}{1 + \sum_{l=1}^{n}\lambda_l\,\theta_l^2(t)}\,\varepsilon_k(t) \qquad (A.18)

Since the multiplier of ε_k(t) in (A.18) involves (λ_1 − λ_k)/(1 + Σ_l λ_l θ_l²(t)), which is always positive for k ≠ 1, the fixed point of this ODE is zero for all such k. In other words, ε_k(t) → 0 as t → ∞ for k ≠ 1. For the case k = 1, the derivative of the time-varying projection is given by

\frac{d\theta_1(t)}{dt} = \frac{\lambda_1\,\theta_1(t)\left[1-\theta_1^2(t)\right]}{1 + \sum_{l=1}^{n}\lambda_l\,\theta_l^2(t)} \qquad (A.19)

It is not easy to find the analytical solution for θ_1(t) from (A.19). However, we are interested only in the steady-state solution of the ODE. To derive this, we will use a Lyapunov function V(t) = (1 − θ_1²(t))². Note that V(t) ≥ 0 for all t. Then, the derivative of V(t) is simply
\frac{dV(t)}{dt} = -4\,\theta_1(t)\left(1-\theta_1^2(t)\right)\frac{d\theta_1(t)}{dt} = \frac{-4\,\lambda_1\,\theta_1^2(t)\left(1-\theta_1^2(t)\right)^2}{1 + \sum_{l=1}^{n}\lambda_l\,\theta_l^2(t)} \leq 0 \qquad (A.20)

Hence (A.20) is stable and attains its minimum where dV(t)/dt = 0, i.e., θ_1(t) = ±1. Therefore, as t → ∞, w(t) → q_1, which is nothing but the principal eigenvector.

When we introduced the self-stabilizing PCA algorithm, the claim was that we would remove the external parameter T and still have discrete-time stability. Local stability analysis will help us get better insight. Assuming that the non-linear function h(w(t)) in (A.16) is smooth enough to be linearized in the neighborhood of the stable stationary point w(t) = q_1, where q_1 is the principal eigenvector, we can compute the linearization matrix A = ∂h(w(t))/∂w|_{w=q_1} as

\mathbf{A} = \frac{1}{1+\lambda_1}\left(\mathbf{R} - \lambda_1\mathbf{I} - 2\lambda_1\,\mathbf{q}_1\mathbf{q}_1^T\right)

The eigenvalues of A are given by

\Lambda_{\mathbf{A}} = \left\{\frac{-2\lambda_1}{1+\lambda_1},\;\; \frac{\lambda_k-\lambda_1}{1+\lambda_1},\; k = 2,3,4,5,\ldots,n\right\}

Since all the poles (eigenvalues) are in the Left-Half Plane (LHP), the stationary point w(t) = q_1 is stable. The corresponding z-domain poles are exactly given by

z_{\mathbf{A}} = \left\{\frac{1-\lambda_1}{1+\lambda_1},\;\; \frac{1+\lambda_k}{1+\lambda_1},\; k = 2,3,4,5,\ldots,n\right\}

Note that only the first z-domain pole can be negative, and all others are strictly positive. Also, since all the poles lie within the unit circle, the stationary point of the discrete-time update equation is also stable. In order to complete the analysis, we have to prove that the other stationary points are locally unstable. The linearization matrix A for the case w(t) = q_k with k ≠ 1 is given by

\mathbf{A} = \frac{1}{1+\lambda_k}\left(\mathbf{R} - \lambda_k\mathbf{I} - 2\lambda_k\,\mathbf{q}_k\mathbf{q}_k^T\right)

For instance, when k = 2, the eigenvalues of A are
\Lambda_{\mathbf{A}} = \left\{\frac{\lambda_1-\lambda_2}{1+\lambda_2},\;\; \frac{-2\lambda_2}{1+\lambda_2},\;\; \frac{\lambda_k-\lambda_2}{1+\lambda_2},\; k = 3,4,5,6,\ldots,n\right\}

The first pole is in the Right-Half Plane (RHP), and hence this stationary point is locally unstable. Similarly, it can be shown that for k ≠ 1, there will be exactly k−1 poles in the RHP that will render all these stationary points locally unstable [73]. The evolution of the discrete-time weight norm over time may not be monotonic. There is a single z-domain pole which can be negative if λ_max > 1, and this can make the norm of the weight vector undergo damped oscillations before settling to unity (like a high-pass filter). The upper bound on the norm is then determined by the eigenvalues of the data. Further analysis is required to determine the exact upper bound.

Minor Components Extraction: Self-Stabilizing Fixed-Point PCA Algorithm

So far, we have extensively dealt with algorithms for extracting the first principal component. Although for many applications this is sufficient, it is sometimes desirable to estimate a few minor components. Traditionally, deflation has been the key idea behind estimating the minor components. Deflation is often referred to in the communications literature as Gram-Schmidt orthogonalization [83]. If we are interested in finding the second principal component, we first subtract from the original input the projection onto the first principal component. Mathematically, if x_k is the actual input vector and q_1 is the first principal component, then, after applying the deflation step once, the modified input signal will be \hat{\mathbf{x}}_k = \mathbf{x}_k - \mathbf{q}_1\mathbf{q}_1^T\mathbf{x}_k. The covariance matrix of the deflated signal is then given by \hat{\mathbf{R}} = \mathbf{R} - \mathbf{q}_1\mathbf{q}_1^T\mathbf{R}. Observe that the first eigenvalue of R̂ is zero and all the other eigenvalues are the same as those of R. The deflation process can be sequentially applied to estimate all the minor components. By nature, deflation is a sequential
procedure; i.e., the second principal component can be estimated (converges) only after the first principal component, and so on. There are a few algorithms that do not require deflation for estimating the minor components. The LMSER algorithm proposed by Xu [64] is one of them. Another algorithm is SIPEX [68], which does not require deflation as it implicitly uses an orthonormal rotation matrix. An alternative way of doing deflation is to use a lateral network, as in the case of the Rubner-Tavan [57,58] and APEX [54,59] algorithms. The central idea is to decorrelate the outputs of the PCA network using lateral connections between output nodes. The lateral weights are traditionally trained using inhibition learning or anti-Hebbian learning [84]. Anti-Hebbian learning is slow and can show unpredictable convergence characteristics. Choosing the step size is tricky, and usually very small step sizes are chosen to guarantee convergence. APEX uses normalized anti-Hebbian learning, but this offers very little improvement. We propose to use the idea of a lateral network; however, the learning algorithm can be derived using fixed-point theory.

In Figure A-1, we have drawn a representative 4-input, 2-output PCA network. Let w_1 and w_2 represent the feedforward weight vectors corresponding to the first and second output nodes, respectively. The scalar weight c_1 represents the lateral connection between the first and second outputs. The correlation between the outputs is given by

E[y_1 y_2] = E\left[\mathbf{w}_1^T\mathbf{x}_k\left(\mathbf{w}_2^T\mathbf{x}_k - c_1\,\mathbf{w}_1^T\mathbf{x}_k\right)\right] = \mathbf{w}_1^T\mathbf{R}\mathbf{w}_2 - c_1\,\mathbf{w}_1^T\mathbf{R}\mathbf{w}_1 \qquad (A.21)

If the correlation is zero, then c_1 = (w_1^T R w_2)/(w_1^T R w_1), which will eventually go to zero as the weights w_1 and w_2 become orthogonal. Thus, the fixed point of c_1 is zero. From this, we can deduce a fixed-point learning rule to adapt c_1 over time. This will ensure that, at every iteration, the outputs of the network are orthogonal. The learning rules for
c_1 and w_2 are given by

c_1(n+1) = \frac{\mathbf{w}_1^T(n)\,\mathbf{R}(n)\,\mathbf{w}_2(n)}{1 + \mathbf{w}_2^T(n)\,\mathbf{R}(n)\,\mathbf{w}_2(n)} \qquad (A.22)

\mathbf{w}_2(n+1) = \frac{\mathbf{w}_2(n) + \mathbf{R}(n)\,\mathbf{w}_2(n)}{1 + \mathbf{w}_2^T(n)\,\mathbf{R}(n)\,\mathbf{w}_2(n)} - c_1(n)\,\mathbf{w}_1(n) \qquad (A.23)

Note that c_1(n+1) has a different denominator term from the expression derived earlier. However, the denominator does not matter, as the fixed point is zero. Also, from (A.23), we see that the update for w_2 is modified by the inclusion of the c_1(n)w_1(n) product. In general, the update rules for both the feedforward and lateral weights are given by

\mathbf{w}_b(n+1) = \frac{\mathbf{w}_b(n) + \mathbf{R}(n)\,\mathbf{w}_b(n)}{1 + \mathbf{w}_b^T(n)\,\mathbf{R}(n)\,\mathbf{w}_b(n)} - \sum_{k=1}^{b-1} c_{kb}(n)\,\mathbf{w}_k(n) \qquad (A.24)

c_{ab}(n+1) = \frac{\mathbf{w}_a^T(n)\,\mathbf{R}(n)\,\mathbf{w}_b(n)}{1 + \mathbf{w}_b^T(n)\,\mathbf{R}(n)\,\mathbf{w}_b(n)} \qquad (A.25)

Further analysis is required to quantify the gains and limitations of using the lateral network trained with these fixed-point rules; a sketch of the two-output case is given below.

Figure A-1. Representative network architecture showing lateral connections.
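A minimal sketch of the two-output lateral network trained with the fixed-point rules (A.22)-(A.23), assuming R is estimated as a running sample covariance; the initialization and data handling are illustrative assumptions.

```python
import numpy as np

def lateral_fixed_point_pca(X):
    """Extract the first two PCs with the fixed-point lateral rules (A.22)-(A.23)."""
    _, N = X.shape
    w1, w2 = np.random.randn(N), np.random.randn(N)
    c1 = 0.0
    R = np.zeros((N, N))
    for n, x in enumerate(X, start=1):
        R = (n - 1) / n * R + np.outer(x, x) / n      # running covariance estimate
        w1 = (w1 + R @ w1) / (1.0 + w1 @ R @ w1)      # first PC, rule (A.14)
        den = 1.0 + w2 @ R @ w2
        c1 = (w1 @ R @ w2) / den                      # lateral weight, rule (A.22)
        w2 = (w2 + R @ w2) / den - c1 * w1            # second PC, rule (A.23)
    return w1 / np.linalg.norm(w1), w2 / np.linalg.norm(w2)
```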

APPENDIX B
FAST TOTAL LEAST-SQUARES ALGORITHM USING MINOR COMPONENTS ANALYSIS

Introduction

TLS is nothing but the solution to an overdetermined set of linear equations of the form \hat{\mathbf{A}}\mathbf{x} \approx \hat{\mathbf{b}}, where Â and b̂ denote the noisy data matrix of dimension m × n and the desired vector of dimension m × 1, respectively, such that

\left\|[\hat{\mathbf{A}};\hat{\mathbf{b}}] - [\mathbf{A};\mathbf{b}]\right\|_F \;\text{is minimized subject to}\; [\mathbf{A};\mathbf{b}]\begin{bmatrix}\mathbf{x}\\ -1\end{bmatrix} = \mathbf{0} \qquad (B.1)

Let the SVD of the augmented matrix [Â; b̂] be UΣV^T, where U = [u_1, u_2, ..., u_m] with U^TU = I_m, V = [v_1, v_2, ..., v_{n+1}] with V^TV = I_{n+1}, and Σ = diag(σ_1, σ_2, ..., σ_{n+1}) ∈ R^{m×(n+1)} with σ_1 ≥ σ_2 ≥ ... ≥ σ_{n+1} ≥ 0. As σ_{n+1} ≠ 0, in order to obtain a solution to (B.1) we must reduce the rank of [Â; b̂] from n+1 to n. This can be done by setting σ_{n+1} = 0, and the solution becomes

\mathbf{x} = -\frac{1}{v_{n+1,n+1}}\left[v_{1,n+1},\,\ldots,\,v_{n,n+1}\right]^T \qquad (B.2)

where v_{n+1,n+1} is the last element of the minor eigenvector v_{n+1}. Therefore, the best approximation using (B.2) gives \|[\hat{\mathbf{A}};\hat{\mathbf{b}}] - [\mathbf{A};\mathbf{b}]\|_F = \sigma_{n+1}, which means that the solution to TLS can be obtained by estimating the minimum eigenvector of the correlation matrix \mathbf{R} = [\hat{\mathbf{A}};\hat{\mathbf{b}}]^T[\hat{\mathbf{A}};\hat{\mathbf{b}}], followed by the normalization in (B.2). A batch sketch of this SVD route is given below.
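For orientation, a batch TLS solver via the SVD, per (B.1)-(B.2); this is the standard construction, not the appendix's on-line algorithm, and the example data are assumptions.

```python
import numpy as np

def tls_svd(A, b):
    """Batch TLS: minor right-singular vector of [A; b], normalized as in (B.2)."""
    C = np.column_stack([A, b])           # augmented matrix [A; b], m x (n+1)
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                            # right-singular vector for sigma_{n+1}
    return -v[:-1] / v[-1]                # x = -v(1:n) / v(n+1)

# Tiny check: exact data => TLS recovers the true parameters.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))
x_true = np.array([1.0, -2.0, 0.5])
print(tls_svd(A, A @ x_true))             # approximately [1, -2, 0.5]
```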
In the case when there is no perturbation in A and b, σ_{n+1} = 0, which makes ||[Â; b̂] − [A; b]|| = 0. When the perturbations in A are uncorrelated with those in b, and when the variances of the perturbations are equal, we will still get an unbiased estimate of the parameter vector x. In this case, the correlation matrix is \hat{\mathbf{R}} = \mathbf{R} + \sigma^2\mathbf{I}, where R is the correlation matrix of the clean [A; b] and σ² is the variance of the perturbation. It is obvious that the minimum eigenvector of R̂ will be the same as the minimum eigenvector of R, with corresponding eigenvalue equal to σ². Thus, the TLS solution is still unbiased. However, when the perturbation variances are not the same, we will always have a biased estimate of the parameter vector x. In the next section, we will present the proposed algorithms for solving the TLS problem.

Fast TLS Algorithms

The architecture for the proposed algorithms (having complexities of O(N) and O(N²), respectively) consists of a linear network with n+1 inputs and one or two outputs. For the O(N) algorithm, we require two outputs, and for the O(N²) algorithm we need only one output. We will elaborate on the details later in this section. In the input vector to the network, the first n elements correspond to the data input (one row of the data matrix A), and the last element is the corresponding desired output. The augmented input vector is represented as \boldsymbol{\varphi}(k) = [\mathbf{A}(k);\, d(k)]^T, where the index k can be time for filtering purposes. We will first describe the O(N) algorithm. Let \mathbf{W}_1, \mathbf{W}_2 \in \mathbb{R}^{n+1} be the network weight vectors. The corresponding network outputs are y_1(k) = \mathbf{W}_1^T\boldsymbol{\varphi}(k) and y_2(k) = \mathbf{W}_2^T\boldsymbol{\varphi}(k), respectively. The goal is now to estimate the minor eigenvector of the matrix \mathbf{R} = E(\boldsymbol{\varphi}\boldsymbol{\varphi}^T). Towards this end, we will first compute the
principal eigenvector by updating the vector W_1 using the proposed fixed-point PCA algorithm outlined in Appendix A. Accordingly, the update rule for the vector W_1 is

\mathbf{W}_1(k) = \frac{\mathbf{W}_1(k-1) + \mathbf{P}(k)}{1 + Q(k)} \qquad (B.2)

where \mathbf{P}(k) = \frac{k-1}{k}\,\mathbf{P}(k-1) + \frac{1}{k}\,\boldsymbol{\varphi}(k)\,y_1(k) and Q(k) = \frac{k-1}{k}\,Q(k-1) + \frac{1}{k}\,y_1^2(k). For the TLS solution, we need the minor eigenvector. Deflation is the standard procedure to estimate the minor components. However, since we are interested in the component corresponding to the smallest eigenvalue, it would be undesirable to use deflation. We adopt a simple trick to estimate the minor eigenvector using the estimate of the maximum eigenvector [85] that we obtain from (B.2). Let \hat{\mathbf{R}} = \lambda_{\max}\mathbf{I} - \mathbf{R}, where \mathbf{R} = E(\boldsymbol{\varphi}\boldsymbol{\varphi}^T) as before and λ_max is the estimate of the maximum eigenvalue of R. Note that R̂ is always positive semidefinite, and the maximum eigenvalue of R̂ is λ_max − λ_min, where λ_min is the minimum eigenvalue of R. Hence, by estimating the maximum eigenvector of R̂, we can obtain the minimum eigenvector of R. We can now use the fixed-point PCA rule with R replaced by R̂. Note that Q(k) above estimates W_1^T R W_1, which converges to λ_max, so it serves as the running maximum-eigenvalue estimate. With this modification, the update rule for W_2 is given by

\mathbf{W}_2(k+1) = \frac{\mathbf{W}_2(k) + Q(k)\,\mathbf{W}_2(k) - \hat{\mathbf{P}}(k)}{1 + Q(k) - \hat{Q}(k)} \qquad (B.3)

where \hat{\mathbf{P}}(k) = \frac{k-1}{k}\,\hat{\mathbf{P}}(k-1) + \frac{1}{k}\,\boldsymbol{\varphi}(k)\,y_2(k), y_2(k) = \mathbf{W}_2^T(k)\boldsymbol{\varphi}(k), and \hat{Q}(k) = \frac{k-1}{k}\,\hat{Q}(k-1) + \frac{1}{k}\,y_2^2(k). The definitions of P(k) and Q(k) are still the same as before. A sketch of this minor-eigenvector rule is given below.
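A compact sketch of the O(N) minor-eigenvector rule (B.2)-(B.3); the coupling of Q(k) as the running λ_max estimate follows the construction above, and the data are illustrative assumptions (rows of Phi are the augmented vectors).

```python
import numpy as np

def tls_minor_on(Phi):
    """O(N) per-sample TLS: W1 tracks the principal eigenvector (B.2),
    W2 tracks the minor eigenvector via R_hat = lam_max*I - R (B.3)."""
    _, N = Phi.shape
    W1, W2 = np.random.randn(N), np.random.randn(N)
    P = np.zeros(N); Q = 0.0      # estimates of R W1 and W1^T R W1 (~ lam_max)
    Ph = np.zeros(N); Qh = 0.0    # estimates of R W2 and W2^T R W2
    for k, phi in enumerate(Phi, start=1):
        y1, y2 = W1 @ phi, W2 @ phi
        P  = (k - 1) / k * P  + phi * y1 / k
        Q  = (k - 1) / k * Q  + y1 * y1 / k
        Ph = (k - 1) / k * Ph + phi * y2 / k
        Qh = (k - 1) / k * Qh + y2 * y2 / k
        W1 = (W1 + P) / (1.0 + Q)                    # rule (B.2)
        W2 = (W2 + Q * W2 - Ph) / (1.0 + Q - Qh)     # rule (B.3)
    return -W2[:-1] / W2[-1]      # TLS parameters, normalization as in (B.2)/(B.7)
```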
Note that all the elements except W_2(k) can be computed locally. We will now prove that the algorithm in (B.3) converges to the minimum eigenvector of R. Making the basic assumptions of stochastic approximation theory as before, we can write the ODE corresponding to the update equation in (B.3) as

\frac{d\mathbf{W}_2(t)}{dt} = \frac{\mathbf{W}_2(t) + \left(\mathbf{W}_1^T(t)\mathbf{R}\mathbf{W}_1(t)\right)\mathbf{W}_2(t) - \mathbf{R}\mathbf{W}_2(t)}{1 + \mathbf{W}_1^T(t)\mathbf{R}\mathbf{W}_1(t) - \mathbf{W}_2^T(t)\mathbf{R}\mathbf{W}_2(t)} - \mathbf{W}_2(t) \qquad (B.4)

Theorem 1: The ODE in (B.4) has a single stable stationary point W_2 = q_p, where q_p is the eigenvector corresponding to the smallest eigenvalue of R, with all other points locally unstable.

Proof: See [86].

We will now present the alternative O(N²) algorithm. As mentioned before, this algorithm requires a single-layer linear network with one output. The input dimensionality remains the same. In order to estimate the minor eigenvector, we can utilize the fact that the eigenvector corresponding to the maximum eigenvalue of R^{-1} is the minor eigenvector of R (the maximum eigenvalue of R^{-1} being the reciprocal of the minimum eigenvalue of R). Again, using the fixed-point algorithm for the principal component and utilizing the matrix inversion lemma [8], we get

\mathbf{W}_2(k+1) = \frac{\mathbf{R}^{-1}(k+1)\,\mathbf{W}_2(k)}{\mathbf{W}_2^T(k)\,\mathbf{R}^{-1}(k+1)\,\mathbf{W}_2(k)} \qquad (B.5)

\mathbf{R}^{-1}(k+1) = \mathbf{R}^{-1}(k) - \frac{\mathbf{R}^{-1}(k)\,\boldsymbol{\varphi}(k)\,\boldsymbol{\varphi}^T(k)\,\mathbf{R}^{-1}(k)}{1 + \boldsymbol{\varphi}^T(k)\,\mathbf{R}^{-1}(k)\,\boldsymbol{\varphi}(k)} \qquad (B.6)

It is easy to verify that W_2(k+1) converges to the minimum eigenvector of R asymptotically, with the assumption that R^{-1}(k+1) ≈ R^{-1}(k) as k → ∞. The algorithm in (B.5) can be very fast when the eigenspread is very high. However, note that the
complexity of the algorithm is O(N²).

We will now briefly summarize both algorithms for solving the TLS problem using the minor eigenvector:

1. For all k, with random initial conditions for W_1(0) and W_2(0), build the augmented data vector \boldsymbol{\varphi}(k) = [\mathbf{A}(k);\, d(k)]^T.
2. For the O(N) algorithm, compute y_1(k) = \mathbf{W}_1^T(k)\boldsymbol{\varphi}(k) and y_2(k) = \mathbf{W}_2^T(k)\boldsymbol{\varphi}(k), and update P(k), Q(k), P̂(k), and Q̂(k).
3. For the O(N²) algorithm, update R^{-1} using equation (B.6).
4. For the O(N) algorithm, update W_1(k) and W_2(k) using (B.2) and (B.3).
5. For the O(N²) algorithm, update W_2(k) using (B.5).
6. Compute the TLS solution given by

\mathbf{W}_{TLS} = -\frac{\mathbf{W}_2}{W_{2p}} \qquad (B.7)

where W_{2p} denotes the last component of the vector W_2. The last component of W_TLS will be −1 and is discarded; thus W_TLS will be of dimension n and not n+1. A compact sketch of the O(N²) variant is given below.
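A minimal sketch of the O(N²) variant, combining the RLS-style inverse update (B.6) with the fixed-point iteration (B.5); the initialization R^{-1}(0) = δI is a standard RLS-style assumption, not specified in the text.

```python
import numpy as np

def tls_minor_on2(Phi, delta=1e2):
    """O(N^2) TLS: track the minor eigenvector of R = E[phi phi^T] via R^{-1}."""
    _, N = Phi.shape
    Rinv = delta * np.eye(N)          # assumed initialization of R^{-1}(0)
    W2 = np.random.randn(N)
    for phi in Phi:
        g = Rinv @ phi
        Rinv -= np.outer(g, g) / (1.0 + phi @ g)      # inversion lemma (B.6)
        W2 = Rinv @ W2 / (W2 @ Rinv @ W2)             # fixed-point step (B.5)
    W2 /= np.linalg.norm(W2)
    return -W2[:-1] / W2[-1]          # TLS parameters, cf. (B.7)
```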

Simulation Results with TLS

Simulation 1: Noise-Free FIR Filter Modeling

We generated a 200-tap FIR filter with random filter coefficients. The input was a 1000-sample random signal. Both the input and the desired response (which is just the filtered input) were noise-free. Thus, the minimum eigenvalue of the composite signal φ(k) must be zero. Figure B-1 shows the plots of the estimation of the minimum eigenvalue using the O(N²) algorithm, along with the direction cosine defined as

DC(k) = \frac{\mathbf{W}_2^T(k)\,\mathbf{V}_p}{\|\mathbf{W}_2(k)\|\,\|\mathbf{V}_p\|} \qquad (B.8)

where V_p is the true minor eigenvector. Note that the algorithm converges in just 300 on-line iterations. This high convergence speed can be attributed to the fact that the algorithm performs better when the eigenspread is very high (infinity, in this case). However, the computational load becomes very significant with 201 dimensions in the composite signal. For comparisons with other methods, see [86].

Figure B-1. Estimation of minor eigenvector (minimum-eigenvalue estimate and direction cosine versus the number of on-line iterations).

Simulation 2: FIR Filter Modeling with Noise

As stated before, the advantage of TLS lies in the fact that it can provide unbiased estimates when the observations are noisy. In this simulation, we will experimentally verify this fact and show that RLS can at best provide a biased estimate of the parameters. We will consider an FIR filter with 50 taps and use the O(N) rule to estimate the minimum eigenvector. The input is a colored noise signal with 15000 samples. The desired signal is the filtered input signal. Uncorrelated random noise with unit variance is added to the input and desired signals. Note that in this case, the minimum eigenvalue of
the composite signal will be equal to the variance of the noise, i.e., 1. Figure B-2 shows the convergence of the minimum eigenvalue. The algorithm converges in less than 15,000 iterations, which is one complete presentation of the input data. Figure B-3 shows the comparison between the estimated filter coefficients and the true filter coefficients using the O(N) rule, and Figure B-4 depicts the performance of RLS. It is clearly seen that in the presence of noise, RLS estimates are always biased, and this bias can reach unacceptable levels when the noise variance increases.

Figure B-2. Minimum eigenvalue estimation.

Figure B-3. Comparison between the estimated and true filter coefficients using TLS.
Figure B-4. Comparison between the estimated and true filter coefficients using RLS.

We can extend the application of the TLS technique to model IIR filters as well. In this case, we can use the past values of the desired responses along with the input to form the composite signal. The architecture will remain the same as that of the FIR case.

APPENDIX C
ALGORITHMS FOR GENERALIZED EIGENDECOMPOSITION

Introduction

In Appendix A, we discussed Principal Components Analysis (PCA) as an optimal adaptive matrix transformation. PCA, by nature, involves the estimation of optimal transformations (also referred to as projections and rotations) derived to span the space of the input data. However, there are several applications where we are forced to find projections in joint spaces. One simple application is the classical pattern classification problem with two classes, where the goal is to determine the best discriminant in the joint space that separates the two classes. This is well known in the pattern recognition literature as the Fisher discriminant [53,94], which brings us to the concept of generalized eigendecomposition (GED). Formally speaking, GED solves the generalized eigenvalue equation of the matrix pencil (A, B), which is given by \mathbf{A}\mathbf{V} = \mathbf{B}\mathbf{V}\boldsymbol{\Lambda} [8]. Note that when B is the identity matrix, the generalized eigenvalue problem boils down to the PCA problem. In the Fisher discriminant case, the matrices A and B are the between-class scatter and the within-class scatter, respectively. Like PCA, GED is an extremely useful statistical tool and has many applications, including feature extraction, pattern classification, signal estimation, and detection [53,55].

Review of Existing Learning Algorithms

Many analytical techniques have been developed in the linear algebra literature to compute the generalized eigenvectors [8]. These numerical techniques are computationally prohibitive, and moreover they require blocks of data. For engineering
applications, on-line, sample-by-sample algorithms are desired. The importance of the sample-by-sample methods is even more pronounced in environments where signal statistics change slowly over time, and hence tracking becomes a key issue. Only fast, on-line algorithms can adapt quickly to the changing environment, while block techniques lack this feature. Compared to PCA, there are fewer on-line algorithms for GED. Mao and Jain have proposed a two-step PCA approach to solve GED in [55]. They use the Rubner-Tavan model [57,58] for training the PCA blocks. Thus the convergence of their method depends solely on the convergence of the PCA algorithms. A similar approach is used in [71,74], but a faster PCA algorithm has been incorporated that drastically improves the performance. However, as mentioned before, this two-step PCA method may not be very suitable for real-world applications. Chatterjee et al. have proposed a gradient algorithm based on linear discriminant analysis (LDA) [69]. They propose an on-line algorithm for extracting the first generalized eigenvector and then use a deflation procedure for estimating the minor components. They prove convergence of the algorithm using stochastic approximation theory. However, the main drawback of their method is that the algorithm is based on simple gradient techniques, and this makes convergence dependent on step sizes that are difficult to set a priori. Xu et al. have developed an on-line and local algorithm for GED [95]. The rule for extracting the first generalized eigenvector is similar to the LDA algorithm in [69], but they use a lateral inhibition network similar to the APEX algorithm for PCA [59] for extracting the minor components. Although the problem formulation is novel, there is no rigorous proof of global convergence. A quasi-Newton type algorithm was proposed by Mathew et al. [96]. The computational complexity is quite high, but a pipelined architecture can be used to
reduce the complexity [96]. The method makes approximations in computing the Hessian required for Newton-type methods. Diamantaras et al. demonstrate an unsupervised neural model based on the APEX models for extracting the first generalized eigenvector only [97]. The update equations are quite complicated, given that the model extracts only the principal generalized eigenvector. Most of the above-mentioned algorithms are based on gradient methods, and they involve the selection of the right step sizes to ensure convergence. In general, the step sizes have an upper bound that is a function of the eigenvalues of the input data. This fact makes it very hard on many occasions to choose the proper step size. On the other hand, we can adopt better optimization procedures, but computational complexity is also a key issue. Motivated by the success of the fixed-point PCA algorithm discussed in Appendix A, we will derive a fixed-point GED algorithm along the same lines.

Fixed-Point Learning Algorithm for GED

From a mathematical perspective, generalized eigendecomposition involves solving the matrix equation \mathbf{R}_1\mathbf{W} = \mathbf{R}_2\mathbf{W}\boldsymbol{\Lambda}, where R_1 and R_2 are square matrices, W is the generalized eigenvector matrix, and Λ is the diagonal generalized eigenvalue matrix [8]. These are typically the full covariance matrices of zero-mean stationary random signals x_1(n) and x_2(n), respectively. For real symmetric and positive definite matrices, all the generalized eigenvectors are real and the corresponding generalized eigenvalues are positive. GED possesses some very interesting properties that can be exploited for various signal-processing applications. The generalized eigenvectors achieve simultaneous diagonalization of the matrices R_1 and R_2, as \mathbf{W}^T\mathbf{R}_1\mathbf{W} = \boldsymbol{\Lambda} and \mathbf{W}^T\mathbf{R}_2\mathbf{W} = \mathbf{I}. This property enables us to derive an iterative algorithm for GED using
two PCA steps, as mentioned in the previous section. Alternatively, GED is also referred to as Oriented PCA (OPCA) [59,97]. Accordingly, the generalized eigenvectors act as filters in the joint space of the two signals x_1(n) and x_2(n), minimizing the energy of one of the signals while maximizing the energy of the other. This property has been successfully applied to the problems of signal separation [98] and, more recently, to detecting transitions in time series [74]. The oriented energy concept comes from the fact that the generalized eigenvalues can be expressed as ratios of two energies. Equivalently, this means that any generalized eigenvector w that is a column of the matrix W is a stationary point of the function

J(\mathbf{w}) = \frac{\mathbf{w}^T\mathbf{R}_1\mathbf{w}}{\mathbf{w}^T\mathbf{R}_2\mathbf{w}} \qquad (C.1)

This is because

\frac{\partial J(\mathbf{w})}{\partial\mathbf{w}} = \frac{2\,\mathbf{R}_1\mathbf{w}\,(\mathbf{w}^T\mathbf{R}_2\mathbf{w}) - 2\,(\mathbf{w}^T\mathbf{R}_1\mathbf{w})\,\mathbf{R}_2\mathbf{w}}{(\mathbf{w}^T\mathbf{R}_2\mathbf{w})^2} = \mathbf{0} \;\Longrightarrow\; \mathbf{R}_1\mathbf{w} = \frac{\mathbf{w}^T\mathbf{R}_1\mathbf{w}}{\mathbf{w}^T\mathbf{R}_2\mathbf{w}}\,\mathbf{R}_2\mathbf{w}

This is nothing but the generalized eigenvalue equation, and the generalized eigenvalues are the values of (C.1) evaluated at the stationary points. Most of the gradient-based methods use equation (C.1) as the cost function and perform maximization with some constraints. It is easy to recognize that the well-known linear discriminant analysis (LDA) problem involves maximizing the ratio in (C.1), with R_1 the between-class covariance matrix and R_2 the within-class covariance matrix. We will now state our approach to estimate the generalized eigenvector corresponding to the largest generalized eigenvalue, hereafter referred to as the principal generalized eigenvector. Using (C.1), we can rewrite the GED equation as

\mathbf{R}_1\mathbf{w} = \frac{\mathbf{w}^T\mathbf{R}_1\mathbf{w}}{\mathbf{w}^T\mathbf{R}_2\mathbf{w}}\,\mathbf{R}_2\mathbf{w} \qquad (C.2)
If \mathbf{R}_2 = \mathbf{I}, then (C.2) reduces to the Rayleigh quotient, and the generalized eigenvalue problem degenerates to PCA. Left-multiplying (C.2) by \mathbf{R}_2^{-1} and rearranging the terms, we get

\mathbf{w} = \frac{\mathbf{w}^T\mathbf{R}_2\mathbf{w}}{\mathbf{w}^T\mathbf{R}_1\mathbf{w}}\,\mathbf{R}_2^{-1}\mathbf{R}_1\mathbf{w} \qquad (C.3)

Equation (C.3) is the basis of our iterative algorithm. Let the weight vector w(n−1) at iteration (n−1) be the estimate of the principal generalized eigenvector. Then, the estimate of the new weight vector at iteration n, according to (C.3), is

\mathbf{w}(n) = \frac{\mathbf{w}^T(n-1)\,\mathbf{R}_2(n)\,\mathbf{w}(n-1)}{\mathbf{w}^T(n-1)\,\mathbf{R}_1(n)\,\mathbf{w}(n-1)}\;\mathbf{R}_2^{-1}(n)\,\mathbf{R}_1(n)\,\mathbf{w}(n-1) \qquad (C.4)

We can observe that (C.4) tracks the GED equation at every time step. Fixed-point algorithms are known to be faster than gradient algorithms, but many fixed-point algorithms work in batch mode, which means that the weight update is done after a window of time [99]. This can be a potential drawback of fixed-point methods, but in our case we can easily transform the fixed-point update in (C.4) into a form that can be implemented online. To begin with, we need a matrix inversion operation for each update. By using the Sherman-Morrison-Woodbury matrix inversion lemma [8], we get

\mathbf{R}_2^{-1}(n) = \mathbf{R}_2^{-1}(n-1) - \frac{\mathbf{R}_2^{-1}(n-1)\,\mathbf{x}_2(n)\,\mathbf{x}_2^T(n)\,\mathbf{R}_2^{-1}(n-1)}{1 + \mathbf{x}_2^T(n)\,\mathbf{R}_2^{-1}(n-1)\,\mathbf{x}_2(n)} \qquad (C.5)

If we assume that w is the weight vector of a single-layer feedforward network, then define y_1(n) = \mathbf{w}^T(n-1)\,\mathbf{x}_1(n) and y_2(n) = \mathbf{w}^T(n-1)\,\mathbf{x}_2(n) as the outputs of the network for signals x_1(n) and x_2(n), respectively. With this definition, it is easy to show that

\mathbf{w}^T(n-1)\,\mathbf{R}_1(n)\,\mathbf{w}(n-1) = \frac{1}{n}\sum_{i=1}^{n} y_1^2(i), \qquad \mathbf{w}^T(n-1)\,\mathbf{R}_2(n)\,\mathbf{w}(n-1) = \frac{1}{n}\sum_{i=1}^{n} y_2^2(i)

This is true
in the stationary case, when sample-variance estimators can be used instead of the expectation operators (of course, assuming that the weights are changing slowly enough). However, for non-stationary signals, a simple forgetting factor can be included with a trivial change in the update equation. With these simplifications, we can write the modified update equation for the stationary case as

\mathbf{w}(n) = \frac{\sum_{i=1}^{n} y_2^2(i)}{\sum_{i=1}^{n} y_1^2(i)}\;\frac{1}{n}\,\mathbf{R}_2^{-1}(n)\sum_{i=1}^{n}\mathbf{x}_1(i)\,y_1(i), \qquad y_l(i) = \mathbf{w}^T(i-1)\,\mathbf{x}_l(i),\;\; l = 1,2 \qquad (C.6)

where R_2^{-1}(n) is estimated using (C.5). In order to implement the summations, we can use recursive estimators. We will now summarize the fixed-point algorithm below.

1. Initialize w(0) ∈ R^{n×1} to a random vector.
2. Initialize P(0) ∈ R^{n×1} to a vector with small random values.
3. Fill the matrix Q(0) ∈ R^{n×n} with small random values.
4. Initialize the scalar variables C_1(0) and C_2(0) to zero.
5. For j > 0, compute y_1(j) = w^T(j−1) x_1(j) and y_2(j) = w^T(j−1) x_2(j).
6. Update P as P(j) = ((j−1)/j) P(j−1) + (1/j) x_1(j) y_1(j).
7. Update Q as Q(j) = Q(j−1) − [Q(j−1) x_2(j) x_2^T(j) Q(j−1)] / [1 + x_2^T(j) Q(j−1) x_2(j)].
8. Update C_1, C_2 as C_i(j) = ((j−1)/j) C_i(j−1) + (1/j) y_i^2(j), i = 1, 2.
9. Update the weight vector as w(j) = (C_2(j)/C_1(j)) Q(j) P(j).
10. Normalize the weight vector.
11. Go back to step 5 and repeat until convergence is reached.

The above algorithm extracts the principal generalized eigenvector. A code sketch of these steps is given below.
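A direct transcription of steps 1-11 into NumPy; the initialization of Q(0) as a scaled identity (rather than "small random values") is an assumption made for numerical safety, and the data pair is illustrative.

```python
import numpy as np

def fixed_point_ged(X1, X2, delta=1e2):
    """Fixed-point GED (C.6): track the principal generalized eigenvector of the
    pencil (R1, R2), with R2^{-1} updated via the inversion lemma (C.5)."""
    _, N = X1.shape
    w = np.random.randn(N)
    P = np.zeros(N)                   # running mean of x1(j) y1(j)
    Q = delta * np.eye(N)             # assumed Q(0); tracks (sum x2 x2^T)^{-1}
    C1 = C2 = 0.0                     # running means of y1^2 and y2^2
    for j, (x1, x2) in enumerate(zip(X1, X2), start=1):
        y1, y2 = w @ x1, w @ x2
        P = (j - 1) / j * P + x1 * y1 / j
        g = Q @ x2
        Q -= np.outer(g, g) / (1.0 + x2 @ g)          # step 7, lemma (C.5)
        C1 = (j - 1) / j * C1 + y1 * y1 / j
        C2 = (j - 1) / j * C2 + y2 * y2 / j
        w = (C2 / C1) * (Q @ P)                        # step 9, update (C.6)
        w /= np.linalg.norm(w)                         # step 10
    return w
```

The result can be cross-checked against a batch solver such as scipy.linalg.eigh(R1, R2): its eigenvector for the largest generalized eigenvalue should align with the returned w up to sign.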
For the minor components, we will resort to the deflation technique. Consider the following pair of matrices:

\hat{\mathbf{R}}_1 = \left(\mathbf{I} - \frac{\mathbf{R}_1\mathbf{w}_1\mathbf{w}_1^T}{\mathbf{w}_1^T\mathbf{R}_1\mathbf{w}_1}\right)\mathbf{R}_1, \qquad \hat{\mathbf{R}}_2 = \mathbf{R}_2

where w_1 is the best estimate of the principal generalized eigenvector obtained using (C.6). For this pair of matrices, \hat{\mathbf{R}}_1\mathbf{w}_1 = \mathbf{0} and \hat{\mathbf{R}}_1\mathbf{w}_i = \lambda_i\,\hat{\mathbf{R}}_2\mathbf{w}_i for all i ≠ 1. The time index n is implicit and is omitted for convenience. It is easy to see that

\hat{\mathbf{R}}_1 = \mathbf{R}_1 - \frac{\mathbf{R}_1\mathbf{w}_1\mathbf{w}_1^T\mathbf{R}_1}{\mathbf{w}_1^T\mathbf{R}_1\mathbf{w}_1}

With y_1(n) = \mathbf{w}_1^T\mathbf{x}_1(n),

\hat{\mathbf{R}}_1 = E\left[\mathbf{x}_1\mathbf{x}_1^T\right] - \frac{\mathbf{R}_1\mathbf{w}_1\,E\left[y_1\,\mathbf{x}_1^T\right]}{\mathbf{w}_1^T\mathbf{R}_1\mathbf{w}_1} = E\left[\hat{\mathbf{x}}_1\mathbf{x}_1^T\right]

where \hat{\mathbf{x}}_1(n) is given by

\hat{\mathbf{x}}_1(n) = \mathbf{x}_1(n) - \frac{\sum_{i=1}^{n}\mathbf{x}_1(i)\,y_1(i)}{\sum_{i=1}^{n} y_1^2(i)}\;y_1(n) \qquad (C.7)

With the above deflation procedure, we can estimate the second generalized eigenvector using the same update rule in (C.6), with x_1(n) replaced by \hat{\mathbf{x}}_1(n); x_2(n) remains the same as before. At this point, we would like to stress that the deflation scheme does not require any further computations, as the summations \sum_{i=1}^{n}\mathbf{x}_1(i)y_1(i) and \sum_{i=1}^{n} y_1^2(i) are already pre-calculated for estimating the principal generalized eigenvector: they can be efficiently substituted by P(j) and C_1(j) mentioned in the summary of the algorithm. The fixed-point GED algorithm has many useful properties compared to other methods. The convergence of the algorithm is exponential, whereas the convergence of the on-line gradient methods is linear. This is generally true for most fixed-point algorithms [99]. Simulations have shown beyond doubt that the proposed algorithm has superior convergence speed when compared with other methods.
As we have seen before, gradient algorithms are dependent on step sizes, which results in non-robust performance. In contrast, the fixed-point algorithm does not require a step size for the updates (similar to the PCA algorithm in Appendix A). Moreover, like the gradient methods, the fixed-point algorithm has an online implementation that is computationally feasible. The computational complexity is O(N²), where N is the dimensionality of the data, which is comparable to the complexities of the algorithms in [55,69].

Mathematical Analysis

We will now investigate the convergence characteristics of the GED algorithm given by (C.6), using the stochastic approximation techniques cited before in the analysis of the PCA algorithms. Without dwelling too much on the methodology of stochastic approximation tools, we directly apply them to our algorithm. We will state some assumptions similar to the ones used in the PCA algorithm analysis.

1. The inputs x_1(n), x_2(n) are at least wide-sense stationary (WSS) with positive definite autocorrelation matrices R_1, R_2.
2. The sequence of weight vectors w(n) is bounded with probability 1.
3. The update function h(w(n), x_1(n), x_2(n)) is continuously differentiable with respect to w, x_1, and x_2, and its derivatives are bounded in time.
4. Even if h(w(n), x_1(n), x_2(n)) has some discontinuities, a mean vector field h̄(w) = lim_{n→∞} E[h(w(n), x_1(n), x_2(n))] exists and is regular.
5. The initial weights are chosen such that w^T(0) q_1 ≠ 0, where q_1 is the principal generalized eigenvector.

The second assumption is satisfied by the fact that we force the norm of the weight vector to unity. Thus, when the weight vector is bounded, the updates are also bounded in
time. Since we assume that the matrices R_1, R_2 are full rank, the inverses exist. By satisfying the first two assumptions, it is easy to see that the derivatives of the update function are also bounded in time. Under these conditions, we enunciate the following theorem.

Theorem 1: There is a locally stable solution, in the Lyapunov sense, to the ODE. In other words, the ODE has an attractor w* whose domain of attraction is D(w*).

Proof: The update function h̄(w) = lim_{n→∞} E[h(w(n), x_1(n), x_2(n))] is given by

\frac{d\mathbf{w}(t)}{dt} = \mu\left(\frac{\mathbf{w}^T(t)\,\mathbf{R}_2\,\mathbf{w}(t)}{\mathbf{w}^T(t)\,\mathbf{R}_1\,\mathbf{w}(t)}\,\mathbf{R}_2^{-1}\mathbf{R}_1\,\mathbf{w}(t) - \mathbf{w}(t)\right) \qquad (C.8)

where μ is the typical step-size parameter, which is set to unity in this case. We want to find the stable stationary points of this ODE. Let w(t) be expanded in terms of the complete set of m generalized eigenvectors of the pencil (R_1, R_2) as⁴

\mathbf{w}(t) = \sum_{k=1}^{m}\alpha_k(t)\,\mathbf{q}_k \qquad (C.9)

where α_k(t) is a time-varying projection and q_k is the generalized eigenvector corresponding to the eigenvalue λ_k. Using the simultaneous diagonalization property of the generalized eigenvectors, we can rewrite (C.8) using (C.9) as

\frac{d\alpha_k(t)}{dt} = \frac{\sum_{l=1}^{m}\alpha_l^2(t)}{\sum_{l=1}^{m}\lambda_l\,\alpha_l^2(t)}\,\lambda_k\,\alpha_k(t) - \alpha_k(t) \qquad (C.10)

We will analyze the dynamics of the non-linear differential equation in (C.10) separately.

⁴ Any vector can be expressed as a linear combination of the complete set of basis vectors spanning the vector space. In this case, since w(t) is operating in the joint space, it can be represented as a linear combination of the generalized eigenvectors spanning the space. Also note that the generalized eigenvectors are the principal components of the matrix R_2^{-1}R_1.
The goal is to show that the time-varying projections corresponding to the modes associated with all eigenvectors except the principal eigenvector decay to zero asymptotically. For k ≠ 1, we define θ_k(t) = α_k(t)/α_1(t). Therefore, by simple algebra,

\frac{d\theta_k(t)}{dt} = \frac{1}{\alpha_1(t)}\frac{d\alpha_k(t)}{dt} - \frac{\alpha_k(t)}{\alpha_1^2(t)}\frac{d\alpha_1(t)}{dt}

which can be further simplified to

\frac{d\theta_k(t)}{dt} = \frac{\sum_{l=1}^{m}\alpha_l^2(t)}{\sum_{l=1}^{m}\lambda_l\,\alpha_l^2(t)}\,(\lambda_k-\lambda_1)\,\theta_k(t) \qquad (C.11)

\frac{d\theta_k(t)}{dt} = f(t)\,(\lambda_k-\lambda_1)\,\theta_k(t), \qquad f(t) = \frac{\sum_{l=1}^{m}\alpha_l^2(t)}{\sum_{l=1}^{m}\lambda_l\,\alpha_l^2(t)} \qquad (C.12)

Note that f(t) > 0 for all t. Therefore, it can be easily shown using Lyapunov stability theorems that, with λ_1 > λ_2 ≥ λ_3 ≥ ... ≥ λ_m > 0, θ_k(t) → 0 as t → ∞ for k ≠ 1, and w(t) → c q_1, where c is an arbitrary constant. As we are hard-limiting the weights to unit norm, eventually w(t) → q_1. Thus, the principal generalized eigenvector is the stable stationary point of the ODE. We will now prove that all other stationary points, i.e., the m−1 minor generalized eigenvectors, are saddle points. Linearizing the ODE in (C.8) in the vicinity of a stationary point, we can compute the linearization matrix A as

\mathbf{A} = \left.\frac{\partial\mathbf{h}(\mathbf{w}(t))}{\partial\mathbf{w}(t)}\right|_{\mathbf{w}(t)=\mathbf{q}_k} = \frac{1}{\lambda_k}\,\mathbf{R}_2^{-1}\mathbf{R}_1 - \mathbf{I}

The eigenvalues of the matrix A are given by λ_A = λ_m/λ_k − 1, m = 1, ..., M. It is easy to see that only for k = 1 are all the eigenvalues, which are analogous to the s-poles, within the LHP, except the first pole, which is at zero. All other stationary points have one or more poles in the RHP, and hence they are saddle points. This means that near convergence, if the weight vector reaches any of the m−1 saddle points, it will diverge from that point and converge only to the stable stationary
We would like to mention that the constant $\eta$ in the ODE (C.8) is unity for our algorithm, which makes this a constant-gain algorithm. This constant essentially determines how well the discrete-time update equation approximates the ODE. When we make the gain unity, we might be introducing discretization errors in our analysis, which can lead to ambiguous results. This has been reported earlier in the PCA analysis, and the trick is to analyze the discrete-time behavior. The $z$-poles can be extracted from the $s$-poles derived above using the transformation that we used for converting the update equation to the ODE. In this case, the transformation is simply given by $z = 1 + \eta s$ with $\eta = 1$. The corresponding $z$-poles for the stable stationary point $\mathbf{q}_1$ are given by $\lambda_m/\lambda_1$ for all values of $m$. Thus, we can easily deduce that the first pole is on the unit circle at $z = 1$ and all other $z$-poles reside inside the unit circle. Because of this pole at $z = 1$, the weight vector converges to a scaled value of the principal generalized eigenvector, as shown before. A simple normalization will give us the exact result. Now, it suffices to say that even with the constant unity gain (step-size), the fixed-point update converges to the exact solution [75,76]. A smaller value of $\eta$ can be used, and this would tie the convergence of the ODE more strongly to that of the discrete-time update equation, but it reduces the speed of convergence.

Theorem 2: The weight vector $\mathbf{w}(n)$ enters a compact subset $M$ of the basin of attraction $D(\mathbf{w}^*)$ infinitely often, with probability 1.

Proof: The domain of attraction $D(\mathbf{w}^*)$ includes all vectors with bounded norm. Also, the initial weights are chosen such that $\mathbf{w}^T(0)\mathbf{q}_1 \neq 0$, where $\mathbf{q}_1$ is the principal generalized eigenvector (assumption 5). Let $M$ be a compact subset defined by the set of


vectors with norm less than or equal to a finite constant. We are forcing the norm to be unity after every update. Thus, the weight vector $\mathbf{w}(n)$ will always lie inside the compact subset $M$.

From Theorems 1 and 2, we can deduce from the theory of stochastic approximation for adaptive algorithms with constant gains [80] that

$$\lim_{t \to \infty} \sup\, P\left(\left\|\mathbf{w}(t) - \mathbf{q}_1\right\| \ge \varepsilon\right) \le C$$

where $\varepsilon > 0$ and $C$ is a small constant that becomes zero as $\eta \to 0$.

The proposed GED algorithm has been successfully used to design optimum linear multiuser detectors [100-104] for Direct-Sequence (DS) CDMA systems based on the receiver design proposed by Wong et al. [76,104,105]. The same algorithm can easily be used to solve the Extended TLS problem discussed in Chapter 1. More details on the algorithm and the simulation results can be found in [76].


APPENDIX D
SOME DERIVATIONS FOR THE NOISY INPUT CASE

Consider the matrices $\mathbf{R}$, $\mathbf{S}$, $\mathbf{P}$, and $\mathbf{Q}$ estimated from noisy data, where $\tilde{\mathbf{x}}(n) = \mathbf{x}(n) + \mathbf{v}(n)$ is the noisy input vector, $\tilde{d}(n) = d(n) + u(n)$ is the noisy desired signal, and the white noises $\mathbf{v}(n)$ and $u(n)$ are uncorrelated with the signals and with each other. For $\mathbf{R}$, we write

$$\tilde{\mathbf{R}} = E[\tilde{\mathbf{x}}(n)\tilde{\mathbf{x}}^T(n)] = E\left[(\mathbf{x}(n)+\mathbf{v}(n))(\mathbf{x}(n)+\mathbf{v}(n))^T\right] = E[\mathbf{x}(n)\mathbf{x}^T(n)] + E[\mathbf{v}(n)\mathbf{v}^T(n)] = \mathbf{R} + \mathbf{V} \tag{D.1}$$

Similarly, for the $\mathbf{S}$, $\mathbf{P}$ and $\mathbf{Q}$ matrices, we obtain

$$\tilde{\mathbf{S}} = E\left[(\tilde{\mathbf{x}}(n)-\tilde{\mathbf{x}}(n-L))(\tilde{\mathbf{x}}(n)-\tilde{\mathbf{x}}(n-L))^T\right] = \mathbf{S} + 2\mathbf{V} - (\mathbf{V}_L + \mathbf{V}_L^T) = \mathbf{S} + 2\mathbf{V} \tag{D.2}$$

since the lag-$L$ noise autocorrelation $\mathbf{V}_L = E[\mathbf{v}(n)\mathbf{v}^T(n-L)]$ vanishes for white noise,

$$\tilde{\mathbf{P}} = E[\tilde{\mathbf{x}}(n)\tilde{d}(n)] = E\left[(\mathbf{x}(n)+\mathbf{v}(n))(d(n)+u(n))\right] = \mathbf{P} \tag{D.3}$$

$$\tilde{\mathbf{Q}} = E\left[(\tilde{\mathbf{x}}(n)-\tilde{\mathbf{x}}(n-L))(\tilde{d}(n)-\tilde{d}(n-L))\right] = \mathbf{Q} \tag{D.4}$$

where the cross terms in (D.3) and (D.4) vanish because $\mathbf{v}(n)$ and $u(n)$ are uncorrelated with the signals and with each other.
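The identities (D.1) and (D.2) are easy to verify numerically. The sketch below is a minimal illustration (the coloring filter, noise variance, and lag are arbitrary, and $L \ge m$ is assumed so that the lag-$L$ noise autocorrelation of the tap-delay noise vector vanishes); the same experiment with a desired signal confirms $\tilde{\mathbf{P}} = \mathbf{P}$ and $\tilde{\mathbf{Q}} = \mathbf{Q}$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, L, sig2 = 400_000, 3, 3, 0.5      # L >= m so noise decorrelates across the lag

s  = rng.standard_normal(N + 8)
x  = np.convolve(s, [1.0, 0.7, 0.2])[:N]              # colored clean signal x(n)
xt = x + np.sqrt(sig2) * rng.standard_normal(N)       # noisy signal x~(n) = x(n) + v(n)

def taps(sig):  # tap-delay vectors [sig(n), sig(n-1), ..., sig(n-m+1)]
    return np.stack([sig[m - 1 - i : len(sig) - i] for i in range(m)], axis=1)

def corr(A):    # sample autocorrelation matrix E[a(n) a(n)^T]
    return A.T @ A / len(A)

X, Xt = taps(x), taps(xt)
R, Rt = corr(X), corr(Xt)
S, St = corr(X[L:] - X[:-L]), corr(Xt[L:] - Xt[:-L])  # lag-L difference statistics

print(np.abs(Rt - (R + sig2 * np.eye(m))).max())      # (D.1): ~0 up to sampling error
print(np.abs(St - (S + 2 * sig2 * np.eye(m))).max())  # (D.2): ~0 up to sampling error
```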


APPENDIX E
ORTHOGONALITY OF ERROR TO INPUT

Recall that the optimal solution of AEC satisfies equation (2.9), which is equivalently

$$E\left[e_k\left((1+2\beta)\,\mathbf{x}_k - \beta(\mathbf{x}_{k+L} + \mathbf{x}_{k-L})\right)\right] = \mathbf{0} \tag{E.1}$$

Rearranging the terms in (E.1), we obtain

$$E\left[e_k\left(\mathbf{x}_k - \beta(\mathbf{x}_{k+L} - 2\mathbf{x}_k + \mathbf{x}_{k-L})\right)\right] = \mathbf{0} \tag{E.2}$$

Notice that $(\mathbf{x}_{k+L} - 2\mathbf{x}_k + \mathbf{x}_{k-L})$ forms an estimate of the acceleration of the input vector $\mathbf{x}_k$. Specifically, for $\beta = -1/2$ the term that multiplies $e_k$ becomes a single-step prediction for the input vector $\mathbf{x}_k$ (assuming zero velocity and constant acceleration), according to Newtonian mechanics. Thus, the optimal solution of EWC tries to decorrelate the error signal from the predicted next input vector.
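A two-line check of the algebra in (E.2): for $\beta = -1/2$ the multiplier of $e_k$ collapses to $(\mathbf{x}_{k+L} + \mathbf{x}_{k-L})/2$, the Newtonian prediction described above (the signal used below is an arbitrary illustration).

```python
import numpy as np

beta, L = -0.5, 2
x = np.cumsum(np.random.default_rng(2).standard_normal(1000))  # smooth-ish input track

xk, xm, xp = x[L:-L], x[:-2 * L], x[2 * L:]     # x_k, x_{k-L}, x_{k+L}
mult = xk - beta * (xp - 2 * xk + xm)           # the term multiplying e_k in (E.2)
print(np.allclose(mult, 0.5 * (xp + xm)))       # True: Newtonian form for beta = -1/2
```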


APPENDIX F
AEC AND ERROR ENTROPY MAXIMIZATION

This appendix aims to motivate an understanding of the relationship between entropy and sample differences. In general, the parametric family describing the error probability density function (pdf) in supervised learning is not analytically available. In such circumstances, non-parametric approaches such as Parzen windowing [115] could be employed. Given the i.i.d. samples $\{e(1), \dots, e(N)\}$ of a random variable $e$, the Parzen window estimate for the underlying pdf $f_e(\cdot)$ is obtained by

$$\hat{f}_e(x) = \frac{1}{N}\sum_{i=1}^{N} \kappa_\sigma(x - e(i)) \tag{F.1}$$

where $\kappa_\sigma(\cdot)$ is the kernel function, which itself is a pdf, and $\sigma$ is the kernel size that controls the width of each window. Typically, Gaussian kernels are preferred, but other kernel functions like the Cauchy density [114] or the members of the generalized Gaussian family can be employed. Shannon's entropy for a random variable $e$ with pdf $f_e(\cdot)$ is defined as

$$H(e) = -\int f_e(x)\log f_e(x)\,dx = -E[\log f_e(e)] \tag{F.2}$$

Given i.i.d. samples, this entropy could be estimated [116] using

$$\hat{H}(e) = -\frac{1}{N}\sum_{j=1}^{N}\log\left(\frac{1}{N}\sum_{i=1}^{N}\kappa_\sigma(e(j) - e(i))\right) \tag{F.3}$$

This estimator uses the sample-mean approximation for the expected value and the Parzen window estimator for the pdf. Viola proposed a similar entropy estimator, in


which he suggested dividing the samples into two subsets: one for estimating the pdf, the other for evaluating the sample mean [117]. In order to approximate a stochastic entropy estimator, we approximate the expectation by evaluating the argument at the most recent sample, $e_k$. In order to estimate the pdf, we use the $L$ previous samples. The stochastic entropy estimator then becomes

$$H(e) = -\log\left(\frac{1}{L}\sum_{i=1}^{L}\kappa_\sigma(e(k) - e(k-i))\right) \tag{F.4}$$

For supervised training of an ADALINE (or an FIR filter) with weight vector $\mathbf{w} \in \mathbb{R}^m$, given the input (vector)-desired training sequence $(\mathbf{x}(n), d(n))$, where $\mathbf{x}(n) \in \mathbb{R}^m$ and $d(n) \in \mathbb{R}$, the instantaneous error is given by $e(n) = d(n) - \mathbf{w}^T(n)\mathbf{x}(n)$. The stochastic gradient of the error entropy with respect to the weights becomes

$$\frac{\partial H}{\partial \mathbf{w}} = \frac{\sum_{i=1}^{L}\kappa_\sigma'\left(e(n)-e(n-i)\right)\left(\mathbf{x}(n)-\mathbf{x}(n-i)\right)}{\sum_{i=1}^{L}\kappa_\sigma\left(e(n)-e(n-i)\right)} \tag{F.5}$$

where $e(n-i) = d(n-i) - \mathbf{w}^T(n)\mathbf{x}(n-i)$ is also evaluated using the same weight vector as $e(n)$ [116]. For the specific choice of a single error sample $e(k-L)$ for pdf estimation and a Gaussian kernel function, (F.5) reduces to

$$\frac{\partial H}{\partial \mathbf{w}} = -\frac{\left(e(n)-e(n-L)\right)\left(\mathbf{x}(n)-\mathbf{x}(n-L)\right)}{\sigma^2} \tag{F.6}$$

We easily notice that the expression in (F.6) is also a stochastic gradient for the cost function $J(\mathbf{w}) = E\left[\left(e(n)-e(n-L)\right)^2\right]/(2\sigma^2)$, which is essentially a scaled form of the second term in the AEC.
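A minimal sketch of the stochastic entropy gradient (F.5) with a Gaussian kernel, verifying that the single-sample case collapses to (F.6); the function names and test data below are illustrative assumptions.

```python
import numpy as np

def gauss(u, sigma):
    """Gaussian kernel (itself a pdf), as preferred in the text."""
    return np.exp(-u**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def dgauss(u, sigma):
    """Derivative of the Gaussian kernel."""
    return -u / sigma**2 * gauss(u, sigma)

def entropy_gradient(e, X, n, L, sigma):
    """Stochastic gradient (F.5) at time n: e[k] are errors, X[k] input vectors."""
    idx = n - np.arange(1, L + 1)
    de = e[n] - e[idx]                  # e(n) - e(n-i)
    dX = X[n] - X[idx]                  # x(n) - x(n-i)
    return dgauss(de, sigma) @ dX / np.sum(gauss(de, sigma))

# with a single past sample and a Gaussian kernel this collapses to (F.6)
rng = np.random.default_rng(5)
e, X = rng.standard_normal(10), rng.standard_normal((10, 3))
n, sigma = 9, 1.5
g = entropy_gradient(e, X, n, 1, sigma)
f6 = -(e[n] - e[n - 1]) * (X[n] - X[n - 1]) / sigma**2
print(np.allclose(g, f6))               # True
```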


APPENDIX G
PROOF OF CONVERGENCE OF ERROR VECTOR NORM IN AEC-LMS

Let $\boldsymbol{\varepsilon}_k = \mathbf{w}_T - \mathbf{w}_k$ denote the weight error vector, and let $\dot{e}_k = e_k - e_{k-L}$ and $\dot{\mathbf{x}}_k = \mathbf{x}_k - \mathbf{x}_{k-L}$. The dynamics of the error vector norm are given by

$$\|\boldsymbol{\varepsilon}_{k+1}\|^2 = \|\boldsymbol{\varepsilon}_k\|^2 - 2\eta\,\mathrm{sign}\!\left(e_k^2 + \beta\dot{e}_k^2\right)\boldsymbol{\varepsilon}_k^T\!\left(e_k\mathbf{x}_k + \beta\dot{e}_k\dot{\mathbf{x}}_k\right) + \eta^2\left\|e_k\mathbf{x}_k + \beta\dot{e}_k\dot{\mathbf{x}}_k\right\|^2 \tag{G.1}$$

Further, since $e_k = \boldsymbol{\varepsilon}_k^T\mathbf{x}_k$ and $\dot{e}_k = \boldsymbol{\varepsilon}_k^T\dot{\mathbf{x}}_k$, we have

$$\|\boldsymbol{\varepsilon}_{k+1}\|^2 = \|\boldsymbol{\varepsilon}_k\|^2 - 2\eta\left|e_k^2 + \beta\dot{e}_k^2\right| + \eta^2\left\|e_k\mathbf{x}_k + \beta\dot{e}_k\dot{\mathbf{x}}_k\right\|^2 \tag{G.2}$$

Define the following term:

$$\xi_k = 2\eta\left|e_k^2 + \beta\dot{e}_k^2\right| - \eta^2\left\|e_k\mathbf{x}_k + \beta\dot{e}_k\dot{\mathbf{x}}_k\right\|^2 \tag{G.3}$$

If the (positive) step-size upper bound in equation (4.13) is satisfied, then $\xi_k \ge 0$ for all $k$. Therefore, equation (G.3) reduces to the inequality

$$\|\boldsymbol{\varepsilon}_{k+1}\|^2 \le \|\boldsymbol{\varepsilon}_k\|^2 \tag{G.4}$$

Iterating (G.4) from $k = 0$, we get $\|\boldsymbol{\varepsilon}_k\|^2 = \|\boldsymbol{\varepsilon}_0\|^2 - \sum_{t=1}^{k}\xi_t$. In the limit $k \to \infty$, it is easy to see that $\sum_{t=1}^{\infty}\xi_t \le \|\boldsymbol{\varepsilon}_0\|^2$, which implies that $\lim_{t\to\infty}\left|e_t^2 + \beta\dot{e}_t^2\right| = 0$, as the summation of the error terms must converge to a finite value bounded by $\|\boldsymbol{\varepsilon}_0\|^2$. The instantaneous cost $\left|e_t^2 + \beta\dot{e}_t^2\right|$ becomes zero only when the weights converge to the true weights $\mathbf{w}_T$ ($\|\boldsymbol{\varepsilon}\|^2 = 0$). Also note that the gradient becomes zero at this point.
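The sign update analyzed above is easy to simulate. The sketch below uses noise-free data and an illustrative step size chosen small enough that the bound of equation (4.13) holds in practice, and shows the decay of the error-vector norm.

```python
import numpy as np

rng = np.random.default_rng(3)
m, L, beta, eta = 4, 1, -0.5, 5e-4
wT = rng.standard_normal(m)              # true weight vector
x  = rng.standard_normal((50_000, m))    # noise-free inputs
d  = x @ wT                              # noise-free desired signal

w, norms = np.zeros(m), []
for k in range(L, len(x)):
    e  = d[k] - w @ x[k]                 # e_k
    de = e - (d[k - L] - w @ x[k - L])   # e_k - e_{k-L}
    dx = x[k] - x[k - L]                 # x_k - x_{k-L}
    J  = e**2 + beta * de**2             # instantaneous AEC cost
    w += eta * np.sign(J) * (e * x[k] + beta * de * dx)  # AEC-LMS sign update of (G.1)
    norms.append(np.sum((wT - w) ** 2))

print(norms[0], norms[-1])               # ||eps_k||^2 decays toward zero
```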


LIST OF REFERENCES

1. Hoffman, K., Kunze, R. Linear Algebra. Prentice-Hall, New Delhi, India, 1996.
2. Principe, J.C., Euliano, N., Lefebvre, C. Neural and Adaptive Systems: Fundamentals Through Simulations. John Wiley, New York, 2000.
3. Hestenes, M.R. Optimization Theory. John Wiley, New York, 1975.
4. Scharf, L.L. Statistical Signal Processing. Addison-Wesley, Boston, MA, 1991.
5. Orfanidis, S.J. Optimum Signal Processing: An Introduction. McGraw-Hill, Singapore, 1990.
6. Wiener, N. Extrapolation, Interpolation, and Smoothing of Stationary Time Series with Engineering Applications. MIT Press, Cambridge, MA, 1949.
7. Meyer, C.D. Matrix Analysis and Applied Linear Algebra. SIAM, Philadelphia, PA, 2001.
8. Golub, G.H., van Loan, C.F. Matrix Computations. The Johns Hopkins University Press, London, UK, 1996.
9. Widrow, B., Hoff, Jr. M.E. "Adaptive Switching Circuits," Proceedings of IRE WESCON Convention Record, part 4, pp. 96-104, 1960.
10. Widrow, B., Stearns, S. Adaptive Signal Processing. Prentice-Hall, Englewood Cliffs, NJ, 1985.
11. Macchi, O. Adaptive Processing: The Least Mean Squares Approach with Applications in Transmission. John Wiley, New York, 1995.
12. Haykin, S., Widrow, B. (eds.). Least-Mean-Square Adaptive Filters. John Wiley, New York, 2003.
13. Solo, V., Kong, X. Adaptive Signal Processing Algorithms: Stability and Performance. Prentice-Hall, Englewood Cliffs, NJ, 1995.
14. Haykin, S. Adaptive Filter Theory. Prentice-Hall, Upper Saddle River, NJ, 1996.
15. Farhang-Boroujeny, B. Adaptive Filters: Theory and Applications. John Wiley, New York, 1998.


16. Luenberger, D. Optimization by Vector Space Methods. John Wiley, New York, 1969.
17. Rao, S.S. Engineering Optimization: Theory and Practice. John Wiley, New Delhi, 1996.
18. Kalman, R.E. "New Methods in Wiener Filtering Theory," Research Institute for Advanced Studies, Rep. 61-1, Baltimore, MD, 1961.
19. Mueller, M. "Least-Squares Algorithms for Adaptive Equalizers," Bell Systems Technical Journal, vol. 60, pp. 1905-1925, 1981.
20. Lucky, R.W. "Techniques for Adaptive Equalization of Digital Communications Systems," Bell Systems Technical Journal, vol. 45, pp. 255-286, 1966.
21. Verhoeckx, N.A.M., van den Elzen, H.C., Snijders, F.A.M., van Gerwen, P.J. "Digital Echo Cancellation for Baseband Data Transmission," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 27, pp. 761-781, June 1979.
22. Jayant, N.S., Noll, P. Digital Coding of Waveforms. Prentice-Hall, Englewood Cliffs, NJ, 1984.
23. Widrow, B., McCool, J.M., Larimore, M.G., Johnson, C.R. "Stationary and Nonstationary Learning Characteristics of the LMS Adaptive Filter," Proceedings of the IEEE, vol. 64, pp. 1151-1162, 1976.
24. Harris, R., Chabries, D., Bishop, F.A. "A Variable Step (VS) Adaptive Filter Algorithm," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 34, pp. 309-316, April 1986.
25. Kwong, R., Johnston, E.W. "A Variable Step Size LMS Algorithm," IEEE Transactions on Signal Processing, vol. 40, pp. 1633-1642, July 1992.
26. Aboulnasr, T., Mayyas, K. "A Robust Variable Step-size LMS-type Algorithm: Analysis and Simulations," IEEE Transactions on Signal Processing, vol. 45, pp. 631-639, March 1997.
27. Wei, Y., Gelfand, S., Krogmeier, J.V. "Noise-Constrained Least Mean Squares Algorithm," IEEE Transactions on Signal Processing, vol. 49, no. 9, pp. 1961-1970, September 2001.
28. Sethares, W., Lawrence, D., Johnson, Jr. C., Bitmead, R. "Parameter Drift in LMS Adaptive Filters," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 34, no. 4, pp. 868-879, August 1986.
29. Liavas, A., Regalia, P. "On the Numerical Stability and Accuracy of the Conventional Recursive Least Squares Algorithm," IEEE Transactions on Signal Processing, vol. 47, no. 1, pp. 88-96, January 1999.


30. Slock, D.T.M., Kailath, T. "Numerically Stable Fast Transversal Filters for Recursive Least Squares Adaptive Filtering," IEEE Transactions on Signal Processing, vol. 39, pp. 92-114, January 1991.
31. Eleftheriou, E., Falconer, D. "Tracking Properties and Steady-state Performance of RLS Adaptive Filter Algorithms," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 34, pp. 1097-1109, October 1986.
32. Söderström, T., Stoica, P. System Identification. Prentice-Hall, London, UK, 1989.
33. Erdogmus, D., Principe, J.C. "An Error-Entropy Minimization Algorithm for Supervised Training of Nonlinear Adaptive Systems," IEEE Transactions on Signal Processing, vol. 50, no. 7, pp. 1780-1786, July 2002.
34. Principe, J.C., Fisher, J. III, Xu, D. "Information Theoretic Learning," in Haykin, S. (ed.), Unsupervised Adaptive Filtering. John Wiley, New York, 2000.
35. Cadzow, J.A. "Total Least Squares, Matrix Enhancement, and Signal Processing," Digital Signal Processing, vol. 4, pp. 21-39, 1994.
36. de Moor, B. "Total Least Squares for Affinely Structured Matrices and the Noisy Realization Problem," IEEE Transactions on Signal Processing, vol. 42, pp. 3104-3113, November 1994.
37. Lemmerling, P. "Structured Total Least Squares: Analysis, Algorithms, and Applications," Ph.D. Dissertation, Katholieke Universiteit Leuven, Leuven, Belgium, 1999.
38. Yeredor, A. "The Extended Least Squares Criterion: Minimization Algorithms and Applications," IEEE Transactions on Signal Processing, vol. 49, no. 1, pp. 74-86, January 2001.
39. Feng, D.Z., Bao, Z., Jiao, L.C. "Total Least Mean Squares Algorithm," IEEE Transactions on Signal Processing, vol. 46, no. 8, pp. 2122-2130, August 1998.
40. Davila, C.E. "An Efficient Recursive Total Least Squares Algorithm for FIR Adaptive Filtering," IEEE Transactions on Signal Processing, vol. 42, no. 2, pp. 268-280, February 1994.
41. Deprettere, F. (ed.). SVD and Signal Processing: Algorithms, Applications and Architectures. North-Holland, Amsterdam, 1988.
42. Cichocki, A., Amari, S. Adaptive Blind Signal and Image Processing: Learning Algorithms and Applications. John Wiley, New York, 2002.
43. Ljung, L., Söderström, T. Theory and Practice of Recursive System Identification. MIT Press, Cambridge, MA, 1983.


44. Rao, Y.N. "Algorithms for Eigendecomposition and Time Series Segmentation," MS Thesis, University of Florida, Gainesville, FL, 2000.
45. Davila, C.E. "A Subspace Approach to Estimation of Autoregressive Parameters from Noisy Measurements," IEEE Transactions on Signal Processing, vol. 46, no. 2, pp. 531-534, February 1998.
46. So, H.C. "Modified LMS Algorithm for Unbiased Impulse Response Estimation in Nonstationary Noise," Electronics Letters, vol. 35, no. 10, pp. 791-792, May 1999.
47. Gao, K., Ahmad, M.O., Swamy, M.N.S. "A Constrained Anti-Hebbian Learning Algorithm for Total Least Squares Estimation with Applications to Adaptive FIR and IIR Filtering," IEEE Transactions on Circuits and Systems-Part II, vol. 41, no. 11, pp. 718-729, November 1994.
48. Mathew, G., Reddy, V.U., Dasgupta, S. "Adaptive Estimation of Eigensubspace," IEEE Transactions on Signal Processing, vol. 43, no. 2, pp. 401-411, February 1995.
49. Zhang, Q., Leung, Y. "A Class of Learning Algorithms for Principal Component Analysis and Minor Component Analysis," IEEE Transactions on Neural Networks, vol. 11, no. 1, pp. 200-204, January 2000.
50. Luo, F.L., Unbehauen, R. "A Minor Subspace Analysis Algorithm," IEEE Transactions on Neural Networks, vol. 8, no. 5, pp. 1149-1155, September 1997.
51. Xu, L., Oja, E., Suen, C.Y. "Modified Hebbian Learning for Curve and Surface Fitting," Neural Networks, vol. 5, pp. 441-457, 1992.
52. Joliffe, I.T. Principal Component Analysis. Springer-Verlag, Berlin, 1986.
53. Duda, R.O., Hart, P.E. Pattern Classification and Scene Analysis. John Wiley, New York, 1973.
54. Kung, S.Y., Diamantaras, K.I., Taur, J.S. "Adaptive Principal Component Extraction (APEX) and Applications," IEEE Transactions on Signal Processing, vol. 42, no. 5, pp. 1202-1217, May 1994.
55. Mao, J., Jain, A.K. "Artificial Neural Networks for Feature Extraction and Multivariate Data Projection," IEEE Transactions on Neural Networks, vol. 6, no. 2, pp. 296-317, March 1995.
56. Sanger, T.D. "Optimal Unsupervised Learning in a Single-layer Linear Feedforward Neural Network," Neural Networks, vol. 2, pp. 459-473, 1989.
57. Rubner, J., Tavan, P. "A Self-Organizing Network for Principal Component Analysis," Europhysics Letters, vol. 10, no. 7, pp. 693-698, 1989.


58. Rubner, J., Tavan, P. "Development of Feature Detectors by Self-Organization," Biological Cybernetics, vol. 62, pp. 193-199, 1990.
59. Diamantaras, K.I., Kung, S.Y. Principal Component Neural Networks: Theory and Applications. John Wiley, New York, 1996.
60. Hua, Y., Xiang, Y., Chen, T., Abed-Meraim, K., Miao, Y. "Natural Power Method for Fast Subspace Tracking," Proceedings of the 1999 IEEE Signal Processing Society Workshop, pp. 176-185, August 1999.
61. Yang, B. "Projection Approximation Subspace Tracking," IEEE Transactions on Signal Processing, vol. 43, no. 1, pp. 95-107, January 1995.
62. Oja, E. "A Simplified Neuron Model as a Principal Component Analyzer," Journal of Mathematical Biology, vol. 15, pp. 267-273, 1982.
63. Baldi, P., Hornik, K. "Learning in Linear Neural Networks: A Survey," IEEE Transactions on Neural Networks, vol. 6, no. 4, pp. 837-858, July 1995.
64. Xu, L. "Least Mean Square Error Reconstruction Principle for Self-Organizing Neural Nets," Neural Networks, vol. 6, pp. 627-648, 1993.
65. Chatterjee, C., Kang, Z., Roychowdhury, V.P. "Algorithms for Accelerated Convergence of Adaptive PCA," IEEE Transactions on Neural Networks, vol. 11, no. 2, pp. 338-355, March 2000.
66. Erdogmus, D., Rao, Y.N., Principe, J.C., Fontenla-Romero, O., Vielva, L. "An Efficient, Robust and Fast Converging Principal Components Extraction Algorithm: SIPEX-G," Proceedings of European Signal Processing Conference, vol. 2, pp. 335-338, September 2002.
67. Erdogmus, D., Rao, Y.N., Ozturk, M.C., Vielva, L., Principe, J.C. "On the Convergence of SIPEX: A Simultaneous Principal Components Extraction Algorithm," Proceedings of International Conference on Acoustics, Speech and Signal Processing, vol. 2, pp. 697-700, April 2003.
68. Erdogmus, D., Rao, Y.N., Hild, K.E., Principe, J.C. "Simultaneous Principal Component Extraction with Application to Adaptive Blind Multiuser Detection," EURASIP Journal on Applied Signal Processing, vol. 2002, no. 12, pp. 1473-1484, December 2002.
69. Chatterjee, C., Roychowdhury, V.P., Ramos, J., Zoltowski, M.D. "Self-Organizing Algorithms for Generalized Eigendecomposition," IEEE Transactions on Neural Networks, vol. 8, no. 6, pp. 1518-1530, November 1997.
70. Miao, Y., Hua, Y. "Fast Subspace Tracking and Neural Network Learning by a Novel Information Criterion," IEEE Transactions on Signal Processing, vol. 46, no. 7, pp. 1967-1979, July 1998.


71. Rao, Y.N., Principe, J.C. "A Fast On-line Generalized Eigendecomposition Algorithm for Time Series Segmentation," Adaptive Systems for Signal Processing, Communications and Control Symposium 2000, pp. 266-271, October 2000.
72. Rao, Y.N., Principe, J.C. "A Fast Online Algorithm for PCA and its Convergence Characteristics," Proceedings of the 2000 IEEE Signal Processing Society Workshop, vol. 1, pp. 299-307, December 2000.
73. Rao, Y.N., Principe, J.C. "Robust On-line Principal Component Analysis Based on a Fixed-Point Approach," Proceedings of International Conference on Acoustics, Speech and Signal Processing, vol. 1, pp. 981-984, May 2002.
74. Rao, Y.N., Principe, J.C. "Time Series Segmentation Using a Novel Adaptive Eigendecomposition Algorithm," Journal of VLSI Signal Processing, vol. 32, pp. 7-17, 2002.
75. Rao, Y.N., Principe, J.C. "An RLS Type Algorithm for Generalized Eigendecomposition," Proceedings of the 2001 IEEE Signal Processing Society Workshop, pp. 263-272, September 2001.
76. Rao, Y.N., Principe, J.C., Wong, T.F. "Fast RLS-like Algorithm for Generalized Eigendecomposition and its Applications," Journal of VLSI Signal Processing, vol. 37, pp. 333-344, 2004.
77. Rao, Y.N. "Optimal Adaptive Projections Using Stochastic and Recursive Algorithms and their Applications," Ph.D. proposal, University of Florida, Gainesville, FL, December 2002.
78. Ljung, L. "Analysis of Recursive Stochastic Algorithms," IEEE Transactions on Automatic Control, vol. 22, no. 4, pp. 551-575, August 1977.
79. Kushner, H.J., Clark, D.S. Stochastic Approximation Methods for Constrained and Unconstrained Systems. Springer-Verlag, New York, 1978.
80. Benveniste, A., Metivier, M., Priouret, P. Adaptive Algorithms and Stochastic Approximations. Springer-Verlag, Berlin, 1990.
81. Kushner, H.J., Yin, G. Stochastic Approximation Algorithms and Applications. Springer-Verlag, Berlin, 1997.
82. Plumbley, M.D. "Lyapunov Functions for Convergence of Principal Component Algorithms," Neural Networks, vol. 8, no. 1, 1995.
83. Proakis, J.G. Digital Communications. McGraw-Hill, New York, 2001.
84. Haykin, S. Neural Networks: A Comprehensive Foundation. Prentice-Hall, Englewood Cliffs, NJ, 1999.


85. Moon, T.K., Stirling, W.C. Mathematical Methods and Algorithms for Signal Processing. Prentice-Hall, Englewood Cliffs, NJ, 1999.
86. Rao, Y.N., Principe, J.C. "Efficient Total Least Squares Method for System Modeling using Minor Component Analysis," Proceedings of the IEEE Workshop on Neural Networks for Signal Processing XII, pp. 259-268, September 2002.
87. van Huffel, S., Vandewalle, J. The Total Least Squares Problem: Computational Aspects and Analysis. SIAM, Philadelphia, PA, 1991.
88. Mathews, J., Cichocki, A. "Total Least Squares Estimation," Technical Report, University of Utah, USA and Brain Science Institute Riken, Japan, 2000.
89. Shynk, J.J. "Adaptive IIR Filtering," IEEE Signal Processing Magazine, vol. 6, pp. 4-21, April 1989.
90. Regalia, P.A. Adaptive IIR Filtering in Signal Processing and Control. Marcel Dekker, New York, 1995.
91. Regalia, P.A. "An Unbiased Equation Error Identifier and Reduced Order Approximations," IEEE Transactions on Signal Processing, vol. 42, no. 6, pp. 1397-1412, June 1994.
92. Regalia, P.A. "An Adaptive Unit Norm Filter with Applications to Signal Analysis and Karhunen-Loeve Transformations," IEEE Transactions on Circuits and Systems, vol. 37, no. 5, pp. 646-649, May 1990.
93. Söderström, T., Stoica, P. Instrumental Variable Methods for System Identification. Springer-Verlag, Berlin, 1983.
94. Fukunaga, K. Introduction to Statistical Pattern Recognition. Academic Press, New York, 1990.
95. Xu, D., Principe, J.C., Wu, H.C. "Generalized Eigendecomposition with an On-line Local Algorithm," IEEE Signal Processing Letters, vol. 5, no. 11, pp. 298-301, November 1998.
96. Mathew, G., Reddy, V.U. "A Quasi-Newton Adaptive Algorithm for Generalized Symmetric Eigenvalue Problem," IEEE Transactions on Signal Processing, vol. 44, no. 10, pp. 2413-2422, October 1996.
97. Diamantaras, K.I., Kung, S.Y. "An Unsupervised Neural Model for Oriented Principal Component Extraction," Proceedings of International Conference on Acoustics, Speech and Signal Processing, vol. 2, pp. 1049-1052, May 1991.


98. Cao, Y., Sridharan, S., Moody, M. "Multichannel Speech Separation by Eigendecomposition and its Application to Co-talker Interference Removal," IEEE Transactions on Speech and Audio Processing, vol. 5, no. 3, pp. 209-219, May 1997.
99. Hyvärinen, A. "Fast and Robust Fixed-Point Algorithms for Independent Component Analysis," IEEE Transactions on Neural Networks, vol. 10, no. 3, pp. 626-634, May 1999.
100. Wang, X., Poor, H.V. "Blind Multiuser Detection: A Subspace Approach," IEEE Transactions on Information Theory, vol. 44, no. 2, pp. 677-691, March 1998.
101. Wang, X., Host-Madsen, A. "Group-Blind Multiuser Detection for Uplink CDMA," IEEE Journal on Selected Areas in Communications, vol. 17, no. 11, pp. 1971-1984, November 1999.
102. Reynolds, D., Wang, X. "Adaptive Group-Blind Multiuser Detection Based on a New Subspace Tracking Algorithm," IEEE Transactions on Communications, vol. 49, no. 7, pp. 1135-1141, July 2001.
103. Honig, M., Madhow, U., Verdu, S. "Blind Adaptive Multiuser Detection," IEEE Transactions on Information Theory, vol. 41, no. 4, pp. 944-960, July 1995.
104. Wong, T.F., Lok, T.M., Lehnert, J.S., Zoltowski, M.D. "A Linear Receiver for Direct-Sequence Spread-Spectrum Multiple-Access Systems with Antenna Arrays and Blind Adaptation," IEEE Transactions on Information Theory, vol. 44, no. 2, pp. 659-676, March 1998.
105. Rao, Y.N. "Linear Receiver for Direct-Sequence Spread-Spectrum Multiple Access Systems," CNEL Technical Report, December 2001.
106. Hahn, S.L. Hilbert Transforms in Signal Processing. Artech House, London, UK, 1996.
107. Shannon, C.E., Weaver, W. The Mathematical Theory of Communication. University of Illinois Press, Urbana, IL, 1964.
108. Tarasenko, F.P. "On the Evaluation of an Unknown Probability Density Function, the Direct Estimation of the Entropy from Independent Observations of a Continuous Random Variable, and the Distribution-Free Entropy Test of Goodness-of-fit," Proceedings of IEEE, vol. 56, pp. 2052-2053, 1968.
109. Bickel, P.J., Breiman, L. "Sums of Functions of Nearest Neighbor Distances, Moment Bounds, Limit Theorems and a Goodness-of-fit Test," Annals of Statistics, vol. 11, no. 1, pp. 185-214, 1983.


110. Beirlant, J., Zuijlen, M.C.A. "The Empirical Distribution Function and Strong Laws for Functions of Order Statistics of Uniform Spacings," Journal of Multivariate Analysis, vol. 16, pp. 300-317, 1985.
111. Kozachenko, L.F., Leonenko, N.N. "Sample Estimate of Entropy of a Random Vector," Problems of Information Transmission, vol. 23, no. 2, pp. 95-101, 1987.
112. Beck, C., Schlogl, F. Thermodynamics of Chaotic Systems. Cambridge University Press, Cambridge, UK, 1993.
113. Tsybakov, A.B., van der Meulen, E.C. "Root-n Consistent Estimators of Entropy for Densities with Unbounded Support," Scandinavian Journal of Statistics, vol. 23, pp. 75-83, 1996.
114. Papoulis, A., Pillai, S.U. Probability, Random Variables and Stochastic Processes. McGraw-Hill, New York, 2002.
115. Parzen, E. "On Estimation of a Probability Density Function and Mode," Time Series Analysis Papers. Holden-Day, Inc., San Diego, CA, 1967.
116. Erdogmus, D. "Information Theoretic Learning: Renyi's Entropy and its Applications to Adaptive System Training," Ph.D. Dissertation, University of Florida, Gainesville, FL, 2002.
117. Viola, P., Schraudolph, N., Sejnowski, T. "Empirical Entropy Manipulation for Real-World Problems," Proceedings of Neural Information Processing Systems, pp. 851-857, November 1995.
118. Bishop, C. Neural Networks for Pattern Recognition. Clarendon Press, Oxford, UK, 1995.
119. Akaike, H. "A New Look at the Statistical Model Identification," IEEE Transactions on Automatic Control, vol. 19, pp. 716-723, December 1974.
120. Rissanen, J. Stochastic Complexity in Statistical Inquiry. World Scientific, London, UK, 1989.
121. Principe, J.C., Rao, Y.N., Erdogmus, D. "Error Whitening Wiener Filters: Theory and Algorithms," in Haykin, S., Widrow, B. (eds.), Least-Mean-Square Adaptive Filters. John Wiley, New York, 2003.
122. Reiersøl, O. "Confluence Analysis by Means of Lag Moments and Other Methods of Confluence Analysis," Econometrica, vol. 9, pp. 1-23, 1941.
123. Wong, K.Y., Polak, E. "Identification of Linear Discrete Time Systems Using the Instrumental Variable Approach," IEEE Transactions on Automatic Control, vol. 12, no. 8, pp. 707-718, December 1967.


124. Young, P.C. "An Instrumental Variable Method for Real-Time Identification of a Noisy Process," Automatica, vol. 6, no. 2, pp. 271-287, March 1970.
125. Young, P.C. "Parameter Estimation for Continuous-Time Models: A Survey," Automatica, vol. 17, no. 1, pp. 23-39, January 1981.
126. Young, P.C. Recursive Estimation and Time Series Analysis. Springer-Verlag, Berlin, 1984.
127. Rao, Y.N., Erdogmus, D., Rao, G.Y., Principe, J.C. "Fast Error Whitening Algorithms for System Identification and Control," Proceedings of the IEEE Workshop on Neural Networks for Signal Processing, pp. 309-318, September 2003.
128. Rao, Y.N., Erdogmus, D., Principe, J.C. "Error Whitening Criterion for Adaptive Filtering: Theory and Algorithms," to appear in IEEE Transactions on Signal Processing, 2005.
129. Chansarkar, M., Desai, U.B. "A Robust Recursive Least Squares Algorithm," IEEE Transactions on Signal Processing, vol. 45, no. 7, pp. 1726-1735, July 1997.
130. Boyd, S., Vandenberghe, L. Convex Optimization. Lecture Notes, Stanford University, Winter 2001.
131. Al-Naffouri, T.Y., Sayed, A.H. "Adaptive Filters with Error Nonlinearities: Mean-Square Analysis and Optimum Design," EURASIP Journal on Applied Signal Processing, no. 4, pp. 192-205, 2001.
132. Sayed, A.H. "Energy Conservation and the Learning Ability of LMS Adaptive Filters," in Haykin, S., Widrow, B. (eds.), Least-Mean-Square Adaptive Filters. John Wiley, New York, 2003.
133. Price, R. "A Useful Theorem for Nonlinear Devices Having Gaussian Inputs," IRE Transactions on Information Theory, vol. 4, pp. 69-72, June 1958.
134. Eweda, E. "Convergence Analysis of the Sign Algorithm without the Independence and Gaussian Assumptions," IEEE Transactions on Signal Processing, vol. 48, no. 9, pp. 2535-2544, September 2000.
135. Reuter, M., Quirk, K., Zeidler, J., Milstein, L. "Non-Linear Effects in LMS Adaptive Filters," Adaptive Systems for Signal Processing, Communications, and Control Symposium 2000, pp. 141-146, October 2000.


170 137. Rao, Y.N., Erdogmus, D., Rao, G.Y., Prin cipe, J.C, “Stochastic Error Whitening Algorithm for Linear Filter Estimation with Noisy Data,” Neural Networks, vol. 16, no. 5-6, pp. 873-880, June 2003. 138. Rao, Y.N., Erdogmus, D., Principe, J.C .,“Error Whitening Criterion for Linear Filter Estimation,” Proceedings of Inte rnational Joint Conference on Neural Networks, vol. 2, pp. 1447-1452, July 2003. 139. Rao, Y.N., Erdogmus, D., Principe, J.C. “Error Whitening Met hodology for Linear Parameter Estimation in Noisy (White Noise) Inputs,” US patent, submitted, March 2003. 140. Robbins, H., Munro, S. “A stochas tic optimization method,” Annals of Mathematical Statistics, vol. 22, pp. 400-407, 1951. 141. Rao, Y.N., Erdogmus, D., Principe, J.C. “A ccurate Linear Parameter Estimation in Colored Noise,” accepted for publication in International Conference of Acoustics, Speech and Signal Processing, 2004. 142. Douglas, S.C., Rupp, M. “On bias remova l and unit-norm constr aints in equationerror adaptive filters,” 30th Annual Asilomar Conference on Signals, Systems and Computers, CA, vol. 2, pp. 1093-1097, November 1996. 143. Rao, Y.N., Kim, S.P., Sanchez, J.C., Er dogmus, D., Principe, J.C., Carmena, J.M., Lebedev, M.A., Nicolelis, M.A.L. “Learning Mappings in Brain Machine Interfaces with Echo State Networks,” accepted for publication in International Joint Conference on Neural Networks, 2004. 144. Cherkassky, V., Mulier, F. Learning from Data John Wiley, New York, 1998. 145. Hastie, T., Tibshirani, R., Friedman, J. The Elements of Statistical Learning : Data Mining, Inference, and Prediction Springer-Verlag, Berlin, 2001. 146. Chen, L., Narendra, K.S. “Nonlinear Adap tive Control Using Neural Networks and Multiple Models,” Automatica vol. 37, no. 8, pp. 1245-1255, August 2001. 147. Principe, J.C., Wang, L., Motter, M.A. “Local Dyna mic Modeling with SelfOrganizing Maps and Applications to Nonlinear System Identification and Control,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2240-2258, November 1998. 148. Thampi, G.K., Principe, J.C., Motter, M.A. Cho, J., Lan, J. “Multiple model based flight control design,” Proceedings of the 45th Midwest Symposium on Circuits and Systems, vol. 3, pp. 133-136, August 2002.


149. Kim, S.P., Sanchez, J.C., Erdogmus, D., Rao, Y.N., Principe, J.C., Nicolelis, M.A.L. "Divide-and-Conquer Approach for Brain Machine Interfaces: Nonlinear Mixture of Competitive Linear Models," Neural Networks, vol. 16, no. 5-6, pp. 865-871, June 2003.
150. Cho, J., Lan, J., Thampi, G.K., Principe, J.C., Motter, M.A. "Identification of Aircraft Dynamics Using a SOM and Local Linear Models," Proceedings of the 45th Midwest Symposium on Circuits and Systems, vol. 2, pp. 148-151, August 2002.


BIOGRAPHICAL SKETCH

Yadunandana Nagaraja Rao was born in Mysore, India, on January 11, 1976. He graduated with a bachelor's degree in electronics and communication engineering from the University of Mysore in August 1997 and then worked as a software engineer at IT Solutions Inc., Bangalore, India, till July 1998. In fall 1998, he began graduate studies in the Department of Electrical and Computer Engineering at the University of Florida, Gainesville, FL. Yadu joined the Computational NeuroEngineering Laboratory (CNEL) in the spring of 1999 and obtained his master's degree (thesis option) in spring 2000 under the supervision of Dr. Jose C. Principe. After a brief stint as a design engineer at GE Medical Systems, WI, Yadu returned to CNEL in spring 2001 as a doctoral candidate. Since then, he has been working towards his Ph.D. in the Department of Electrical and Computer Engineering, under the able guidance of Dr. Jose C. Principe. His primary research interests include algorithm development, analysis, and design of optimal adaptive learning mechanisms and neural networks. He is a member of the International Neural Network Society (INNS) and also a student member of the IEEE.


Permanent Link: http://ufdc.ufl.edu/UFE0004355/00001

Material Information

Title: An Augmented error criterion for linear adaptive filtering : theory, algorithms and applications
Physical Description: Mixed Material
Language: English
Creator: Rao, Yadunandana Nagaraja 1976- ( Dissertant )
Principe, Jose C. ( Thesis advisor )
Harris, John ( Reviewer )
Nechyba, Michael ( Reviewer )
Yang, Mark ( Reviewer )
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2004
Copyright Date: 2004

Subjects

Subjects / Keywords: Adaptive filters   ( lcsh )
Digital filters (Mathematics)   ( lcsh )
Electrical and Computer Engineering thesis, Ph.D
Electronic noise   ( lcsh )
Dissertations, Academic -- UF -- Electrical and Computer Engineering

Notes

Abstract: Ever since its conception, the mean-squared error (MSE) criterion has been the workhorse of optimal linear adaptive filtering. However, it is a well-known fact that the MSE criterion is no longer optimal in situations where the data are corrupted by noise. Noise, being omnipresent in most of the engineering applications, can result in severe errors in the solutions produced by the MSE criterion. In this dissertation, we propose novel error criteria and the associated learning algorithms followed by a detailed mathematical analysis of these algorithms. Specifically, these criteria are designed to solve the problem of optimal filtering with noisy data. Firstly, we discuss a new criterion called augmented error criterion (AEC) that can provide unbiased parameter estimates even in the presence of additive white noise. Then, we derive novel, online sample-by-sample learning algorithms with varying degrees of complexity and performance that are tailored for real-world applications. Rigorous mathematical analysis of the new algorithms is presented. In the second half of this dissertation, we extend the AEC to handle correlated noise in the data. The modifications introduced will enable us to obtain optimal, unbiased parameter estimates of a linear system when the data are corrupted by correlated noise. Further, we achieve this without explicitly assuming any prior information about the noise statistics. The analytical solution is derived and an iterative stochastic algorithm is presented to estimate this optimal solution. The proposed criteria and the learning algorithms can be applied in many engineering problems. System identification and controller design problems are obvious areas where the proposed criteria can be efficiently used. Other applications include model-order estimation in the presence of noise and design of multiple local linear filters to characterize complicated nonlinear systems.
General Note: Title from title page of source document.
General Note: Document formatted into pages; contains 187 pages.
General Note: Includes vita.
Thesis: Thesis (Ph.D.)--University of Florida, 2004.
Bibliography: Includes bibliographical references.
General Note: Text (Electronic thesis) in PDF format.

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0004355:00001














AN AUGMENTED ERROR CRITERION FOR LINEAR ADAPTIVE FILTERING:
THEORY, ALGORITHMS AND APPLICATIONS














By

YADUNANDANA NAGARAJA RAO


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA


2004
































Copyright 2004

by

YADUNANDANA NAGARAJA RAO


































This dissertation is dedicated to my family, teachers and friends for their enduring love,
support and friendship.
















ACKNOWLEDGMENTS

First of all, I would like to thank Dr. Jose Principe, for his constant guidance, encouragement, patience and continuous support over the past five years. His enthusiasm for research and quest for excellence have left an everlasting impression in my mind. To me, he has been more than an advisor, and this research would not have been possible without him. Secondly, I would like to thank Dr. John Harris for being on my committee and offering me guidance not only in research but in many other aspects of life. I would also like to thank Dr. Michael Nechyba and Dr. Mark Yang for being on my committee.

I would like to thank Dr. Deniz Erdogmus, my friend and colleague at CNEL, whose contributions in this research have been tremendous. I deeply benefited from all the long hours of fruitful discussions with him on a multitude of topics. His drive for research and enormous ability to motivate others have been quite inspirational. I also wish to extend my acknowledgements to all the members of CNEL who have been primarily responsible for my fruitful stay in the lab. I would like to extend my gratitude to the always cheerful Ellie Goodwin for her golden words of wisdom. Her ability to get things done was truly remarkable. I would also like to acknowledge Linda Kahila for the extensive support and assistance she provided during my stay at UFL.

I would like to thank my family and friends for their constant love and encouragement. They have allowed me to pursue whatever I wanted in life. Without their guidance and affection, it would have been impossible for me to advance my education.

Lastly, I would like to thank my life partner Geetha for making my life beautiful and for being on my side whenever I needed her. Her everlasting love has made me a better individual.




















TABLE OF CONTENTS


page

ACKNOWLEDGMENTS ..... iv

LIST OF TABLES ..... x

LIST OF FIGURES ..... xi

ABSTRACT ..... xiv

CHAPTER

1 MEAN SQUARED ERROR BASED ADAPTIVE SIGNAL PROCESSING SYSTEMS: A BRIEF REVIEW ..... 1

    Introduction ..... 1
    Why Do We Need Adaptive Systems? ..... 2
    Design of Adaptive Systems ..... 3
    Least Mean Squares (LMS) Algorithm ..... 5
    Recursive Least Squares (RLS) Algorithm ..... 6
    Other Algorithms ..... 7
    Limitations of MSE Criterion Based Linear Adaptive Systems ..... 8
    Total Least Squares (TLS) and Other Methods ..... 10
    Limitations of TLS ..... 11
    Extended TLS for Correlated Noise ..... 12
    Other Methods ..... 13
    Summary ..... 13

2 AUGMENTED ERROR CRITERION FOR LINEAR ADAPTIVE SYSTEMS ..... 15

    Introduction ..... 15
    Error Whitening Criterion (EWC) ..... 16
    Motivation for Error Whitening Criterion ..... 17
    Analysis of the Autocorrelation of the Error Signal ..... 17
    Augmented Error Criterion (AEC) ..... 22
    Properties of Augmented Error Criterion ..... 24
    Shape of the Performance Surface ..... 24
    Analysis of the Noise-free Input Case ..... 25
    Analysis of the Noisy Input Case ..... 27
    Orthogonality of Error to Input ..... 29
    Relationship to Error Entropy Maximization ..... 30
    Note on Model-Order Selection ..... 31
    The Effect of β on the Weight Error Vector ..... 32
    Numerical Case Studies of AEC with the Theoretical Solution ..... 33
    Summary ..... 40

3 FAST RECURSIVE NEWTON TYPE ALGORITHMS FOR AEC ..... 41

    Introduction ..... 41
    Derivation of the Newton Type Recursive Error Whitening Algorithm ..... 41
    Extension of the REW Algorithm for Multiple Lags ..... 45
    Relationship to the Recursive Instrumental Variables Method ..... 48
    Recursive EWC Algorithm Based on Minor Components Analysis ..... 49
    Experimental Results ..... 51
    Estimation of System Parameters in White Noise Using REW ..... 51
    Effect of β and Weight Tracks of REW Algorithm ..... 53
    Performance Comparisons between REW, EWC-TLS and IV Methods ..... 55
    Summary ..... 57

4 STOCHASTIC GRADIENT ALGORITHMS FOR AEC ..... 58

    Introduction ..... 58
    Derivation of the Stochastic Gradient AEC-LMS Algorithm ..... 59
    Convergence Analysis of AEC-LMS Algorithm ..... 61
    Proof of AEC-LMS Convergence for β > 0 ..... 61
    Proof of AEC-LMS Convergence for β < 0 ..... 63
    On-line Implementations of AEC-LMS for β < 0 ..... 67
    Excess Error Correlation Bound for EWC-LMS ..... 69
    Other Variants of the AEC-LMS Algorithms ..... 72
    AEC-LMS Algorithm with Multiple Lags ..... 73
    Simulation Results ..... 74
    Estimation of System Parameters in White Noise ..... 74
    Weight Tracks and Convergence ..... 77
    Inverse Modeling and Controller Design Using EWC ..... 80
    Summary ..... 83

5 LINEAR PARAMETER ESTIMATION IN CORRELATED NOISE ..... 85

    Introduction ..... 85
    Existing Solutions ..... 86
    Criterion for Estimating the Parameters in Correlated Noise ..... 87
    Stochastic Gradient Algorithm and Analysis ..... 90
    Simulation Results ..... 93
    System Identification with the Analytical Solution ..... 93
    System Identification with Stochastic Gradient Algorithm ..... 94
    Verification of the Local Stability of the Gradient Algorithm ..... 95
    Extensions to Correlated Noise in the Desired Data ..... 97
    Experimental Results ..... 100
    System Identification ..... 100
    Stochastic Algorithm Performance ..... 100
    Summary ..... 101

6 ON UNDERMODELING AND OVERESTIMATION ISSUES IN LINEAR SYSTEM ADAPTATION ..... 104

    Introduction ..... 104
    Undermodeling Effects ..... 105
    Overestimation Effects ..... 108
    Experimental Results ..... 109
    Summary ..... 113

7 CONCLUSIONS AND FUTURE DIRECTIONS ..... 114

    Conclusions ..... 114
    Future Research Directions ..... 116

APPENDIX

A FAST PRINCIPAL COMPONENTS ANALYSIS (PCA) ALGORITHMS ..... 118

    Introduction ..... 118
    Brief Review of Existing Methods ..... 119
    Derivation of the Fixed-Point PCA Algorithm ..... 121
    Mathematical Analysis of the Fixed-Point PCA Algorithm ..... 123
    Self-Stabilizing Fixed-Point PCA Algorithm ..... 128
    Mathematical Analysis of the Self-Stabilizing Fixed-Point PCA Algorithm ..... 129
    Minor Components Extraction: Self-Stabilizing Fixed-Point PCA Algorithm ..... 132

B FAST TOTAL LEAST-SQUARES ALGORITHM USING MINOR COMPONENTS ANALYSIS ..... 135

    Introduction ..... 135
    Fast TLS Algorithms ..... 136
    Simulation Results with TLS ..... 139
    Simulation 1: Noise Free FIR Filter Modeling ..... 139
    Simulation 2: FIR Filter Modeling with Noise ..... 140

C ALGORITHMS FOR GENERALIZED EIGENDECOMPOSITION ..... 143

    Introduction ..... 143
    Review of Existing Learning Algorithms ..... 144
    Fixed-Point Learning Algorithm for GED ..... 145
    Mathematical Analysis ..... 150

D SOME DERIVATIONS FOR THE NOISY INPUT CASE ..... 155

E ORTHOGONALITY OF ERROR TO INPUT ..... 156

F AEC AND ERROR ENTROPY MAXIMIZATION ..... 157

G PROOF OF CONVERGENCE OF ERROR VECTOR NORM IN AEC-LMS ..... 159

LIST OF REFERENCES ..... 160

BIOGRAPHICAL SKETCH ..... 172

















LIST OF TABLES

Table ..... page

1-1. Outline of the RLS Algorithm ..... 7

3-1. Outline of the REW Algorithm ..... 45

















LIST OF FIGURES


Figure ..... page

1-1. Block diagram of an adaptive system ..... 4

1-2. Parameter estimates using RLS algorithm with noisy data ..... 9

2-1. Schematic diagram of EWC adaptation ..... 16

2-2. The MSE performance surfaces, the AEC contour plot, and the AEC performance surface for three different training data sets and 2-tap adaptive FIR filters ..... 25

2-3. Demonstration scheme with coloring filter h, true mapping filter w, and the uncorrelated white signals ..... 34

2-4. The average squared error-norm of the optimal weight vector as a function of autocorrelation lag L for various β values and SNR levels ..... 35

2-5. The average squared error-norm of the optimal weight vector as a function of filter length m for various β values and SNR levels ..... 35

2-6. Histograms of the weight error norms (dB) obtained in 50 Monte Carlo simulations using 10000 samples of noisy data using MSE (empty bars) and EWC with β = -0.5 (filled bars). The subfigures in each row use filters with 4, 8, and 12 taps, respectively. The subfigures in each column use noisy samples at -10, 0, and 10 dB SNR, respectively ..... 37

2-7. Error autocorrelation function for MSE (dotted) and EWC (solid) solutions ..... 38

3-1. Histogram plots showing the error vector norm for EWC-LMS, LMS algorithms and the numerical TLS solution ..... 53

3-2. Performance of REW algorithm (a) SNR = 0 dB and (b) SNR = -10 dB over various β values ..... 54

3-3. Weight tracks for REW and RLS algorithms ..... 55

3-4. Histogram plots showing the error vector norms for all the methods ..... 56

3-5. Convergence of the minor eigenvector of G with (a) noisy data and (b) clean data ..... 57

4-1. Histogram plots showing the error vector norm for EWC-LMS, LMS algorithms and the numerical TLS solution ..... 75

4-2. Comparison of stochastic versus recursive algorithms ..... 76

4-3. Contour plots with the weight tracks showing convergence to saddle point ..... 77

4-4. Weight tracks for the stochastic algorithm ..... 77

4-5. Contour plot with weight tracks for different initial values for the weights ..... 78

4-6. Contour plot with weight tracks for EWC-LMS algorithm with sign information (left) and without sign information (right) ..... 79

4-7. EWC performance surface (left) and weight tracks for the noise-free case with and without sign information (right) ..... 80

4-8. Block diagram for model reference inverse control ..... 81

4-9. Block diagram for inverse modeling ..... 81

4-10. Plot of tracking results and error histograms ..... 82

4-11. Magnitude and phase responses of the reference model and designed model-controller pairs ..... 82

5-1. System identification block diagram showing data signals and noise ..... 88

5-2. Histogram plots showing the error vector norm in dB for the proposed and MSE criteria ..... 94

5-3. Weight tracks for LMS and the stochastic gradient algorithm in the system identification example ..... 96

5-4. Weight tracks for LMS and the stochastic gradient algorithm showing stability around the optimal solution ..... 96

5-5. Histogram plots of the error norms for the proposed method and MSE ..... 101

5-6. Weight tracks showing the convergence of the stochastic gradient algorithm ..... 102

6-1. Undermodeling effects with input SNR = 0 dB (left) and input SNR = 5 dB (right) ..... 109

6-2. Crosscorrelation plots for EWC and MSE for undermodeling ..... 110

6-3. Crosscorrelation plots for EWC and MSE for overestimation ..... 111

6-4. Power normalized error crosscorrelation for EWC and MSE with overestimation ..... 111

6-5. Weight tracks for LMS and the stochastic gradient algorithm in the case of undermodeling ..... 112

A-1. Representative network architecture showing lateral connections ..... 134

B-1. Estimation of minor eigenvector ..... 140

B-2. Minimum eigenvalue estimation ..... 141

B-3. Comparison between the estimated and true filter coefficients using TLS ..... 141

B-4. Comparison between the estimated and true filter coefficients using RLS ..... 142
















Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

AN AUGMENTED ERROR CRITERION FOR LINEAR ADAPTIVE FILTERING:
THEORY, ALGORITHMS AND APPLICATIONS

By

Yadunandana Nagaraja Rao

May 2004

Chair: Jose C. Principe
Cochair: John G. Harris
Major Department: Electrical and Computer Engineering

Ever since its conception, the mean-squared error (MSE) criterion has been the workhorse of optimal linear adaptive filtering. However, it is a well-known fact that the MSE criterion is no longer optimal in situations where the data are corrupted by noise. Noise, being omnipresent in most engineering applications, can result in severe errors in the solutions produced by the MSE criterion. In this dissertation, we propose novel error criteria and the associated learning algorithms, followed by a detailed mathematical analysis of these algorithms. Specifically, these criteria are designed to solve the problem of optimal filtering with noisy data. First, we discuss a new criterion called the augmented error criterion (AEC) that can provide unbiased parameter estimates even in the presence of additive white noise. Then, we derive novel, online sample-by-sample learning algorithms with varying degrees of complexity and performance that are tailored for real-world applications. Rigorous mathematical analysis of the new algorithms is presented.
In the second half of this dissertation, we extend the AEC to handle correlated noise in the data. The modifications introduced will enable us to obtain optimal, unbiased parameter estimates of a linear system when the data are corrupted by correlated noise. Further, we achieve this without explicitly assuming any prior information about the noise statistics. The analytical solution is derived and an iterative stochastic algorithm is presented to estimate this optimal solution.

The proposed criteria and learning algorithms can be applied to many engineering problems. System identification and controller design are obvious areas where the proposed criteria can be used efficiently. Other applications include model-order estimation in the presence of noise and the design of multiple local linear filters to characterize complicated nonlinear systems.

CHAPTER 1
MEAN SQUARED ERROR BASED ADAPTIVE SIGNAL PROCESSING SYSTEMS:
A BRIEF REVIEW

Introduction

Conventional signal processing techniques can typically be formulated as linear or non-linear operations on the input data. For example, a finite impulse response (FIR) filter is a linear combination of time-delayed versions of the input signal. We know that a linear combiner is nothing but a linear projector in the input space. Mathematically speaking, a projection can be defined as a linear transformation between two vector spaces [1]. These linear transformations can be vectors spanning ℜ^n or matrices spanning ℜ^(n×m). For vector transformations the projections are given by inner products, and in the case of matrix transformations the projections become rotations. Most of the design tasks in signal processing involve finding appropriate projections that perform the desired operation on the input. For instance, the filtering task is basically finding the projection that preserves only a specified part of the input information [2]. Another example is data compression, wherein we estimate an optimal projection matrix or rotation matrix that preserves most of the information in the input space. The first step in finding these projections is to understand the specifications of the problem. Then, the specifications are translated into mathematical criteria and equations that can be solved using various mathematical and statistical tools. The solutions thus obtained are often optimal with respect to the criterion used.

Why Do We Need Adaptive Systems?

Depending on the problem at hand, estimating the optimal projections can be a daunting task. Complexities can arise from the non-availability of a closed-form solution or even the non-existence of a feasible analytical solution. In the latter case, we may have to be content with sub-optimal solutions. On the other hand, scenarios exist where we have to synthesize projections that are not based on user specifications. For instance, suppose we are given two signals, an input and a desired signal, and the goal is to find the optimal projection (filter) that generates the desired signal from the input. The specifications do not convey any explicit information regarding the type of filter we have to design, and the conventional filter synthesis cookbook does not contain any recipes for these types of problems. Such problems can be solved by learning mechanisms that intelligently deduce the optimal projections using only the input and desired signals, or at times using the input signal alone. These learning mechanisms form the foundation of adaptive systems and neural networks. All learning mechanisms have at least two major pieces associated with them: the first is the criterion and the second is the search algorithm. The search algorithm finds the best possible solution in the space of the inputs under some constraints. Optimization theory has provided us with a variety of search techniques possessing different degrees of complexity and robustness [3]. These learning-based adaptive systems provide us with a powerful methodology that can go beyond conventional signal processing. The projections derived by these adaptive systems are called optimal adaptive projections. Another very desirable feature of adaptive systems is their innate ability to automatically adjust and track the changing statistical properties of signals. This can be vital in many engineering applications, viz., wireless data transmission, biomedical monitoring and control, echo cancellation over wired telephone lines, etc., wherein the underlying physical sources that generate the information change over time. In the next section, we will briefly review the theory behind the design of linear adaptive systems.


Design of Adaptive Systems

A block diagram of an adaptive system is shown in Figure 1-1. Assume that we are given a zero-mean input signal x_n and a zero-mean desired signal d_n. Further, these signals are assumed to be corrupted by noise terms v_n and u_n, respectively. Let the parameters of the adaptive system be denoted by the weight vector w. Note that we have not put any constraints on the topology of the adaptive filter; for convenience, we will assume an FIR topology in this chapter. The goal then is to generate an output y_n that best approximates the desired signal. In order to achieve this, a criterion (often referred to as the cost J(w)) is devised, which is typically a function of the error e_n defined as the difference between the desired signal and the output, i.e., e_n = d_n − y_n. The most widely used criterion in the literature is the mean-squared error (MSE), defined as

J(w) = E[e_n^2]    (1.1)

The MSE cost function has some nice properties, namely:

* Physical relevance to energy

* The performance surface (the shape of J(w)) is smooth and has continuous derivatives

* The performance surface is a convex paraboloid with a single global minimum

* The weight vector w* corresponding to the global minimum is the best linear unbiased estimate in the absence of noise [4]

* If the desired signal is a future sample of the input, i.e., d_n = x_{n+1}, then the filter with coefficients w* is guaranteed to be minimum phase [5]





Figure 1-1. Block diagram of an Adaptive System.

Once the criterion is fixed, the next step is to design an algorithm to optimize the cost function. This forms another important element in an adaptive system. Optimization is a well-researched topic and there is a plethora of search methods for convex cost functions. Specifically, we minimize the MSE cost function and, since the performance surface is quadratic with a single global minimum, an analytical closed-form optimal solution w* can be easily determined. The optimal solution is called the Wiener solution for MSE [6] (Wiener filter), which is given by

w* = R^{-1} P    (1.2)

In equation (1.2), R denotes the covariance matrix of the input, defined as R = E[x_k x_k^T], and the vector P denotes the cross-correlation between the desired signal and the lagged input, defined as P = E[x_k d_k]. Computing the Wiener solution requires inverting the matrix R, which requires O(N^3) operations [7]. However, due to the time-delay embedding of the input, the matrix R can easily be shown to be symmetric and Toeplitz, which facilitates a computationally efficient inverse operation with complexity O(N^2) [8].
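
To make (1.2) concrete, here is a minimal numerical sketch (not part of the dissertation; numpy is assumed and all names are illustrative) that forms sample estimates of R and P from a data record and solves for the Wiener filter:

import numpy as np

def wiener_filter(x, d, n_taps):
    # Sample-estimate R = E[x_k x_k^T] and P = E[x_k d_k], then solve w* = R^{-1} P.
    N = len(x)
    X = np.array([x[k - n_taps + 1:k + 1][::-1] for k in range(n_taps - 1, N)])
    dv = d[n_taps - 1:N]
    R = X.T @ X / len(dv)            # input covariance (symmetric, Toeplitz in theory)
    P = X.T @ dv / len(dv)           # input/desired cross-correlation
    return np.linalg.solve(R, P)     # generic O(N^3) solve

# Recover a known 4-tap filter from clean data:
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
w_true = np.array([1.0, -0.5, 0.25, 0.1])
d = np.convolve(x, w_true)[:len(x)]
print(wiener_filter(x, d, 4))        # approximately w_true

With scipy available, scipy.linalg.solve_toeplitz could be used instead of the generic solve to exploit the Toeplitz structure noted above.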

From the point of view of an adaptive system, the Wiener solution is still not elegant, because one requires knowledge of all the data samples to compute equation (1.2). A sample-by-sample (iterative) algorithm is more desirable, as it suits the framework of an adaptive system. The most commonly used algorithms to iteratively estimate the optimal Wiener solution w* are the stochastic-gradient-based Least Mean Squares (LMS) and the fixed-point type Recursive Least Squares (RLS).


Least Mean Squares (LMS) Algorithm

The gradient of the cost function in (1.1) is given by

∂J(w)/∂w = −2E[e_k x_k]    (1.3)

Notice that the output of the adaptive filter y_n is simply the inner product between the weight vector w and the vector x_n, which is comprised of the delayed versions of the input signal. Instead of computing the exact gradient, Widrow and fellow researchers [9,10] proposed the instantaneous gradient, which considers only the most recent data samples (both input and desired). This led to the development of the stochastic gradient algorithm for MSE minimization that is popularly known as the Least Mean Squares (LMS) algorithm. The stochastic gradient is given by

∂J(w)/∂w = −2 e_k x_k    (1.4)

Once the instantaneous gradient is known, the search proceeds in the direction opposite to the gradient, which gives us the stochastic LMS algorithm in (1.5).

w(k+1) = w(k) + η(k) e_k x_k    (1.5)

The term η(k) denotes a time-varying step-size that is typically chosen from a set of small positive numbers. Under mild conditions, it is possible to show that the LMS algorithm converges in the mean to the Wiener solution [10-14]. The stochastic LMS algorithm is linear in complexity, i.e., O(N), and allows on-line, local computations. These nice features facilitate efficient hardware implementation for real-world adaptive systems. Being a stochastic gradient algorithm, LMS suffers from slow convergence and excessive misadjustment in the presence of noise [14,15]. Higher-order methods have been proposed to mitigate these effects; mainly, they are variants of the Quasi-Newton, Levenberg-Marquardt (LM) and Conjugate-Gradient (CG) methods popular in optimization [16,17]. Alternatively, we can derive a recursive fixed-point algorithm to iteratively estimate the optimal Wiener solution. This is the well-known Recursive Least Squares (RLS) algorithm [18,19].
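
A minimal sketch of the LMS recursion (1.5), with a constant step size mu standing in for the time-varying η(k); this is an illustration under the FIR embedding described above, not code from the dissertation:

import numpy as np

def lms(x, d, n_taps, mu=0.01):
    # Stochastic-gradient adaptation: w(k+1) = w(k) + mu * e_k * x_k, O(N) per sample.
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        xk = x[k - n_taps + 1:k + 1][::-1]   # delayed input vector x_k
        e = d[k] - w @ xk                    # instantaneous error e_k
        w = w + mu * e * xk                  # instantaneous-gradient update (1.5)
    return w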

Recursive Least Squares (RLS) Algorithm

The derivation of the RLS algorithm utilizes the fact that the input covariance matrix R can be iteratively estimated from its past values using the recursive relation

R(k) = R(k−1) + x_k x_k^T    (1.6)

The above equation can also be viewed as a rank-1 update on the input covariance matrix R. Further, the cross-correlation vector P satisfies the following recursion.

P(k) = P(k−1) + x_k d_k    (1.7)

We know that the optimal Wiener solution at time instant k is simply

w*(k) = R^{-1}(k) P(k)    (1.8)

Recall the matrix inversion lemma [7,8] at this point, which allows us to recursively update the inverse of a matrix:

R^{-1}(k) = R^{-1}(k−1) − [R^{-1}(k−1) x_k x_k^T R^{-1}(k−1)] / [1 + x_k^T R^{-1}(k−1) x_k]    (1.9)

It is important to note that the inversion lemma is useful only when the matrix itself can be expressed using reduced-rank updates as in equation (1.6). By plugging equation (1.9) into the Wiener solution in (1.8) and using the recursive update for P(k) in (1.7), we can derive the RLS algorithm outlined in Table 1-1 below.

Table 1-1. Outline of the RLS Algorithm.

Initialize R^{-1}(0) = cI, where c is a large positive constant
w(0) = 0, i.e., initialize the weight vector to an all-zero vector
At every iteration, compute
    K(k) = R^{-1}(k−1) x_k / [1 + x_k^T R^{-1}(k−1) x_k]
    e(k) = d_k − w^T(k−1) x_k
    w(k) = w(k−1) + K(k) e(k)
    R^{-1}(k) = R^{-1}(k−1) − K(k) x_k^T R^{-1}(k−1)

The RLS algorithm is a truly fixed-point method, as it tracks the exact Wiener solution at every iteration. Also, observe that the complexity of the algorithm is O(N^2), as compared to the linear complexity of the LMS algorithm. This additional increase in complexity is compensated by the fast convergence and zero misadjustment of the RLS algorithm.
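
A compact Python sketch of Table 1-1 (illustrative, numpy assumed; not the dissertation's code):

import numpy as np

def rls(x, d, n_taps, c=1e4):
    # Recursive Least Squares per Table 1-1: O(N^2) per sample, no explicit inversion.
    w = np.zeros(n_taps)
    Rinv = c * np.eye(n_taps)                      # R^{-1}(0) = cI, c large and positive
    for k in range(n_taps - 1, len(x)):
        xk = x[k - n_taps + 1:k + 1][::-1]
        K = Rinv @ xk / (1.0 + xk @ Rinv @ xk)     # gain vector K(k)
        e = d[k] - w @ xk                          # a priori error e(k)
        w = w + K * e                              # weight update
        Rinv = Rinv - np.outer(K, xk @ Rinv)       # rank-1 inverse update via (1.9)
    return w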

Other Algorithms

Although LMS and RLS form the core of adaptive signal processing algorithms, researchers have proposed many other variants possessing varying degrees of complexity and performance. Important amongst them are the sign LMS algorithms that were introduced for reduced-complexity hardware implementations [20,21]. Historically, the sign-error algorithm has been utilized in the design of channel equalizers [20] and also in the 32 kbps ADPCM digital coding scheme [22]. In terms of improving the speed of convergence with minimum misadjustment, variable step-size LMS and normalized LMS algorithms have been proposed [23-27]. Leaky LMS algorithms [28] have been explored to mitigate finite word-length effects at the expense of introducing some bias in the optimal solution. Several extensions to the RLS algorithm have also been studied; some of these algorithms show improved robustness against round-off errors and superior numerical stability [29,30]. The conventional RLS algorithm works well when the data statistics do not change over time (stationarity assumption). The tracking ability of RLS in non-stationary conditions has been analyzed by Eleftheriou and Falconer [31], and many solutions have been proposed [14].


Limitations of MSE Criterion Based Linear Adaptive Systems

Although MSE-based adaptive systems have been very popular, the criterion may not be the optimal choice for many engineering applications. For instance, consider the problem of system identification [32], which is stated as follows: given a set of noisy input and output measurements, where the outputs are the responses of an unknown system, obtain a parametric model estimate of the unknown system. If the unknown system is nonlinear, then it is obvious that MSE minimization will not result in the best possible representation of the system (plant). Criteria that utilize higher-order statistics, like the error entropy, can potentially provide a better model [33,34].


Let us restrict ourselves to the class of linear parametric models. Although the Wiener solution is optimal in the least squares sense, the biased input covariance matrix R in the presence of additive white input noise yields a bias¹ in the optimal solution compared to what would have been obtained with noise-free data. This is a major drawback, since noise is omnipresent in practical scenarios. In order to illustrate the degradation in the quality of the parameter estimate, we created a random input time series with arbitrary coloring and passed it through a FIR filter with 50 taps. The filtered data were used as the desired signal. Uncorrelated white noise was added to the colored input signal and the input signal-to-noise ratio (SNR) was fixed at 0 dB. The RLS algorithm was then used to estimate the weight vector. Ideally, if the SNR were infinite, RLS would have produced a weight vector exactly matching the FIR filter. However, because of the noisy input, the RLS estimates were biased, as can be seen in Figure 1-2. This is a very serious drawback of the MSE criterion, further accentuated by the fact that the optimal Wiener MSE solution varies with changing noise power.

¹The Wiener solution with noise-free data gives unbiased estimates. We refer to this mismatch in the estimates obtained with and without noise as the bias introduced by noise.

Figure 1-2. Parameter estimates using RLS algorithm with noisy data (filter coefficients estimated using RLS vs. the true values).

Researchers have dwelt on this problem for many years and several modifications have been proposed to mitigate the effect of noise on the estimate. Total least-squares (TLS) is one method which is quite powerful in eliminating the bias due to noise [35-42]. The instrumental variables (IV) method, proposed as an extension to least squares (LS), has been previously applied for parameter estimation in white noise [32]. This method requires choosing a set of instruments that are uncorrelated with the noise in the input [32,43]. Yet another classical approach is subspace Wiener filtering [14,44]. This approach tries to suppress the bias by performing an optimal subspace projection (onto the principal component space) and then training a filter in the reduced input space. In the next few sections, we will briefly cover some of these methods and discuss their benefits and limitations.
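
As a concrete illustration of the subspace Wiener filtering idea just described (a minimal sketch, not the cited authors' exact procedure; numpy assumed):

import numpy as np

def subspace_wiener(X, d, n_components):
    # X: (samples, taps) embedded noisy input; d: desired samples.
    # Project onto the leading principal components, then solve Wiener there.
    R = X.T @ X / len(d)
    evals, evecs = np.linalg.eigh(R)           # eigenvalues in ascending order
    U = evecs[:, -n_components:]               # estimated signal-subspace basis
    Z = X @ U                                  # inputs in the reduced space
    w_sub = np.linalg.solve(Z.T @ Z, Z.T @ d)  # Wiener solution in the subspace
    return U @ w_sub                           # map back to the full tap space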

Total Least Squares (TLS) and Other Methods

Mathematically speaking, TLS solves an over-determined set of linear equations of the form Ax = b, where A ∈ ℜ^(m×n) is the data matrix, b ∈ ℜ^m is the desired vector, x ∈ ℜ^n is the parameter vector, and m denotes the number of different observation vectors, each of dimension n [41]. Alternatively, the linear equations can be written in the form [A; b][x^T; −1]^T = 0, where [A; b] denotes an augmented data matrix. Let the SVD [8] of the augmented data matrix be [A; b] = UΣV^T, where U^T U = I, V^T V = I, and Σ = [diag(σ_1, σ_2, ..., σ_{n+1}); 0] with all singular values σ_k ≥ 0. If [A; b][x^T; −1]^T = 0, the smallest singular value must be zero. This is possible only if [x^T; −1]^T is a singular vector of [A; b] (corresponding to the zero singular value), normalized such that its (n+1)-th element is −1. When [A; b] is a symmetric square matrix, the solution reduces to finding the eigenvector corresponding to the smallest eigenvalue of [A; b]. The TLS solution in this special case is then

[x^T; −1]^T = −v_{n+1} / v_{n+1,n+1}    (1.10)

where v_{n+1,n+1} is the last element of the minor eigenvector v_{n+1}. The total least-squares technique can be easily applied to estimate the optimal solution using minor components estimation algorithms [45-51]. The computation of the TLS solution requires efficient algorithms for extracting the principal components [52] or the eigenvectors of the data covariance matrix. Eigendecomposition is a well-studied problem and many algorithms have been proposed for online estimation of eigenvectors and eigenvalues directly from data samples [53-77]. We have proposed robust, sample-efficient algorithms for Principal Components Analysis (PCA) that have outperformed most of the available methods. A brief review of PCA theory and the proposed algorithms is given in appendix A, along with brief mathematical analyses of the proposed algorithms according to the principles of stochastic approximation theory [78-85]. A fast minor components analysis (MCA) based TLS algorithm [86] is discussed in appendix B.
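
For reference, the batch TLS solution described above reduces to a few lines (an illustrative sketch; numpy assumed):

import numpy as np

def tls(A, b):
    # Total least squares via the SVD of the augmented matrix [A; b], equation (1.10).
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)       # rows of Vt are right singular vectors
    v = Vt[-1]                        # minor singular vector (smallest singular value)
    return -v[:-1] / v[-1]            # scale so that the last component equals -1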

Limitations of TLS

Total least squares gives unbiased estimates only when the noise in the input and the desired data are independent and identically distributed (i.i.d.) with the same variance. Further, when the noise is truly i.i.d. Gaussian-distributed, the TLS solution is also the maximum likelihood solution. However, the assumption of equal noise variances is very restrictive, as measurement noises seldom have similar variances. The Generalized TLS (GTLS) problem [87] specifically deals with cases where the noise (still assumed to be i.i.d.) variances are different. However, the caveat is that the ratio of the noise variances is assumed to be known, which is, once again, not a practical assumption.

Extended TLS for Correlated Noise

In order to overcome the i.i.d. assumption, Mathews and Cichocki have proposed the Extended TLS (ETLS) [88], which allows the noise to have non-zero correlations. We will briefly describe their approach. Let the noisy augmented input matrix [A; b] be represented as H. Then, the square matrix H^T H can be written as a combination of the clean data matrix H̄^T H̄ and the noise covariance matrix R_v:

H^T H = H̄^T H̄ + R_v    (1.11)

The above equation holds when the noise is uncorrelated with the clean data. This assumption is reasonable, as the noise processes are in general unrelated (hence independent) to the physical sources that produced the data. Assume that there exists a matrix transformation of H such that

Z = H R_v^{−1/2}    (1.12)

The transformed data correlation matrix of Z is simply

Z^T Z = R_v^{−1/2} H̄^T H̄ R_v^{−1/2} + I    (1.13)

Equation (1.13) basically tells us that the transformed data are corrupted by an i.i.d. noise process. Hence, we can now find the regular TLS solution with the transformed data by estimating the minor eigenvector of the matrix Z^T Z. In other words, the optimal ETLS solution for correlated noise signals is obtained by estimating the generalized eigenvector corresponding to the smallest generalized eigenvalue of the matrix pencil (H^T H, R_v). Solving the generalized eigenvalue problem [8] is a non-trivial task and there are only a handful of algorithms that can provide online solutions. Our research in the area of PCA provided us the tools to develop a novel generalized eigenvalue decomposition (GED) algorithm. A short summary of the GED problem, existing learning algorithms and the proposed algorithm is given in appendix C.
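
Assuming the noise covariance R_v were known (the very assumption criticized below), a batch ETLS sketch could solve the symmetric-definite pencil directly; this illustration uses scipy's generalized symmetric eigensolver and is not the authors' online algorithm:

import numpy as np
from scipy.linalg import eigh

def etls(A, b, Rv):
    # Extended TLS sketch: minor generalized eigenvector of the pencil (H^T H, R_v).
    # Rv must be the (n+1)x(n+1) positive-definite noise covariance of [A; b].
    H = np.column_stack([A, b])       # noisy augmented data matrix
    evals, evecs = eigh(H.T @ H, Rv)  # generalized eigenvalues, ascending order
    v = evecs[:, 0]                   # eigenvector of the smallest eigenvalue
    return -v[:-1] / v[-1]            # TLS-style normalization [x; -1]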

Although the ETLS seems to solve the general problem of linear parameter estimation, there is an inherent drawback: the ETLS requires full knowledge of the noise correlation matrix R_v. This assumption potentially leaves the problem of linear parameter estimation with noisy data wide open.

Other Methods

Infinite impulse response (IIR) system identification methods [89-92] deal with the problem of measurement noise in the output (desired) data. The instrumental variables (IV) method [93] for IIR system identification, on the other hand, does not guarantee stability. It has been known for quite a while that the unit-norm constraint for equation-error (EE) based system identification is much better than the conventional monic constraint [90-92]. However, imposing the unit-norm constraint appears too restrictive and hence limits applicability.

Summary

In this chapter, we started by describing linear adaptive systems, their criteria and associated algorithms. Most often, adaptive solutions are derived using the MSE criterion. We showed that the MSE criterion produces biased solutions in the presence of additive noise, and that the optimal Wiener MSE solution varies with changing noise variances, which is highly undesirable. Alternative approaches to combat the effect of noise on the parameter estimates have been explored. The most popular approaches are based on total least-squares principles. Generalized TLS and Extended TLS improve upon the ability of the TLS to provide bias-free estimates in the presence of additive noise. However, these methods rely on assumptions that can be very restrictive for real-world applications. Further, they require SVD and generalized SVD computations [94-105], which increase the complexity. Another method, subspace Wiener filtering, relies on the accurate estimation of the signal subspace from the noisy data. This technique reduces the bias when the signals are distinguishable from noise (the high-SNR scenario); otherwise it fails, since the noise and signal subspaces cannot be separated.

Thus, it would not be fallacious to say that the problem of linear parameter estimation with noisy data is a hard problem that does not yet have a satisfactory solution in the existing literature. One of the major contributions of this dissertation is the development of an elegant solution to this problem without making any unreasonable assumptions about the noise statistics. Towards this end, we will present a new criterion based on the error signal and derive new learning algorithms.


CHAPTER 2
AUGMENTED ERROR CRITERION FOR LINEAR ADAPTIVE SYSTEMS

Introduction

In the previous chapter, we discussed the mean-squared error (MSE) criterion, which has been the workhorse of linear optimization theory due to the simple and analytically tractable structure of linear least squares. In adaptive filter theory, the classical Wiener-Hopf equations [6,10] are more commonly used, owing to the extension of least squares to functional spaces (Hilbert spaces [106]) proposed by Wiener [6]. However, for finite impulse response (FIR) filters (vector spaces), the two solutions coincide. There are also a number of important properties that help us understand the statistical behavior of the Wiener solution, namely the orthogonality of the error signal to the input vector space, as well as the whiteness of the predictor error signal for stationary inputs, provided the filter is long enough [5,14]. However, in a number of applications of practical importance, the error sequence produced by the Wiener filter is not white. One of the most important is the case of inputs corrupted by white noise, where the Wiener solution is biased by the noise variance, as we saw in Chapter 1.

In this chapter, we will develop a new criterion that augments the MSE criterion. In fact, MSE becomes a special case of this new criterion, which we call the Augmented Error Criterion (AEC). Further, we will show that, under some conditions, this new criterion can produce a partially white error sequence at the output of an adaptive system even with noisy data. This special case of the AEC is called the Error Whitening Criterion (EWC). Our approach in this chapter will be as follows. We will first focus on the problem of parameter estimation with noisy data and motivate the derivation of the error whitening criterion. Then, we will deduce the more generic augmented error criterion.


Error Whitening Criterion (EWC)

Consider the problem of parameter estimation with noisy data. Instead of minimizing the MSE, we will tackle the problem by introducing a new adaptation criterion that enforces zero autocorrelation of the error signal beyond a certain lag; hence the name error whitening criterion (EWC). Since we want to preserve the on-line properties of the adaptation algorithms, we propose to expand the error autocorrelation around a lag larger than the filter length using a Taylor series. Thus, instead of an error signal, we will end up with an error vector, containing as many components as the terms kept in the Taylor series expansion. A schematic diagram of the proposed adaptation structure is depicted in Figure 2-1. The properties of this solution are very interesting, and it contains the Wiener solution as a special case. Additionally, for the case of two error terms, the same analytical tools developed for the Wiener filter can be applied with minor modifications. Moreover, when the input signal is contaminated with additive white noise, EWC produces the same optimal solution that would be obtained with the noise-free data, with the same computational complexity as the Wiener solution.

Figure 2-1. Schematic diagram of EWC adaptation.

Motivation for Error Whitening Criterion

The classical Wiener solution yields a biased estimate of the reference filter weight vector in the presence of input noise. This problem arises due to the contamination of the input signal autocorrelation matrix by that of the additive noise. If a signal is contaminated with additive white noise, only the zero-lag autocorrelation is biased, by the amount of the noise power; autocorrelation values at all other lags remain at their original values. This observation rules out MSE as a good optimization criterion for this case. In fact, since the error power is the value of the error autocorrelation function at zero lag, the optimal weights will be biased, because they depend on the input autocorrelation values at zero lag. The fact that the autocorrelation values at non-zero lags are unaffected by the presence of noise will prove useful in determining an unbiased estimate of the filter weights.

Analysis of the Autocorrelation of the Error Signal

The question that arises is what lag should be used to obtain the true weight vector in the presence of white input noise. Let us consider the autocorrelation of the training error at non-zero lags. Suppose noisy training data of the form (x̃(t), d̃(t)) are provided, where x̃(t) = x(t) + v(t) and d̃(t) = d(t) + u(t), with x(t) being the sample of the noise-free input vector at time t (time is assumed to be continuous), v(t) the additive white noise vector on the input, d(t) the noise-free desired output, and u(t) the additive white noise on the desired output. Suppose that the true weight vector of the reference filter that generated the data is w_T (moving average model). Then the error at time t is e(t) = (d(t) + u(t)) − (x(t) + v(t))^T w, where w is the estimated weight vector. Equivalently, when the desired response belongs to the subspace of the input, i.e., d(t) = x^T(t) w_T, the error can be written as

e(t) = (x^T(t) w_T + u(t)) − (x(t) + v(t))^T w = x^T(t)(w_T − w) + u(t) − v^T(t) w    (2.1)

Given this noisy training data, the MSE-based Wiener solution will not yield a residual training error that has zero autocorrelation for a number of consecutive lags, even when the contaminating noise signals are white. From (2.1) it is easy to see that the error will have a zero autocorrelation function if and only if

* the weight vector is equal to the true weights of the reference model, and

* the lag is beyond the Wiener filter length.

During adaptation, the issue is that the filter weights are not set at w_T, so the error autocorrelation function will generally be nonzero. Therefore, a criterion to determine the true weight vector when the data are contaminated with white noise should force the long lags (beyond the filter length) of the error autocorrelation function to zero. This is exactly what the error whitening criterion (EWC) proposed here will do. There are two interesting situations that we should consider:

* What happens when the selected autocorrelation lag is smaller than the filter length?

* What happens when the selected autocorrelation lag is larger than the lag at which the autocorrelation function of the input signal vanishes?

The answer to the first question is simply that the solution will still be biased, since it will be obtained by inverting a biased input autocorrelation matrix: when the selected lag is smaller than the filter length, the noise power contaminates a sub-diagonal of the lagged input autocorrelation matrix, where the zero-lag autocorrelation of the input signal shows up. In the special case of MSE, the selected lag is zero and the zeroth sub-diagonal becomes the main diagonal; thus the solution is biased by the noise power.

The answer to the second question is equally important. The MSE solution is quite stable because it is determined by the inverse of a diagonally dominant Toeplitz matrix; the diagonal dominance is guaranteed by the fact that the autocorrelation function of a real-valued signal peaks at zero lag. If other lags are used in the criterion, the lag must be selected such that the corresponding autocorrelation matrix (which will be inverted) is not ill-conditioned. If the selected lag is larger than the length of the input autocorrelation function, then the autocorrelation matrix becomes singular and a solution cannot be obtained. Therefore, lags beyond the input signal correlation time should also be avoided in practice.

The observation that constraining the higher lags of the error autocorrelation function to zero yields unbiased weight solutions is quite significant. Moreover, the algorithmic structure of this new solution and the lag-zero MSE solution are still very similar. The noise-free case helps us understand why this similarity occurs. Suppose the desired signal is generated by the following equation: d(t) = x^T(t) w_T, where w_T is the true weight vector. Multiplying both sides by x(t − Δ) from the left and taking the expected value of both sides yields E[x(t − Δ) d(t)] = E[x(t − Δ) x^T(t)] w_T. Similarly, we can obtain E[x(t) d(t − Δ)] = E[x(t) x^T(t − Δ)] w_T. Adding the corresponding sides of these two equations yields

E[x(t) d(t − Δ) + x(t − Δ) d(t)] = E[x(t) x^T(t − Δ) + x(t − Δ) x^T(t)] w_T    (2.2)

This equation is similar to the standard Wiener-Hopf equation [9,10], E[x(t) d(t)] = E[x(t) x^T(t)] w_T. Yet it is different, due to the correlations being evaluated at a lag other than zero, which means that the weight vector can be determined by constraining higher-order lags of the error autocorrelation. Now that we have described the structure of the solution, let us address the issue of training linear systems using error correlations. Adaptation exploits the sensitivity of the error autocorrelation with respect to the weight vector of the adaptive filter. We will formulate the solution in continuous time first, for the sake of simplicity. If the support of the impulse response of the adaptive filter is of length m, we evaluate the derivative of the error autocorrelation function with respect to the weight vector at a lag Δ, where Δ ≥ m (both real numbers). Assuming that the noises in the input and desired are uncorrelated with each other and with the input signal, we get

∂ρ_e(Δ)/∂w = ∂E[e(t) e(t − Δ)]/∂w
           = ∂E[(w_T − w)^T x(t) x^T(t − Δ)(w_T − w) + (u(t) − v^T(t) w)(u(t − Δ) − v^T(t − Δ) w)]/∂w
           = −2 E[x(t) x^T(t − Δ)](w_T − w)    (2.3)

The identity in equation (2.3) immediately tells us that the sensitivity of the error autocorrelation with respect to the weight vector becomes zero, i.e., ∂ρ_e(Δ)/∂w = 0, if (w_T − w) = 0. This observation emphasizes the following important conclusion: when given training data that are generated by a linear filter but contaminated with white noise, it is possible to derive simple adaptive algorithms that determine the underlying filter weights without bias. Furthermore, if (w_T − w) is not in the null space of E[x(t) x^T(t − Δ)], then only (w_T − w) = 0 makes ρ_e(Δ) = 0 and ∂ρ_e(Δ)/∂w = 0. But looking at (2.3), we conclude that a proper delay depends on the autocorrelation of the input signal, which is, in general, unknown. Therefore, the selection of the delay Δ is important. One possibility is to evaluate the error autocorrelation function at different lags Δ ≥ m and check for a non-zero input autocorrelation function at that delay, which would be very time consuming and inappropriate for on-line algorithms.

Instead of searching for a good lag Δ, consider the Taylor series approximation of the autocorrelation function around a fixed lag L, where L > m:

ρ_e(Δ) = ρ_e(L) + ρ̇_e(L)(Δ − L) + (1/2) ρ̈_e(L)(Δ − L)^2 + ...
       = E[e(t) e(t − L)] − E[e(t) ė(t − L)](Δ − L) + (1/2) E[e(t) ë(t − L)](Δ − L)^2 + ...    (2.4)

In (2.4), ė(t) and ë(t) (see Figure 2-1) represent the derivatives of the error signal with respect to the time index. Notice that we do not take the Taylor series expansion around zero lag, for the reasons indicated above. Moreover, L should be less than the correlation time of the input, so that the Taylor expansion has a chance of being accurate. But since we bring more lags into the expansion, the choice of the lag becomes less critical than in (2.3). In principle, the more terms we keep in the Taylor expansion, the more constraints we impose on the autocorrelation of the error in adaptation. Therefore, instead of finding the weight vector that makes the actual gradient in (2.3) zero, we find the weight vector that makes the derivative of the approximation in (2.4) with respect to the weight vector zero.

If the adaptive filter is operating in discrete time instead of continuous time, the differentiation with respect to time can be replaced by a first-order forward difference, ė(n) = e(n) − e(n − L). Higher-order derivatives can also be approximated by their corresponding forward-difference estimates, e.g., ë(n) = e(n) − 2e(n − L) + e(n − 2L), etc. Although the forward difference normally uses two consecutive samples, for reasons that will become clear in the following sections of the chapter, we will utilize two samples separated by L samples in time. The first-order truncated Taylor series expansion of the error autocorrelation function at lag Δ, evaluated around L, becomes

ρ_e(Δ) ≈ E[e(n) e(n − L)] − E[e(n)(e(n) − e(n − L))](Δ − L)
       = −(Δ − L) E[e^2(n)] + (1 + Δ − L) E[e(n) e(n − L)]    (2.5)

Analyzing (2.5), we note another advantage of the Taylor series expansion: the familiar MSE is part of the expansion. Notice also that as Δ → L, the MSE term disappears and only the lag-L error autocorrelation remains. On the other hand, as Δ → L − 1, only the MSE term prevails in the autocorrelation approximation. Introducing more terms in the Taylor expansion will bring in error autocorrelation constraints from lags that are integer multiples of L.

Augmented Error Criterion (AEC)

We are now in a position to formulate the augmented error criterion (AEC). To the regular MSE term, we add another term, β E[ė^2(n)], to arrive at the augmented error criterion shown in equation (2.6):

J(w) = E[e^2(n)] + β E[ė^2(n)]    (2.6)

where β is a real scalar parameter. Equivalently, (2.6) can also be written as

J(w) = (1 + 2β) E[e^2(n)] − 2β E[e(n) e(n − L)]    (2.7)

which has the same form as (2.5). Notice that when β = 0 we recover the MSE in (2.6) and (2.7). Similarly, we would have to select Δ = L in order to make the first-order expansion identical to the exact value of the error autocorrelation function. Substituting the identity (1 + 2β) = −(Δ − L) and using Δ = L, we observe that β = −1/2 eliminates the MSE term from the criterion. Interestingly, this value will reappear in a later discussion, when we optimize β in order to reduce the bias in the solution introduced by input noise. If β is positive, then minimizing the cost function J(w) is equivalent to minimizing the MSE with a constraint that the error signal must be smooth. Thus, the weight vector corresponding to the minimum of J(w) will result in a higher MSE than the Wiener solution.

The same criterion can also be obtained by considering performance functions of the form

J(w) = E[e^2(n)] + β E[ė^2(n)] + γ E[ë^2(n)] + ...    (2.8)

where the coefficients β, γ, etc. are assumed to be positive. Notice that (2.8) is the L2 norm of a vector of different objective functions, whose components consist of e(n), ė(n), ë(n), etc. Due to the equivalence provided by the difference approximations of the derivatives, these terms constrain the error autocorrelation at lags that are multiples of L, as well as the error power, as seen in (2.8).

In summary, the AEC defined by equation (2.6) can take many forms and hence results in different optimal solutions:

* If β = 0, then the AEC reduces exactly to the MSE criterion.

* If β = −0.5, then the AEC becomes the EWC, which results in an unbiased estimate of the parameters even in the presence of noise.

* If β is positive, then the cost function minimizes a combination of the MSE and a smoothness constraint.

In the following sections, we will further elaborate on the properties of the AEC.
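
To make the two equivalent forms concrete, the following sketch (illustrative names; numpy assumed) estimates the AEC cost of an error sequence both via the lag-L difference form (2.6) and via the autocorrelation form (2.7); for a stationary error the two estimates agree up to finite-sample effects:

import numpy as np

def aec_cost(e, L, beta):
    e0, eL = e[L:], e[:-L]                                        # e(n) and e(n - L)
    mse = np.mean(e0 ** 2)                                        # E[e^2(n)]
    via_diff = mse + beta * np.mean((e0 - eL) ** 2)               # form (2.6)
    via_lag = (1 + 2 * beta) * mse - 2 * beta * np.mean(e0 * eL)  # form (2.7)
    return via_diff, via_lag

rng = np.random.default_rng(1)
# For a white error sequence and beta = -1/2 both estimates are near zero,
# consistent with the EWC discarding the (noise-biased) error power term.
print(aec_cost(rng.standard_normal(100000), L=5, beta=-0.5))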









Properties of Augmented Error Criterion

Shape of the Performance Surface

Suppose that noise-free training data of the form (x(n), d(n)), generated by a linear system with weight vector w_T through d(n) = x^T(n) w_T, are provided. Assume, without loss of generality, that the adaptive filter and the reference filter are of the same length; this is possible since w_T can be padded with zeros if it is shorter than the adaptive filter. Therefore the input vector x(n) ∈ ℜ^m, the weight vector w_T ∈ ℜ^m, and the desired output d(n) ∈ ℜ. Equation (2.6) has a quadratic form with a unique stationary point. If β > 0, then this stationary point is a minimum; otherwise, the Hessian of (2.6) may have mixed-sign eigenvalues. We demonstrate this fact with sample performance surfaces obtained for 2-tap FIR filters using β = −1/2.

For three differently colored training data sets, we obtain the AEC performance surfaces shown in Figure 2-2. In each row, the MSE performance surface, the AEC cost contour plot, and the AEC performance surface are shown for the corresponding training data. The eigenvalue pairs of the Hessian matrix of (2.6) are (2.35, 20.30), (−6.13, 5.21), and (−4.08, −4.14) for these representative cases in Figure 2-2. Clearly, it is possible for (2.6) to have a stationary point that is a minimum, a saddle point, or a maximum, and we start to see the differences brought about by the AEC.

The performance surface is a weighted sum of paraboloids, which will complicate gradient-based adaptation, but will not affect search algorithms utilizing curvature information. We will discuss the search techniques later in this chapter and also in Chapter 4.














Figure 2-2. The MSE performance surfaces, the AEC contour plots, and the AEC performance surfaces for three different training data sets and 2-tap adaptive FIR filters.



Analysis of the Noise-free Input Case


Theorem 2.1: The stationary point of the quadratic form in (2.6) is given by

w* = (R + βS)^{−1}(P + βQ)    (2.9)

where we define R = E[x(n) x^T(n)], S = E[ẋ(n) ẋ^T(n)], P = E[x(n) d(n)], and Q = E[ẋ(n) ḋ(n)].

Proof: Substituting the proper variables in (2.6), we obtain the following explicit expression for J(w):

J(w) = E[d^2(n)] + β E[ḋ^2(n)] + w^T(R + βS)w − 2(P + βQ)^T w    (2.10)

Taking the gradient with respect to w and equating it to zero yields

∂J(w)/∂w = 2(R + βS)w − 2(P + βQ) = 0  ⟹  w* = (R + βS)^{−1}(P + βQ)    (2.11)

Notice that selecting β = 0 in (2.6) reduces the criterion to MSE, and the optimal solution given in (2.9) reduces to the Wiener solution. Thus, the Wiener filter is a special case of the AEC solution (though not optimal for noisy inputs, as we will show later).

Corollary 1: An equivalent expression for the stationary point of (2.6) is given by

w* = [(1 + 2β)R − βR_L]^{−1}[(1 + 2β)P − βP_L]    (2.12)

where we define the matrix R_L = E[x(n − L) x^T(n) + x(n) x^T(n − L)] and the vector P_L = E[x(n − L) d(n) + x(n) d(n − L)]. Notice that the interesting choice β = −1/2 yields w* = R_L^{−1} P_L.

Proof: Substituting the definitions of R, S, P, Q, and then recollecting terms to obtain R_L and P_L yields the desired result:

w* = (R + βS)^{−1}(P + βQ)
   = {E[x(n) x^T(n)] + β E[(x(n) − x(n − L))(x(n) − x(n − L))^T]}^{−1} {E[x(n) d(n)] + β E[(x(n) − x(n − L))(d(n) − d(n − L))]}
   = [(1 + 2β)R − βR_L]^{−1}[(1 + 2β)P − βP_L]    (2.13)









From these results we deduce two extremely interesting conclusions.

Lemma 1 (Generalized Wiener-Hopf Equations): In the noise-free case, the true weight vector satisfies R_L w_T = P_L. (This result also holds for noisy data.)

Proof: This result follows immediately from the substitution of d(n) = x^T(n) w_T and d(n − L) = x^T(n − L) w_T in the definitions of R_L and P_L.

Lemma 2: In the noise-free case, regardless of the specific value of β, the optimal solution is equal to the true weight vector, i.e., w* = w_T.

Proof: This result follows immediately from the substitution of the result of Lemma 1 into the optimal solution expression given in (2.9).

The result in Lemma 1 is especially significant, since it provides a generalization of the Wiener-Hopf equations to autocorrelation and cross-correlation matrices evaluated at different lags of the signals. In these equations, L represents the specific correlation lag selected, and the choice L = 0 corresponds to the traditional Wiener-Hopf equations. The generalized Wiener-Hopf equations essentially state that the true weight vector can be determined by exploiting correlations evaluated at different lags of the signals; we are not restricted to the zero-lag correlations as in the Wiener solution.

Analysis of the Noisy Input Case

Now, suppose that we are given noisy training data (x̃(n), d̃(n)), where x̃(n) = x(n) + v(n) and d̃(n) = d(n) + u(n). The additive noise on both signals is zero-mean and uncorrelated with each other and with the input and desired signals. Assume that the additive noise u(n) on the desired signal is white (in time), and let the autocorrelation matrices of v(n) be V = E[v(n) v^T(n)] and V_L = E[v(n − L) v^T(n) + v(n) v^T(n − L)]. Under these circumstances, we have to estimate the matrices necessary to evaluate (2.9) using noisy data. These matrices evaluated with noisy data, denoted R̃, S̃, P̃, and Q̃, become (see appendix D for details)

R̃ = E[x̃(n) x̃^T(n)] = R + V
S̃ = E[(x̃(n) − x̃(n − L))(x̃(n) − x̃(n − L))^T] = 2(R + V) − R_L − V_L    (2.14)
P̃ = E[x̃(n) d̃(n)] = P
Q̃ = E[(x̃(n) − x̃(n − L))(d̃(n) − d̃(n − L))] = 2P − P_L

Finally, the optimal solution estimate of the AEC, when presented with noisy input and desired output data, will be

w̃* = (R̃ + βS̃)^{−1}(P̃ + βQ̃)
   = [(R + V) + β(2(R + V) − R_L − V_L)]^{−1}[P + β(2P − P_L)]    (2.15)
   = [(1 + 2β)(R + V) − β(R_L + V_L)]^{−1}[(1 + 2β)P − βP_L]

Theorem 2.2 (EWC Noise-Rejection Theorem): In the noisy-input data case, the optimal solution obtained using the AEC will be identically equal to the true weight vector if and only if β = −1/2, R_L ≠ 0, and V_L = 0. There are two situations to consider:

* When the adaptive linear system is an FIR filter, the input noise vector v_k consists of delayed versions of a single-dimensional noise process. In that case, V_L = 0 if and only if L ≥ m, where m is the filter length, and the single-dimensional noise process is white.

* When the adaptive linear system is an ADALINE, the input noise is a vector process. In that case, V_L = 0 if and only if the input noise vector process is white (in time) and L ≥ 1. The input noise vector may be spatially correlated.

Proof: Sufficiency of the first statement is immediately observed by substituting the given values of β and V_L. Necessity is obtained by equating (2.15) to w_T and substituting the generalized Wiener-Hopf equations provided in Lemma 1. Clearly, if R_L = 0, then there is no equation to solve, and the weights cannot be uniquely determined using this value of L. The statement regarding the FIR filter case is easily proved by noticing that the temporal correlations in the noise vector vanish once the autocorrelation lag becomes greater than or equal to the filter length. The statement regarding the ADALINE structure follows immediately from the definition of a temporally white vector process.
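
A numerical sketch of this noise-rejection property under the FIR assumptions of the theorem (white input noise, L equal to the filter length, and β = −1/2 so that, by Corollary 1, the solution is R_L^{-1} P_L); everything here is illustrative and numpy is assumed:

import numpy as np

rng = np.random.default_rng(2)
m, L, N = 4, 4, 200_000
w_true = np.array([0.7, -0.3, 0.5, 0.2])

s = rng.standard_normal(N)
x_clean = np.convolve(s, [1.0, 0.8, 0.6, 0.4])[:N]                   # colored input
d = np.convolve(x_clean, w_true)[:N] + 0.5 * rng.standard_normal(N)  # noisy desired
x = x_clean + rng.standard_normal(N)                                 # white input noise

X = np.array([x[k - m + 1:k + 1][::-1] for k in range(m - 1, N)])
dv = d[m - 1:]

w_mse = np.linalg.solve(X.T @ X, X.T @ dv)        # Wiener: biased by input noise
X0, XL, d0, dL = X[L:], X[:-L], dv[L:], dv[:-L]
R_L = (XL.T @ X0 + X0.T @ XL) / len(d0)           # E[x(n-L)x^T(n) + x(n)x^T(n-L)]
P_L = (XL.T @ d0 + X0.T @ dL) / len(d0)           # E[x(n-L)d(n) + x(n)d(n-L)]
w_ewc = np.linalg.solve(R_L, P_L)                 # beta = -1/2 (EWC) solution
# The EWC error norm should be far smaller than the biased MSE error norm:
print(np.linalg.norm(w_mse - w_true), np.linalg.norm(w_ewc - w_true))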

Orthogonality of Error to Input

An important question regarding the behavior of the optimal solution obtained using the AEC is the relationship between the residual error signal and the input vector. In the case of MSE, we know that the Wiener solution renders the error orthogonal to the input signal, i.e., E[e(n) x(n)] = 0 [10,14,15]. However, this result holds only when there is no noise and when the estimated filter length is greater than the actual system impulse response. Similarly, we can determine what the AEC will achieve.

Lemma 3: At the optimal solution of the AEC, the error and the input random processes satisfy β E[e(n) x(n − L) + e(n − L) x(n)] = (1 + 2β) E[e(n) x(n)] for all L > 0.

Proof: We know that the optimal solution of the AEC for any L > 0 is obtained when the gradient of the cost function with respect to the weights is zero. Therefore,

∂J/∂w = −2E[e(n) x(n)] − 2β E[(e(n) − e(n − L))(x(n) − x(n − L))]
      = −2{(1 + 2β) E[e(n) x(n)] − β E[e(n) x(n − L) + e(n − L) x(n)]} = 0    (2.16)

It is interesting to note that if β = −1/2, then we obtain E[e(n) x(n − L) + e(n − L) x(n)] = 0 for all L. On the other hand, since the criterion reduces to MSE for β = 0, we obtain E[e(n) x(n)] = 0. The result shown in (2.16), if interpreted in terms of Newtonian physics, reveals an interesting insight into the behavior of the EWC criterion (β = −1/2) at its optimal solution (regardless of the length of the reference filter that created the desired signal). In a simplistic manner, this behavior could be summarized by the following statement: the optimal solution of the EWC tries to decorrelate the residual error from the estimated future value of the input vector (see appendix E for details).

The case β = −1/2 is especially interesting, because it results in complete noise rejection. Notice that, in this case, since the optimal solution is equal to the true weight vector, the residual error is given by e(n) = u(n) − v^T(n) w_T, which is composed purely of the noise in the training data. Certainly, this is the only way that the adaptive filter can achieve E[e(n) x(n − L) + e(n − L) x(n)] = 0 for all L values, since E[e(n) x(n − L)] = E[e(n − L) x(n)] = 0 for this error signal. Thus, the EWC not only orthogonalizes the instantaneous error and input signals, but orthogonalizes all lags of the error from the input.

Relationship to Error Entropy Maximization

Another interesting property that the AEC solution exhibits is its relationship with entropy [107]. Notice that when β < 0, the optimization rule tries to minimize the MSE, yet it simultaneously tries to maximize the separation between samples of the error. We could regard the sample separation as an estimate of the error entropy; in fact, the entropy estimation literature is full of methods based on sample separations [108-113]. Specifically, the EWC case with β = −1/2 finds the perfect balance between entropy and MSE that allows us to eliminate the effect of noise on the solution. Recall that the Gaussian density displays maximum entropy among distributions of fixed variance [114]. In the light of this fact, the aim of the EWC could be understood as finding the minimum error-variance solution while keeping the error close to Gaussian. Notice that, due to the central limit theorem [114], the error signal will be closely approximated by a Gaussian density when there is a large number of taps. A brief description of the relationship between entropy (using estimators) [115-117] and sample differences is provided in appendix F.

Note on Model-Order Selection

Model-order selection is another important issue in adaptive filter theory. The desired behavior of an adaptive filter is to find the right balance between approximating the training data as accurately as possible and generalizing to unseen data with precision [118]. One major cause of poor generalization is known to be excessive model complexity [118]. Under these circumstances, the designer's aim is to determine the least complex adaptive system (which translates to a smaller number of weights in the case of linear systems) that minimizes the approximation error. Akaike's information criterion (AIC) [119] and Rissanen's minimum description length (MDL) [120] are two important theoretical results regarding model-order selection. Such methods require the designer to evaluate an objective function, which is a combination of the MSE and the filter length or the filter weights, using different lengths of adaptive filters.

Consider the case of overmodeling in the problem of estimating a linear FIR filter (assume N taps). If we use the MSE criterion and assume that there is no noise in the data, then the estimated Wiener solution will have exactly N non-zero elements that exactly match the true FIR filter. This is a very nice property of the MSE criterion. However, when there is noise in the data, this property of MSE no longer holds; increasing the length of the adaptive filter will only result in more parameter bias in the Wiener solution. On the other hand, the EWC successfully determines the length of the true filter, even in the presence of additive noise. In the overmodeling case, the additional taps will decay to zero, indicating that a smaller filter is sufficient to model the data. This is exactly what we would like an automated regularization algorithm to achieve: determining the proper length of the filter without requiring external discrete modifications of this parameter. Therefore, the EWC extends the regularization capability of MSE to the case of noisy training data. Alternatively, the EWC could be used as a criterion for determining the model order in a fashion similar to standard model-order selection methods. Given a set of training samples, one could solve for the optimal EWC solution for various lengths of the adaptive filter. As the length of the adaptive filter is increased past the length of the true filter, the error power of the EWC solution becomes constant. Observing this point of transition from variable to constant error power, we can determine the exact model order of the original filter.

The Effect of β on the Weight Error Vector

The effect of the cost function's free parameter β on the accuracy of the solution (compared to the true weight vector that generated the training data) is another crucial issue. In fact, it is possible to determine the dynamics of the weight error as a function of β. This result is provided in the following lemma.

Lemma 4 (The effect of β on the AEC solution): In the noisy training data case, the derivative of the error vector between the optimal AEC solution and the true weight vector, i.e., ε̃ = w̃* − w_T, with respect to β is given by

∂ε̃/∂β = −[(1 + 2β)(R + V) − βR_L]^{−1} {[2(R + V) − R_L] ε̃ + 2V w_T}    (2.17)

Proof: Recall from (2.15) that in the noisy data case (with V_L = 0), the optimal AEC solution is given by w̃* = [(1 + 2β)(R + V) − βR_L]^{−1}[(1 + 2β)P − βP_L]. Using the chain rule for the derivative and the fact that for any nonsingular matrix A(β), ∂A^{−1}/∂β = −A^{−1}(∂A/∂β)A^{−1}, the result in (2.17) follows from a straightforward derivation. In order to obtain the derivative at β = −1/2, we simply substitute this value into (2.17).

The significance of Lemma 4 is that it shows that no finite β value will make this error derivative zero. The matrix inverse, on the other hand, approaches zero for unboundedly growing β. In addition, (2.17) could be used to determine the derivative of the Euclidean error norm, ∂‖ε̃‖/∂β.

Numerical Case Studies of AEC with the Theoretical Solution

In the preceding sections, we have built the theory of the augmented error criterion and its special case, the error whitening criterion, for linear adaptive filter optimization. We have investigated the behavior of the optimal solution as a function of the cost function parameters, as well as determined the optimal value of this parameter in the noisy training data case. This section is designed to demonstrate these theoretical results in numerical case studies with Monte Carlo simulations. In these simulations, the following scheme is used to generate the required autocorrelation and cross-correlation matrices.

Given the scheme depicted in Figure 2-3, it is possible to determine the true analytic auto/cross-correlations of all signals of interest in terms of the filter coefficients and the noise powers. Suppose s, v, and u are zero-mean white noise signals with powers σ_s^2, σ_v^2, and σ_u^2, respectively. Suppose that the coloring filter h and the mapping filter w are unit-norm.

Figure 2-3. Demonstration scheme with coloring filter h, true mapping filter w, and the uncorrelated white signals.

Under these conditions, we obtain

E[x(n) x(n − Δ)] = σ_s^2 Σ_k h_k h_{k+Δ}    (2.18)

E[(x(n) + v(n))(x(n − Δ) + v(n − Δ))] = E[x(n) x(n − Δ)] + σ_v^2 δ(Δ)    (2.19)

E[(x(n) + v(n)) d(n − Δ)] = Σ_l w_l E[x(n) x(n − Δ − l)]    (2.20)


For each combination of SNR from {−10 dB, 0 dB, 10 dB}, β from {−0.5, −0.3, 0, 0.1}, m from

{2, ..., 10}, and L from {m, ..., 20}, we have performed 100 Monte Carlo simulations using

randomly selected 30-tap FIR coloring and m-tap mapping filters. The length of the

mapping filters and that of the adaptive filters were selected to be equal in every case. In

all simulations, we used an input signal power of σ_x² = 1, and the noise powers σ_v² = σ_u²

are determined from the given SNR using SNR = 10 log₁₀(σ_x²/σ_v²). The matrices R, S,

P, and Q, which are necessary to evaluate the optimal solution given by (2.15), are then

evaluated analytically using (2.18), (2.19), and (2.20). The results obtained are

summarized in Figure 2-4 and Figure 2-5, where for the three SNR levels selected, the














average squared error norm for the optimal solutions (in reference to the true weights) is

given as a function of L and m for different β values. In Figure 2-4, we present the

average normalized weight vector error norm obtained using AEC at different SNR levels

and using different β values, as a function of the correlation lag L that is used in the

criterion. The filter length was fixed to 10 in these simulations.




Figure 2-4. The average squared error norm of the optimal weight vector as a function of
autocorrelation lag L for various β values and SNR levels.




Figure 2-5. The average squared error norm of the optimal weight vector as a function of
filter length m for various β values and SNR levels.









From the theoretical analysis, we know that if the input autocorrelation matrix is

invertible, then the solution accuracy should be independent of the autocorrelation lag L.

The results of the Monte Carlo simulations presented in Figure 2-4 conform to this fact.

As expected, the optimal choice of β = −1/2 determined the correct filter weights

exactly. Another set of results, presented in Figure 2-5, shows the effect of filter length

on the accuracy of the solutions provided by the AEC. The optimal value of β = −1/2

always yields the perfect solution, whereas the accuracy of the optimal weights degrades

as this parameter is increased towards zero (i.e., as the weights approach the Wiener

solution). An interesting observation from Figure 2-5 is that for SNR levels below zero,

the accuracy of the solutions using sub-optimal β values increases, whereas for SNR

levels above zero, the accuracy decreases when the filter length is increased. For zero

SNR, on the other hand, the accuracy seems to be roughly unaffected by the filter length.

The Monte Carlo simulations performed in the preceding examples utilized the

exact coloring filter and the true filter coefficients to obtain the analytical solutions. In

our final case study, we demonstrate the performance of the batch solution of the AEC

criterion obtained from sample estimates of all the relevant auto- and cross-correlation

matrices. In these Monte Carlo simulations, we utilize 10,000 samples corrupted with

white noise at various SNR levels. The results of these Monte Carlo simulations are

summarized in the histograms shown in Figure 2-6. Each subplot of Figure 2-6

corresponds to experiments performed using SNR levels of −10 dB, 0 dB, and 10 dB for

each column and adaptive filter lengths of 4, 8, and 12 taps for each row,

respectively. For each combination of SNR and filter length, we have performed 50

Monte Carlo simulations using the MSE (β = 0) and EWC (β = −1/2) criteria. The









correlation lag is selected to be equal to the filter length in all simulations, due to

Theorem 2.2. Clearly, Figure 2-6 demonstrates the superiority of the AEC in rejecting

noise that is present in the training data. Notice that in all subplots (for all combinations

of filter length and SNR), AEC achieves a smaller average error norm than MSE.



Figure 2-6. Histograms of the weight error norms (dB) obtained in 50 Monte Carlo
simulations using 10000 samples of noisy data using MSE (empty bars) and
EWC with β = −0.5 (filled bars). The subfigures in each row use filters with 4,
8, and 12 taps, respectively. The subfigures in each column use noisy samples
at −10, 0, and 10 dB SNR, respectively.













The discrepancy between the performances of the two solutions intensifies with

increasing filter length. Next, we will demonstrate the error-whitening property of the

EWC solution. From equation (2.1) we can expect that the error autocorrelation function

will vanish at lags greater than or equal to the length of the reference filter, if the weight

vector is identical to the true weight vector. For any other value of the weight vector, the

error autocorrelation fluctuates at non-zero values. A 4-tap reference filter is identified

with a 4-tap adaptive filter using noisy training data (hypothetical) at an SNR level of

0 dB. The autocorrelation functions of the error signals corresponding to the MSE

solution and the EWC solution are shown in Figure 2-7. Clearly, the EWC criterion

determines a solution that forces the error autocorrelation function to zero at lags greater

than or equal to the filter length (partial whitening of the error).
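This whitening property is easy to test numerically; a minimal sketch of the check (the helper name and the toy white sequence are our own illustration):

```python
import numpy as np

def error_autocorr(e, max_lag=30):
    """Sample autocorrelation of an error sequence e(n)."""
    return np.array([np.mean(e[k:] * e[:len(e) - k]) for k in range(max_lag + 1)])

# Toy check: a white sequence has near-zero autocorrelation at all lags > 0,
# which is the signature EWC enforces at lags >= the filter length.
rng = np.random.default_rng(1)
print(np.round(error_autocorr(rng.standard_normal(50000), 5), 3))
```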





Figure 2-7. Error autocorrelation function for MSE (dotted) and EWC (solid) solutions.

Finally, we will address the order selection capability and demonstrate how the


AEC (specifically EWC) can be used as a tool for determining the correct filter order,

even with noisy data, provided that the given input-desired output pair is a moving









average process. For this purpose, we determine the theoretical Wiener and EWC (with

β = −1/2 and L = m, where m is the length of the adaptive filter) solutions for a

randomly selected pair of coloring filter, h, and mapping filter w, at different adaptive

filter lengths. The noise level is selected to be 20 dB, and the length of the true mapping

filter is 5. We know from our theoretical analysis that if the adaptive filter is longer than

the reference filter, the EWC will yield the true weight vector padded with zeros. This

will not change the MSE of the solution. Thus, if we plot the MSE of the EWC versus the

length of the adaptive filter, starting from the length of the actual filter, the MSE curve

will remain flat, whereas the Wiener solution will keep decreasing the MSE,

contaminating the solution by learning the noise in the data. Figure 2-8(a) shows the

MSE obtained with the Wiener solution as well as the EWC solution for different lengths

of the adaptive filter using the same training data described above. Notice (in the

zoomed-in portion) that the MSE with EWC remains constant starting from 5, which is

the filter order that generated the data. On the other hand, if we were to decide on the

filter order looking at the MSE of the Wiener solution, we would select a model order of

4, since the gain in MSE is insignificantly small compared to the previous steps from this

point on. Figure 2-8(b) shows the norm of the weight vector error for the solutions

obtained using the EWC and MSE criteria, which confirms that the true weight vector is

indeed attained with the EWC criterion once the proper model order is reached.

This section aimed at experimentally demonstrating the theoretical concepts set

forth in the preceding sections of the chapter. We have demonstrated with numerous

Monte Carlo simulations that the analytical solution of the EWC criterion eliminates the

effect of noise completely if the proper value is used for β. We have also demonstrated









that the batch solution of EWC (estimated from a finite number of samples) outperforms

MSE in the presence of noise, provided that a sufficient number of samples are given so

that the noise autocorrelation matrices diminish as required by the theory.

Summary

In this chapter, we derived the augmented error criterion (AEC) and discussed a

special case of AEC called the error whitening criterion (EWC). The proposed AEC

includes MSE as a special case. We discussed some of the interesting properties of the

AEC cost function and worked out the analytical optimal solution. Further, we discussed

the reasoning behind naming the special case of AEC with the parameter β = −0.5 as

EWC. The intuitive reasoning is that this criterion partially whitens the error signal even

in the presence of noise, which cannot be achieved by the MSE criterion. Thus, the error

whitening criterion is very useful for estimating the parameters of a linear unknown

system in the presence of additive white noise. AEC with other values of β can be used as

a constrained MSE criterion where the constraint is the smoothness of the error signal.

Most of the material presented in this chapter can be found in [121].

Although we have presented a complete theoretical investigation of the proposed

criterion and its analytical solution, in practice, on-line algorithms that operate on a

sample-by-sample basis to determine the desired solution are equally valuable. Therefore,

in the following chapters, we will focus on designing computationally efficient on-line

algorithms to solve for the optimal AEC solution in a fashion similar to the well-known

RLS and LMS algorithms. In fact, we aim to come up with algorithms that have the same

computational complexity as these two widely used algorithms.















CHAPTER 3
FAST RECURSIVE NEWTON TYPE ALGORITHMS FOR AEC

Introduction

In Chapter 2, we derived the analytical solution for AEC. We also showed

simulation results using block methods. In this Chapter, the focus will be on deriving

online, sample-by-sample Newton type algorithms to estimate the optimal AEC solution.

First, we will derive a Newton type algorithm that has a structure similar to the well-

known RLS algorithm, which estimates the optimal Wiener solution for the MSE criterion. The

complexity of the proposed algorithm is O(N²), which is again comparable with that of the

RLS algorithm. Then, we will propose another Newton type algorithm derived from the

principles of TLS using minor components analysis. This algorithm in its current form

estimates the optimal EWC solution, which is a special case of AEC with β = −0.5.

Derivation of the Newton Type Recursive Error Whitening Algorithm

Given the estimate of the filter tap weights at time instant (n − 1), the goal is to

determine the best set of tap weights at the next iteration n that would track the optimal

solution. We call this algorithm the Recursive Error Whitening (REW) algorithm, although

the error whitening property is applicable only when the parameter β is set to −0.5; the

algorithm itself can be applied with any value of β. Recall that the RLS algorithm belongs

to the class of fixed-point algorithms in the sense that they track the optimal Wiener

solution at every time step. The REW algorithm falls in the same category and it tracks

the optimal AEC solution at every iteration. The noteworthy feature of the fixed-point

algorithms is their exponential convergence rate as they utilize higher order information









like curvature of the performance surface. Although the complexity of the fixed-point

Newton type algorithms is higher when compared to the conventional gradient methods,

the superior convergence and robustness to the eigenspread of the data can be vital gains

in many applications.

For convenience, we will drop the tilde convention that we used in the

previous chapter to differentiate between noise-corrupted and noise-free matrices and

vectors. Recall that the optimal AEC solution is given by

w* = (R + βS)^{-1}(P + βQ)   (3.1)

Letting T(n) = R(n) + βS(n) and V(n) = P(n) + βQ(n), we obtain the following

recursion:

T(n) = T(n−1) + (1+2β)x(n)x^T(n) − βx(n−L)x^T(n) − βx(n)x^T(n−L)
     = T(n−1) + 2βx(n)x^T(n) − βx(n−L)x^T(n) + x(n)x^T(n) − βx(n)x^T(n−L)   (3.2)
     = T(n−1) + [2βx(n) − βx(n−L)]x^T(n) + x(n)[x(n) − βx(n−L)]^T

Realize that equation (3.2) basically tells us that the matrix T(n) can be obtained

recursively using a rank-2 update. In comparison (see Chapter 1), the RLS algorithm

utilizes a rank-1 update for updating the covariance matrix. At this point, we invoke the

matrix inversion lemma² (Sherman-Morrison-Woodbury identity) [7,8], given by

(A + BCD^T)^{-1} = A^{-1} − A^{-1}B(C^{-1} + D^T A^{-1}B)^{-1} D^T A^{-1}   (3.3)

Substituting A = T(n−1), B = [(2βx(n) − βx(n−L))  x(n)], C = I_{2×2}, a 2×2 identity

matrix, and D = [x(n)  (x(n) − βx(n−L))], we get equation (3.2) in the same form as

the LHS of equation (3.3). Therefore, the recursion for the inverse of T(n) becomes


² Notice that the matrix inversion lemma simplifies the computation of the matrix inverse only when the
original matrix can be written using reduced rank updates.









T^{-1}(n) = T^{-1}(n−1) − T^{-1}(n−1)B[I_{2×2} + D^T T^{-1}(n−1)B]^{-1} D^T T^{-1}(n−1)   (3.4)

Note that the computation of the above inverse is different from that in the conventional RLS

algorithm. It requires the inversion of a 2×2 matrix, (I_{2×2} + D^T T^{-1}(n−1)B), owing to the

rank-2 update of T(n). The recursive estimator for V(n) is a simple correlation estimator

given by

V(n) = V(n−1) + [(1+2β)d(n)x(n) − βd(n)x(n−L) − βd(n−L)x(n)]   (3.5)

Using T^{-1}(n) and V(n), an estimate of the filter weight vector at iteration index n is

w(n) = T^{-1}(n)V(n)   (3.6)

We will define a gain matrix analogous to the gain vector in the RLS case [14] as

K(n) = T^{-1}(n−1)B[I_{2×2} + D^T T^{-1}(n−1)B]^{-1}   (3.7)

Using the above definition, the recursive estimate for the inverse of T(n) becomes

T^{-1}(n) = T^{-1}(n−1) − K(n)D^T T^{-1}(n−1)   (3.8)

Once again, the above equation is analogous to the Riccati equation for the RLS

algorithm. Multiplying (3.7) from the right by (I_{2×2} + D^T T^{-1}(n−1)B), we obtain

K(n)(I_{2×2} + D^T T^{-1}(n−1)B) = T^{-1}(n−1)B
K(n) = T^{-1}(n−1)B − K(n)D^T T^{-1}(n−1)B   (3.9)
K(n) = T^{-1}(n)B

In order to derive an update equation for the filter weights, we substitute the recursive

estimate for V(n) in (3.6):

w(n) = T^{-1}(n)V(n−1) + T^{-1}(n)[(1+2β)d(n)x(n) − βd(n)x(n−L) − βd(n−L)x(n)]   (3.10)

Using (3.8) and recognizing the fact that w(n−1) = T^{-1}(n−1)V(n−1), the above

equation can be reduced to

w(n) = w(n−1) − K(n)D^T w(n−1) + T^{-1}(n)[(1+2β)d(n)x(n) − βd(n)x(n−L) − βd(n−L)x(n)]   (3.11)

Using the definition for B = [(2βx(n) − βx(n−L))  x(n)], we can easily see that

(1+2β)d(n)x(n) − βd(n)x(n−L) − βd(n−L)x(n) = B[d(n)   d(n) − βd(n−L)]^T   (3.12)

From (3.9) and (3.12), the weight update equation simplifies to

w(n) = w(n−1) − K(n)D^T w(n−1) + K(n)[d(n)   d(n) − βd(n−L)]^T   (3.13)

Note that the product D^T w(n−1) is nothing but the vector of outputs

[y(n)   y(n) − βy(n−L)]^T, where y(n) = x^T(n)w(n−1) and y(n−L) = x^T(n−L)w(n−1).

The a priori error vector is defined as

e(n) = [d(n) − y(n)   (d(n) − y(n)) − β(d(n−L) − y(n−L))]^T = [e(n)   e(n) − βe(n−L)]^T   (3.14)

Using all the above definitions, we will formally state the weight update equation for the

REW algorithm as

w(n) = w(n−1) + K(n)e(n)   (3.15)

The overall complexity of (3.15) is O(N²), which is comparable to the complexity of the

RLS algorithm (this was achieved by using the matrix inversion lemma). Unlike the

stochastic gradient algorithms that are easily affected by the eigenspread of the input data

and the type of the stationary point solution (minimum, maximum or saddle), the REW

algorithm is immune to these problems. This is because it inherently makes use of more

information about the performance surface by computing the inverse of the Hessian











matrix R + βS. A summary of the REW algorithm is given below in Table 3-1.


Table 3-1. Outline of the REW Algorithm.

Initialize T^{-1}(0) = cI, where c is a large positive constant, and w(0) = 0.
At every iteration, compute:
  B = [(2βx(n) − βx(n−L))  x(n)] and D = [x(n)  (x(n) − βx(n−L))]
  K(n) = T^{-1}(n−1)B[I_{2×2} + D^T T^{-1}(n−1)B]^{-1}
  y(n) = x^T(n)w(n−1) and y(n−L) = x^T(n−L)w(n−1)
  e(n) = [d(n) − y(n)   (d(n) − y(n)) − β(d(n−L) − y(n−L))]^T = [e(n)   e(n) − βe(n−L)]^T
  w(n) = w(n−1) + K(n)e(n)
  T^{-1}(n) = T^{-1}(n−1) − K(n)D^T T^{-1}(n−1)
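A compact rendering of Table 3-1 might look as follows (a sketch assuming the scalar samples x(n) and d(n) are held in arrays; the function name and the initialization constant c are ours):

```python
import numpy as np

def rew(x, d, m, L, beta=-0.5, c=1e4):
    """Recursive Error Whitening (REW), following Table 3-1.

    x, d : 1-D arrays of noisy input and desired samples
    m    : filter length, L : correlation lag (L >= m)."""
    Tinv = c * np.eye(m)                        # T^{-1}(0) = cI
    w = np.zeros(m)
    I2 = np.eye(2)
    for n in range(m - 1 + L, len(x)):
        xn = x[n - m + 1:n + 1][::-1]           # regression vector x(n)
        xl = x[n - L - m + 1:n - L + 1][::-1]   # lagged vector x(n-L)
        B = np.column_stack((2 * beta * xn - beta * xl, xn))
        D = np.column_stack((xn, xn - beta * xl))
        K = Tinv @ B @ np.linalg.inv(I2 + D.T @ Tinv @ B)   # gain matrix (3.7)
        y, yl = xn @ w, xl @ w
        e = np.array([d[n] - y,
                      (d[n] - y) - beta * (d[n - L] - yl)]) # a priori errors (3.14)
        w = w + K @ e                                       # weight update (3.15)
        Tinv = Tinv - K @ D.T @ Tinv                        # inverse update (3.8)
    return w
```

Note how the only inversion performed per step is of a 2×2 matrix, exactly as the rank-2 update suggests.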


The above derivation assumes stationary signals. For non-stationary signals, a

forgetting factor is required for tracking. Inclusion of this factor in the derivation is trivial

and is left out in this chapter. Also, note that the REW algorithm can be applied for any

value of β. When β = −0.5, we know that AEC reduces to EWC, and hence the REW

algorithm can be used for estimating the parameters in the presence of input white noise.

Extension of the REW Algorithm for Multiple Lags

In Chapter 2, we briefly mentioned the fact that the AEC can be extended by including

multiple lags in the cost function. It is easy to see that the extended AEC is given by


J(w) = E[e²(n)] + β Σ_{L=1}^{Lmax} E[(e(n) − e(n−L))²]   (3.16)


where Lmax denotes the maximum number of lags utilized in the AEC cost function. It is

not mandatory to use the same constant β for all the error lag terms. However, for the

sake of simplicity, we assume a single β value. The gradient of (3.16) with respect to the










weight vector w is

∂J(w)/∂w = −2E{e(n)x(n)} − 2β Σ_{L=1}^{Lmax} E{[e(n) − e(n−L)][x(n) − x(n−L)]}   (3.17)

Recall the following matrix definitions (restated here for clarity):

R = E[x(n)x^T(n)]
S_L = E[(x(n) − x(n−L))(x(n) − x(n−L))^T] = 2(R − R_L)
R_L = E[x(n)x^T(n−L) + x(n−L)x^T(n)]   (3.18)
P = E[x(n)d(n)]
P_L = E[x(n)d(n−L) + x(n−L)d(n)]
Q_L = E[(x(n) − x(n−L))(d(n) − d(n−L))] = 2P − P_L

Using the above definitions in (3.17) and equating the gradient to zero, we get the

optimal extended AEC solution as shown below.

w* = (R + β Σ_{L=1}^{Lmax} S_L)^{-1} (P + β Σ_{L=1}^{Lmax} Q_L)   (3.19)

At first glance, the computational complexity of (3.19) seems to be O(N³). But the

symmetric structure of the matrices involved can be exploited to lower the complexity.

Once again, we resort to the matrix inversion lemma as before and deduce a lower, O(N²)

complexity algorithm. Realize that the optimal extended AEC solution at any time instant

n will be

w(n) = T^{-1}(n)V(n)   (3.20)

where T(n) = R(n) + β Σ_{L=1}^{Lmax} S_L(n) and V(n) = P(n) + β Σ_{L=1}^{Lmax} Q_L(n) as before. The estimator

for the vector V(n) will be a simple recursive correlator:

V(n) = V(n−1) + (1 + 2βLmax)d(n)x(n) − β Σ_{L=1}^{Lmax} [d(n)x(n−L) + d(n−L)x(n)]   (3.21)










The matrix T(n) can be estimated recursively as follows:

T(n) = T(n−1) + (1 + 2βLmax)x(n)x^T(n) − β Σ_{L=1}^{Lmax} x(n−L)x^T(n) − β Σ_{L=1}^{Lmax} x(n)x^T(n−L)
     = T(n−1) + [2βLmax x(n) − β Σ_{L=1}^{Lmax} x(n−L)]x^T(n) + x(n)[x(n) − β Σ_{L=1}^{Lmax} x(n−L)]^T

Now, the matrices A, B, C and D used in the inversion lemma in equation (3.3) are

defined as follows:

A = T(n−1)
B = [2βLmax x(n) − β Σ_{L=1}^{Lmax} x(n−L)   x(n)]
C = I_{2×2}
D = [x(n)   x(n) − β Σ_{L=1}^{Lmax} x(n−L)]   (3.22)


The only differences from the previous definitions lie in the expressions for the B and D

matrices, which now require an inner loop running up to Lmax. The rest of the procedure

remains the same as before. Once again, by the proper application of the matrix inversion

lemma, we were able to reduce the complexity of the matrix inversion to O(N²) by

recursively computing the inverse in a way that requires only the inversion of a simple

2×2 matrix. This measure of complexity does not include the computations involved in

building the B and D matrices. However, typically, the maximum number of lags will be

smaller than the length of the adaptive filter. Therefore, the additional overhead incurred

in the estimation of the B and D matrices will not result in a significant change in the overall

complexity.
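As a sketch of the only piece that changes relative to the single-lag REW loop above, the multi-lag B and D factors of (3.22) can be built as follows (the function and variable names are ours):

```python
import numpy as np

def multilag_BD(xn, x_lags, beta):
    """Rank-2 update factors for the extended AEC, following (3.22).

    xn     : current regression vector x(n), shape (m,)
    x_lags : array of lagged regression vectors, shape (Lmax, m)"""
    Lmax = len(x_lags)
    lag_sum = beta * x_lags.sum(axis=0)          # beta * sum_L x(n-L)
    B = np.column_stack((2 * beta * Lmax * xn - lag_sum, xn))
    D = np.column_stack((xn, xn - lag_sum))
    return B, D
```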









Relationship to the Recursive Instrumental Variables Method

The previously derived REW algorithm for the single lag case has a structure

similar to the Instrumental Variables (IV) method. The IV method has its origins in

statistics and was apparently proposed by Reiersøl [122]. Over a period of time, it has

been adapted to model dynamical systems in control engineering. A lot of work on the

applications of IV to control engineering problems has been done by Wong and Polak [123]

and Young [124-126]. Recent advances in IV methods for system identification and

control have been mainly due to Söderström and Stoica [32,93]. It is beyond the scope of

this dissertation to summarize the applications and impacts of IV in various engineering

problems. For more details, refer to [32].

Basically, IV can be viewed as an extension to the standard Least Squares

regression and can be used to estimate the parameters in white noise once the model

order is known. The fundamental principle is to choose delayed regression vectors known

as instruments that are uncorrelated with the additive white noise. IV can also be

extended to handle colored noise situations. This will be exclusively handled in Chapter

5. For now, the discussion will be strictly limited to the white noise scenario.

Mathematically speaking, the IV method computes the solution

w_IV = E[x(n)x^T(n−Δ)]^{-1} E[x(n)d(n−Δ)]   (3.23)

where the lag Δ is chosen such that the outer product of the regression vector x(n) with

the lagged regression vector x(n−Δ) results in a matrix that is independent of the additive

white noise components v(n). In comparison, the REW solution is given by w* = R_L^{-1}P_L.

Notice that in the REW solution, the matrix R_L is symmetric and Toeplitz [8], which is very

much desirable and we exploit this fact to derive an elegant minor components based









algorithm in the next section. Thus, in effect the IV method can be considered as a special

case of the REW algorithm, obtained by removing the symmetric terms in R_L and P_L.

We will compare the performances of REW and IV methods later in this chapter.

Recursive EWC Algorithm Based on Minor Components Analysis

Until now, we have focused on a Newton type algorithm to compute the optimal AEC

solution. Although the algorithm is fast converging, its convergence can be sensitive

to the ill-conditioning of the Hessian matrix R(n) + βS(n), which can happen

in the first few iterations. Alternatively, we can explore the idea of using minor

components analysis (MCA) to derive a recursive algorithm similar to the TLS algorithm

for MSE. We call this algorithm the EWC-TLS algorithm. As the name suggests, this

algorithm can be used only for the case with β = −0.5, which defaults the augmented error

criterion to the error whitening criterion. Recall that the TLS problem, in general, solves an

over-determined set of linear equations of the form Ax = b, where A ∈ ℝ^{m×n} is the data

matrix, b ∈ ℝ^m is the desired vector, x ∈ ℝ^n is the parameter vector, and m denotes

the number of different observation vectors, each of dimension n [41]. Alternatively, the

linear equations can be written in the form [A; b][x^T; −1]^T = 0, where [A; b] denotes an


augmented data matrix. When [A; b] is a symmetric square matrix, it can be shown that

the TLS solution is simply given by

[x; −1] = −v_{n+1} / v_{n+1,n+1}   (3.24)

where v_{n+1,n+1} is the last element of the minor eigenvector v_{n+1}. In the case of EWC, it is

easy to show that the augmented data matrix [127, 128] (analogous to [A; b]) is

G = [ R_L     P_L
      P_L^T   ρ_d(L) ]   (3.25)









The term ρ_d(L) in (3.25) denotes the autocorrelation of the desired signal at lag L. It is

important to note that the matrix in (3.25) is square and symmetric due to the symmetry of R_L.

Hence, the eigenvectors of G are all real, which is highly desirable. Also, it is important to

note that (3.25) still holds even with noisy data, as the entries of G are unaffected

by the noise terms. In the infinite-sample case, the matrix G is not full rank and we can

immediately see that one of the eigenvalues of (3.25) is zero. In the finite-sample case,

the goal would be to find the eigenvector corresponding to the minimum absolute

eigenvalue (finite samples also imply that G^{-1} exists). Since the eigenvalues of G can be

both positive and negative, typical iterative gradient or even some fixed-point type

algorithms tend to become unstable. A workaround would be to use the matrix G² instead

of G. This will obviate the problem of having mixed eigenvalues while still preserving

the eigenvectors. However, the squaring operation is good only if the eigenvalues of G

are well separated. Otherwise, the smaller eigenvalues blend together, making the

estimation of the minor component of G² more difficult. Also, the squaring operation

creates additional overhead, thereby negating any computational benefits offered by the

fixed-point type PCA solutions as discussed in Appendix A.

So, we propose to use the inverse iteration method for estimating the minor

eigenvector of G [8]. If w(n) ∈ ℝ^{n+1} denotes the estimate of the minor eigenvector

corresponding to the smallest absolute eigenvalue at time instant n, then the estimate at

the (n+1)th instant is given by

w̃(n+1) = G^{-1}(n+1)w(n)
w(n+1) = w̃(n+1) / ||w̃(n+1)||   (3.26)

The term G(n +1) denotes the estimate of the augmented data matrix G (equation (3.25))









at the (n+1)th instant. It is easy to see that G(n) can be recursively estimated as

G(n) = G(n−1) + γ(n)γ^T(n−L) + γ(n−L)γ^T(n), where γ(n) = [x(n); d(n)] is the

concatenated vector of the input and desired response. Now, we can invoke the inversion

lemma as before and obtain a recursive O(n²) estimate for the matrix inversion in (3.26). The

details of this derivation are trivial and omitted here. Once the minor component estimate

converges, i.e., w(n) → v_{n+1}, the EWC-TLS solution is simply given by equation (3.24).

Thus, the overall complexity of the EWC-TLS algorithm is still O(n²), which is the same

as that of the REW algorithm. However, we have observed through simulations that the EWC-

TLS method converges faster than REW while preserving the accuracy of the

parameter estimates.
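A sketch of the inverse-iteration procedure follows (batch form for clarity; the recursive rank-2 update of G^{-1} via the inversion lemma described above is omitted, and the function name is ours):

```python
import numpy as np

def ewc_tls(G, n_iter=200, seed=0):
    """Minor-eigenvector extraction by inverse iteration, followed by
    the TLS readout of equation (3.24).

    G : (m+1) x (m+1) symmetric augmented data matrix of (3.25)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(G.shape[0])
    v /= np.linalg.norm(v)
    Ginv = np.linalg.inv(G)      # batch inverse; recursively updatable in O(m^2)
    for _ in range(n_iter):
        v = Ginv @ v             # w(n+1) = G^{-1} w(n)
        v /= np.linalg.norm(v)   # normalization step of (3.26)
    return -v[:-1] / v[-1]       # [x; -1] = -v / v_last, equation (3.24)
```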

Experimental Results

We will now show the simulation results with the Newton type algorithms for

AEC. Specifically, our objective is to show the superior performance of the proposed

criterion and the associated algorithms in the problem of system identification with noisy

input data.

Estimation of System Parameters in White Noise Using REW

The REW algorithm can be used effectively to solve the system identification

problem in noisy environments. As we have seen before, by setting the value of β = −0.5,

noise immunity can be gained for parameter estimation. We generated a purely white

Gaussian random noise of length 50,000 samples and added this to a colored input signal.

The white noise signal is uncorrelated with the input signal. The noise free, colored, input

signal was filtered by the unknown reference filter, and this formed the desired signal for

the adaptive filter. Since the noise in the desired signal would be averaged out for both









RLS and REW algorithms, we decided to use the clean desired signal itself. This will

bring out only the effects of input noise on the filter estimates. Also, the noise added to

the clean input is uncorrelated with the desired signal. In the experiment, we varied the

Signal-to-Noise Ratio (SNR) in the range −10 dB to +10 dB. The number of desired filter

coefficients was also varied from 4 to 12. We then performed 100 Monte Carlo runs and

computed the normalized error vector norm given by

error = 20 log₁₀( ||w_e − w_T|| / ||w_T|| )   (3.27)

where w_e is the weight vector estimated by the REW algorithm with β = −0.5 after

50,000 iterations, or one complete presentation of the input data, and w_T is the true weight

vector. In order to show the effectiveness of the REW algorithm, we performed Monte

Carlo runs using the RLS algorithm on the same data to estimate the filter coefficients.

Further, we also evaluated the analytical TLS solution for each case. Figure 3-1 shows a

histogram plot of the normalized error vector norm given in (3.27) for all the three

methods. It is clear that the REW algorithm was able to perform better than the RLS at

various SNR and tap length settings. In the high SNR cases, there is not much of a

difference between RLS and REW results. However, under noisy circumstances, the

reduction in the parameter estimation error with REW is orders of magnitude more when

compared with RLS. Also, the RLS algorithm results in a rather useless zero weight

vector, i.e., w = 0 when the SNR is lower than -10dB. On the other hand, TLS performs

well only in the cases when the noise variances in the input and desired signals are the

same. This is in conformance with the well-known theoretical limitations of the TLS

algorithm.











Figure 3-1. Histogram plots showing the error vector norms for the REW and RLS
algorithms and the numerical TLS solution, at various SNR and filter-length settings.












The average error vector norm values produced by the REW algorithm over various β values are shown in Figure 3-2. Notice that there is a dip at β = −0.5 (indicated by a "*" in the figure).











Figure 3-2. Performance of the REW algorithm over various beta values: (a) SNR = 0 dB
and (b) SNR = −10 dB.



This dip clearly gives us the minimum estimation error, and it corresponds to the EWC

solution. For β = 0 (indicated by a "o" in the figure), the REW algorithm reduces to the

regular RLS, giving a fairly significant estimation error.
regular RLS giving a fairly significant estimation error.

Next, the parameter β is set to −0.5 and the SNR to 0 dB, and the weight tracks are

estimated for the REW and the RLS algorithms. Figure 3-3 shows the averaged weight

tracks for both REW and RLS algorithms averaged over 50 Monte Carlo trials. Asterisks

on the plots indicate the true parameters. The tracks for the RLS algorithm are smoother,

but they converge to wrong values, which we have observed quite consistently. The

weight tracks for the REW algorithm are noisier compared to those of the RLS, but they

eventually converge to values very close to the true weights. This brings us to an

important issue of estimators viz., bias and the variance. The RLS algorithm has a

reduced variance because of the positive-definiteness of the covariance matrix R(n).

However, the RLS solution remains asymptotically biased in the presence of noisy input.












Figure 3-3. Weight tracks for REW and RLS algorithms.


On the other hand, the REW algorithm produces zero bias, but the variance can be high

owing to the conditioning of the Hessian matrix. However, this variance diminishes with

increasing number of samples.

The noisy initial weight tracks of the REW algorithm may be attributed to the ill-

conditioning that is mainly caused by the smallest eigenvalue of the estimated Hessian

matrix, which is R(n) + βS(n). The same holds true for the RLS algorithm, where the


minimum eigenvalue of R(n) affects the sensitivity [14]. The instability issues of the


RLS algorithm during the initial stages of adaptation have been well studied in the literature,

the effects of round-off error have been analyzed, and many solutions have been proposed

to make the RLS algorithm robust to such effects [129]. Similar analysis on the REW

algorithm is yet to be done and this would be addressed in future work on the topic.

Performance Comparisons between REW, EWC-TLS and IV methods

In this example, we will contrast the performances of the REW, EWC-TLS and the

Instrumental Variables (IV) method in a 4-tap system identification problem with noisy

data. The input signal is colored and corrupted with white noise (input SNR was set at

5dB) and the desired signal SNR is 10dB. For the IV method, we chose the delayed input

vector x(n−Δ) as the instrument, and the lag Δ was chosen to be four, the length of the

filter. We then computed the normalized error vector norm in dB given by equation (3.27).

Figure 3-4 shows the error histograms for the REW, EWC-TLS, IV and the optimal Wiener

solutions.

Figure 3-4. Histogram plots showing the error vector norms for all the methods
(SNR = 5 dB, # taps = 4).

EWC-TLS and REW algorithms

outperform the Wiener MSE solution. The IV method also produces better results than

the Wiener solution. Amongst the EWC solutions, we obtained better results with the

EWC-TLS algorithm (equations 3.24 and 3.26) than REW. However, both EWC-TLS

and REW outperformed IV method. This may be partially attributed to the conditioning

of the matrices involved in the estimation of the REW and IV methods. Further

theoretical analysis is required to quantify the effects of conditioning and symmetric

Toeplitz structure of R_L. In Figure 3-5, we show the angle between the estimated minor

eigenvector and the true eigenvector of the augmented data matrix G for a random single










trial in scenarios with and without noise. Notice that the rates of convergence are very

much different. It is well known that the rate of convergence for the inverse iteration method

is given by the ratio |λ₂/λ₁|, where λ₁ is the largest eigenvalue of G^{-1} and λ₂ is the

second largest eigenvalue of G^{-1} [8]. Faster convergence can be seen in the noiseless case

owing to the huge |λ₁/λ₂| ratio.


Figure 3-5. Convergence of the minor eigenvector of G with (a) noisy data and (b) clean
data.



Summary

In this chapter, we derived recursive Newton type algorithms to estimate the optimal

AEC solution. First, the Recursive Error Whitening (REW) algorithm was derived using

the analytical AEC solution and the matrix inversion lemma. The well-known RLS

algorithm for MSE becomes a special case of the REW algorithm. Further, a Total Least-

Squares based EWC algorithm called EWC-TLS was proposed. This algorithm works

with β = −0.5 and can be easily applied to estimate parameters in the presence of white

noise. A fixed-point minor components extraction algorithm was developed using the

inverse iteration method. Other fixed-point or gradient-based methods cannot be used

because of the indefiniteness (a matrix with mixed eigenvalues makes the algorithms locally









unstable) of the matrix involved in the EWC-TLS formulation. The computational

complexity of the above-mentioned algorithms is O(N²). We briefly explored an

extension of the Newton type algorithm for the extended AEC with multiple lags.

Effective usage of the matrix inversion lemma can cut the complexity of the extended

REW algorithm to O(N²).

In the latter half of the chapter, we discussed the performance of the algorithms in

the problem of system identification in the presence of additive white noise. The

proposed recursive algorithms outperform the RLS and the analytical MSE TLS

solutions. We also showed the simulation results with the EWC-TLS algorithm and

quantitatively compared the performance with the well-known IV method.

Although the recursive EWC algorithms presented in this chapter are fast

converging and sample efficient, the complexity of O(N2) can be high for many

applications involving low power designs. Additionally, the recursive algorithms can

exhibit limited performance in non-stationary conditions if the forgetting factors are not

chosen properly. This motivates us to explore stochastic gradient algorithms (and their

variants) for estimating the optimal AEC solution. Chapter 4 will describe these

algorithms and also highlight other benefits of the stochastic algorithms over their

Newton type counterparts.















CHAPTER 4
STOCHASTIC GRADIENT ALGORITHMS FOR AEC

Introduction

Stochastic gradient algorithms have been at the forefront in optimizing quadratic

cost functions like the MSE. Owing to the presence of a global minimum in quadratic

performance surfaces, gradient algorithms can elegantly accomplish the task of reaching

the optimal solution at minimal computational cost. In this chapter, we will derive the

stochastic gradient algorithms for the AEC. Since the AEC performance surface is a

weighted sum of quadratics, we can expect that difficulties will arise. However, we will

show that using some simple optimization tricks, we can overcome these difficulties in an

elegant manner.

Derivation of the Stochastic Gradient AEC-LMS Algorithm

Assume that we have a noisy training data set of the form (x(n), d(n)), where

x(n) ∈ ℝ^m is the input and d(n) ∈ ℝ is the output of a linear system with coefficient

vector w_T. The goal is to estimate the parameter vector w_T using the augmented error

criterion. We know that the AEC cost function is given by

J(w) = E[e²(n)] + βE[ė²(n)]   (4.1)

where ė(n) = e(n) − e(n−L), w is the estimate of the parameter vector, and L ≥ m, the

size of the input vector. For convenience, we will restate the following definitions:

ẋ(n) = x(n) − x(n−L), ḋ(n) = d(n) − d(n−L), R = E[x(n)x^T(n)], S = E[ẋ(n)ẋ^T(n)],

P = E[x(n)d(n)] and Q = E[ẋ(n)ḋ(n)]. Using these definitions, we can rewrite the cost









function in (4.1) as

J(w) = E[d²(n)] + βE[ḋ²(n)] + w^T(R + βS)w − 2(P + βQ)^T w   (4.2)

It is easy to see that both E[e²(n)] and E[ė²(n)] have parabolic performance surfaces, as

their Hessians have positive eigenvalues. However, the value of β can invert the

performance surface of E[ė²(n)]. For β > 0, the stationary point is always a global

minimum, and the gradient of (4.2) can be written as the sum of the

individual gradients as shown below:

∂J(w)/∂w = 2(R + βS)w − 2(P + βQ) = 2(Rw − P) + 2β(Sw − Q)   (4.3)

The above gradient can be approximated by the stochastic instantaneous gradient by

removing the expectation operators:

∂J(w)/∂w(n) ≈ −2[e(n)x(n) + βė(n)ẋ(n)]   (4.4)

The goal is to minimize the cost function, and hence, using steepest descent, we can write

the weight update for the stochastic AEC-LMS algorithm for β > 0 as

w(n+1) = w(n) + η(n)[e(n)x(n) + βė(n)ẋ(n)]   (4.5)

where η(n) > 0 is a finite step-size parameter that controls convergence. For β < 0, the

stationary point is still unique, but it can be a saddle point, a global maximum or a global

minimum depending on the value of β. Evaluating the gradient as before and using the

instantaneous gradient, we get the AEC-LMS algorithm for β < 0:

w(n+1) = w(n) + η(n)[e(n)x(n) − |β|ė(n)ẋ(n)]   (4.6)

where η(n) is again a small step-size. However, there is no guarantee that the above

update rules will be stable for all choices of step-sizes. Although equations (4.5) and









(4.6) are identical, we will use |β| in the update equation (4.6) to analyze the

convergence of the algorithm specifically for β < 0. The reason for the separate analysis

is that the convergence characteristics of (4.5) and (4.6) are very different.

Convergence Analysis of AEC-LMS Algorithm

Theorem 4.1: The stochastic AEC algorithms asymptotically converge in the mean to the

optimal solution given by

w* = (R + βS)^{-1}(P + βQ),   β > 0
w* = (R − |β|S)^{-1}(P − |β|Q),   β < 0   (4.7)

We will make the following mild assumptions typically applied to stochastic

approximation algorithms [79-81,84] that can be easily satisfied.

1. The input vectors x(n) are derived from at least a wide sense stationary (WSS)
colored random signal with positive definite autocorrelation matrix
R = E[x(n)x^T(n)].

2. The matrix R_L = E[x(n)x^T(n−L) + x(n−L)x^T(n)] exists and has full rank.

3. The sequence of weight vectors w(n) is bounded with probability 1.

4. The update functions h(w(n)) = e(n)x(n) + βė(n)ẋ(n) for β > 0 and
h(w(n)) = e(n)x(n) − |β|ė(n)ẋ(n) for β < 0 exist and are continuously
differentiable with respect to w(n), and their derivatives are bounded in time.

5. Even if h(w(n)) has some discontinuities, a mean update vector
h̄(w(n)) = lim_{n→∞} E[h(w(n))] exists.

Assumption A.1 is easily satisfied. A.2 requires that the input signal have sufficient

correlation with itself for at least L lags.

Proof of AEC-LMS Convergence for β > 0

We will first consider the update equation in (4.5), which is the stochastic AEC-

LMS algorithm for β > 0. Without loss of generality, we will assume that the input








vectors x(n) and their corresponding desired responses d(n) are noise-free. The mean

update vector h̄(w(n)) is given by

h̄(w(n)) = E[e(n)x(n) + βė(n)ẋ(n)] = −[(Rw(n) − P) + β(Sw(n) − Q)]   (4.8)

The stationary point of the ordinary differential equation (ODE) in (4.8) is given by

w* = (R + βS)^{-1}(P + βQ)   (4.9)

We will define the error vector at time instant n as ε(n) = w* − w(n). Therefore,

ε(n+1) = ε(n) − η(n)[e(n)x(n) + βė(n)ẋ(n)]   (4.10)

and the norm of the error vector at time n+1 is simply

||ε(n+1)||² = ||ε(n)||² − 2η(n)[ε^T(n)e(n)x(n) + βε^T(n)ė(n)ẋ(n)] + η²(n)||e(n)x(n) + βė(n)ẋ(n)||²   (4.11)

Imposing the condition that ||ε(n+1)||² < ||ε(n)||² for all n, we get an upper bound on the

time-varying step-size parameter η(n), which is given by

η(n) < 2[ε^T(n)e(n)x(n) + βε^T(n)ė(n)ẋ(n)] / ||e(n)x(n) + βė(n)ẋ(n)||²   (4.12)

Simplifying the above equation using the facts that ε^T(n)x(n) = e(n) and

ε^T(n)ẋ(n) = ė(n), we get

η(n) < 2[e²(n) + βė²(n)] / ||e(n)x(n) + βė(n)ẋ(n)||²   (4.13)

which is a more practical upper bound on the step-size, as it can be directly estimated

from the input and desired data. As an observation, notice that if β = 0, then the bound

in (4.13) reduces to










η(n) < 2 / ||x(n)||²   (4.14)


which, when included in the update equation, reduces to a variant of the Normalized

LMS (NLMS) algorithm [14]. In general, if the step-size parameter is chosen according

to the bound given by (4.13), then the norm of the error vector ε(n) is a monotonically

decreasing sequence converging asymptotically to zero, i.e., lim_{n→∞} ||ε(n)||² → 0, which

implies that lim_{n→∞} w(n) → w* (see Appendix G for details). In addition, the upper bound on

the step-size ensures that the weights are always bounded with probability one, satisfying

the assumption A.3 we made before. Thus the weight vector w(n) converges

asymptotically to w*, which is the only stable stationary point of the ODE in (4.8). Note

that (4.5) is an O(m) algorithm.

Proof of AEC-LMS Convergence for β < 0

We analyze the convergence of the stochastic gradient algorithm for β < 0 in the

presence of white noise, because this is the relevant case (β = −0.5 eliminates the bias

due to noise added to the input). From (4.6), the mean update vector h̄(w(n)) is given by

h̄(w(n)) = E[e(n)x(n) − |β|ė(n)ẋ(n)] = −[(Rw(n) − P) − |β|(Sw(n) − Q)]   (4.15)

As before, the stationary point of this ODE is

w* = (R − |β|S)^{-1}(P − |β|Q)   (4.16)

The eigenvalues of R − |β|S decide the nature of the stationary point. If they are all

positive, then we have a global minimum, and if they are all negative, we have a global

maximum. In these two cases, the stochastic gradient algorithm in (4.6) with a proper fixed-









sign step-size would converge to the stationary point, which would be stable. However,

we know that the eigenvalues of R − |β|S can also take both positive and negative values

resulting in a saddle stationary point. Thus, the underlying dynamical system would have

both stable and unstable modes making it impossible for the algorithm in (4.6) with fixed

sign step-size to converge. This is well known in the literature [3,14]. However, as will

be shown next, this difficulty can be removed for our case by appropriately utilizing the

sign of the update equation (remember that this saddle point is the only stationary point

of the quadratic performance surface). The general idea is to use a vector step-size (one

step-size per weight) having both positive and negative values. One unrealistic way (for

an on-line algorithm) to achieve this goal is to estimate the eigenvalues of R − |β|S.

Alternatively, we can derive the conditions on the step-size for guaranteed convergence.

As before, we will define the error vector at time instant n as ε(n) = w* − w(n). The

norm of the error vector at time instant n+1 is given by

||ε(n+1)||² = ||ε(n)||² − 2η(n)[ε^T(n)e(n)x(n) − |β|ε^T(n)ė(n)ẋ(n)] + η²(n)||e(n)x(n) − |β|ė(n)ẋ(n)||²   (4.17)

Taking the expectations on both sides, we get

E||ε(n+1)||² = E||ε(n)||² − 2η(n)E[ε^T(n)e(n)x(n) − |β|ε^T(n)ė(n)ẋ(n)] + η²(n)E||e(n)x(n) − |β|ė(n)ẋ(n)||²   (4.18)

The mean of the error vector norm will monotonically decay to zero over time, i.e.,

E||ε(n+1)||² < E||ε(n)||², if and only if the step-size satisfies the following inequality:

η(n) < 2E[ε^T(n)e(n)x(n) − |β|ε^T(n)ė(n)ẋ(n)] / E||e(n)x(n) − |β|ė(n)ẋ(n)||²   (4.19)









Let x(n) = x̄(n) + v(n) and d(n) = d̄(n) + u(n), where x̄(n) and d̄(n) are the clean

input and desired data, respectively. We will further assume the input noise vector

v(n) and the noise component in the desired signal u(n) to be uncorrelated. Also, the

noise signals are assumed to be independent of the clean input and desired signals.

Furthermore, the lag L is chosen to be more than m, the length of the filter under

consideration. Since the noise is assumed to be purely white,

E[v(n)v^T(n−L)] = E[v(n−L)v^T(n)] = 0 and E[v(n)v^T(n)] = V. We have

ε^T(n)e(n)x(n) = (w* − w(n))^T [d̄(n) + u(n) − w^T(n)x̄(n) − w^T(n)v(n)][x̄(n) + v(n)]   (4.20)

Simplifying this further and taking the expectations, we get

E[ε^T(n)e(n)x(n)] = var(d̄(n)) − 2P̄^T w(n) + w^T(n)R̄w(n) + w^T(n)Vw(n) − w*^T Vw(n)
                  = J_MSE − w*^T Vw(n)   (4.21)

where R̄ = E[x̄(n)x̄^T(n)], P̄ = E[x̄(n)d̄(n)] and

J_MSE = w^T(n)(R̄ + V)w(n) + var(d̄(n)) − 2P̄^T w(n)   (4.22)
Similarly, we have

S'(n)e(nz~i(n) = (w, w(nZ))l [d(n2)+ u1(n2)- w7'(n2)~(in)+ v(n%))
d (n L) + u(n L) + wT (n)(ji(n L) + v(nZ L))] (4.23)
(ik + Vk k- L Vk-L

Evaluating the expectations on both sides of (4.23) and simplifying, we obtain

E 7(ll(n)ei(n)n)= var p~(n)- d (n -L))- 2Q7w(n)
+ wT(nZ)Sw(n)+ 2wT (n)Vw(nZ)- 2w (n) Vw(n) (4.24)
= J., 2 w Vw(n)

where, we have used the definitions S =E[( (n) i(n -L))( (n) i(n -L)) ],









Q = E[(x(n) X(n L))(d(n) d (n L))] and

Jnr = w (n)+ +2V (n)+ var d"(n)- d(n-L))- 2Q'w(n) (4.25)

Using (4.21) and (4.24) in equation (4.19), we get an expression for the upper bound on

the step-size as

η(n) < 2[J_MSE − |β|J_D − (1 − 2|β|)w*^T Vw(n)] / E||e(n)x(n) − |β|ė(n)ẋ(n)||²   (4.26)

This expression is not usable in practice as an upper bound because it depends on the

optimal weight vector. However, for β = −0.5, the upper bound on the step-size reduces

to

η(n) < 2[J_MSE − 0.5J_D] / E||e(n)x(n) − 0.5ė(n)ẋ(n)||²   (4.27)

From (4.22) and (4.25), we know that J_MSE and J_D are positive quantities. However,

J_MSE − 0.5J_D can be negative. Also note that this upper bound is computed by

evaluating the right hand side of (4.27) with the current weight vector w(n). Thus, as

expected, the step-size at the nth iteration can take either positive or

negative values based on J_MSE − 0.5J_D; therefore, sgn(η(n)) must be the same as

sgn(J_MSE − 0.5J_D) evaluated at w(n). Intuitively speaking, the term J_MSE − 0.5J_D is

the EWC cost computed with the current weights w(n) and β = −0.5, which tells us

where we are on the performance surface, and its sign tells us which way to go to reach the

stationary point. It also means that the lower bound on the step-size is not positive as in

traditional gradient algorithms. In general, if the step-size we choose satisfies (4.27),

then the mean error vector norm decreases asymptotically, i.e., E||ε(n+1)||² < E||ε(n)||²,









and eventually becomes zero, which implies that lim_{n→∞} E[w(n)] → w*. Thus the weight

vector E[w(n)] converges asymptotically to w*, which is the only stationary point of the

ODE in (4.15). We conclude that the knowledge of the eigenvalues is not needed to

implement gradient descent on the EWC performance surface, but (4.27) is still not

appropriate for a simple LMS type algorithm.

On-line Implementations of AEC-LMS for β < 0

As mentioned before, computing J_MSE − 0.5J_D at the current weight vector would

require reusing the entire past data at every iteration. As an alternative, we can extract the

curvature at the operating point and include that information in the gradient algorithm. By

doing so, we obtain the following stochastic algorithm:

w(n+1) = w(n) + η sgn(w^T(n)[R(n) − |β|S(n)]w(n))[e(n)x(n) − |β|ė(n)ẋ(n)]   (4.28)

where R(n) and S(n) are the estimates of R and S, respectively, at the nth time instant.

Corollary: Given any quadratic surface J(w), the following gradient algorithm

converges to its stationary point:

w(n+1) = w(n) − η sgn(w^T(n)Hw(n)) ∂J(w)/∂w(n)   (4.29)

Proof: Without loss of generality, suppose that we are given a quadratic surface of the

form J(w) = w^THw, where H ∈ ℝ^{m×m} and w ∈ ℝ^{m×1}. H is restricted to be symmetric;

therefore, it is the Hessian matrix of this quadratic surface. The gradient of the

performance surface with respect to the weights, evaluated at a point w₀, is 2Hw₀,

and the stationary point of J(w) is the origin. Since the performance surface is quadratic,

any cross-section passing through the stationary point is a parabola. Consider the cross-








section of J(w) along the line defined by the local gradient that passes through the point

w₀. In general, the Hessian matrix of this surface can be positive or negative definite; it

might as well have mixed eigenvalues. The unique stationary point of J(w), which

makes its gradient zero, can be reached by moving along the direction of the local

gradient. The important issue is the selection of the sign, i.e., whether to move along or

against the gradient direction to reach the stationary point. The decision can be made by

observing the local curvature of the cross-section of J(w) along the gradient direction.

The performance surface cross-section along the gradient direction at w₀ is

J(w₀ + 2ηHw₀) = w₀^T(I + 2ηH)^T H(I + 2ηH)w₀ = w₀^T(H + 4ηH² + 4η²H³)w₀   (4.30)

From this, we deduce that the local curvature of the parabolic cross-section at w₀ is

4w₀^TH³w₀. If the performance surface is locally convex, then this curvature is positive.

If the performance surface is locally concave, this curvature is negative. Also, note that

sgn(4w₀^TH³w₀) = sgn(w₀^THw₀). Thus, the update equation (4.29), with the curvature

information derived in (4.30), converges to the stationary point of the quadratic cost function

J(w) irrespective of the nature of the stationary point.
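The corollary is easy to test numerically on a small indefinite quadratic (a toy demonstration; the Hessian below is our own choice):

```python
import numpy as np

# Indefinite symmetric Hessian: the stationary point (the origin) is a saddle.
H = np.array([[2.0, 0.0],
              [0.0, -1.0]])
w = np.array([1.0, 1.0])
eta = 0.05
for _ in range(500):
    grad = 2 * H @ w
    # move along or against the gradient depending on the local curvature sign
    w = w - eta * np.sign(w @ H @ w) * grad
print(w)   # approaches the stationary point at the origin
```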

From the above corollary, and utilizing the fact that the matrix R − |β|S is

symmetric, we can conclude that the update equation in (4.28) asymptotically converges

to the stationary point w* = (R − |β|S)^{-1}(P − |β|Q). On the down side, however, the update

equation in (4.28) requires O(m²) computations, which makes the algorithm unwieldy

for real-world applications. Also, we can use the REW algorithm instead, which has a

similar complexity.









For an O(m) algorithm, we have to go back to the update rule in (4.6). We will discuss

only the simple case of β = −0.5, which turns out to be also the more useful one. We propose

to use an instantaneous estimate of the sign with the current weights, given by

w(n+1) = w(n) + η(n) sgn(e²(n) − 0.5ė²(n))[e(n)x(n) − 0.5ė(n)ẋ(n)]   (4.31)

where η(n) > 0 and is bounded by (4.27). It is possible to make mistakes in the sign

estimation when (4.31) is utilized, which will not affect the convergence in the mean, but

will penalize the misadjustment.
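A minimal EWC-LMS loop implementing (4.31) might look as follows (a sketch; a fixed small step-size is used in place of the bound (4.27), and the function name is ours):

```python
import numpy as np

def ewc_lms(x, d, m, L, eta=1e-3):
    """Stochastic EWC-LMS with beta = -0.5, following update (4.31)."""
    w = np.zeros(m)
    for n in range(m - 1 + L, len(x)):
        xn = x[n - m + 1:n + 1][::-1]            # x(n)
        xl = x[n - L - m + 1:n - L + 1][::-1]    # x(n-L)
        e = d[n] - xn @ w
        el = d[n - L] - xl @ w
        de, dx = e - el, xn - xl                 # e_dot(n), x_dot(n)
        # sign(e^2 - 0.5*e_dot^2) supplies the descent direction; O(m) per step
        w += eta * np.sign(e * e - 0.5 * de * de) * (e * xn - 0.5 * de * dx)
    return w
```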

Excess Error Correlation Bound for EWC-LMS

In the next theorem, we will show that the asymptotic excess error autocorrelation at

lags L ≥ m is always bounded from above and can be arbitrarily reduced by controlling

the step-size.

Theorem 4.2: With β = −1/2, the steady state excess error autocorrelation at lag L ≥ m,

i.e., |ρ_e(L)|, is always bounded by

|ρ_e(L)| ≤ (η/2)[Tr(R) + 1][E(e_s²(k)) + σ_u² + ||w_k||²σ_v²]   (4.32)

where R = E[x_k x_k^T] and Tr(·) denotes the matrix trace. The term E(e_s²(k)) denotes the

excess MSE, which is (w_k − w_T)^T R(w_k − w_T). The noise variances in the input and

desired signals are represented by σ_v² and σ_u², respectively. Note that the term ||w_k|| is

always bounded because of the step-size bound.

Proof: For convenience, we will adopt the subscript k to denote the time or iteration

index. With this convention, the weight vector at the kth iteration is denoted by w_k.

Further, the error vector (the difference between the true vector w_T and the adaptive estimate

at time k) is denoted by ε_k = w_T − w_k. Throughout the rest of the proof, we will use the

following notation: the noisy input vector x̃_k, the noise-free input vector x_k, and the

input noise vector v_k obey x̃_k = x_k + v_k; the noisy desired signal d̃_k, the noise-free

desired signal d_k and the noise u_k are related by d̃_k = d_k + u_k. We will start from the

equation describing the dynamics of the error vector norm, given below with δ_k denoting

the noisy error, δ̇_k = δ_k − δ_{k−L}, and ẋ̃_k = x̃_k − x̃_{k−L}:

||ε_{k+1}||² = ||ε_k||² − 2η sgn(δ_k² − 0.5δ̇_k²)(ε_k^T δ_k x̃_k − 0.5ε_k^T δ̇_k ẋ̃_k) + η²||δ_k x̃_k − 0.5δ̇_k ẋ̃_k||²   (4.33)

In (4.33), we have assumed a constant step-size which satisfies the upper bound in (4.27).

Letting E||ε_{k+1}||² = E||ε_k||² as k → ∞, we see that

E[sgn(δ_k² − 0.5δ̇_k²)(ε_k^T δ_k x̃_k − 0.5ε_k^T δ̇_k ẋ̃_k)] = (η/2)E||δ_k x̃_k − 0.5δ̇_k ẋ̃_k||²   (4.34)

We now invoke Jensen's inequality for convex functions [130] to reduce (4.34)

further, yielding

(η/2)E||δ_k x̃_k − 0.5δ̇_k ẋ̃_k||² ≥ |E(ε_k^T δ_k x̃_k − 0.5ε_k^T δ̇_k ẋ̃_k)|   (4.35)

The noisy error term is given by δ_k = e_s(k) + u_k − w_k^T v_k, where the excess error

e_s(k) = ε_k^T x_k. Using the expressions E[ε_k^T δ_k x̃_k] = E(e_s²(k)) − w_k^T Vw_k + w_T^T Vw_k,

E[ε_k^T δ̇_k ẋ̃_k] = E(ė_s²(k)) − 2w_k^T Vw_k + 2w_T^T Vw_k and β = −0.5, we can immediately recognize

that the RHS of (4.35) is simply the steady state excess error autocorrelation at lag

L ≥ m, i.e., |ρ_e(L)|. In order to evaluate the LHS of (4.35), we will assume that the terms

||x̃_k||² and δ_k² are uncorrelated in the steady state. Using this assumption, we can

write










E||δ_k x̃_k − 0.5δ̇_k ẋ̃_k||² = [Tr(R) + 1]E(δ_k²)   (4.36)

where E(δ_k²) = E(e_s²(k)) + σ_u² + ||w_k||²σ_v². Using (4.36) in equation (4.35), we get the

inequality in (4.32).

This assumption (more relaxed than the independence assumptions [11,14]) is used

in computing the steady state excess MSE for the stochastic LMS algorithm [131,132] and

becomes more realistic for long filters. In the estimation of the excess MSE for the LMS

algorithm, Price's theorem [133] for Gaussian random variables can be invoked to derive

closed form expressions. However, even the Gaussianity assumption is questionable, as

discussed by Eweda [134], who proposed additional reasonable constraints on the noise

pdf to overcome the Gaussianity and independence assumptions, leading to a more

generic treatment for the sign-LMS algorithm. It is important to realize at this point that

in the analysis presented here, no explicit Gaussianity assumptions have been made.

As a special case, consider L = 0 and a noise-free input. Then, (4.32) is true with the

equality sign, and |ρ_e(L)| will be the same as E(e_s²(k)), which is nothing but the

excess MSE (as k → ∞) of the LMS algorithm. In other words, (4.32) reduces to

E(e_s²(k)) = (η/2)Tr(R)[E(e_s²(k)) + σ_u²]   (4.37)

From (4.37), the excess MSE for the LMS algorithm [14] can be deduced as

E(e_s²(k)) = ησ_u²Tr(R) / (2 − ηTr(R))   (4.38)

which becomes ησ_u²Tr(R)/2 for very small step-sizes. If the adaptive filter is long

enough, the excess error e_s(k) will be Gaussian, and we can easily show that the excess

MSE is bounded by ηTr(R)E[δ_0²]/4, where δ_0 denotes the error due to the initial

condition [131].

Other Variants of the AEC-LMS Algorithms

It is easy to see that, for convergence in the mean, the condition is

|1 − ηλ_k(R + βS)| < 1 for all k, where λ_k(R + βS) denotes the kth eigenvalue of the matrix

(R + βS). This gives an upper bound on the step-size as η < 2/|λ_max(R + βS)|. From the

triangle inequality [8], ||R + βS|| ≤ λ_max(R) + |β|λ_max(S), where ||·|| denotes the

matrix norm. Since both R and S are positive-definite matrices, we can write

λ_max(R) + |β|λ_max(S) ≤ Tr(R) + |β|Tr(S)

In a stochastic framework, we can include this in the AEC-LMS update equation to result

in a step-size normalized EWC-LMS update rule given by

w(n+1) = w(n) + η sgn(e²(n) + βė²(n))(e(n)x(n) + βė(n)ẋ(n)) / (||x(n)||² + |β| ||ẋ(n)||²)   (4.40)

Note that when β = 0, (4.40) reduces to the well-known normalized LMS (NLMS)

algorithm [14]. Alternatively, we can normalize by the norm squared of the gradient, and

this gives the following modified update rule:

w(n+1) = w(n) + η(e²(n) + βė²(n))(e(n)x(n) + βė(n)ẋ(n)) / (||e(n)x(n) + βė(n)ẋ(n)||² + δ)   (4.41)

The term δ, a small positive constant, compensates for the numerical instabilities when the

signal has zero power or when the error goes to zero, which can happen in the noiseless

case even with a finite number of samples. Once again, we would like to state that with

β = 0, (4.41) defaults to the NLMS algorithm. However, the caveat is that both (4.40) and










(4.41) do not satisfy the principle of minimum disturbance, unlike the NLMS³ [14]. Nevertheless, the algorithms in (4.40) and (4.41) can be used to provide faster convergence at the expense of increased misadjustment (in the error correlation sense) in the final solution.
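For concreteness, a minimal sketch of one step of the two normalized updates follows; it assumes the reconstructed forms of (4.40) and (4.41) above, with the quantities $\dot{e}(n) = e(n) - e(n-L)$ and $\dot{x}(n) = x(n) - x(n-L)$ supplied by the caller, and the helper name is hypothetical:

import numpy as np

def aec_lms_normalized_step(w, x, xdot, e, edot, eta=0.01, beta=-0.5,
                            delta=1e-6, grad_normalized=False):
    """One step of the normalized AEC-LMS updates (4.40) and (4.41)."""
    cost = e**2 + beta * edot**2             # instantaneous AEC cost
    grad = e * x + beta * edot * xdot        # instantaneous gradient
    if grad_normalized:                      # update (4.41)
        denom = grad @ grad + delta
    else:                                    # update (4.40)
        denom = x @ x + abs(beta) * (xdot @ xdot) + delta
    return w + eta * np.sign(cost) * grad / denom

Setting beta=0 in the first branch recovers the familiar NLMS step, matching the remark after (4.40).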

AEC-LMS Algorithm with Multiple Lags

In the previous chapter, we discussed a recursive Newton type algorithm that

included more than one lag in the cost function. With decreasing SNR at the input, the

Hessian matrix $\mathbf{H} = \mathbf{R} + \beta\mathbf{S}$ is mostly determined by the noise covariance matrix. This

can degrade the performance and we might be forced to use very small step-sizes (slow

convergence) to achieve good results. One way of alleviating this problem is to

incorporate multiple lags in the AEC cost function. The stochastic gradient AEC-LMS

algorithm for the multiple lag case is simply given by


$w(n+1) = w(n) + \sum_{L=1}^{L_{\max}} \eta_L\,\mathrm{sign}\big(e^2(n) + \beta\,\dot{e}_L^2(n)\big)\,\big(e(n)\,x(n) + \beta\,\dot{e}_L(n)\,\dot{x}_L(n)\big) \qquad (4.42)$

where $L_{\max}$ is the total number of lags (constraints) used in the AEC cost function. The

additional robustness of using multiple lags comes at an increase in the computational

cost and in the case when the number of lags becomes equal to the length of the adaptive

filter, the complexity will approach that of the recursive Newton type algorithms.
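A sketch of one step of the multiple-lag update (4.42), under the same reconstructed notation, is given below; as a simplification, the stored past errors are reused rather than re-evaluated with the current weights, and all names are hypothetical:

import numpy as np

def aec_lms_multilag_step(w, x, e, x_hist, e_hist, etas, beta=-0.5):
    """One step of the multiple-lag AEC-LMS update (4.42), as reconstructed.

    x_hist[L-1], e_hist[L-1] : x(n-L) and e(n-L) for L = 1..Lmax
    etas                     : per-lag step-sizes eta_L
    """
    for L, eta in enumerate(etas, start=1):
        edot = e - e_hist[L - 1]             # e(n) - e(n-L)
        xdot = x - x_hist[L - 1]             # x(n) - x(n-L)
        cost = e**2 + beta * edot**2
        w = w + eta * np.sign(cost) * (e * x + beta * edot * xdot)
    return w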

The stochastic AEC algorithms have linear complexity, in contrast with the O(N²) complexity of the recursive Newton-type algorithms discussed in the previous chapter. At the same time, since the algorithms are all based on the instantaneous


³The NLMS algorithm is also called the minimum-norm update algorithm. It can be formulated as a constrained minimization problem wherein the actual cost function is the norm of the update, viz., $\|w(n) - w(n-1)\|^2$, and the constraint is that the error $e(n)$ with the weights $w(n)$ must be zero.










gradients, these algorithms have better tracking abilities when compared with their

Newton counterparts. Hence these algorithms can be expected to perform better in non-

stationary conditions.

Simulation Results

Estimation of System Parameters in White Noise

The experimental setup is the same as the one used to test the REW algorithm. We

varied the Signal-to-Noise Ratio (SNR) from -10dB to +10dB and changed the number of filter parameters from 4 to 12. We set $\beta = -0.5$ and used the update equation

in (4.31) for the EWC-LMS algorithm. A time-varying step-size magnitude was chosen in

accordance with the upper bound given by (4.27) without the expectation operators. This

greatly reduces the computational burden but makes the algorithm noisier. However,

since we are using 50,000 samples for estimating the parameters, we can expect the errors

to average out over iterations. For the LMS algorithm, we chose the step-size that gave

the least error in each trial. In total, 100 Monte Carlo trials were performed and

histograms of normalized error vector norms were plotted. It is possible to use other

statistical measures instead of the error norm, but this is sufficient to demonstrate the bias

removal ability of EWC-LMS. For comparison purposes, we computed the solutions with

LMS as well as the numerical TLS (regular TLS) methods. Figure 4-1 shows the error

histograms for all the three methods. The inset plots in Figure 4-1 show the summary of

the histograms for each method. EWC-LMS performs significantly better than LMS at

low SNR values (-10dB and 0dB), while performing equally well for 10dB SNR. The input noise variances for the -10dB, 0dB, and 10dB SNR values are 10, 1, and 0.1,

respectively. Thus, we expect (and observe) TLS results to be worst for -10dB and best



































for 10dB. As per theory, we observe that TLS performance drops when the noise


variances are not the same in the input and desired signals.
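A condensed sketch of one trial of this experiment is given below; the randomly drawn true filter, the fixed small step-size standing in for the time-varying bound-based step-size described above, and the 0dB noise levels are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
N, taps, beta, L, eta = 50_000, 4, -0.5, 1, 1e-4
w_true = rng.standard_normal(taps)

u = rng.standard_normal(N + taps)
X = np.stack([u[i:i + taps][::-1] for i in range(N)])
d = X @ w_true
Xn = X + rng.standard_normal(X.shape)    # input noise (0 dB input SNR)
dn = d + rng.standard_normal(N)          # noise on the desired signal

w = np.zeros(taps)
for n in range(L, N):
    x, xL = Xn[n], Xn[n - L]
    e, eL = dn[n] - w @ x, dn[n - L] - w @ xL
    edot, xdot = e - eL, x - xL
    w += eta * np.sign(e**2 + beta * edot**2) * (e * x + beta * edot * xdot)

# Normalized error vector norm in dB, as used in the histograms.
print(20 * np.log10(np.linalg.norm(w - w_true) / np.linalg.norm(w_true)))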





Figure 4-1. Histogram plots showing the error vector norm (in dB) for the EWC-LMS and LMS algorithms and the numerical TLS solution; the panels correspond to SNRs of -10dB, 0dB, and 10dB with 4, 8, and 12 filter taps.




Figure 4-2 shows a sample comparison between the stochastic and the recursive


algorithms for 0dB SNR and 4 filter taps. Interestingly, the performance of the EWC-LMS algorithm is better than that of the REW algorithm in the presence of noise. Similarly, the LMS algorithm is much better than the RLS algorithm. This tells us that the stochastic























algorithms presumably reject more noise than the fixed-point algorithms. Researchers have made this observation before, although no concrete arguments exist to account for the smartness of the adaptive algorithms [135]. Similar conclusions can be drawn in our case for EWC-LMS and REW.

Figure 4-2. Comparison of stochastic versus recursive algorithms (RLS, REW, EWC-LMS, and LMS) with SNR = 0dB and 4 filter taps.


Weight Tracks and Convergence

The steady state performance of a stochastic gradient algorithm is a matter of great

importance. We will now experimentally verify the steady state behavior of the EWC-

LMS algorithm. The SNR of the input signal is set to 10dB and the number of filter taps

is fixed to two for display convenience. Figure 4-3 shows the contour plot of the EWC

cost function with noisy input data. Clearly, the Hessian of this performance surface has

both positive and negative eigenvalues thus making the stationary point an undesirable

saddle point. On the same plot, we have shown the weight tracks of the EWC-LMS

algorithm with $\beta = -0.5$. Also, we used a fixed value of 0.001 for the step-size. From



the figure, it is clear that the EWC-LMS algorithm converges stably to the saddle point solution, which would theoretically be unstable if a fixed-sign step-size were used. Notice that


due to the constant step-size, there is misadjustment in the final solution. In Figure 4-4,


we show the individual weight tracks for the EWC-LMS algorithm. The weights


converge to the vicinity of the true filter parameters, which are -0.2 and 0.5, respectively, within 1000 samples.
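The contour picture itself is straightforward to reproduce; a sketch under the stated setup (two taps, 10dB input SNR, lag L = 1, β = -0.5; the grid size and sample count are arbitrary choices) is:

import numpy as np

rng = np.random.default_rng(1)
N, L, beta = 2_000, 1, -0.5
w_true = np.array([-0.2, 0.5])

u = rng.standard_normal(N + 2)
X = np.stack([u[i:i + 2][::-1] for i in range(N)])
d = X @ w_true
Xn = X + np.sqrt(0.1) * rng.standard_normal(X.shape)  # 10 dB input SNR

# EWC cost J(w) = E[e^2] + beta*E[edot^2] evaluated on a grid of 2-tap weights.
grid = np.linspace(-1, 1, 61)
J = np.empty((61, 61))
for i, w1 in enumerate(grid):
    for j, w2 in enumerate(grid):
        e = d - Xn @ np.array([w1, w2])
        edot = e[L:] - e[:-L]
        J[j, i] = np.mean(e**2) + beta * np.mean(edot**2)
# J can now be passed to matplotlib's contour() alongside recorded weight tracks.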





Figure 4-3. Contour plots with the weight tracks showing convergence to saddle point.




Figure 4-4. Weight tracks for the stochastic algorithm.










In order to see if the algorithm converges to the saddle point solution in a robust

manner, we ran the same experiment using different initial conditions on the contours.

Figure 4-5 shows a few plots of the weight tracks originating from different initial values

over the contours of the performance surface. In every case, the algorithm converged to




Figure 4-5. Contour plot with weight tracks for different initial values for the weights.


the saddle point in a stable manner. Note that the misadjustment in each case is almost

the same. Finally, to quantify the effect of reducing the SNR, we repeated the experiment with 0dB SNR. Figure 4-6 (left) shows the weight tracks over the contour, and we can see that the misadjustment has increased owing to the decrease in SNR. This is a typical









phenomenon observed with most of the stochastic gradient algorithms. However, the

misadjustment is proportional to the step-size. Therefore, by using smaller step-sizes, the

misadjustment can be controlled to be within acceptable values. The drawback is slow

convergence to the optimal solution. Figure 4-6 (right) shows the weight tracks when the

algorithm is used without the sign information for the step-size. Note that convergence is











not achieved in this case, which substantiates our previous argument that a fixed-sign step-size will never converge to a saddle point.

Figure 4-6. Contour plot with weight tracks for the EWC-LMS algorithm with sign information (left) and without sign information (right).

To further substantiate this fact, we removed the noise from the input and ran the EWC-LMS algorithm with and without the sign term. Figure 4-7 (left) shows the noise-free EWC performance surface and Figure 4-7 (right) shows the weight tracks with and without the sign information on the contours. Clearly, the weights do not converge to the desired saddle point even in the absence of noise. On










the other hand, using the sign information leads the weights to the saddle point in a stable

manner. Since this is the noise-free case, the final misadjustment becomes zero.








Figure 4-7. EWC performance surface (left) and weight tracks for the noise-free case
with and without sign information (right).


Inverse Modeling and Controller Design Using EWC

System identification is the first step in the design of an inverse controller.

Specifically, we wish to design a system that controls the plant to produce a predefined

output. Figure 4-8 shows a block diagram of model reference inverse control [136]. In

this case, the adaptive controller is designed so that the controller-plant pair would track

the response generated by the reference model for any given input (command). Clearly,

we require the plant parameters (which are typically unknown) to devise the controller.

Once we have a model for the plant, the controller can be easily designed using

conventional MSE minimization techniques. In this example, we will assume that the plant is an all-pole system with transfer function $P(z) = 1/(1 + 0.8z^{-1} - 0.5z^{-2} + 0.3z^{-3})$.

The reference model is chosen to be an FIR filter with 5 taps. The block diagram for the

plant identification is shown in Figure 4-9. Notice that the output of the plant is corrupted









with additive white noise due to measurement errors. The SNR at the plant output was set

to 0dB. We then ran the EWC-LMS and LMS algorithms to estimate the model

parameters given noisy input and desired signals. The model parameters thus obtained are

used to derive the controller (see Figure 4-8) using standard backpropagation of error. We

then tested the adaptive controller-plant pair for trajectory tracking by feeding the

controller-plant pair with a non-linear time series and observing the responses. Ideally,

the controller-plant pair must follow the trajectory generated by the reference model.
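A minimal sketch of the inverse-modeling step of Figure 4-9 under these assumptions (the all-pole plant above with the reconstructed coefficient signs, 0dB measurement noise at the plant output, and the same sign-based EWC-LMS step; scipy.signal.lfilter performs the IIR filtering) is:

import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(2)
N, taps, beta, L, eta = 50_000, 4, -0.5, 1, 1e-4

# All-pole plant P(z) = 1/(1 + 0.8 z^-1 - 0.5 z^-2 + 0.3 z^-3).
a = [1.0, 0.8, -0.5, 0.3]
u = rng.standard_normal(N)                   # plant input (command)
y = lfilter([1.0], a, u)                     # clean plant output
yn = y + np.sqrt(np.var(y)) * rng.standard_normal(N)  # 0 dB output SNR

# Inverse modeling: adapt a 4-tap filter from noisy plant output to input.
w = np.zeros(taps)
for n in range(taps - 1 + L, N):
    x = yn[n - taps + 1:n + 1][::-1]         # noisy tap-input vector
    xL = yn[n - L - taps + 1:n - L + 1][::-1]
    e, eL = u[n] - w @ x, u[n - L] - w @ xL
    edot, xdot = e - eL, x - xL
    w += eta * np.sign(e**2 + beta * edot**2) * (e * x + beta * edot * xdot)
# Ideally w approaches the plant's AR coefficients [1, 0.8, -0.5, 0.3].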

Figure 4-10 (top) shows the tracking results for both controller-plant pairs along with the

reference output. Figure 4-10 (bottom) shows a histogram of the tracking errors.


Figure 4-8. Block diagram for model reference inverse control.


Figure 4-9. Block diagram for inverse modeling.


Note that the errors with the EWC-LMS controller are all concentrated around zero, which is

desirable. In contrast, the errors produced with the MSE based controller are significant

















































Figure 4-10. Plot of tracking results and error histograms.






and this can be worse if the SNR levels drop further. Figure 4-11 shows the magnitude


and phase responses of the reference models along with the generated controller-model




Figure 4-11. Magnitude and phase responses of the reference model and designed model-controller pairs.












Note that the EWC controller-model pair matches very closely with the desired transfer function, whereas the MSE controller-model pair produces a significantly different

transfer function. This clearly demonstrates the advantages offered by EWC.

More details on the applications of EWC-LMS in system identification and

controller design problems can be found in [137-139].

Summary

In this chapter, we proposed online sample-by-sample stochastic gradient

algorithms for estimating the optimal AEC solution. The detailed derivations of the

update rules were presented and the convergence was proved rigorously using stochastic

approximation theory. We also derived the step-size upper bounds for convergence with

probability one. Further, the theoretical upper bound on the excess error correlation in the

case of EWC-LMS was derived. The AEC stochastic algorithms include the LMS

algorithm for MSE as a special case. Owing to the complexities of the EWC performance

surface (see Chapter 2), additional information like the sign of the instantaneous cost is

required for guaranteed convergence to the unique optimal AEC solution. In this context,

the AEC optimization problem can be pursued as a root-finding problem and the popular

Robbins-Monro method [140] can be adopted to solve for the optimal solution. We have

not explored this method yet for the AEC criterion.

We also presented several variants of the AEC-LMS algorithm. As a special case,

the normalized AEC-LMS algorithm in equation (4.40) reduces to the well-known NLMS

algorithm for MSE. The gradient normalized AEC-LMS algorithm in equation (4.41) has

shown better performance than the simple AEC-LMS algorithm in our simulation studies.

We then presented simulation results to show the noise rejection capability of the

EWC-LMS algorithm. Experiments were also conducted to verify some of the properties









of the proposed gradient algorithms. In particular, we observed the weight tracks and verified that the algorithm converges in a stable manner even to saddle stationary points.

This is achieved mainly by utilizing the sign information in the gradient update. We also

showed that the amount of misadjustment can be controlled by the step-size parameter. This is

in conformance with the general theory behind stochastic gradient algorithms.

Lastly, we demonstrated the application of EWC in the design of a model-reference

inverse controller. We compared the performance of the EWC controller with the MSE

derived controller and verified the superiority of the former.















CHAPTER 5
LINEAR PARAMETER ESTIMATION IN CORRELATED NOISE

Introduction

In the previous chapters, we discussed a new criterion, termed the augmented error criterion (AEC), that can potentially replace the popular MSE criterion. In fact, we showed

that a special case of the AEC called the error whitening criterion (EWC) can solve the

problem of estimating the parameters of a linear system in the presence of input noise.

We showed extensive simulation results with different EWC adaptation algorithms that proved beyond doubt the usefulness of this criterion in solving system identification and

controller design problems.

Two crucial assumptions were made in the theory behind the error whitening

criterion. Firstly, we assumed that the input noise is uncorrelated with itself, i.e., white. Although in most problems we assume that the noise is white, this assumption can certainly be restrictive in many applications. From the theory we discussed in the previous

chapters, it is easy to conclude that EWC fails to remove the bias in the parameter

estimates when the noise is correlated or colored.

Secondly, we assumed full knowledge of the model order of the unknown system.

This limitation is not unique to the proposed method, as most of the competing methods, including Total Least-Squares (TLS), assume exact model order. To the best of our

knowledge, there is no existing solution to the problem of system identification in the

presence of input noise in cases when the model order is unknown. However, till this

point, we have not dealt with the implications of using the proposed EWC when the