
Global optimization algorithms for adaptive infinite impulse response filters

University of Florida Institutional Repository
1118d00af7cdda647d8224cbcf0ecabcaaeb9c25
21388 F20110109_AABGGQ lai_c_Page_64thm.jpg
2794740f48802749233a617e324678a6
106018468e051f5a053fab821858bb1dcb696298
83188 F20110109_AABGHG lai_c_Page_74.QC.jpg
938f3b1c58c8fa988a31ce7ddf0792db
9130339a0d8ed1314209ba678be69af0cf9a180d
60318 F20110109_AABGGR lai_c_Page_65.QC.jpg
faace4712b39741b246357bb0048b5a1
9c144e8808df7ec16c7d007e9fb570d25709728c
25733 F20110109_AABGHH lai_c_Page_74thm.jpg
01549dbe94828bdc29fe69eb17936b78
7996c0cf842d9494a158fd4f594966595e43f13c
19002 F20110109_AABGGS lai_c_Page_65thm.jpg
26c8c4053a8c79dc414df35670942944
108c8e173433323ba55076a7902b27c1ba0a6c69
62048 F20110109_AABGHI lai_c_Page_75.QC.jpg
80e9589d6f34ef468c4dd19beab97220
cd3718d6e00bc2e91e245c6619f512d18e909935
54670 F20110109_AABGGT lai_c_Page_66.QC.jpg
ff058bc1a70b132e5bb025ef8228b55d
fe735100757aaa8a4ef3b9363c73fa63aca32b26
18106 F20110109_AABGHJ lai_c_Page_75thm.jpg
5e3f4930c9289133f3fbce13bb5c99c8
b39baa6926a1a020311dee4d9a50ed1479bda1d9
47617 F20110109_AABGGU lai_c_Page_67.QC.jpg
0a4a05f22963b3790b67bad87dbcfec0
f7bab3d4c88a7a0bdc82c967e6443ea455bf97fa
54757 F20110109_AABGHK lai_c_Page_76.QC.jpg
1e0483c0b9c4a4ad6beddc5eb5481a76
97ab3bc8aa4763cfe394755841bfd80354a4ab2b
16364 F20110109_AABGGV lai_c_Page_67thm.jpg
a5bd3fca7e31a7564d769048422a00dc
bdf6da778b7f242bc4990e089344c4a5143037f3
16691 F20110109_AABGHL lai_c_Page_76thm.jpg
a2054b348efd961cf778b5ef704a1c78
23cb05da4be97f95ec75b84617f732f3675a56e6
21532 F20110109_AABGGW lai_c_Page_68thm.jpg
382ebd981e69f8a7be8684e5889c947c
df8f6f4443febd45f8d441b779307e65bdedc417
26686 F20110109_AABGIA lai_c_Page_86.QC.jpg
4a8102ac75c438abd4ea2a7e514c964b
32083121ac1dcac519449c1b29c39e88c6421fc9
74840 F20110109_AABGHM lai_c_Page_77.QC.jpg
c3d382ce9c627fcabe70ca9564a07299
59ee8546be5a9e0dc77d8edfe92072a467a7d58d
56232 F20110109_AABGGX lai_c_Page_69.QC.jpg
d07de699690005538c168254a4357b0c
95e4295cad99f1a402c15ad4f0fecaeb1c8bfaf3
9265 F20110109_AABGIB lai_c_Page_86thm.jpg
bd8d924257a12f6eb3d712a0eb1fbf66
b02499e6b97899e3502e06e7abfc4a7771b0fb0a
43319 F20110109_AABGHN lai_c_Page_77thm.jpg
488fb54753b97888cdf9dfbe09d522c3
3ccd9f6ddf13c35c8ae0cd2d03a251d44f29ab7b
17455 F20110109_AABGGY lai_c_Page_69thm.jpg
cf5f0e34e0bec83e77f97a2ad9538183
8a8334133d23732391296113df7d41df536428c9
77525 F20110109_AABGIC lai_c_Page_87.QC.jpg
df5f76e29b6d2dfcea7acfc8c5e8be10
d73710a6fe5638ce49baf766dbfda9e99bce4091
105526 F20110109_AABGHO lai_c_Page_78.QC.jpg
bafdf0297e6ecb418dfcbac4d0587461
d7e4c3fb64ada25ad83302484c078e73d617da6f
47097 F20110109_AABGGZ lai_c_Page_70.QC.jpg
a731a6020990c56beefdc0f40bf1b9f6
0375f47d38006bc328944cfaa2f714e3a21c1658
21074 F20110109_AABGID lai_c_Page_87thm.jpg
11556bba7179f856899b25daae32327f
c8de47a364f458a9324bbabcb38b3fd4b9522cac
48442 F20110109_AABGHP lai_c_Page_78thm.jpg
dc23bf87da8eb5b8669267f540d5a451
4301984ff2a69696d731ec7c6a986e43386ba56b
112188 F20110109_AABGIE lai_c_Page_88.QC.jpg
e748a2c20ef62f4b8c9adf854523e086
0b58381b3824b483804137d8c873ea4a868d838a
54167 F20110109_AABGHQ lai_c_Page_79.QC.jpg
c9a462a0ae86f3991105f6f40732b350
c858515d5d80891245beedc78df6e6d76faf6d24
101209 F20110109_AABGIF lai_c_Page_89.QC.jpg
47eb2004a6bfb8e4387a06153fb71811
c6f2740ce5259c21857bd0475409c3af6bf33e9c
46861 F20110109_AABGIG lai_c_Page_89thm.jpg
78f089227acfe059fd4bfc8e5049738c
7c61cc51cd921c6b4ca4e11ee8162ae51851fe37
31626 F20110109_AABGHR lai_c_Page_80.QC.jpg
1646dd4756c5909d64913e2c702cc9b9
1ace1469ec75e685f78c3eb1af1686ecd6725353
74108 F20110109_AABGIH lai_c_Page_90.QC.jpg
6bf029ce46884622d2592c307ccd7912
84dcb761a4876cafbff46a6553fd0abf995864df
21079 F20110109_AABGHS lai_c_Page_81thm.jpg
620cf740f888b890c69262b4f84e9de6
6b058600c64265b2ab1d1f7803f0d0b2c9aad526
23045 F20110109_AABGII lai_c_Page_90thm.jpg
a67eaa9dbf6a0ee92b6e470e2cbe73b1
0cb05a7ee39bf403e69cc3e16126192f2cb90d13
103031 F20110109_AABGHT lai_c_Page_82.QC.jpg
c5c680f08feeac726973fb73c3c638d8
e7d4c93d27d005b76838ba819d337bdddf6eafba
26556 F20110109_AABGIJ lai_c_Page_91thm.jpg
999472ca5812c93d53b9170cbf70a55c
d325eb93a9a516b73de90adb654afde510cabda3
48751 F20110109_AABGHU lai_c_Page_82thm.jpg
a378f69c225f295515feff2a3088c7f2
af960a67d6b152ff9d0e0226e5b4bb57fa9cc732
87605 F20110109_AABGIK lai_c_Page_92.QC.jpg
d09a14bb8d8454ee9ca01295b320491f
03ebaea761ff69aaf9a87b20b3ae3a5cbba89d2f
72652 F20110109_AABGHV lai_c_Page_83.QC.jpg
0a33e092d77e427c4654a61ecdc81a4d
c29db82db3d09856cf2989316ec5e1e687c3bbd6
26507 F20110109_AABGIL lai_c_Page_92thm.jpg
4865b0a43e6f5f91d78287d7dbecb7c9
ff2a8e0a37dab308007435beee5c10915dc5b09b
22142 F20110109_AABGHW lai_c_Page_83thm.jpg
c3b216ff7ef6b20224d532ef25e740ad
15dd156aebbf6a31e10ab53d26694de5dd925c71
92534 F20110109_AABGIM lai_c_Page_93.QC.jpg
a6b8aec3f7cb1979450a2648b6b524e8
ad2ad6e2eb1a669908186eb8a3c300e1d886dbbb
19671 F20110109_AABGHX lai_c_Page_84thm.jpg
1f05c4f8ddc1ffc4d59f73098a2b8f5b
fac4448e7e075dd7265f17b161b456574725748f
26867 F20110109_AABGIN lai_c_Page_93thm.jpg
d060110ae1c257e224fc1070d609efb0
6482f57f4c07f9735d7b261e00821f00a5186696
31404 F20110109_AABGHY lai_c_Page_85.QC.jpg
ebc4d988eefd0d02113406a55b3be06f
dbad7237b6a9b95f3049dc39f3dd6d0ac3b80105
85459 F20110109_AABGIO lai_c_Page_94.QC.jpg
362179c3da5dce11094c67f861b01615
21f3b1e3b58360e29bb4c16b5db8528995cfea94
10679 F20110109_AABGHZ lai_c_Page_85thm.jpg
27b2ed7aa0b7521a224fc586b8b2a8e7
71dda2d35bec9c7610cb139249b02791fb989351
25334 F20110109_AABGIP lai_c_Page_94thm.jpg
b04047b9b40ad7ad27e6df108a0f790e
1a516db1153045208ae02e0014e283032dcdf039
82003 F20110109_AABGIQ lai_c_Page_95.QC.jpg
fd957e0015de07cc9ad8e73eeb04d121
2dd63563965694e294601e710a9f56015f008050
23999 F20110109_AABGIR lai_c_Page_96.QC.jpg
37a5ac98d6efbf622443cfba1eb41157
da777c9f8421dbff86d59c4712ecf5b1e207bbb1
25801 F20110109_AABGIS lai_c_Page_97.QC.jpg
3caaf71ecd06417ce0b7967dbd856be3
b1350ba7df39a45704012b9d36a55e4592ab2f21
8098 F20110109_AABGIT lai_c_Page_97thm.jpg
2e57f0ce32d24f85f456053b314eb258
548b913a1ec3b5f2a4c03c8bdbe4a2ed24337bd6
111074 F20110109_AABGIU UFE0000558_00001.mets FULL
f634821ceea3c92f681557a6697ea647
b826d93ee8d218ef0318888cc6e845349a303aa1
F20110109_AABFJA lai_c_Page_44.tif
4509f341ea03ec98479a4b1ed0c037d2
8458bc916eb1d314f26f6721c781cd77b09f1e09
F20110109_AABFJB lai_c_Page_93.tif
2c9b37fd0bef16cc3480b331e12a627c
07487e4a7599190748a53ed257d580261c9d5b59
75164 F20110109_AABFIN lai_c_Page_86.jpg
3542e8c45445dfd524c65f4331223b18
ead5577f02a158b769f6a9d0f73fe04e141d0b1b
88919 F20110109_AABFJC lai_c_Page_81.jp2
dcb3066e639f74340387a6094c3fa848
5ab091e8ce3efd51d6fe6781584aaf67e5dcfa67
209132 F20110109_AABFIO lai_c_Page_48.jpg
0ca334f31887ca3558e2a3838dbfe054
4e5cb0cd3817f6221267afe079964768ea412b9c
85797 F20110109_AABFJD lai_c_Page_27.QC.jpg
9dd792d2c6ec583bfa5ad165d55b93ff
d414421a509ce5575f1bc6c211ca66bea57fc17b
23631 F20110109_AABFIP lai_c_Page_79.pro
bc2bdec15e2770337787777189d188e8
ddb13b131c22b87988ffb1a6382a8e3d21724de1
F20110109_AABFJE lai_c_Page_18.tif
1ef8938c5352983399b5c1955e1bde1f
f3db268b972affd0f37b428c94a59cfc1dc9fcc4
53799 F20110109_AABFIQ lai_c_Page_78.pro
56d145477cbb0c41b76cdc9e67dfaccd
3cff6d480835a7d0e0ccf109368b7d88050c24e0
80056 F20110109_AABFJF lai_c_Page_38.QC.jpg
e73f63851ebdd141fd715047b73bea9e
469b76f490cf2065c6ecf7e0000fcf55f52070ad
1051984 F20110109_AABFIR lai_c_Page_06.jp2
5c9b9aa23665c0de1672c588f8ffb18a
5af72ad837e39cb348442297ed8898a9ed59a212
42614 F20110109_AABFJG lai_c_Page_80.jp2
cc15df2043944dc1a7447ec4c73b365b
ef03f47961fbe5333f36c52c9bfe6409e8a6c2ae
10963 F20110109_AABFIS lai_c_Page_80thm.jpg
076f848263487940d71222e96454518c
011954c385329e3172dd8698bcafeb690383911a
F20110109_AABFJH lai_c_Page_38.tif
8e8aef916e5803d8fb8ed7f7893e1b06
b2a6d530acf599902a99b3d761bb25968f533e1c
66574 F20110109_AABFJI lai_c_Page_60.QC.jpg
a94c20b2869911676b7b5183b25cb7b0
0c3af44837c6c59ad710cc0f3302be9359197c87
86572 F20110109_AABFIT lai_c_Page_72.jpg
418a5c001c187afb44f11d8d06df8b69
501f213145bbf774e16cb4f986d97cac015abb7e
19241 F20110109_AABFJJ lai_c_Page_66thm.jpg
3b3e48203926535302e24c0815f296b6
0442ceb13a433feab2fb887f046a229266506ce5
181069 F20110109_AABFIU lai_c_Page_24.jpg
954faa13e2bb6830377a5b60b3d9eef2
0c49f4d016e6ffd2a3c69d219693b84b89ff91a2
18572 F20110109_AABFJK lai_c_Page_50thm.jpg
9aac3873ce98d995fbe687ee3a35a0be
09f313de4abcc0191cc0a28944a036d4af437671
1938 F20110109_AABFIV lai_c_Page_08.txt
6f35f720fd4a4f4b0debb25be81be009
987f6b942cfc1ba35f24c4c04c09f28a85f68f4b
F20110109_AABFJL lai_c_Page_03.tif
cdbad4ec878aea0f95740401342fb9b0
a500f90c1c5901ac084d8045b809d6fe6b77ef62
F20110109_AABFIW lai_c_Page_48.tif
35714760801e28a3e665f9d7377d3217
faf2b30153f93319c107f1262ab6c1e4c3d4f8dd
276444 F20110109_AABFKA lai_c_Page_03.jpg
cc569116340dcca59188c5d642b9b62f
3e314da77abd5adff89f3505dd7e4991eb6db0e2
F20110109_AABFJM lai_c_Page_84.tif
4fc878c107a5221ee0910881893da73f
1f337476d648c4dcc81c573680f0db544460da57
F20110109_AABFIX lai_c_Page_19.tif
4490cb5dbabd5303a4be4b74bb0d395d
cb3576aec5abd9dae14206d105bf3a01e8b36b74
74677 F20110109_AABFKB lai_c_Page_68.QC.jpg
d150721ef3b3fd1a3832dd7a0e71dda6
40a065526a724238b4b54308e0a67e5b7da96734
18456 F20110109_AABFJN lai_c_Page_79thm.jpg
328bd87073a1d7235eef454fd60e8192
571af4fb7f87c5cb989c44625754b61d6976b5c0
1051972 F20110109_AABFIY lai_c_Page_61.jp2
f152aff4b2128c232b6552e86cde5efe
694202872ac1ff08abfb09fd0b96a371f134334d
87305 F20110109_AABFKC lai_c_Page_47.jp2
0c4765257f5f4cd577a78f34dbae3e70
d869139bd571310251f0a6b9185b32a6d3be8610
F20110109_AABFJO lai_c_Page_58.tif
fc266071c64bfb24f2d13f67286d043b
421224ba6976e85575f3925440a43b5cfa5e9434
19833 F20110109_AABFIZ lai_c_Page_40thm.jpg
08f8a181479e03757c721021828189a6
f84b2d74ecb30f73ccf00df02f0428b9f9392a86
49843 F20110109_AABFKD lai_c_Page_88thm.jpg
336c6f43fe320c28142db8c0b0c7f9e6
073187362875d6a1c8f60699dd65665d0afe6708
199404 F20110109_AABFJP lai_c_Page_57.jpg
a6849b60bc83e1a975c95b93edbf583e
33b8f38cb8ba8eb2bd4cd5879c19fce35842cc45
115313 F20110109_AABFKE lai_c_Page_49.jp2
163afa72ab9a97527c047b387888d594
165ddff7e8b35fffd3eb8ae176f8aabeb94bb475
18410 F20110109_AABFJQ lai_c_Page_17.QC.jpg
7fd50ddbdf1fa93887d05c3832d77f56
eb3d438165ebfbe3c256a8113bc5c8344eb14044
1685 F20110109_AABFKF lai_c_Page_54.txt
2987633f82780b67b67e3a3887e0cc82
21b3f403db18ab094af354ecb76622477bd76f13
F20110109_AABFJR lai_c_Page_28.tif
f1fe51e01ce39fdb52b14105bb8d232f
9a0c1aa631eaf7779ebbc6b5f3429c2d1b501339
103237 F20110109_AABFKG lai_c_Page_87.jp2
6664b2e92364be3e11140ded3203f2f2
4a1440187f49057b1a3f0e63c23b0283a6b68935
65881 F20110109_AABFJS lai_c_Page_81.QC.jpg
a48f88abe0dcb7d9d1228809a479695c
b32ba08525f709002123eeeea02d585290394b09
160909 F20110109_AABFKH lai_c_Page_54.jpg
d3ec179f2c5e9a6d2db4797a666e894a
f465c0ffdce670850f231e6ad5931c18ebd75dd1
222086 F20110109_AABFJT lai_c_Page_90.jpg
4d747720e9b4256501aa0baffeb943dc
00d33bc5c41f217c4ee6110e6978c44abaad5903
226296 F20110109_AABFKI lai_c_Page_53.jpg
aae1545424fe782d483f7ed761d706fe
d484530add4e515f42e56dedc32b295c35b06f39
1798 F20110109_AABFKJ lai_c_Page_35.txt
02e583673f91ca22004bae041e76bcc3
803023ac3d90c7f60e4d8ba1e3638aa1a23aa939
52840 F20110109_AABFJU lai_c_Page_11thm.jpg
fa0bafdad4306e1b02094d64b10fc2b4
03ade3469b79a0fe836d785e4aa156de336e0af9
53878 F20110109_AABFKK lai_c_Page_44.QC.jpg
ad8ca007dc5dab4262780b524de3ee53
3f350e73bfd922916c37ea50885cc764ac559bc7
88709 F20110109_AABFJV lai_c_Page_91.QC.jpg
7c9df26cb02971335dbe49bd93815084
d4754ba1cc85d28d531f2f8d81cf17fa54718dc4
200176 F20110109_AABFKL lai_c_Page_62.jpg
ff74f6dee6556c823884daa1098473be
7b9d4b7fad0228fcf954e70753a4180d2c47623c
1051970 F20110109_AABFJW lai_c_Page_14.jp2
cbe7a8789283ec2154c012adfe8489fc
b43a2acced804cd46e42c8c75007e89fdaadd725
F20110109_AABFKM lai_c_Page_12.tif
c897128092a8c1947299c8c1f4cb63df
4a853fd499bf8b58668c605a267843bc41cb1617
F20110109_AABFJX lai_c_Page_37.tif
465685cb3d69b3b4307192599f67fd5b
ffeba59268a744d764ac3dd5ce2efc23969dfc55
47179 F20110109_AABFLA lai_c_Page_09.QC.jpg
334ee4cf3c8b76ce03b1ba1a328c3d27
1e7d5d9d52b70964d637917bb037e9395b5c32c0
46054 F20110109_AABFKN lai_c_Page_72.jp2
82d75ea78aa771d923c7d692353737bd
f67ec323ef3221d6faff55df8a9da7eee509c54a
2068 F20110109_AABFJY lai_c_Page_58.txt
6defb5dc6019933b3f5c3db8b2b1833e
de5921bf5ce9d04d56f8070fac7395e782f991da
48086 F20110109_AABFLB lai_c_Page_56thm.jpg
480f1cedccbdb410f67054574824502b
18d89345e799b2e5a962426b71f7d9c75953861f
945972 F20110109_AABFKO lai_c_Page_23.jp2
aeced368354127e02241644a406e7465
cad62fe38bd0f1e78122aad83068e0f440356053
F20110109_AABFJZ lai_c_Page_16.tif
80fd676e4bb63aead99020eceda526d8
0f5803537a2e825ab595dff7c70511fbab22d244
F20110109_AABFLC lai_c_Page_59.tif
30b4b860e3ce55e282a3f39a09edbed5
a9628744e4077692f19365829995857f1f03f917
235750 F20110109_AABFKP lai_c_Page_82.jpg
24d505cfcd8d0b9dfea87b20a5d4ea65
59b5338f34074a14fabe9d46b094077b0bb3ef38
F20110109_AABFLD lai_c_Page_95.tif
e4f483626836b356bcde33b510854103
0ac1a6fc3bf24afef4bdb7b58cfbb833c545e651
25357 F20110109_AABFKQ lai_c_Page_95thm.jpg
f8f243ec6dc1423117fea891a3bcc8a1
b87aaffebb8f42d05c036a52e950466add200694
90798 F20110109_AABFLE lai_c_Page_21.QC.jpg
bcd3cf5bdba622f2acbb45ef2c8c7367
1f1b193afbb51472269a2b4857c670bb1dfd441d
233353 F20110109_AABFKR lai_c_Page_89.jpg
c7ec59eed9b0f97041ac1237ceb380ed
c581a2c9a48e3e24b63c55698eb14232b35718c8
1051977 F20110109_AABFLF lai_c_Page_56.jp2
7aa36981420ab36b6a9d50fc64c72eea
632ed9af3bf8048ddeff499adb1241a031b741a5
6695 F20110109_AABFLG lai_c_Page_17thm.jpg
c69e7172f287a1acd7f36f06b7a46fb4
b76531944b5501ee7d2c9b7552974d205294e538
2396 F20110109_AABFKS lai_c_Page_16.txt
53c8b16daadfefbbeecd7c519e31eeff
c96c8e52cfc8eed1af76cfdd154be9cdcd04dbef
83696 F20110109_AABFLH lai_c_Page_12.pro
55c02406022056bafc082151a425c095
9fe6e445c259ae1a2aa1f277a2b908ac93133f9c
F20110109_AABFKT lai_c_Page_65.tif
f4d4e1202c904fb7db5b6a7ada943c68
07c4a811662403abc65d98908319acfdd95ca63d
62495 F20110109_AABFLI lai_c_Page_84.QC.jpg
81be5f920634d3db5025cccfcc0e46dc
de6c0b8878e9b9f9fd8685522bbd73df857a3029
1799 F20110109_AABFKU lai_c_Page_67.txt
2d2b26ab8511345ab508eb5f2a14a93b
a0fb28ddafbce3859e75520e9aea838c8a964c6f
84524 F20110109_AABFLJ lai_c_Page_69.jp2
d39b64fbaddc383fbbeccc86be59c466
51b1c046060e355b89f55ebc9d3c2a6cb527769a
8461 F20110109_AABFLK lai_c_Page_96thm.jpg
a80a51d752c5c4e043dfae7671e938a8
6e87305ca6086930f491736245cad79be40e2259
F20110109_AABFKV lai_c_Page_45.tif
a6c976456a3e811f5aac3e36e10eb4e6
3b3017d5f0d4e8003f4cca6f595a2fa8c5386ee6
1238 F20110109_AABFLL lai_c_Page_79.txt
9918fd3839da66fd9771e7bfcd87b14d
e8f34348f398c3e952bde25a09ac269b61da85fe
355129 F20110109_AABFKW lai_c_Page_12.jpg
11fe00f5cf44e006c01649671df9be96
2b1150b206c58650c88b34e32ba19f0eb66adffb
122761 F20110109_AABFMA lai_c_Page_05.jpg
9edc2eaee70f8a8cbdb09f9da725f2ea
afcd61f6d3c8d38d5f285b916f45cb8397757d30
8640 F20110109_AABFLM lai_c_Page_02thm.jpg
99d5e733c11c4f0919424e52062424cc
e91d2f8f5adc4ace706d4c71e371553503177769
F20110109_AABFKX lai_c_Page_88.tif
e453089a101fdeff3bbaa5053192e058
8c09c7c44a282f96d1cf5765b52b3f2359683a93
275111 F20110109_AABFMB lai_c_Page_06.jpg
311b1233ce47f924220b6f6e5e64d5ef
cd266c3de4612c33e9fa8d887af0cf5d309dd29b
F20110109_AABFLN lai_c_Page_41.tif
79f0193bbbafc9f7474b1b646de84519
5012c992bf45cb307cdd6f28a351e0c3b737be26
8174 F20110109_AABFKY lai_c_Page_01.pro
20e08bc78faf1f5a99253fcf62448809
418182cc49ddbdba74ded6e568c24e1229b49ad8
67871 F20110109_AABFMC lai_c_Page_07.jpg
accfa184da8fc25dda08af18e69a5f5c
da4643062ed7718d05a1d45281a7e3ddf5157fc2
F20110109_AABFLO lai_c_Page_31.tif
2cea6f69ca6653c8001258f502c054b8
f4e1aba2710864ad2786d3d3cf98d89a9ff1bd66
52600 F20110109_AABFKZ lai_c_Page_01.jpg
007e8958581e653156579bad9c4e58ad
a5ab864f59121039596f6a4f5e2c0c2cc892ddf9
190331 F20110109_AABFMD lai_c_Page_08.jpg
8223fb3a548a10f979c30dacc70829f7
0c4ef6121e88ec4f0354d627ce53a88f074f5e7d
280156 F20110109_AABFLP lai_c_Page_93.jpg
acb436711e044615b50e251a86cfa310
47a6ce3e004f09b0c217b141bc0d7669789a6848
128818 F20110109_AABFME lai_c_Page_09.jpg
452990ab634834aa3fad83749ca0d2ad
946e0eec59e20c8e25c3c5daa46c1bdf5e1230f8
15336 F20110109_AABFLQ lai_c_Page_50.pro
770a9abaad3f65b79a1dbc32d1e0e9e1
36ecb241e74a1665d1480c90b74a03a35099fbe6
321074 F20110109_AABFMF lai_c_Page_10.jpg
ee6d381af5947f4b568c4dcc8b60cbb3
ba7cd4148ae18aef2b503a3baa275e639541b3d8
9895 F20110109_AABFLR lai_c_Page_85.pro
107d5bb97a15fe4592d8268d7d8f4b4c
f356160f98a7ca52000925d779c017a4dfc39b6c
311613 F20110109_AABFMG lai_c_Page_11.jpg
5881cb458d530a8c25d1981eb2d38e49
903e82a9cd866e882dde3ef5f675a71ccecf5baa
10841 F20110109_AABFLS lai_c_Page_72thm.jpg
cc59c17e8527d68c6df3b2f1e6b421bc
2a7b4fe096aaf30fed0a44b1936cc535a57d16a8
285264 F20110109_AABFMH lai_c_Page_13.jpg
44a7c19e6cf92f1188ebf1b5a2e3311d
7c9c6b3c982e03fba737daeaee9136b6c973821a
466 F20110109_AABFLT lai_c_Page_01.txt
ba69d9c971f8d550ead3db2cd2bdc495
8f62ff624e02cd97e8a3a5de3b632146dd2ff3be
290146 F20110109_AABFMI lai_c_Page_14.jpg
62f9f0c0631089299482695a760f1259
4a32fcdd7f4baac6db184003ed0d3dd6a67bf0d7
40504 F20110109_AABFLU lai_c_Page_58.pro
905ac14b7e07ee696672d569b5b5199c
911836be4cdedfdd9879357ce0bb63d31ee6bb73
272085 F20110109_AABFMJ lai_c_Page_15.jpg
e3db3da45206dc94af4c882146e5260d
7d6d258a0922cfce35b17964d9ebcde30f99a5d5
143796 F20110109_AABFLV UFE0000558_00001.xml
baf5e8ead305d58e7828796ac9358c31
b992c3893a94e502417f9601b1831a70b7a4e1ea
271174 F20110109_AABFMK lai_c_Page_16.jpg
8d091f210bdacffdbb6b9a6141715024
4f92d878fdf95b3dc612cc72e7a43b438ebeeff4
51444 F20110109_AABFML lai_c_Page_17.jpg
92ed51dd7dcff49b9903d1059c26fcaa
7434181b91ec7854434fe1f4b7ef059b4e091f23
188414 F20110109_AABFMM lai_c_Page_18.jpg
11dced787fbbe27c330374310700af87
ad11b6b1a974c4ff5711e675710d309155f6bff4
230970 F20110109_AABFNA lai_c_Page_33.jpg
18b748f1f4ec36e52c7d5f59fff71b30
99fc110502699bf278484c6a1a2155bb25b1160f
209138 F20110109_AABFMN lai_c_Page_19.jpg
330525c849ad6d4e2b4b25e2c817b286
93a32ca0d990df54aecce94d1915bd0e88a5e4d8
76205 F20110109_AABFLY lai_c_Page_02.jpg
4ad07b3aac1b664b8651e20c4f5ea287
1c718a7e54e7b3602f9d4e4f0d3c906779e8b6de
210167 F20110109_AABFNB lai_c_Page_34.jpg
54702abf749c60a97b1792f9feb39655
bc1ec2c602d67425817b7c407189771f243cfb4c
149061 F20110109_AABFMO lai_c_Page_20.jpg
2115f996b4bb02a7c3286f3fbb98d3fb
e5d1ce263ae9b6efe16ead0b4d5fca92e09720b1
166541 F20110109_AABFLZ lai_c_Page_04.jpg
a03cfdafdaa0a4eea5971f6b000f371b
3f7e24d202943d0d01705d8b96499c15fe3395a5
183921 F20110109_AABFNC lai_c_Page_35.jpg
f47503ca49bd2609c6773425bf313db2
a7ee122b396ba02afaa37679117c1bff469008a3
200752 F20110109_AABFMP lai_c_Page_21.jpg
c902dd7bd3a1cb79334b38b783645206
546494d8c00163b84fceab02f9b876b35bbb1eea
130052 F20110109_AABFND lai_c_Page_36.jpg
8fec773b2b7cd5237a8de4f7964d8a3e
12c56cfca9d9ecb351144e2ba8a5ebcb63c4a421



TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION
   1.1 Motivation
   1.2 Literature Survey
       1.2.1 Adaptive Filtering
       1.2.2 Optimization Method
       1.2.3 Proposed Optimization Method
   1.3 Outline

2 ADAPTIVE IIR FILTERING
   2.1 Introduction
   2.2 System Identification with the Adaptive IIR Filter
   2.3 System Identification with Kautz Filter

3 STOCHASTIC APPROXIMATION WITH CONVOLUTION SMOOTHING
   3.1 Introduction
   3.2 Convolution Function Smoothing
   3.3 Derivation of the Gradient Estimate
   3.4 LMS-SAS Algorithm
   3.5 Analysis of Weak Convergence to the Global Optimum for LMS-SAS
   3.6 Normalized LMS Algorithm
   3.7 Relationship between LMS-SAS and NLMS Algorithms
   3.8 Simulation Results
   3.9 Comparison of LMS-SAS and NLMS Algorithms
   3.10 Conclusion

4 INFORMATION THEORETIC LEARNING
   4.1 Introduction
   4.2 Entropy and Mutual Information
   4.3 Adaptive IIR Filter with Euclidean Distance Criterion
   4.4 Parzen Window Estimator and Convolution Smoothing Function
       4.4.1 Similarity
       4.4.2 Difference
   4.5 Analysis of Weak Convergence to the Global Optimum for ITL
   4.6 Contour of Euclidean Distance Criterion
   4.7 Simulation Results
   4.8 Comparison of NLMS and ITL Algorithms
   4.9 Conclusion

5 RESULTS
   5.1 System Identification with Kautz Filter
   5.2 Nonlinear Equalization
   5.3 Conclusion

6 CONCLUSION AND FUTURE RESEARCH
   6.1 Conclusion
   6.2 Future Research

REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

3-1 NLMS algorithm
3-2 System identification of reduced order model
3-3 Example I for system identification
3-4 Example II for system identification
3-5 Example III for system identification
4-1 System identification of adaptive IIR filter by NLMS and ITL algorithms
4-2 L_p for both MSE and ITL criteria
5-1 System identification of Kautz filter model
5-2 L_p for both MSE and ITL criteria in the Kautz example

LIST OF FIGURES

2-1 Adaptive filter model
2-2 Block diagram of the system identification configuration
2-3 Kautz filter model
3-1 Smoothed function using Gaussian pdf
3-2 Step size β(n) for SAS algorithm
3-3 Global convergence of the weights in the GLMS algorithm
3-4 Global convergence of the weights in the LMS-SAS algorithm
3-5 Global convergence of the weights in the NLMS algorithm
3-6 Local convergence of the weights in the LMS algorithm
3-7 Local convergence of the weights in the GLMS algorithm
3-8 Local convergence of the weights in the LMS-SAS algorithm
3-9 Contour of MSE
3-10 Weight (top) and ||∇y(n)|| (bottom)
4-1 Convergence characteristics of the weights for Example I by ITL
4-2 Euclidean distance of Example I
4-3 Entropy ∫ f²(ε) dε of Example I
4-4 Euclidean distance of Example II
4-5 Convergence characteristics of the weights for Example II by ITL
5-1 Convergence characteristics of the weights for the Kautz filter by the LMS algorithm
5-2 Convergence characteristics of the weights for the Kautz filter by the LMS-SAS algorithm
5-3 Convergence characteristics of the weights for the Kautz filter by the NLMS algorithm
5-4 Convergence characteristics of the weights for the Kautz filter by the ITL algorithm
5-5 Impulse response
5-6 Channel equalization system
5-7 Convergence characteristics of adaptive algorithms for a nonlinear equalizer
5-8 Performance comparison of global optimizations for a nonlinear equalizer
5-9 Average BER for a nonlinear equalizer

ABSTRACT

The major goal of this dissertation is to develop global optimization algorithms for adaptive IIR filters. Since the performance surface of adaptive IIR filters is nonconvex with respect to the filter coefficients, conventional gradient-based algorithms can easily be trapped at an unacceptable local optimum. We need to exploit global optimization methods in adaptive IIR filtering to overcome the problem of convergence to local minima while preserving stability throughout adaptation.

One approach for adaptive IIR filtering is stochastic approximation with convolution smoothing (SAS). We modify the perturbing noise by multiplying it with its cost function. The modified algorithm results in better performance when compared to the original algorithm. We also analyze the global optimization behavior of the proposed algorithm by analyzing the transition probability density of escaping from a local minimum.

A gradient estimation error can be used to act as the perturbing noise, provided it is properly normalized. Consequently, another approach for global IIR filter optimization is the normalized LMS (NLMS) algorithm. The behavior of the NLMS algorithm with decreasing step size is similar to that of the LMS-SAS algorithm from a global optimization perspective.

One issue in the identification of autoregressive moving average (ARMA) systems is that special filter structures must be used to avoid instabilities during training. Here we use the class of orthogonal filters called Kautz filters for ARMA modeling. The proposed global optimization algorithms have been applied to system identification with Kautz filters and to nonlinear equalization to show their global optimum search capability.

CHAPTER 1
INTRODUCTION

1.1 Motivation

The objective of this dissertation is to develop global optimization algorithms for adaptive infinite impulse response (IIR) filtering by using stochastic approximation with a convolution smoothing function (SAS) and information theoretic learning (ITL). This work is particularly motivated by the following facts.

Adaptive filtering has wide application in the digital signal processing, communication, and control fields. A finite impulse response (FIR) filter [1,2] is the simplest structure for adaptive filtering and has been extensively developed. Recently, researchers have attempted to use IIR structures because they perform better than FIR structures with the same number of coefficients. However, the major drawbacks inherent to adaptive IIR structures are slow convergence, possible convergence to biased or unacceptable suboptimal solutions, and the need for stability monitoring.

Stochastic approximation methods [3] have the property of converging to the global optimum with probability one as the number of iterations tends to infinity. These methods are based on a random perturbation that helps find the absolute optimum of the cost function. In particular, the method of stochastic approximation with convolution smoothing has been successful in several applications. It has been empirically proven to be efficient in converging to the global optimum in terms of computation and accuracy. The convolution smoothing function can "smooth out" a nonconvex objective function by convolving it with a suitable probability density function (pdf). In the beginning of adaptation, the variance of the pdf is set to a sufficiently large value, such that the convolution smoothing function "smooths out" the nonconvex objective function into a convex function. Then the variance is slowly reduced to zero, whereby the smoothed objective function returns to the original objective function as the algorithm converges to the global optimum. This variance is determined by a cooling schedule parameter. The cooling schedule is a critical factor in global optimization, because it affects the performance of the global search capability.

Convolution smoothing has been used exclusively with the mean square error (MSE) criterion. MSE has been used extensively in the theory of adaptive systems because of its analytical simplicity and the common assumption of Gaussian distributed error. However, more sophisticated recent applications (such as independent component analysis and blind source separation) require a criterion that considers higher-order statistics for the training of adaptive systems. The Computational NeuroEngineering Laboratory has studied the entropy cost function [4]. Shannon first introduced the entropy of a given probability distribution function, which provides a measure of the average information in that distribution. By using the Parzen window estimator, we can estimate the pdf directly from a set of samples. It is quite straightforward to apply the entropy criterion to the system identification framework [5]. As shown in this thesis, the kernel size of the Parzen window estimator becomes an important parameter in the global optimization procedure. Deniz et al. [6] conjectured that for a sufficiently large kernel size, the local minima of the error entropy criterion can be eliminated. It was suggested that by starting with a large kernel size and then slowly decreasing this parameter to a predetermined suitable value, the training algorithm can converge to the global minimum of the cost function. The error entropy criterion considered by Deniz et al. [6], however, does not consider the mean of the error signal, since entropy is invariant to translation. Here we modify the criterion and study the reason why annealing the kernel size produces global optimization algorithms.
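To make the Parzen window estimator and the kernel-size annealing concrete, here is a minimal Python sketch (not from the dissertation; the Gaussian kernel, the variable names, and the exponential cooling schedule are illustrative assumptions):

    import numpy as np

    def parzen_pdf(errors, grid, kernel_size):
        # Gaussian-kernel Parzen estimate of the error pdf evaluated on `grid`
        e = np.asarray(errors)[None, :]          # shape (1, N)
        x = np.asarray(grid)[:, None]            # shape (G, 1)
        k = np.exp(-0.5 * ((x - e) / kernel_size) ** 2)
        k /= kernel_size * np.sqrt(2.0 * np.pi)
        return k.mean(axis=1)                    # average kernel over the samples

    # Illustrative annealing of the kernel size: start large so the estimated
    # criterion is smooth (fewer local minima), then shrink to a preset value.
    errors = 0.5 * np.random.randn(200) + 0.3    # stand-in error samples
    grid = np.linspace(-3.0, 3.0, 121)
    sigma0, sigma_final, decay = 2.0, 0.1, 0.995
    for n in range(500):
        sigma = max(sigma0 * decay ** n, sigma_final)
        pdf = parzen_pdf(errors, grid, sigma)    # pdf estimate at this kernel size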

1.2 Literature Survey

We surveyed the literature in the areas of adaptive filtering, optimization methods, and the mathematics used in the analysis of the algorithms.

1.2.1 Adaptive Filtering

Numerous algorithms for adaptive filtering have been proposed in the literature [7,8], especially for system identification [9,10]. Valuable general papers on the topic of adaptive filtering are presented by Johnson [11], Shynk [12], Gee et al. [13], and Netto [14]. Johnson's paper focused on the common theoretical basis between adaptive filtering and system identification. Shynk's paper dealt with the error formulations and realizations of various adaptive IIR filtering algorithms. Netto's paper presented the characteristics of the most commonly used algorithms for adaptive IIR filtering in a simple and unified framework. Recently a full book was published on IIR filters [15].

The major goal of an adaptive filtering algorithm is to adjust the adaptive filter coefficients in order to minimize a given performance criterion. The literature on adaptive filtering can be classified into three categories: adaptive filter structures, adaptive algorithms, and applications.

- Adaptive filter structure. The choice of the adaptive filter structure affects the computational complexity and the convergence speed. Basically, there are two kinds of adaptive filter structures.
  - Adaptive FIR filter structure. The most commonly used adaptive FIR filter structure is the transversal filter, which implements an all-zero filter in canonic direct form (without any feedback). For this adaptive FIR filter structure, the output is a linear combination of the adaptive filter coefficients. The performance surface of the objective cost function is quadratic [1], which yields a single optimal point. Alternative adaptive FIR filter structures [16] improve performance in terms of computational complexity [17,18] and convergence speed [19,20].
  - Adaptive IIR filter structure. White [21] first presented an implementation of an adaptive IIR filter structure. Later, many articles were published in this area. For simple implementation and easy analysis, most adaptive IIR filter structures use the canonic direct form realization. Other realizations have been presented to overcome drawbacks of the canonic direct form, such as the slow convergence rate and the need for stability monitoring [22]. Commonly used realizations are cascade [23,24], lattice [25,26], and parallel [27,28] realizations. Further realizations have been presented recently by Shynk et al. [29] and Jenkins et al. [30].
- Algorithm. An algorithm is a procedure used to adjust the adaptive filter coefficients in order to minimize the cost function. The algorithm determines several important features of the whole adaptive procedure, such as computational complexity, convergence to suboptimal solutions, biased solutions, the objective cost function, and the error signal. Early local adaptive filter algorithms were the Newton method, the quasi-Newton method, and the gradient method. Newton's method seeks the minimum of a second-order approximation of the objective cost function. Quasi-Newton is a simplified version of the Newton method that uses a recursively calculated estimate of the inverse of the second-order matrix. The gradient method searches for the minimum of the objective cost function by tracking the direction opposite to the gradient vector of the objective function [31]. It is well known that the step size controls stability, convergence speed, and misadjustment [1]. For FIR adaptive filtering, local methods were sufficient since the optimization was linear in the weights. However, in IIR adaptive filtering this is no longer the case. The most common approaches for adaptive IIR filtering are the equation error algorithm [32], the output error algorithm [12,11], and composite algorithms [33,34] such as the Steiglitz-McBride algorithm [35].
  - The main characteristics of the equation error algorithm are unimodality of the mean-square-error (MSE) performance surface (because of the linear relationship between the signal and the adaptive filter coefficients), good convergence, and guaranteed stability. However, it comes along with a biased solution in the presence of noise.
  - The main characteristics of the output error algorithm are the possible existence of multiple local minima, which affect the convergence speed, an unbiased global optimal solution even in the presence of noise, and the requirement of stability checking during the adaptive processing.
  - The composite error algorithm attempts to combine the good individual characteristics of both the output error algorithm and the equation error algorithm [36]. Consequently, many papers were written to overcome the problems mentioned above.

  - Cousseau et al. [37] proposed an orthogonal filter to overcome the instability problem of adaptive IIR filters, while Radenkovic et al. [38] used an output error method to avoid it.
  - The quadratic constraint equation error method [39] was proposed to remove the biased solutions of the equation-error adaptive IIR filters [40,41]. New composite adaptive IIR algorithms are presented in the literature [42,36].
- Application. Adaptive filtering has been successful in many applications, such as echo cancellation, noise cancellation, signal detection, system identification, channel equalization, and control. Useful information about adaptive filtering applications appears in the literature [1,2,43].

In this dissertation, we focus on adaptive IIR filter algorithms for system identification.

1.2.2 Optimization Method

There are two adaptation methodologies for IIR filters: gradient descent and global optimization. The most commonly used method is the gradient descent method, such as least mean square (LMS) [1]. These methods are well established for the adaptation of FIR filters and have the advantage of being less computationally expensive. The problem with gradient descent methods is that they might converge to a local minimum, and local minima normally imply poor performance. This problem can be overcome through global optimization methods. Such global optimization algorithms include simulated annealing (SA) [44], the genetic algorithm [45], random methods [46], and stochastic approximation [3]. However, global optimization methods suffer from computational complexity, especially for high-order adaptive filters.

Several recent researchers have modified global optimization algorithms to improve their performance. Khargonekar [47] used an adaptive random search algorithm for the global optimization of control systems. This type of global optimization algorithm propagates a collection or a simplex of points but uses more geometrically intuitive heuristics. The most commonly used direct search method for optimization is the Nelder-Mead algorithm [46]. Despite the popularity of the Nelder-Mead algorithm, it does not provide any guarantee of convergence or performance; recent studies relied on numerical results to determine the effectiveness of the algorithm.

Duan proposed the shuffled complex evolution algorithm [48], which uses several Nelder-Mead simplex algorithms running in parallel (and sharing information with each other). Tang [49] proposed a random search that partitions the search region of the objective function into a certain number of subregions. Tang [49] showed that the adaptive partitioned random search in general can provide a better-than-average solution within a modest number of function evaluations.

Yim [50] used a genetic algorithm in his adaptive IIR filtering algorithm for active noise control. He showed that genetic algorithms overcome the problem of converging to a local minimum that affects gradient descent algorithms. Wah [51] improved constrained simulated annealing, a discrete global minimization algorithm with asymptotic convergence to discrete constrained global minima with probability one. The algorithm is based on the necessary and sufficient conditions for discrete constrained local minima in the theory of discrete Lagrange multipliers. He extended this algorithm to solve nonlinear continuous constrained optimization problems. Maryak [52] injected extra noise terms into the recursive algorithm, which may allow the algorithm to escape local optimum points and ensure global convergence. The amplitude of the injected noise is decreased over time (a process called annealing), so that the algorithm can finally converge to the global optimum point. He argues that, in some cases, the naturally occurring error in the gradient approximation effectively introduces injected noise that promotes convergence of the algorithm to the global optimum. Treadgold [53] combined gradient descent and the global optimization technique of simulated annealing (SA); this combination escapes local minima and can improve training time. Staus [54] used a spatial branch and bound methodology to solve the global optimization problem. The spatial branch and bound technique is not practical for identification; however, advances in convex algorithm design using interior point methods, exploitation of structure, and faster computing speeds have altered this picture, and large problems, including interesting classes of identification problems, can now be solved efficiently. Fujita [55] proposed a method (taking advantage of the chaotic behavior of a nonlinear dissipative system) that has inertia and nonlinear damping terms.

The time history of the system, whose energy function corresponds to the objective function of the unconstrained optimization problem, converges to the global minima of the energy function of the system by means of appropriate control of the parameters governing the occurrence of chaos. However, none of these global optimization techniques can rival gradient descent in terms of efficiency in number of computations; therefore, in this thesis we revisit the problem of stochastic gradient descent for IIR filtering.

1.2.3 Proposed Optimization Method

The global optimization methods proposed in this dissertation are based on stochastic approximation methods applied to the MSE cost function and on information theoretic learning. Stochastic approximation represents a simple approach to minimizing a nonconvex function, based on a randomly distributed process for evaluating the search space [56]. In particular, two methods were investigated. The first method [57] is implemented by adding random perturbations to the estimate of the system's dynamic equation. The variance of the random fluctuations must decay according to a specific annealing schedule, which can ensure convergence to a global optimum. The goal of the early large perturbations is to allow the system to quickly escape from local minima. The second method is based on stochastic approximation with convolution smoothing [56]. The objective of convolution smoothing is to smooth out the nonconvex objective function by convolving it with a noise probability density function (pdf). In this method too, the variance of the pdf must decay according to a cooling schedule. The amount of smoothing is proportional to the variance of the noise pdf. The idea of this method is to create a sufficient amount of smoothing at the beginning of the optimization process so that the outcome is a convex performance surface. When the variance of the noise pdf is gradually reduced to zero, the performance surface gradually converges to its original nonconvex form. Both of these methods use the MSE cost function.

We also propose annealing the kernel size in entropy optimization. Entropy can be estimated directly from data using Parzen estimation if Renyi's entropy definitions are used [58,59].

It is also possible to derive a gradient-based algorithm to search for the minimum of this new cost function. Recently, Erdogmus [4,5] used ITL in adaptive signal processing. We developed a global optimization algorithm for entropy minimization by annealing the kernel size (similar to the stochastic approximation with convolution smoothing method for the MSE criterion). We show that this is equivalent to adding an additive noise source to the theoretical cost function. However, the two methods differ, since the kernel function smooths the entropy cost function.

1.3 Outline

In Chapter 2, the basic ideas of adaptive filters and adaptive algorithms are reviewed. In particular, we review the LMS algorithm for adaptive IIR filtering, which is the basic form of our proposed algorithms. Since we focus on global optimization algorithms for adaptive IIR filtering, some important properties of global optimization for system identification are reviewed. The system identification framework with Kautz filters is also presented.

In Chapter 3, we introduce the stochastic approximation with convolution smoothing (SAS) technique and apply it to adaptive IIR filtering. Similar to the GLMS algorithm of Srinivasan [56], we derive the LMS-SAS algorithm. The global optimization behavior of the LMS-SAS algorithm is analyzed by evaluating the transition probability density of escaping from a steady-state point for the scalar case. Because of the noisy gradient estimate, the behavior of the NLMS algorithm with decreasing step size is shown to be similar to that of the LMS-SAS algorithm from a global optimization perspective. The global search capabilities of the LMS-SAS and NLMS algorithms are then compared.

In Chapter 4, the entropy criterion is proposed as an alternative to MSE for adaptive IIR filtering. The definition of entropy (mutual information) is first reviewed. By using the Parzen window estimator for the error pdf, the steepest descent algorithm (ITL algorithm) with the entropy criterion is derived for the system identification framework of adaptive filtering. The weak global optimal convergence of the ITL algorithm is illustrated with simulation examples. Finally, we compare the performance of the ITL algorithm with that of the LMS-SAS and NLMS algorithms in terms of global optimization capability.

In Chapter 5, the associated LMS, LMS-SAS, NLMS, and ITL algorithms for the Kautz filter are first derived. Similarly, we compare the global optimization performance of the proposed global optimization algorithms for the Kautz filters. Finally, the associated algorithms are applied to nonlinear equalization. In Chapter 6, we conclude the dissertation and outline future work.

CHAPTER 2
ADAPTIVE IIR FILTERING

2.1 Introduction

Figure 2-1 shows the basic block diagram of an adaptive filter. At each iteration, a sampled input signal x(n) is passed through the adaptive filter to generate the output signal y(n). This output signal is compared to a desired signal d(n) to generate the error signal ε(n). Finally, an adaptive algorithm uses this error signal to adjust the adaptive filter coefficients in order to minimize a given objective function. The most widely used filter is the finite impulse response (FIR) filter structure.

[Figure 2-1: Adaptive filter model. The input x(n) drives the adaptive filter, whose output y(n) is subtracted from the desired signal d(n) to form the error ε(n) that drives the adaptive algorithm.]

In recent years, active research has attempted to extend the FIR filter into the more general infinite impulse response configuration, which offers potential performance improvements and less computational cost than equivalent FIR filters [60]. However, some practical problems still exist in the use of adaptive IIR filters. As the error surface of IIR filters is usually multimodal with respect to the filter coefficients, learning algorithms for IIR filters can easily be trapped at local minima and be unable to converge to the global optimum [1]. One of the common learning algorithms for adaptive filtering is the gradient-based algorithm, for instance the least-mean-square (LMS) algorithm [61].

The algorithm aims to find the minimum point of the error surface by moving in the direction of the negative gradient. Like most steepest descent algorithms, it may lead the filter to a local minimum when the error surface is multimodal. In addition, the convergence behavior of the LMS algorithm depends heavily on the choice of step size and the initial values of the filter coefficients.

Learning algorithms such as maximum likelihood [62], LMS [1], least squares [2], and recursive least squares [2] are well established for the adaptation of FIR filters. In particular, gradient-descent algorithms (such as LMS) are very suitable for adaptive FIR filtering, where the error surface is unimodal and quadratic. Generally, LMS is the best choice for many applications of adaptive signal processing [1], because of its simplicity, its ease of computation, and the fact that it does not require off-line gradient estimations of data. It is also possible to extend the LMS algorithm to adaptive IIR filters; however, it may face the local minimum problem when the error surface is multimodal. The LMS algorithm adapts the weight (filter coefficient) vector along the negative gradient of the mean-square-error performance surface until the minimum of the MSE is reached. In the following, we present the formulation of the IIR-LMS algorithm. The IIR filter kernel in direct form is constructed as

    y(n) = Σ_{i=0}^{L} a_i x(n-i) + Σ_{j=1}^{M} b_j y(n-j)    (2-1)

Let the weight vector Θ and the signal vector X(n) be defined as

    Θ = [a_0, ..., a_L, b_1, ..., b_M]^T    (2-2)

    X(n) = [x(n), ..., x(n-L), y(n-1), ..., y(n-M)]^T    (2-3)

and let d(n) be the desired output. The output is

    y(n) = Θ^T(n) X(n)    (2-4)

We can write the error ε as

    ε(n) = d(n) - y(n) = d(n) - Θ^T(n) X(n)    (2-5)

So the gradient is

    ∇ = ∂ε²(n)/∂Θ = 2ε(n) [∂ε(n)/∂Θ] = -2ε(n) [∂y(n)/∂Θ]    (2-6)

Let us define the gradient of the output with respect to the weights,

    ∇y(n) = [∂y(n)/∂a_0, ..., ∂y(n)/∂a_L, ∂y(n)/∂b_1, ..., ∂y(n)/∂b_M]^T

From Equation (2-1), we obtain the recursion

    ∇y(n) = [x(n), ..., x(n-L), y(n-1), ..., y(n-M)]^T + Σ_{j=1}^{M} b_j ∇y(n-j)    (2-10)

The gradient estimate is then given by

    ∇̂ = -2ε(n) ∇y(n)    (2-12)

Based on the gradient descent algorithm, the coefficient update is

    Θ(n+1) = Θ(n) - μ ∇̂    (2-13)

Therefore, in IIR-LMS, the coefficient update becomes

    Θ(n+1) = Θ(n) + 2μ [d(n) - y(n)] ∇y(n)    (2-14)

where μ is a constant step size.

For each value of n, Equation (2-4) produces the filter output, and Equations (2-10) and (2-14) are then used to compute the next set of coefficients Θ̂(n+1). Regarding computational complexity, the IIR-LMS algorithm as described in Equations (2-4) through (2-14) requires approximately (L+M)(L+2) calculations per iteration, while FIR-LMS requires only 2N calculations per iteration (with filter length = N).
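A minimal sketch of this IIR-LMS recursion follows, assuming the direct-form filter of Equation (2-1) with the recursive gradient of Equation (2-10); the function names and the toy identification setup are illustrative, and stability monitoring (discussed below) is deliberately omitted:

    import numpy as np

    def iir_lms(x, d, L, M, mu):
        # Direct-form IIR-LMS: adapts a = [a_0..a_L], b = [b_1..b_M]
        # with the stochastic gradient of Eqs. (2-10) and (2-14).
        a, b = np.zeros(L + 1), np.zeros(M)
        y = np.zeros(len(x))                  # filter output history
        grad = np.zeros((len(x), L + 1 + M))  # history of grad y(n)
        for n in range(len(x)):
            xv = np.array([x[n - i] if n - i >= 0 else 0.0 for i in range(L + 1)])
            yv = np.array([y[n - j] if n - j >= 0 else 0.0 for j in range(1, M + 1)])
            y[n] = a @ xv + b @ yv            # Eq. (2-1)
            g = np.concatenate([xv, yv])      # Eq. (2-10): regressor ...
            for j in range(1, M + 1):         # ... plus feedback of old gradients
                if n - j >= 0:
                    g = g + b[j - 1] * grad[n - j]
            grad[n] = g
            e = d[n] - y[n]                   # Eq. (2-5)
            theta = np.concatenate([a, b]) + 2 * mu * e * g   # Eq. (2-14)
            a, b = theta[:L + 1], theta[L + 1:]
        return a, b

    # Usage sketch: identify a simple IIR system from input/output data.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(5000)
    d = np.zeros_like(x)
    for n in range(len(x)):                   # "unknown" system: 0.5 x(n) + 0.4 d(n-1)
        d[n] = 0.5 * x[n] + (0.4 * d[n - 1] if n > 0 else 0.0)
    a, b = iir_lms(x, d, L=0, M=1, mu=0.01)   # should approach (0.5, 0.4)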

Being one of the gradient-descent algorithms, the LMS algorithm may lead the filter to a local minimum when the error surface is multimodal, and the performance of the LMS algorithm depends heavily on the initial choices of step size and weight vector.

Stability check. Jury's stability test [63] was used in this thesis. This stability test ensures that all roots lie inside the unit circle. Since the test does not reveal which poles are unstable, the polynomial must be factored to obtain this information. If the polynomial order is larger than 2 (M > 2), the test becomes computationally expensive. If factoring were done, any unstable set of weights could easily be projected back into the unit circle. The difficulty of the stability check is the polynomial factorization.

To simplify the stability check, one may use a cascade of first- or second-order sections instead of the canonical direct form. In particular, the stability of the Kautz filter, a structure of cascaded second-order sections with complex poles, is easily checked.
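For illustration, here is a sketch of a numerical stability check (not the Jury tabulation itself): the general test factors the denominator polynomial, while for a second-order section the condition collapses to a closed-form triangle test, which is why cascades of low-order sections are attractive. Names and conventions are illustrative:

    import numpy as np

    def is_stable(a_coeffs):
        # Stability of A(z^-1) = 1 - a_1 z^-1 - ... - a_M z^-M:
        # all roots of z^M - a_1 z^(M-1) - ... - a_M must lie inside
        # the unit circle (found here by explicit factoring).
        poly = np.concatenate(([1.0], -np.asarray(a_coeffs, dtype=float)))
        return bool(np.all(np.abs(np.roots(poly)) < 1.0))

    def second_order_stable(a1, a2):
        # Triangle condition for 1 - a1 z^-1 - a2 z^-2 (the Jury test
        # specialized to M = 2): no factoring needed.
        return abs(a2) < 1.0 and abs(a1) < 1.0 - a2

    print(is_stable([0.4]))               # True: single pole at 0.4
    print(second_order_stable(1.2, -0.5)) # True: poles inside the unit circle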

2.2 System Identification with the Adaptive IIR Filter

In the system identification configuration, the adaptive algorithm adapts the coefficients of the filter such that the adaptive filter matches the unknown system as closely as possible. Figure 2-2 is a general block diagram of the adaptive system identification configuration.

[Figure 2-2: Block diagram of the system identification configuration. The unknown system B(z^{-1})/A(z^{-1}) and the adaptive filter B̂(z^{-1})/Â(z^{-1}) are driven by the same input x(n); the perturbation noise v(n) is added to the unknown system output y(n), and the difference between the two outputs forms the error ε(n) that drives the adaptive algorithm.]

The unknown system is described as

    y(n) = [B(z^{-1}) / A(z^{-1})] x(n) + v(n)    (2-15)

where A(z^{-1}) = 1 - Σ_{i=1}^{na} a_i z^{-i} and B(z^{-1}) = Σ_{j=1}^{nb} b_j z^{-j} are polynomials, and x(n) and v(n) are the input signal and the perturbation noise, respectively. The adaptive filter is described as

    ŷ(n) = [B̂(z^{-1}) / Â(z^{-1})] x(n)    (2-16)

where Â(z^{-1}) = 1 - Σ_{i=1}^{n̂a} â_i z^{-i} and B̂(z^{-1}) = Σ_{j=1}^{n̂b} b̂_j z^{-j}. The issues in system identification with adaptive filters are usually divided according to the order of the model:

- insufficient order: Δn < 0;
- strictly sufficient order: Δn = 0;
- more than sufficient order: Δn > 0;

where Δn = min[(na - n̂a), (nb - n̂b)]. In many cases, the last two classes are grouped into one class, called sufficient order, where Δn ≥ 0. A further classification concerns the perturbation:

- without additional noise;
- with additional noise correlated with the input signal;
- with additional noise uncorrelated with the input signal.

The basic objective of the adaptive filter is to adapt its coefficients such that it describes the unknown system in an equivalent form. The equivalence is usually determined by an objective function W(n) of the input, the available unknown system output, and the adaptive filter output signals. For a consistent definition, the objective function W(n) must satisfy the following properties:

- Nonnegativity: W[ε(n)] ≥ 0 for all ε(n);
- Optimality: W[0] = 0.

There are many ways to describe an objective function that satisfies the optimality and nonnegativity properties. The following forms of the objective function are the most commonly used in deriving adaptive algorithms:

- Mean square error (MSE): W[ε(n)] = E[ε²(n)].
- Least squares (LS): W[ε(n)] = (1/(N+1)) Σ_{i=0}^{N} ε²(n-i).
- Instantaneous square error (ISV): W[ε(n)] = ε²(n).

In a strict sense, the MSE is a theoretical value that is not easy to estimate. In practice, it can be approximated by the other two objective functions. In general, the ISV is easily implemented, but it is heavily affected by perturbation noise. Later we present the entropy of the error as another objective function, but first we must discuss the MSE.

The adaptive algorithm attempts to minimize the mean square value of the output error signal, where the output error is given by the difference between the unknown system output and the adaptive filter output. That is,

    ε(n) = [B(z^{-1})/A(z^{-1}) - B̂(z^{-1})/Â(z^{-1})] x(n) + v(n)    (2-17)

The gradient of the objective function estimate with respect to the adaptive filter coefficients is given as

    ∇_Θ̂[ε²(n)] = 2ε(n) ∇_Θ̂[ε(n)] = -2ε(n) ∇_Θ̂[ŷ(n)]    (2-18)

with

    ∇_Θ̂[ŷ(n)] = [ ŷ(n-i) + Σ_{k=1}^{n̂a} â_k(n) ∂ŷ(n-k)/∂â_i ;  x(n-j) + Σ_{k=1}^{n̂a} â_k(n) ∂ŷ(n-k)/∂b̂_j ]    (2-19)

where Θ̂ is the adaptive filter coefficient vector.

This equation requires a relatively large memory allocation to store the data. In practice, a small-step approximation that considers the adaptive filter coefficients to be slowly varying can overcome this problem [64]. Therefore, by using the small-step approximation, the adaptive algorithm is described as

    Θ̂(n+1) = Θ̂(n) + μ ε(n) φ(n)    (2-20)

where φ(n) = {ŷ(n-i), x(n-j)}^T for i = 1, ..., n̂a; j = 1, ..., n̂b, and μ is a small step size.
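A small sketch of how the three criteria could be computed from an error sequence (illustrative code, not from the dissertation; the window length N is a free parameter):

    import numpy as np

    def isv(e, n):
        # Instantaneous square error at time n
        return e[n] ** 2

    def least_squares(e, n, N):
        # Average of the last N+1 squared errors ending at time n
        window = np.asarray(e[max(0, n - N): n + 1])
        return float(np.mean(window ** 2))

    # The MSE E[eps^2(n)] is a theoretical expectation; for a long stationary
    # error sequence it is approximated by the sample average of squared errors.
    e = 0.2 * np.random.randn(10000)
    print(isv(e, 10), least_squares(e, 9999, N=99), float(np.mean(e ** 2)))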

The adaptive algorithm is characterized by the following properties.

Property 1 [65]. The Euclidean square norm of the error parameter vector, defined by ||Θ̂(n) - Θ(n)||², is convergent if μ satisfies

    0 < μ ≤ 2 / ||φ(n)||²    (2-21)

Property 2 [31,66,67]. The stationary points of the MSE performance surface are given by

    E{ [(Â(z^{-1},n) B(z^{-1}) - A(z^{-1},n) B̂(z^{-1})) / (A(z^{-1},n) Â(z^{-1},n))] x(n) · [B̂(z^{-1},n) / Â²(z^{-1},n)] x(n-i) } = 0    (2-22)

    E{ [(Â(z^{-1},n) B(z^{-1}) - A(z^{-1},n) B̂(z^{-1})) / (A(z^{-1},n) Â(z^{-1},n))] x(n) · [1 / Â(z^{-1},n)] x(n-j) } = 0    (2-23)

In practice, only the stable stationary points, the so-called equilibria, are of interest, and usually these points are classified as follows:

- Degenerated points: the equilibrium points where

    B̂(z^{-1},n) = 0,                    if n̂b < n̂a
    B̂(z^{-1},n) = L(z^{-1}) Â(z^{-1},n), if n̂b ≥ n̂a    (2-24)

  where L(z^{-1}) = Σ_{k=0}^{n̂b-n̂a} l_k z^{-k}.
- Nondegenerated points: all equilibria that are not degenerated points.

The equilibrium points that influence the form of the error performance surface have the following property.

Property 3 [12]. If Δn ≥ 0, all global minima of the MSE performance surface are given by

    Â(z^{-1}) = A(z^{-1}) C(z^{-1}),   B̂(z^{-1}) = B(z^{-1}) C(z^{-1})    (2-25)

where C(z^{-1}) = Σ_{k=0}^{Δn} c_k z^{-k}. This means that all global minimum solutions consist of the polynomials describing the unknown system plus a common factor C(z^{-1}) present in the numerator and denominator polynomials of the adaptive filter.

Property 4 [68]. If Δn ≥ 0, all equilibrium points that satisfy the strictly positive realness condition

    Re[ Â(z^{-1}) / A(z^{-1}) ] > 0,   |z| = 1    (2-26)

are global minima.

Property 5 [68]. Let the input signal x(n) be given by x(n) = [F(z^{-1})/G(z^{-1})] w(n), where F(z^{-1}) = Σ_{k=0}^{nf} f_k z^{-k} and G(z^{-1}) = 1 - Σ_{k=1}^{ng} g_k z^{-k} are coprime polynomials, and w(n) is a white noise. Then if

    Δn ≥ nf   and   n̂b - n̂a + 1 ≥ ng    (2-27)

all equilibrium points are global minima.

This property is actually the most commonly used result for the unimodality of the MSE performance surface in cases of identification with sufficient order models. It has two important consequences:

- If n̂a = na = 1 and n̂b ≥ nb ≥ 1, then there is only one equilibrium point, which is the global minimum.
- If x(n) is white noise (nf = ng = 0), and the orders of the adaptive filter are strictly sufficient (n̂a = na, n̂b = nb, and n̂b - na + 1 ≥ 0), then there is only one equilibrium point, which is the global minimum.

Nayeri [69] further investigated this property and obtained a less restrictive sufficient condition that guarantees unimodality of the adaptive algorithm when the input signal is a white noise and the order of the adaptive filter exactly matches the unknown system. The result is given as

Property 6 [69]. If x(n) is a white noise sequence (nf = ng = 0) and the orders of the adaptive filter are strictly sufficient (n̂a = na, n̂b = nb, and n̂b - na + 2 ≥ 0), then there is only one equilibrium point, which is the global minimum.

There is another important property:

Property 7 [67]. All degenerated equilibrium points are saddle points, and their existence implies multimodality (the existence of stable local minima) of the performance surface if either n̂a > n̂b = 0 or n̂a = 1.

This property is also valid for the insufficient order cases.

In 1981, Stearns [70] conjectured that if Δn ≥ 0 and the input signal x(n) is white noise, then the performance surface defined by the MSE objective function is unimodal. This conjecture remained accepted until Fan offered numerical counterexamples to it in 1989 [71].

The most important characteristic of IIR adaptation is the possible existence of multiple local minima, which can affect the overall convergence. Moreover, the global minimum solution is unbiased by the presence of zero-mean perturbation noise in the unknown system output signal. Another important characteristic of IIR adaptation is the requirement of stability checking during the adaptive process. This stability checking requirement can be simplified by choosing an appropriate adaptive filter realization.

2.3 System Identification with Kautz Filter

One of the major drawbacks in adaptive IIR filtering is the stability issue. Since the filter parameters are changing during adaptation, a practical approach is to use cascades of first- and second-order ARMA sections, where stability can still be checked simply and locally. A principled way to achieve the expansion of general ARMA systems is through orthogonal filter structures [72]. Here we use Kautz filters, because they are very versatile (cascades of second-order sections with complex poles, but still with a reasonable number of parameters). The Kautz filter, which can be traced back to the original work of Kautz [73], is based on the discrete-time Kautz basis functions. The Kautz filter is a generalized feedforward filter which produces an output y(n) = Φ(n,β)^T Θ, where Θ is the set of weights and the entries of Φ(n,β) are the outputs of first-order IIR filters with a complex pole at β [74]. Stability of the Kautz filter is easily guaranteed if the pole is located within the unit circle (that is, |β| < 1).

Although the adaptation is linear in the weights θ_i, it is nonlinear in the poles, yielding a nonconvex optimization problem with local minima.

The continuous-time Kautz basis functions are the Laplace transforms of continuous-time orthonormal exponential functions, which can be traced back to the original works of Kautz [73]. The discrete-time Kautz basis functions are the Z-transforms of discrete-time orthonormal exponential functions [74]. The discrete-time Kautz basis functions are described as

    Φ_{2k}(z; β_k) = |1 + β_k| √((1 - β_k β̄_k)/2) · (z^{-1} - 1) / ((1 - β_k z^{-1})(1 - β̄_k z^{-1})) · Π_{l=0}^{k-1} ((z^{-1} - β_l)(z^{-1} - β̄_l)) / ((1 - β_l z^{-1})(1 - β̄_l z^{-1}))    (2-28)

    Φ_{2k+1}(z; β_k) = |1 - β_k| √((1 - β_k β̄_k)/2) · (z^{-1} + 1) / ((1 - β_k z^{-1})(1 - β̄_k z^{-1})) · Π_{l=0}^{k-1} ((z^{-1} - β_l)(z^{-1} - β̄_l)) / ((1 - β_l z^{-1})(1 - β̄_l z^{-1}))    (2-29)

where β_k = α_k + jγ_k, (β_k, β̄_k) is the kth pair of complex conjugate poles, |β_k| < 1 because of stability, and the filter order is always even.

The orthonormality of the discrete-time Kautz basis functions is expressed as

    (1/(2πj)) ∮ Φ_p(z; β_k) Φ_q(1/z; β_k) dz/z = δ_{p,q}    (2-30)

where the contour of integration is the unit circle and the integrand is analytic in the exterior of the circle.

All pairs of complex conjugate poles can be combined into real second-order sections to reduce the degrees of freedom. The resulting basis functions can be described as discrete-time two-pole Kautz basis functions, and the corresponding filter can be simplified as in Figure 2-3, where

    ŷ(n) = Φ(n)^T Θ    (2-31)

    Φ(n) = [φ_0(n), ..., φ_{d-1}(n)]^T    (2-32)

    K_{2k}(z; β) = K_{2k-2}(z; β) A(z; β)    (2-33)

    K_{2k+1}(z; β) = K_{2k-1}(z; β) A(z; β)    (2-34)


Figure 2-3: Kautz filter model.

$K_0(z;\beta) = \gamma_0\,\frac{z^{-1}-1}{(1-\beta z^{-1})(1-\bar{\beta} z^{-1})}$  (2-35)

$K_1(z;\beta) = \gamma_1\,\frac{z^{-1}+1}{(1-\beta z^{-1})(1-\bar{\beta} z^{-1})}$  (2-36)

and

$A(z;\beta) = \frac{(z^{-1}-\beta)(z^{-1}-\bar{\beta})}{(1-\beta z^{-1})(1-\bar{\beta} z^{-1})}$  (2-37)

with $\gamma_0 = |1+\beta|\sqrt{(1-\beta\bar{\beta})/2}$ and $\gamma_1 = |1-\beta|\sqrt{(1-\beta\bar{\beta})/2}$. Here $\beta$ is a complex pole (that is, $\beta = \alpha + j\Omega$).
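To make this cascade concrete, here is a minimal Python sketch (ours, not part of the original text; the pole value, number of taps, and function names are illustrative) that generates the tap signals of Equations (2-33)-(2-36) by linear filtering and forms the output of Equation (2-31):

    import numpy as np
    from scipy.signal import lfilter

    def kautz_taps(u, beta, d):
        """Tap signals phi_0..phi_{d-1} of a two-pole Kautz filter (sketch)."""
        alpha = beta.real
        r2 = abs(beta) ** 2                      # |beta|^2 < 1 for stability
        den = [1.0, -2.0 * alpha, r2]            # (1 - beta z^-1)(1 - conj(beta) z^-1)
        g0 = abs(1 + beta) * np.sqrt((1.0 - r2) / 2.0)
        g1 = abs(1 - beta) * np.sqrt((1.0 - r2) / 2.0)
        ap_num = [r2, -2.0 * alpha, 1.0]         # numerator of the all-pass A(z; beta)
        phi = [lfilter([-g0, g0], den, u),       # K_0: gamma_0 (z^-1 - 1) / den
               lfilter([g1, g1], den, u)]        # K_1: gamma_1 (z^-1 + 1) / den
        while len(phi) < d:                      # Eqs. (2-33)-(2-34): cascade A(z; beta)
            phi.append(lfilter(ap_num, den, phi[-2]))
        return np.vstack(phi[:d])

    rng = np.random.default_rng(0)
    u = rng.standard_normal(1000)                # input signal
    phi = kautz_taps(u, beta=0.5 + 0.4j, d=6)
    theta = rng.standard_normal(6)
    y = theta @ phi                              # y(n) = theta^T phi(n), Eq. (2-31)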


CHAPTER 3
STOCHASTIC APPROXIMATION WITH CONVOLUTION SMOOTHING

3.1 Introduction

Adaptive filtering has become a major research area in digital signal processing, communication, and control, with many applications such as adaptive noise cancellation, echo cancellation, adaptive equalization, and system identification [1, 2]. For simplicity, finite impulse response (FIR) structures are used for adaptive filtering and have many mature practical implementations. However, infinite impulse response structures can reduce computational complexity and increase accuracy. Unfortunately, IIR filtering has some drawbacks, such as slow convergence, possible convergence to biased or unacceptable suboptimal solutions, and the need for stability monitoring. The major issue is that the objective function of IIR filtering with respect to the filter coefficients is usually multimodal. The traditional gradient search method may converge to a local minimum, depending on its initial conditions. Other unresolved problems of adaptive IIR filtering are discussed by Johnson [11] and Regalia [15].

Several methods have been proposed for the global optimization of adaptive IIR filtering [75, 45, 76]. Srinivasan et al. [56] used stochastic approximation with convolution smoothing (SAS) in the global optimization algorithm [3, 76, 77] for adaptive IIR filtering. They showed that the smoothing behavior can be achieved by appending a variable perturbing noise source to the error signal. Here, we modify this perturbing noise by multiplying it with its cost function. The modified algorithm, which is referred to as the LMS-SAS algorithm in this dissertation, results in better global optimization performance than the original algorithm by Srinivasan et al. We have also analyzed the behavior of the global optimization algorithms by looking at their transition probability density of escaping from a steady-state point.


Since we use the instantaneous (stochastic) gradient instead of the expected value of the gradient, error in estimating the gradient naturally occurs. This gradient estimation error, when properly normalized, can be used to act as the perturbing noise. Consequently, another approach to global IIR filter optimization is the normalized LMS (NLMS) algorithm. The behavior of the NLMS algorithm with decreasing step size is similar to that of the LMS-SAS algorithm from a global optimization perspective.

3.2 Convolution Function Smoothing

According to Styblinski [3], a multi-optimal function $f(\theta): R^n \to R^1$, $\theta \in R^n$, can be represented as a superposition of a convex function (i.e., having just one minimum) and other multi-optimal functions that add some "noise" to the convex function. The objective of convolution smoothing can be viewed as "filtering out" the noise and performing minimization on the "smoothed" convex function (or on a family of these functions), in order to reach the global optimum. Since the optimum of the smoothed convex function does not, in general, coincide with the global function minimum, a sequence of optimization steps is required, with the amount of smoothing eventually reduced to zero in the neighborhood of the global optimum. The smoothing process is performed by averaging $f(\theta)$ over some region of the parameter space $R^n$ using the proper weighting (or smoothing) function $\hat{h}_\beta(\eta)$ defined below. Formally, let us introduce a vector of random perturbations $\eta \in R^n$ and add it to $\theta$, thus creating the convolution function

$\hat{f}(\theta,\beta) = \int_{R^n} \hat{h}_\beta(\eta)\,f(\theta-\eta)\,d\eta = \int_{R^n} \hat{h}_\beta(\theta-\eta)\,f(\eta)\,d\eta$  (3-1)

Hence,

$\hat{f}(\theta,\beta) = E_\eta[f(\theta-\eta)]$  (3-2)

where $\hat{f}(\theta,\beta)$ is the smoothed approximation to the original multi-optimal function $f(\theta)$, and the kernel function $\hat{h}_\beta(\eta)$ is the pdf used to sample $\eta$. Note that $\hat{f}(\theta,\beta)$ can be regarded as an averaged version of $f(\theta)$ weighted by $\hat{h}_\beta(\eta)$.

The parameter $\beta$ controls the dispersion of $\hat{h}_\beta$, i.e., the degree of smoothing of $f(\theta)$ (e.g., $\beta$ can control the standard deviation of $\eta_1, \ldots, \eta_n$). $E_\eta[f(\theta-\eta)]$ is the expectation


where $\eta$ is sampled with the pdf $\hat{h}_\beta(\eta)$.

The kernel function $\hat{h}_\beta(\eta)$ should have the following properties:

- $\hat{h}_\beta(\eta)$ is a pdf;
- $\lim_{\beta\to 0}\hat{h}_\beta(\eta) = \delta(\eta)$;
- $\hat{h}_\beta(\eta)$ is piecewise differentiable with respect to $\eta$.

Under these conditions, $\lim_{\beta\to 0}\hat{f}(\theta,\beta) = \int_{R^n}\delta(\eta)\,f(\theta-\eta)\,d\eta = f(\theta-0) = f(\theta)$. Numerous pdfs satisfy the above conditions, e.g., the Gaussian, uniform, or Cauchy pdfs. Let us consider the function $f(x) = x^4 - 16x^2 + 5x$, which is continuous and differentiable and has two separated minima. Figure 3-1 shows the smoothed function, which is the convolution between $f(x)$ and a Gaussian pdf.
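As a numerical illustration (ours, not from the original text), the sketch below estimates the smoothed functional of Equation (3-2) for this example function by Monte Carlo averaging over a Gaussian perturbation; increasing the dispersion visibly flattens the two-minima structure:

    import numpy as np

    def smoothed(f, theta, beta, n_samples=20000, seed=1):
        """Monte Carlo estimate of Eq. (3-2): E[f(theta - beta*eta)], eta ~ N(0,1)."""
        eta = np.random.default_rng(seed).standard_normal(n_samples)
        return f(theta - beta * eta).mean()

    f = lambda x: x**4 - 16 * x**2 + 5 * x      # the bimodal example above
    grid = np.linspace(-4.0, 4.0, 9)
    for beta in (0.0, 1.0, 2.0):                # larger beta -> smoother surface
        print(beta, [round(smoothed(f, t, beta), 1) for t in grid])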


Figure 3-1: Smoothed function using a Gaussian pdf.


where the reflected value is substituted by the empirical average. Likewise, the unbiased double-sided gradient estimate of the smoothed functional $\hat{f}(\theta,\beta)$ can be represented as

$\nabla\hat{f}(\theta,\beta) = \frac{1}{2N}\sum_{i=1}^{N}\left[\nabla f(\theta+\beta\eta_i) + \nabla f(\theta-\beta\eta_i)\right]$  (3-5)

In order to implement either Equation (3-4) or (3-5), we would need to evaluate the gradient at many points in the neighborhood of the operating point, yielding effectively an off-line iterative global optimization algorithm. We will combine the SAS gradient estimate with the LMS algorithm, as described next.


The key to implementing a practical algorithm for adaptive IIR filters is to develop an on-line gradient estimator $\nabla\hat{\varepsilon}(\theta)$, where $\varepsilon(\theta)$ is the error between the desired signal and the output of the adaptive IIR filter. Here we use the SAS-derived single-sided gradient estimate together with the LMS algorithm, where the gradient estimate is

$\nabla\hat{\varepsilon}(\theta,\beta) = \frac{1}{N}\sum_{i=1}^{N}\nabla\varepsilon(\theta+\beta\eta_i)$  (3-6)

A major characteristic of the LMS algorithm is its simplicity. We hold to this attribute by setting $N = 1$ in Equation (3-6) and substituting the neighborhood averaging by the sequential presentation of data, as done in the LMS algorithm. Hence, we obtain the one-sample gradient estimate as

$\nabla\hat{\varepsilon}(\theta,\beta) = \nabla\varepsilon(\theta+\beta\eta)$  (3-7)

This equation is iterated for each input sample. Theoretically, Equation (3-7) shows that the on-line version of the SAS is given by the gradient value at a randomly selected neighbor of the present operating point. The variance of the neighborhood is controlled by $\beta$, which decreases along with the adaptation procedure. Implementing Equation (3-7) requires two filters: one for computing the input-output relationship and the other for computing the gradient estimate at the perturbed point $(\theta+\beta\eta)$. For large-order systems, this requirement is impractical. We investigate the following simplification, which involves the representation of the gradient estimate at $(\theta+\beta\eta)$ as a Taylor series around the operating point. That is,

$\nabla\varepsilon(\theta+\beta\eta) = \varepsilon'(\theta) + \varepsilon''(\theta)\beta\eta + \frac{(\beta\eta)^2}{2!}\varepsilon'''(\theta) + \cdots$  (3-8)

Under this equation, we can use the same filter to compute both the input-output relationship and the gradient estimate. As a first approximation, we only keep the first two terms and assume a diagonal Hessian. This results in the following gradient


estimate

$\nabla\varepsilon(\theta+\beta\eta) \approx \varepsilon'(\theta) + \beta\eta$  (3-9)

This extreme approximation assumes that the second derivative of the gradient vector is independent of $\theta$, so that its variance is constant throughout the adaptation process. The second term $\beta\eta$ on the right-hand side of the above equation can be interpreted as a perturbing noise, which is the important term for avoiding convergence to a local minimum.

Recall that the GLMS algorithm is

$\theta(n+1) = \theta(n) - \mu(n)\,\varepsilon(n)\,\nabla\varepsilon(n;\theta) - \beta(n)\,\eta$  (3-10)

where the appended perturbation noise source is $\beta(n)\eta$.

3.4 LMS-SAS Algorithm

Srinivasan used Equation (3-9) to estimate the gradient in the Global LMS (GLMS) algorithm of Equation (3-10) [56]. Similar to the GLMS algorithm, we now derive the novel LMS-SAS algorithm. Adaptive IIR filtering based on the gradient search essentially minimizes the mean-square difference between a desired sequence $d(n)$ and the output of the adaptive filter $y(n)$. The development of the GLMS and LMS-SAS algorithms involves evaluating the MSE objective function, which can be described as

$\xi(\theta) = \frac{1}{2}E\{\varepsilon^2(\theta)\} = \frac{1}{2}E\{[d(n)-y(n)]^2\}$  (3-11)

where $E$ is the statistical expectation. The output signal of the adaptive IIR filter, represented in a direct-form realization of a linear system, is

$y(n) = a_0 x(n) + \cdots + a_{N-1} x(n-N+1) + b_1 y(n-1) + \cdots + b_{M-1} y(n-M+1)$  (3-12)

which can be rewritten as

$y(n) = \theta^T(n)\,\phi(n)$  (3-13)


where $\theta(n)$ is the parameter vector and $\phi(n)$ is the input vector:

$\theta(n) = [a_0(n), \ldots, a_{N-1}(n), b_1(n), \ldots, b_{M-1}(n)]^T$  (3-14)

$\phi(n) = [x(n), \ldots, x(n-N+1), y(n-1), \ldots, y(n-M+1)]^T$  (3-15)

The MSE objective function is

$\xi(n;\theta) = \frac{1}{2}E\{[d(n) - \theta^T(n)\phi(n)]^2\}$  (3-16)

Now we use the instantaneous value as the expectation, $E\{\varepsilon^2(n)\} \approx \varepsilon^2(n)$, such that

$\xi(n;\theta) = \frac{1}{2}\varepsilon^2(n;\theta) = \frac{1}{2}[d(n) - \theta^T(n)\phi(n)]^2$  (3-17)

Considering the LMS algorithm, we must estimate the gradient vector with respect to the parameters:

$\nabla\xi(n;\theta) = \nabla\frac{1}{2}[\varepsilon^2(n;\theta)] = \varepsilon(n;\theta)\,\nabla[\varepsilon(n;\theta)] = -\varepsilon(n;\theta)\,\nabla y(n)$  (3-18)

The partial derivative term $\partial\varepsilon(n;\theta)/\partial a_i$ is evaluated as

$\frac{\partial\varepsilon(n;\theta)}{\partial a_i} = -\left[x(n-i) + \sum_{j=1}^{M-1} b_j\,\frac{\partial y(n-j)}{\partial a_i}\right]$  (3-19)

Similarly, the partial derivative term $\partial\varepsilon(n;\theta)/\partial b_i$ is evaluated as

$\frac{\partial\varepsilon(n;\theta)}{\partial b_i} = -\left[y(n-i) + \sum_{j=1}^{M-1} b_j\,\frac{\partial y(n-j)}{\partial b_i}\right]$  (3-20)

From Equation (3-9), we obtain

$\nabla\hat{\varepsilon}(n;\theta) = \nabla\varepsilon(n;\theta) + \beta\eta$  (3-21)

Using the above equation, we obtain the adaptive algorithm of steepest descent as

$\theta(n+1) = \theta(n) - \mu(n)\,\varepsilon(n)\,\nabla\hat{\varepsilon}(n)$  (3-22)

$\phantom{\theta(n+1)} = \theta(n) - \mu(n)\,\varepsilon(n)\,\nabla\varepsilon(n;\theta) - \mu(n)\,\varepsilon(n)\,\beta\eta$  (3-23)
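A minimal Python sketch of the resulting LMS-SAS iteration follows (ours, not part of the original text; the plant is the Example I system used later in this chapter, and the linear annealing of the step size and of the noise dispersion, the initialization range, and the pole-clipping stability guard are our illustrative assumptions):

    import numpy as np

    def lms_sas(x, d, mu0, beta0, seed=2):
        """LMS-SAS update of Eq. (3-23) for the model y(n) = b x(n) + a y(n-1)."""
        rng = np.random.default_rng(seed)
        a, b = rng.uniform(-0.5, 0.5, 2)
        y1 = dyda1 = dydb1 = 0.0                  # y(n-1) and gradient states
        N = len(x)
        for n in range(N):
            y = b * x[n] + a * y1
            dyda = y1 + a * dyda1                 # recursions of Eqs. (3-19)-(3-20)
            dydb = x[n] + a * dydb1
            e = d[n] - y                          # error eps(n)
            mu = mu0 * (1.0 - n / N)              # annealed step size
            beta = beta0 * (1.0 - n / N)          # annealed noise dispersion
            grad_eps = -np.array([dyda, dydb])    # grad eps = -grad y
            eta = rng.standard_normal(2)
            a, b = np.array([a, b]) - mu * e * grad_eps - mu * e * beta * eta
            a = float(np.clip(a, -0.999, 0.999))  # crude stability guard (our addition)
            y1, dyda1, dydb1 = y, dyda, dydb
        return a, b

    # usage sketch: identify the Example I plant with the reduced-order model
    rng0 = np.random.default_rng(0)
    x = rng0.standard_normal(20000)
    d = np.zeros_like(x)
    for n in range(2, len(x)):
        d[n] = 0.05 * x[n] - 0.4 * x[n - 1] + 1.1314 * d[n - 1] - 0.25 * d[n - 2]
    print(lms_sas(x, d, mu0=0.05, beta0=0.1))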


where the third term, $\mu(n)\varepsilon(n)\beta\eta$, on the right-hand side is the appended perturbation noise source. $\eta$ represents a single additive random source, $\mu(n)$ is the step size, which decreases over the iterations, and $\varepsilon(n)$ is the error between the desired output signal and the output signal of the adaptive IIR filter.

The difference between LMS-SAS and GLMS resides in the form of the appended perturbation noise source, where we have modified the appended noise source by multiplying it with the error. This modification brings the error into the noise term, which is in principle a better approximation to the Taylor series expansion in Equation (3-8) than Equation (3-9). We can therefore foresee better results.

3.5 Analysis of Weak Convergence to the Global Optimum for LMS-SAS

In this section, we obtain the transition probability of escaping from a local minimum by solving a pair of partial differential equations, which are called the Fokker-Planck (diffusion) equations. We follow the lines of Wong [78]. Here we can write the LMS-SAS algorithm as an Ito integral,

$\theta_t = \theta_a + \int_a^t m(\theta_s,s)\,ds + \int_a^t \sigma(\theta_s,s)\,dW_s$  (3-24)

where

$m(\theta_t,t) = -\mu(t)\,\varepsilon(\theta_t,t)\,\nabla\varepsilon(\theta_t,t), \qquad \sigma^2(\theta_t,t) = \mu(t)\,\varepsilon^2(\theta_t,t)$  (3-25)

Let $\{\theta_t,\ a \le t \le b\}$ be a Markov process, and denote

$P(\theta,t\,|\,\theta_0,t_0) = p(\theta_t < \theta\,|\,\theta_{t_0} = \theta_0)$  (3-26)

We call $P(\theta,t\,|\,\theta_0,t_0)$ the transition function of the process.

We first discuss the simple case of the scalar $\theta$ assumption and then the more involved case of the vector $\theta$ assumption.

$\theta$ is a scalar. If there is a function $p(\theta,t\,|\,\theta_0,t_0)$ so that

$P(\theta,t\,|\,\theta_0,t_0) = \int_{-\infty}^{\theta} p(x,t\,|\,\theta_0,t_0)\,dx$  (3-27)


then we call $p(\theta,t\,|\,\theta_0,t_0)$ the transition density function. Since $\{\theta_t,\ a \le t \le b\}$ is a Markov process, $P(\theta,t\,|\,\theta_0,t_0)$ satisfies the Chapman-Kolmogorov equation

$P(\theta,t\,|\,\theta_0,t_0) = \int_{-\infty}^{\infty} P(\theta,t\,|\,z,s)\,dP(z,s\,|\,\theta_0,t_0)$  (3-28)

We now assume the crucial condition on $\{\theta_t,\ a \le t \le b\}$, which makes the derivation of the diffusion equation possible. Define, for a positive $\delta$,

$M_k(\theta,t;\delta,\Delta) = \int_{|y-\theta|\le\delta} (y-\theta)^k\,dP(y,t+\Delta\,|\,\theta,t), \quad k = 0, 1, 2$  (3-29)

$M_3(\theta,t;\delta,\Delta) = \int_{|y-\theta|\le\delta} |y-\theta|^3\,dP(y,t+\Delta\,|\,\theta,t)$  (3-30)

We assume that the Markov process $\{\theta_t,\ a \le t \le b\}$ satisfies the following conditions:

$\frac{1}{\Delta}\left[1 - M_0(\theta,t;\delta,\Delta)\right] \xrightarrow{\Delta\downarrow 0} 0$  (3-31)

$\frac{1}{\Delta} M_1(\theta,t;\delta,\Delta) \xrightarrow{\Delta\downarrow 0} m(\theta,t)$  (3-32)

$\frac{1}{\Delta} M_2(\theta,t;\delta,\Delta) \xrightarrow{\Delta\downarrow 0} \sigma^2(\theta,t)$  (3-33)

$\frac{1}{\Delta} M_3(\theta,t;\delta,\Delta) \xrightarrow{\Delta\downarrow 0} 0$  (3-34)

It is clear that if $\frac{1}{\Delta}[1 - M_0(\theta,t;\delta,\Delta)] \to 0$ as $\Delta\downarrow 0$, then by dominated convergence,

$\frac{1}{\Delta}\,p(|\theta_{t+\Delta}-\theta_t| > \delta) = \frac{1}{\Delta}\int_{-\infty}^{\infty}\left[1 - M_0(\theta,t;\delta,\Delta)\right] dP(\theta,t) \xrightarrow{\Delta\downarrow 0} 0$  (3-35)

In addition, suppose that the transition function $P(\theta,t\,|\,\theta_0,t_0)$ satisfies the following condition:

Assumption. For each $(\theta,t)$, $P(\theta,t\,|\,\theta_0,t_0)$ is once differentiable in $t_0$ and three times differentiable in $\theta_0$, and the derivatives are continuous and bounded at $(\theta_0,t_0)$.

Kolmogorov [79] derived the Fokker-Planck equation

$\frac{\partial}{\partial t}p(\theta,t\,|\,\theta_0,t_0) = \frac{1}{2}\frac{\partial^2}{\partial\theta^2}\left[\sigma^2(\theta,t)\,p(\theta,t\,|\,\theta_0,t_0)\right] - \frac{\partial}{\partial\theta}\left[m(\theta,t)\,p(\theta,t\,|\,\theta_0,t_0)\right], \quad b > t > t_0 > a$  (3-36)


The initial condition to be imposed is

$\int_{-\infty}^{\infty} f(\theta)\,p(\theta,t\,|\,\theta_0,t_0)\,d\theta \xrightarrow{t\downarrow t_0} f(\theta_0) \quad \forall f \in S$  (3-37)

that is, $p(\theta,t\,|\,\theta_0,t_0) \to \delta(\theta-\theta_0)$. Substituting Equation (3-24) into the Fokker-Planck equation, we get

$\frac{\partial}{\partial t}p(\theta,t) = \frac{1}{2}\frac{\partial^2}{\partial\theta^2}\left[\mu(t)\,\varepsilon^2(\theta)\,p(\theta,t)\right] + \frac{\partial}{\partial\theta}\left[\mu(t)\,\nabla\xi(\theta)\,p(\theta,t)\right]$  (3-38)

If $p(\theta,t)$ is a product $p(\theta,t) = g(t)\,W(\theta)\,\varphi(\theta)$, reflecting the independence among the quantities, then we have

$W(\theta)\varphi(\theta)\frac{dg(t)}{dt} = g(t)\,\mu(t)\left\{\frac{1}{2}\frac{d^2}{d\theta^2}\left[\varepsilon^2(\theta)W(\theta)\varphi(\theta)\right] + \frac{d}{d\theta}\left[\nabla\xi(\theta)W(\theta)\varphi(\theta)\right]\right\}$  (3-39)

Let $W(\theta)$ be any positive solution of the equation

$\frac{1}{2}\frac{d}{d\theta}\left[\varepsilon^2(\theta)W(\theta)\right] = -\nabla\xi(\theta)\,W(\theta)$  (3-40)

Then the right-hand side of Equation (3-39) collapses to

$W(\theta)\varphi(\theta)\frac{dg(t)}{dt} = g(t)\,\mu(t)\,\frac{1}{2}\frac{d}{d\theta}\left[\varepsilon^2(\theta)W(\theta)\frac{d\varphi(\theta)}{d\theta}\right]$

Therefore

$\frac{1}{\mu(t)g(t)}\frac{dg(t)}{dt} = \frac{1}{2W(\theta)\varphi(\theta)}\frac{d}{d\theta}\left[\varepsilon^2(\theta)W(\theta)\frac{d\varphi(\theta)}{d\theta}\right]$

The two sides, being functions of different variables, must be constant in order for the equality to hold. Setting this constant as $-\lambda$, we obtain $g(t) = \exp(-\lambda\int_{t_0}^{t}\mu(s)\,ds)$, where $\varphi(\theta)$ satisfies the Sturm-Liouville equation

$\frac{1}{2}\frac{d}{d\theta}\left[\varepsilon^2(\theta)W(\theta)\frac{d\varphi(\theta)}{d\theta}\right] = -\lambda\,W(\theta)\,\varphi(\theta)$


Under rather general conditions, it can be shown that every solution $p(\theta,t)$ can be represented as a linear combination of such products. Since $p(\theta,t\,|\,\theta_0,t_0)$ is a function of $t, t_0, \theta, \theta_0$, it must have the form

$p(\theta,t\,|\,\theta_0,t_0) = W(\theta)\int e^{-\lambda\int_{t_0}^{t}\mu(s)\,ds}\,\varphi_\lambda(\theta)\,\bar{\varphi}_\lambda(\theta_0)\,d\lambda$  (3-47)

where $\bar{\varphi}_\lambda(\theta_0)$ is the complex conjugate of $\varphi_\lambda(\theta_0)$. Here we want to know the transition probability of the process escaping from the steady-state solution $\theta^*$, at which $\nabla\xi(\theta^*) = 0$. From Equation (3-40), we obtain

$\varepsilon^2(\theta)\,W(\theta) = c$  (3-48)

where $c$ is a constant. The Sturm-Liouville equation becomes

$\frac{1}{2}\frac{d^2\varphi(\theta)}{d\theta^2} + \frac{\lambda}{\varepsilon^2(\theta)}\,\varphi(\theta) = 0$  (3-49)

Let $\lambda/\varepsilon^2(\theta) = \frac{1}{2}\nu^2$; then $\varphi_\nu(\theta) = e^{j\nu\theta}$ are the bounded solutions. And we know that

$\int_{-\infty}^{\infty} e^{j\nu\theta}\,p(\theta,t\,|\,\theta^*,t_0)\,d\theta = e^{-\frac{1}{2}\nu^2\varepsilon^2(\theta^*)\,T}$  (3-50)

where $T = \int_{t_0}^{t}\mu(s)\,ds$. By the inversion formula of the Fourier integral, we obtain

$p(\theta,t\,|\,\theta^*,t_0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\frac{1}{2}\nu^2\varepsilon^2(\theta^*)\,T}\,e^{j\nu\theta}\,d\nu$  (3-51)

From Equation (3-47), we get the transition probability of the process escaping out of the valley as

$p(\theta,t\,|\,\theta^*,t_0) = \frac{1}{\sqrt{2\pi\,\varepsilon^2(\theta^*)\,T}}\exp\!\left(-\frac{(\theta-\theta^*)^2}{2\,\varepsilon^2(\theta^*)\,T}\right) = G\!\left(\theta-\theta^*,\ \varepsilon^2(\theta^*)\int_{t_0}^{t}\mu(s)\,ds\right)$  (3-52)

where $G(\cdot,\sigma^2)$ is a Gaussian function with zero mean and variance $\sigma^2$.


$\theta$ is a vector. The Fokker-Planck equation becomes

$\frac{\partial}{\partial t}p(\theta,t) = \frac{1}{2}\nabla^2\left[\mu(t)\,\varepsilon^2(\theta)\,p(\theta,t)\right] + \nabla\cdot\left[\mu(t)\,\varepsilon(\theta)\nabla\varepsilon(\theta)\,p(\theta,t)\right]$  (3-53)

Similarly, we want to know the transition probability of escaping from the steady-state solution $\theta^*$, at which $\nabla\xi(\theta^*) = 0$. Equation (3-53) becomes

$\frac{\partial}{\partial t}p(\theta,t) = \frac{1}{2}\nabla^2\left[\mu(t)\,\varepsilon^2(\theta)\,p(\theta,t)\right]$  (3-54)

Imposing the strict constraint that $p(\theta,t)$ is a product,

$p(\theta,t) = g(t)\,\varphi(\theta) = g(t)\,\varphi_1(\theta_1)\,\varphi_2(\theta_2)\cdots\varphi_{N+M-1}(\theta_{N+M-1})$  (3-55)

then we have

$\frac{1}{\mu(t)g(t)}\frac{dg(t)}{dt} = \frac{\varepsilon^2(\theta)}{2\,\varphi(\theta)}\nabla^2\varphi(\theta)$  (3-56)

The two sides, being functions of different variables, must be constant. Setting this constant as $-\lambda$, then

$g(t) = \exp\!\left(-\lambda\int_{t_0}^{t}\mu(s)\,ds\right)$  (3-57)

$\frac{\varepsilon^2(\theta)}{2\,\varphi(\theta)}\nabla^2\varphi(\theta) = -\lambda$  (3-58)


Similarly, Equation (3-58) can be presented component-wise as

$\frac{\varepsilon^2(\theta)}{2\,\varphi_i(\theta_i)}\nabla^2_{\theta_i}\varphi_i(\theta_i) = -\lambda_i, \quad i = 1, 2, \ldots, N+M-1$  (3-59)

where $\sum_{i=1}^{N+M-1}\lambda_i = \lambda$. Let $\lambda_i/\varepsilon^2(\theta) = \frac{1}{2}\nu_i^2$; then $\varphi_i(\theta_i) = e^{j\nu_i\theta_i}$, $i = 1, 2, \ldots, N+M-1$, are the bounded solutions. From Equation (3-47), we get the transition probabilities of the process escaping out of the valley as

$p(\theta,t\,|\,\theta^*,t_0) = \prod_{i=1}^{N+M-1}\int_{-\infty}^{\infty} e^{-\frac{1}{2}\nu_i^2\varepsilon^2(\theta^*)\int_{t_0}^{t}\mu(s)ds}\,e^{j\nu_i\theta_i}\,d\nu_i \propto \prod_{i=1}^{N+M-1} G\!\left(\theta_i,\ \varepsilon^2(\theta^*)\int_{t_0}^{t}\mu(s)\,ds\right)$  (3-60)

Under the constraint of factorization of $p(\theta,n)$, the same arguments as in the scalar case hold for the vector case. However, the $\varphi_i(\theta_i)$, $i = 1, 2, \ldots, N+M-1$, are not, in general, independent of each other, and $p(\theta,n)$ must also include the correlated terms besides the independent product term. Therefore, the actual transition probability $p(\theta,t\,|\,\theta^*,t_0)$ is larger than Equation (3-60). In the more realistic case of dependence, the Fokker-Planck equation becomes very complicated, and it is not easy to find the transition function from a steady-state point.

3.6 Normalized LMS Algorithm

Because in practice we use the instantaneous gradient instead of the theoretical gradient, an estimation error naturally occurs. This gradient error can be used to act as the appended perturbing noise. After reviewing the normalized LMS algorithm [2], we show that the global optimization behavior of the NLMS algorithm is similar to that of the LMS-SAS algorithm because of the noisy gradient estimate. As a result, the NLMS algorithm can also be used for global optimization.


Consider the problem of minimizing the squared Euclidean norm of the weight change

$\delta\theta(n+1) = \theta(n+1) - \theta(n)$  (3-61)

subject to the constraint

$\theta^T(n+1)\,\nabla y(n) = d(n)$  (3-62)

To solve this constrained optimization problem, we use the method of Lagrange multipliers. The squared norm of $\delta\theta(n+1)$ is

$\|\delta\theta(n+1)\|^2 = \delta\theta^T(n+1)\,\delta\theta(n+1) = [\theta(n+1)-\theta(n)]^T[\theta(n+1)-\theta(n)] = \sum_{k=0}^{N}|\theta_k(n+1)-\theta_k(n)|^2$  (3-63)

The constraint of Equation (3-62) can be represented as

$\sum_{k=0}^{N}\theta_k(n+1)\,\nabla y_k(n) = d(n)$  (3-64)

The cost function $J(n)$ for the optimization problem is formulated by combining Equations (3-63) and (3-64) as

$J(n) = \sum_{k=0}^{N}|\theta_k(n+1)-\theta_k(n)|^2 + \lambda\left[d(n) - \sum_{k=0}^{N}\theta_k(n+1)\,\nabla y_k(n)\right]$  (3-65)

where $\lambda$ is a Lagrange multiplier. After we differentiate the cost function $J(n)$ with respect to the parameters $\theta_k(n+1)$ and set the results to zero, we obtain

$2[\theta_k(n+1)-\theta_k(n)] = \lambda\,\nabla y_k(n), \quad k = 0, 1, \ldots, N$  (3-66)

Multiplying both sides of the above equation by $\nabla y_k(n)$ and summing over $k$ from 0 to $N$, we obtain

$\lambda = \frac{2}{\|\nabla y(n)\|^2}\sum_{k=0}^{N}\left[\theta_k(n+1)-\theta_k(n)\right]\nabla y_k(n)$  (3-67)


Substituting back the constraint of Equation (3-62) into Equation (3-67), we obtain

$\lambda = \frac{2}{\|\nabla y(n)\|^2}\left[d(n) - \theta^T(n)\,\nabla y(n)\right]$  (3-68)

Define the error $\varepsilon(n) = d(n) - \theta^T(n)\nabla y(n)$. We further simplify as

$\lambda = \frac{2}{\|\nabla y(n)\|^2}\,\varepsilon(n)$  (3-69)

By substituting the above equation into Equation (3-66), we obtain

$\theta_k(n+1) = \theta_k(n) + \frac{1}{\|\nabla y(n)\|^2}\,\varepsilon(n)\,\nabla y_k(n)$  (3-70)

For adaptive IIR filtering, the above equation can be formulated as

$\theta(n+1) = \theta(n) + \frac{1}{\|\nabla y(n)\|^2}\,\varepsilon(n)\,\nabla y(n)$  (3-71)

or, equivalently, introducing a step size $\mu(n)$ to control the size of the change, we may write

$\theta(n+1) = \theta(n) + \frac{\mu(n)}{\|\nabla y(n)\|^2}\,\varepsilon(n)\,\nabla y(n)$  (3-72)

This is the so-called NLMS algorithm, summarized in Table 3-1, where the initial conditions are randomly chosen.
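A one-line Python sketch of this update (ours; the small regularizer eps that guards against a vanishing gradient norm is our addition, not part of the derivation):

    import numpy as np

    def nlms_step(theta, grad_y, e, mu, eps=1e-8):
        """One NLMS update, Eq. (3-72): theta += mu/||grad_y||^2 * e * grad_y."""
        return theta + mu / (np.dot(grad_y, grad_y) + eps) * e * grad_y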


3.7 Relationship between LMS-SAS and NLMS Algorithms

In this section, we show that the behavior of the NLMS algorithm is similar to that of the LMS-SAS algorithm from a global optimization perspective. Here we follow the lines of Widrow et al. [1] and assume that the algorithm converges to the vicinity of a steady-state point.

From Equation (3-18), we know that the estimated gradient vector is

$\tilde{\nabla}\xi(\theta(n)) = -\varepsilon(n)\,\nabla y(n)$  (3-73)

Define $\mathrm{N}(n)$ as the vector of gradient estimation noise at the $n$th iteration and $\nabla\xi(\theta(n))$ as the true gradient vector. Thus

$\tilde{\nabla}\xi(\theta(n)) = \nabla\xi(\theta(n)) + \mathrm{N}(n), \qquad \mathrm{N}(n) = \tilde{\nabla}\xi(\theta(n)) - \nabla\xi(\theta(n))$  (3-74)

If we assume that the NLMS algorithm has converged to the vicinity of a local steady-state point $\theta^*$, then $\nabla\xi(\theta(n))$ will be close to zero. Therefore, the gradient estimation noise will be

$\mathrm{N}(n) = \tilde{\nabla}\xi(\theta(n)) = -\varepsilon(n)\,\nabla y(n)$  (3-75)

The covariance of the noise is given by

$\mathrm{cov}[\mathrm{N}(n)] = E[\mathrm{N}(n)\mathrm{N}^T(n)] = E[\varepsilon^2(n)\,\nabla y(n)\nabla y^T(n)]$  (3-76)

We assume that $\varepsilon^2(n)$ is approximately uncorrelated with $\nabla y(n)$ (the same assumption as in [1]); thus, near the local minimum,

$\mathrm{cov}[\mathrm{N}(n)] = E[\varepsilon^2(n)]\,E[\nabla y(n)\nabla y^T(n)]$  (3-77)

We rewrite the NLMS algorithm as

$\theta(n+1) = \theta(n) - \frac{\mu(n)}{\|\nabla y(n)\|^2}\,\tilde{\nabla}\xi(\theta(n))$  (3-78)


Substituting Equation (3-74) into the above equation, we obtain

$\theta(n+1) = \theta(n) - \frac{\mu(n)}{\|\nabla y(n)\|^2}\left[\nabla\xi(\theta(n)) + \mathrm{N}(n)\right]$  (3-79)

where the last term is the appended perturbing noise. Its covariance, from Equation (3-77), is

$\mathrm{cov}\!\left[\frac{\mathrm{N}(n)}{\|\nabla y(n)\|^2}\right] \approx E[\varepsilon^2(n)]\,\Lambda$  (3-80)

where $\Lambda$ is a unit-norm matrix. Thus, near any local or global minimum, the NLMS algorithm has the variance of its perturbing random noise determined solely by $\mu(n)$ and $\varepsilon(n)$. This behavior is very different from the conventional LMS algorithm with monotonically decreasing step size, where the perturbation noise is determined by $\mu(n)$, $\varepsilon(n)$, and $\nabla y(n)$. Therefore, in the LMS algorithm the variance near the steady-state point is small because of $\nabla y(n) \approx 0$; hence the LMS algorithm has a small probability of escaping from any local minimum because of the small variance of the noisy gradient.

On the other hand, notice that the variance of the perturbing random noise in the LMS-SAS algorithm is $\mu(n)\varepsilon(n)$, which is also independent of the gradient and controlled by both $\mu(n)$ and $\varepsilon(n)$. Therefore, we can anticipate that the global optimization behavior of the NLMS algorithm near local minima is similar to that of the LMS-SAS algorithm. Far away from local minima, the behavior of LMS-SAS and NLMS is expected to be rather different.

3.8 Simulation Results


Table 3-2: Number of hits at the global minimum $\{a,b\} = \{0.906, -0.311\}$ and at the local minimum $\{a,b\} = \{-0.519, 0.114\}$ for each method.

We will identify the following unknown system,

$H(z) = \frac{0.05 - 0.4 z^{-1}}{1 - 1.1314 z^{-1} + 0.25 z^{-2}}$

by a reduced-order IIR adaptive filter of the form

$\hat{H}(z) = \frac{b}{1 - a z^{-1}}$

The main goal is to determine the values of the coefficients $\{a, b\}$ of the above equation such that the MSE is minimized to the global minimum. The excitation signal is chosen to be random Gaussian noise with zero mean and unit variance. There exist two minima of the MSE criterion performance surface, with the local minimum at $\{a,b\} = \{-0.519, 0.114\}$ and the global minimum at $\{a,b\} = \{0.906, -0.311\}$. Here we use three types of annealing schedule for the step size (see Figure 3-2, which shows that one is linear, one is sublinear, and the other one is supralinear):

$\mu_1(n) = 0.1\cos(\pi n / 2 n_{max})$
$\mu_2(n) = 0.1 - 0.1\,n/n_{max}$
$\mu_3(n) = \ldots$
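The two schedules whose closed forms survive above can be generated as follows (a sketch of ours; n_max is the total number of iterations):

    import numpy as np

    def step_schedules(n_max):
        """The first two annealing schedules for the step size mu(n)."""
        n = np.arange(n_max)
        mu1 = 0.1 * np.cos(np.pi * n / (2 * n_max))   # sublinear decay
        mu2 = 0.1 - 0.1 * n / n_max                   # linear decay
        return mu1, mu2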

Figure 3-2: Step size $\mu(n)$ for the SAS algorithm.

All results are based on 100 Monte Carlo runs with randomly chosen initial conditions of $\theta$ at each run. The convergence characteristics of $\theta$ toward the global minimum for the GLMS, LMS-SAS, and NLMS algorithms are shown in Figures 3-3, 3-4, and 3-5, respectively. The adaptation process with $\theta$ approaching the local minimum for the LMS, GLMS, and LMS-SAS algorithms is also depicted in Figures 3-6, 3-7, and 3-8, respectively, where $\theta$ is initialized to a point near the local minimum. Based on the simulation results, we can summarize the performance as follows:

- Figure 3-6 and rows 1-2 of Table 3-2 show that the LMS algorithm is likely to converge to the local minimum.
- Figures 3-3 and 3-7 and row 3 of Table 3-2 show that the GLMS algorithm might jump to the global minimum valley and converge to the global minimum, but it can also jump back to the local minimum valley and then converge to the local minimum. Srinivasan [56] claims that the GLMS algorithm could converge to the global minimum w.p. 1 by carefully choosing the cooling schedule $\beta(n)$. The cooling schedule is a crucial parameter, but it is difficult to determine it such that global optimization is guaranteed.
- Figures 3-4 and 3-8 and rows 4-5 of Table 3-2 show that the LMS-SAS algorithm is likely to converge to the global minimum with a proper step size. Even though the LMS-SAS


Figure 3-3: Global convergence of $\theta$ in the GLMS algorithm. A) Weights; B) Contour of $\theta$.

Figure 3-4: Global convergence of $\theta$ in the LMS-SAS algorithm. A) Weights; B) Contour of $\theta$.

algorithm stays most of its time near the global minimum, it still has a probability of converging to the local minimum.


Figure 3-5: Global convergence of $\theta$ in the NLMS algorithm. A) Weights; B) Contour of $\theta$.

Figure 3-6: Local convergence of $\theta$ in the LMS algorithm. A) Weights; B) Contour of $\theta$.

Figure 3-7: Local convergence of $\theta$ in the GLMS algorithm. A) Weights; B) Contour of $\theta$.


Figure 3-8: Local convergence of $\theta$ in the LMS-SAS algorithm. A) Weights; B) Contour of $\theta$.

On the other hand, the NLMS algorithm is

$\theta(n+1) = \theta(n) + \frac{\mu(n)}{\|\nabla y(n)\|^2}\,\varepsilon(n)\,\nabla y(n)$

The LMS-SAS algorithm adds a perturbing noise to avoid converging to the local minima, while the NLMS algorithm uses the inherent gradient-estimation noise to avoid converging to the local minima. Two different types of step size, $\mu(n)$ and $\mu(n)/\|\nabla y(n)\|^2$, are used by LMS-SAS and NLMS, respectively. Therefore, we need to fairly compare the performance of both algorithms in terms of global optimization, so we set up the three following experiments.

Here we use the same system identification scheme, i.e., we identify three unknown systems,

Example I: $H_I(z) = \frac{0.05 - 0.4 z^{-1}}{1 - 1.1314 z^{-1} + 0.25 z^{-2}}$
Example II: $H_{II}(z) = \ldots$
Example III: $H_{III}(z) = \ldots$

by a reduced-order adaptive filter of the form

$\hat{H}(z) = \frac{b}{1 - a z^{-1}}$


Figure 3-9: Contour of the MSE. A) Example I; B) Example II; C) Example III.

Figure 3-10: Weights (top) and $\|\nabla y(n)\|$ (bottom) in A) Example I; B) Example II; C) Example III.

The main goal is to determine the values of the coefficients $\{a, b\}$ of the above equation such that the MSE is minimized (to the global minimum). The excitation input is chosen to be random Gaussian noise with zero mean and unit variance. Figure 3-9 depicts the contour of the MSE criterion performance surface in Examples I, II, and III. Here, the step size for the NLMS algorithm is chosen to be a linearly decreasing function, $\mu_{NLMS}(n) = 0.1(1 - 2.5\times 10^{-5}n)$. The step sizes for the LMS-SAS algorithm are a family of linearly decreasing functions,

$\mu_{LMS-SAS}(n) = k(1 - 2.5\times 10^{-5}n)$, with $k = [0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4, 0.5]$  (3-91)

where we vary the step size $k$ but preserve the same annealing rate.


Table 3-3: Number of hits at the global minimum $\{0.906, -0.311\}$ and at the local minimum $\{-0.519, 0.114\}$ for each method (Example I).

Tables 3-3, 3-4, and 3-5 show the simulation results of global and local minimum hits by the LMS, LMS-SAS, and NLMS algorithms. The value of $\|\nabla y(n)\|$ is depicted in Figure 3-10. The larger $\|\nabla y(n)\|$ is, the smaller the increments used by the algorithm, i.e., the lower the probability of the algorithm escaping from the steady-state point. In Examples I and II, the global minimum valley has a sharper slope than the local valley. Therefore, Tables 3-3 and 3-4 show that the NLMS algorithm has a higher probability of reaching the global minimum than the other algorithms in Examples I and II. In Example III, the local minimum valley has a sharper slope than the global valley. Therefore, Table 3-5 shows that the NLMS algorithm has a lower probability of reaching the global minimum than the other algorithms in the Example III case.

3.10 Conclusion


Table 3-4: Example II for system identification

    Method                                    Global minimum   Local minimum
    LMS with constant mu                            20               80
    NLMS with mu_NLMS(n)                            89               11
    LMS-SAS with mu_LMS-SAS(n), k = 0.01            20               80
    LMS-SAS with mu_LMS-SAS(n), k = 0.02             9               91
    LMS-SAS with mu_LMS-SAS(n), k = 0.04             2               98
    LMS-SAS with mu_LMS-SAS(n), k = 0.06             1               99
    LMS-SAS with mu_LMS-SAS(n), k = 0.08             1               99
    LMS-SAS with mu_LMS-SAS(n), k = 0.09             1               99
    LMS-SAS with mu_LMS-SAS(n), k = 0.1              1               99
    LMS-SAS with mu_LMS-SAS(n), k = 0.2              2               98
    LMS-SAS with mu_LMS-SAS(n), k = 0.3              1               99
    LMS-SAS with mu_LMS-SAS(n), k = 0.4              1               99
    LMS-SAS with mu_LMS-SAS(n), k = 0.5              2               98

Table 3-5: Example III for system identification

    Method                                    Global minimum   Local minimum
    LMS with constant mu                            92                8
    NLMS with mu_NLMS(n)                            90               10
    LMS-SAS with mu_LMS-SAS(n), k = 0.01-0.5       100                0


From the diffusion equation, we have derived the transition probability of the LMS-SAS algorithm escaping from a steady-state point. Since the global minimum is always smaller than the local minima, the transition probability of the algorithm escaping from a local minimum is always larger than that of escaping from the global minimum. Hence, the algorithm will stay most of the time near the global minimum and eventually converge to the global minimum.

Since we use the instantaneous (stochastic) gradient instead of the expected value of the gradient, an estimation error naturally occurs. This gradient estimation error, when properly normalized, can be used to act as the perturbing noise. We have shown that the behavior of the NLMS algorithm with decreasing step size near a minimum is similar to that of the LMS-SAS algorithm from a global optimization perspective.

The global optimization performance of the LMS-SAS and NLMS algorithms depends entirely on the shape of the cost function. The sharper the local minima, the less likely the NLMS algorithm is to escape from the steady-state point. On the other hand, the broader the valley around a local minimum, the more difficult it is for the algorithm to escape from that valley.


CHAPTER 4
INFORMATION THEORETIC LEARNING

4.1 Introduction

The mean square error criterion has been extensively used in the field of adaptive systems [80], because of its analytical simplicity and the assumption of a Gaussian distribution for the error. Since the Gaussian distribution is totally characterized by its first- and second-order statistics, the MSE criterion can extract all information from a set of data. However, the assumption of a Gaussian distribution is not always true. Therefore, a criterion that considers higher-order statistics is necessary for the training of adaptive systems. Shannon [81] first introduced the entropy of a given probability distribution function, which provides a measure of the average information in the distribution. By using the Parzen window estimator [82], we can estimate the pdf directly from a set of data. It is quite straightforward to apply the entropy criterion to the system identification framework [6, 5]. The pdf of the error signal between the desired signal and the output signal of the adaptive filter must be as close as possible to a delta distribution, $\delta(\cdot)$. Hence, the supervised training problem becomes an entropy minimization problem, as suggested by Erdogmus et al. [6].

The kernel size of the Parzen window estimator is an important parameter in the global optimization procedure. It was conjectured by Erdogmus et al. [6] that for a sufficiently large kernel size, the local minima of the error entropy criterion can be eliminated. It was suggested that by starting with a large kernel size, and then slowly decreasing this parameter to a predetermined suitable value, the training algorithm can converge to the global minimum of the cost function. The error entropy criterion considered by Erdogmus et al. [6], however, does not consider the mean of the error signal, since entropy is invariant to translation. In this dissertation, we propose a modification to the error entropy criterion in order to take this point into account.


The proposed criterion with annealing of the kernel size is then shown to exhibit the conjectured global optimization behavior in the training of IIR filters.

4.2 Entropy and Mutual Information

Shannon [81] defined the entropy of a probability distribution $P = \{p_1, p_2, \ldots, p_N\}$ as

$H_S(P) = \sum_{k=1}^{N} p_k \log\frac{1}{p_k}, \qquad \sum_{k=1}^{N} p_k = 1,\ p_k \ge 0$  (4-1)

which measures the average amount of information contained in a random variable $X$ with probabilities $p_k = P(x = x_k)$, $k = 1, 2, \ldots, N$, at the values $x_1, x_2, \ldots, x_N$. A message contains no information if it is completely known; the more information it contains, the less predictable it is. Information theory has broad application in the field of communication systems [83]. But entropy can be defined in a more general form. According to Renyi [58], the mean of the real numbers $x_1, x_2, \ldots, x_N$ with positive weights $p_1, p_2, \ldots, p_N$ has the form

$\bar{x} = \varphi^{-1}\!\left(\sum_{k=1}^{N} p_k\,\varphi(x_k)\right)$  (4-2)

where $\varphi(x)$ is a Kolmogorov-Nagumo function, an arbitrary continuous and strictly monotonic function.

An entropy measure $H$ generally obeys the formula

$H = \varphi^{-1}\!\left(\sum_{k=1}^{N} p_k\,\varphi(I(p_k))\right)$  (4-3)

where $I(p_k) = -\log(p_k)$ is Hartley's information measure [84].

In order to satisfy the additivity condition, $\varphi(\cdot)$ can be either $\varphi(x) = x$ or $\varphi(x) = 2^{(1-\alpha)x}$. When $\varphi(x) = x$, the entropy measure becomes Shannon's entropy. When $\varphi(x) = 2^{(1-\alpha)x}$, the entropy measure becomes Renyi's entropy of order $\alpha$, denoted as

$H_{R\alpha} = \frac{1}{1-\alpha}\log\!\left(\sum_{k=1}^{N} p_k^\alpha\right), \qquad \alpha > 0 \text{ and } \alpha \ne 1$  (4-4)


The well-known relationship between Shannon's and Renyi's entropy is

$H_{R\alpha} \le H_S \le H_{R\alpha'}, \qquad \alpha > 1 > \alpha' > 0$  (4-5)

$\lim_{\alpha\to 1} H_{R\alpha} = H_S$  (4-6)

In order to further relate Renyi's and Shannon's entropy, the distance of $P = (p_1, p_2, \ldots, p_N)$ to the origin $P^* = (0, 0, \ldots, 0)$ is defined as

$V_\alpha = \sum_{k=1}^{N} p_k^\alpha = \|P\|_\alpha^\alpha$  (4-7)

where $V_\alpha$ is called the $\alpha$-norm of the probability distribution [85].

Renyi's entropy in terms of $V_\alpha$ is

$H_{R\alpha} = \frac{1}{1-\alpha}\log(V_\alpha)$  (4-8)

Renyi's entropy of a different order $\alpha$ means a different $\alpha$-norm. Shannon's entropy can be viewed as the limiting case $\alpha\to 1$ of the probability distribution norm. Renyi's entropy is essentially a monotonic function of the distance of the probability to the origin. $H_{R2} = -\log\sum_{k=1}^{N} p_k^2$ is called the quadratic entropy, because of the quadratic form in the probability.

We can further extend the entropy definition to a continuous random variable $Y$ with pdf $f_Y(y)$ as [58]

$H_{R\alpha} = \frac{1}{1-\alpha}\log\!\left(\int_{-\infty}^{\infty} f_Y(z)^\alpha\,dz\right)$  (4-9)

$H_{R2} = -\log\!\left(\int_{-\infty}^{\infty} f_Y(z)^2\,dz\right)$  (4-10)

It is important to mention that Renyi's quadratic entropy involves the use of the square of the pdf.

Because the Shannon entropy is defined as a weighted sum of the logarithm of the pdf, it is difficult to use directly in an information-theoretic criterion. Since we cannot directly use the pdf (unless its form is known a priori), we use nonparametric estimators. Hence, the Parzen window method [82] is used in this dissertation. The


Parzen window estimator is a kernel-based estimator,

$\hat{f}_Y(z;y) = \frac{1}{N}\sum_{i=1}^{N}\kappa(z-y_i)$  (4-11)

where $y_i \in R^M$ are the observed samples and $\kappa(\cdot)$ is a kernel function. The Parzen window estimator can be viewed as a convolution of the kernel function with the observed samples. The kernel function in this dissertation is chosen to be the Gaussian function

$\kappa(z) = G(z,\sigma^2) = \frac{1}{(2\pi\sigma^2)^{M/2}}\exp\!\left(-\frac{z^T z}{2\sigma^2}\right)$  (4-12)

Here, we will further develop an ITL criterion to estimate the mutual information among random variables. Mutual information is able to quantify the entropy between pairs of random variables; hence mutual information is also very important in engineering problems.

Mutual information is defined in Shannon's entropy terms as $I(x,y) = H(y) - H(y|x)$, which is not easily estimated from samples. An alternative estimate of the mutual information between two probability density functions (pdfs) $f(x)$ and $g(x)$ is the Kullback-Leibler (KL) divergence [86], defined as

$K(f,g) = \int f(x)\log\frac{f(x)}{g(x)}\,dx$  (4-13)

Similarly, Renyi's divergence measure with order $\alpha$ for two pdfs $f(x)$ and $g(x)$ is

$H_{R\alpha}(f,g) = \frac{1}{\alpha-1}\log\int f(x)^\alpha\,g(x)^{1-\alpha}\,dx$  (4-14)

The relation between the KL divergence and Renyi's divergence measure is

$\lim_{\alpha\to 1} H_{R\alpha}(f,g) = K(f,g)$  (4-15)

The KL divergence measure between two random variables $Y_1$ and $Y_2$ essentially estimates the divergence between the joint pdf and the marginal pdfs. That is,

$I_S(Y_1,Y_2) = KL\!\left(f_{Y_1Y_2}(z_1,z_2),\ f_{Y_1}(z_1)f_{Y_2}(z_2)\right) = \iint f_{Y_1Y_2}(z_1,z_2)\log\frac{f_{Y_1Y_2}(z_1,z_2)}{f_{Y_1}(z_1)f_{Y_2}(z_2)}\,dz_1\,dz_2$  (4-16)
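For concreteness, a short Python sketch (ours, not from the original text) of the Parzen estimator of Equations (4-11)-(4-12) for scalar data, together with the resulting pairwise-Gaussian estimate of Renyi's quadratic entropy of Equation (4-10) (the so-called information potential):

    import numpy as np

    def parzen_pdf(z, samples, sigma):
        """Parzen estimate f_hat(z) with a Gaussian kernel, scalar data."""
        dz = np.subtract.outer(z, samples)
        return np.mean(np.exp(-dz**2 / (2 * sigma**2))
                       / np.sqrt(2 * np.pi * sigma**2), axis=-1)

    def renyi_quadratic_entropy(samples, sigma):
        """H_R2 = -log(integral of f_hat^2): kernels convolve pairwise to G(., 2s^2)."""
        dz = np.subtract.outer(samples, samples)
        ip = np.mean(np.exp(-dz**2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2))
        return -np.log(ip)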


where $f_{Y_1Y_2}(z_1,z_2)$ is the joint pdf, and $f_{Y_1}(z_1)$ and $f_{Y_2}(z_2)$ are the marginal pdfs. Because the divergence measures mentioned above are non-quadratic in the pdfs, they cannot easily be estimated with the information potential. The following distance measures between two pdfs, which contain only quadratic terms of the pdfs, are more practical.

Using the Cauchy-Schwartz inequality, the distance measure between two pdfs $f(x)$ and $g(x)$ is

$I_{CS}(f,g) = \log\frac{\left(\int f(x)^2 dx\right)\left(\int g(x)^2 dx\right)}{\left(\int f(x)g(x)\,dx\right)^2}$  (4-19)

It is obvious that $I_{CS}(f,g) \ge 0$, and the equality holds true if and only if $f(x) = g(x)$.

Similarly, using the Euclidean distance, the distance measure between two pdfs $f(x)$ and $g(x)$ is

$I_{ED}(f,g) = \int\left(f(x)-g(x)\right)^2 dx = \int f(x)^2 dx + \int g(x)^2 dx - 2\int f(x)g(x)\,dx$  (4-20)

It is also obvious that $I_{ED}(f,g) \ge 0$, and the equality holds true if and only if $f(x) = g(x)$.

4.3 Adaptive IIR Filter with Euclidean Distance Criterion

The error signal $e(n)$ is the difference between the desired signal $d(n)$ and the output signal $y(n)$ of the adaptive IIR filter, which is

$e(n) = d(n) - y(n)$  (4-22)


It is obvious that the goal of the algorithm is to adjust the weights such that the error pdf $f_e$ is as close as possible to the delta distribution $\delta(\cdot)$. Hence, the Euclidean distance criterion for the adaptive IIR filters is defined as

$I_{ED}(f_e) = \int_{-\infty}^{\infty}\left(f_e(\varepsilon) - \delta(\varepsilon)\right)^2 d\varepsilon = \int_{-\infty}^{\infty} f_e(\varepsilon)^2\,d\varepsilon - 2f_e(0) + c$  (4-23)

where $c$ stands for the portion of this Euclidean distance measure that does not depend on the weights of the adaptive system. Notice that the integral of the square of the error pdf appears exactly as in the definition of Renyi's quadratic entropy. Therefore, it can be estimated directly from its $N$ samples by a Parzen window estimator with a Gaussian kernel of variance $\sigma^2$, exactly as described in [6, 5]:

$\hat{f}_e(\varepsilon) = \frac{1}{N}\sum_{i=1}^{N}\kappa(\varepsilon - e_i, \sigma^2)$  (4-24)

If $N \to \infty$, then $\hat{f}_e(\varepsilon) = f_e(\varepsilon) * \kappa(\varepsilon,\sigma^2)$, where $*$ denotes the convolution operator. Thus, using a Parzen window estimator for the error pdf is equivalent to adding an independent random noise with pdf $\kappa(\varepsilon,\sigma^2)$ to the error. The error, with the additive noise, becomes $d - y + n = (d + n) - y$. This is similar to injecting a random noise into the desired signal, as suggested by Wang et al. [87]. The advantage of our approach is that we do not explicitly generate noise samples; we simply take advantage of the estimation noise produced by the Parzen estimator, which, as demonstrated above, works as an additive, independent noise source. The kernel size, which controls the variance of the hypothetical noise term, should be annealed during the adaptation, just like the variance of the noise injected by Wang et al. [87]. From the injected-noise point of view, the algorithm behaves similarly to the well-known stochastic annealing algorithm: the noise added to the desired signal backpropagates through the error gradient, resulting in perturbations of the weight updates proportional to the weight sensitivity. However, since our algorithm does not explicitly use a noise signal, its operation is more similar to convolution smoothing. For a sufficiently large kernel size, the local minima of the cost function are expected to be smoothed out.


By substituting the Parzen window estimator for the error pdf into the integral of Equation (4-23), and recognizing that the convolution of two Gaussian functions is also a Gaussian, we obtain the ITL criterion (after dropping all the terms that are independent of the weights):

$I_{ED}(f_e) = \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N} G(e_i - e_j,\ 2\sigma^2) - \frac{2}{N}\sum_{i=1}^{N} G(e_i,\ \sigma^2)$

The gradient vector $\partial I_{ED}(f_e)/\partial\theta$ to be used in the steepest descent algorithm is obtained as

$\frac{\partial I_{ED}(f_e)}{\partial\theta} = \frac{1}{2N^2\sigma^2}\sum_{i=1}^{N}\sum_{j=1}^{N}(e_i - e_j)\,G(e_i - e_j, 2\sigma^2)\left(\frac{\partial y(n_i)}{\partial\theta} - \frac{\partial y(n_j)}{\partial\theta}\right) - \frac{2}{N\sigma^2}\sum_{i=1}^{N} e_i\,G(e_i,\sigma^2)\,\frac{\partial y(n_i)}{\partial\theta}$

where the gradient $\partial y/\partial\theta$ is computed with the recursion

$\frac{\partial y(n)}{\partial\theta} = \phi(n) + \sum_{j=1}^{M-1} b_j\,\frac{\partial y(n-j)}{\partial\theta}$

and $\phi(n) = [y(i-1), y(i-2), \ldots, y(i-N), x(i), x(i-1), \ldots, x(i-M)]^T$.

4.4 Parzen Window Estimator and Convolution Smoothing Function

4.4.1 Similarity
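The batch criterion and its error gradient can be sketched as follows (ours, under the reconstruction above; the function returns the gradient with respect to the error samples, from which the weight gradient follows by the chain rule with $\partial e/\partial\theta = -\partial y/\partial\theta$):

    import numpy as np

    def ied_and_gradient(e, sigma):
        """Parzen-estimated Euclidean distance criterion and d(IED)/d(e)."""
        N = len(e)
        de = np.subtract.outer(e, e)
        G2 = np.exp(-de**2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2)
        G1 = np.exp(-e**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
        ied = G2.mean() - 2.0 * G1.mean()
        dI_de = (-(de * G2).sum(axis=1) / (N**2 * sigma**2)
                 + 2.0 * e * G1 / (N * sigma**2))
        return ied, dI_de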


- $\kappa(x)$ is piecewise differentiable with respect to $x$.

The kernel function in this thesis is chosen to be the Gaussian function

$\kappa(x) = G(x,\sigma^2) = \frac{1}{(2\pi\sigma^2)^{n/2}}\exp\!\left(-\frac{x^T x}{2\sigma^2}\right)$

It is obvious that $\kappa(x)$ is a pdf, $\lim_{\sigma\to 0}\kappa(x) = \delta(x)$, and $\kappa(x)$ is a Gaussian pdf. Hence $\kappa(x)$ satisfies the properties of a smoothing function.

The objective of the convolution smoothing function is to smooth the nonconvex cost function. The parameter $\beta$ controls the dispersion of $\hat{h}_\beta(x)$, which controls the degree of cost function smoothing. In the beginning stage of the optimization, $\beta$ is set to be large, such that $\hat{h}_\beta(x)$ can smooth out all the local minima of the cost function. Since the global minimum of the smoothed cost function does not coincide with the global minimum of the actual original cost function, $\beta$ is slowly decreased to zero. As a result, the smoothed cost function gradually returns to the original cost function, and the algorithm can converge to the global minimum of the actual cost function.

Therefore, $\kappa(x)$ plays the same role as $\hat{h}_\beta(x)$ in smoothing the nonconvex cost function. The parameter $\sigma$ controls the dispersion of $\kappa(x)$, which can control the degree of cost function smoothing. Similarly, the parameter $\sigma$ is set to be large and then slowly decreased to zero. Therefore, the ITL algorithm with a properly annealed parameter $\sigma$ can converge to the global minimum.


which is the expectation with respect to the random variable $\beta\eta$. The standard deviation of $\beta\eta$ is controlled by $\beta$. Hence, the smoothed cost function can be regarded as an averaged version of the actual cost function.

For the ITL algorithm, we change the shape of the pdf via the Parzen window estimator at each particular point of $\theta$; thus we change the cost function at each point of $\theta$. The estimated cost function is the criterion evaluated on the perturbed-error pdf,

$\hat{V}(\theta,\sigma) = \int f_{e+\eta}(\varepsilon;\theta)\,d\varepsilon$  (4-30)

where $\eta$ is a Gaussian noise with zero mean and variance $\sigma^2$. We conclude that the SAS method adds an additional noise to the weights in order to force the algorithm to converge to the global minimum, while the ITL algorithm adds an additive noise to the error in order to force the algorithm to converge to the global minimum. The additive noise added to the error affects the variance of the weight updates proportionally to the sensitivity of each weight, $\partial e/\partial\theta_i$. This means that a single noise source is translated internally into different noise strengths for each weight.

4.5 Analysis of Weak Convergence to the Global Optimum for ITL


where $\mathrm{N}$ is the additive noise. Here the gradient of the cost function used in the steepest descent algorithm is

$\frac{\partial J}{\partial\varepsilon}\frac{\partial\hat{\varepsilon}}{\partial\theta} = \frac{\partial J}{\partial\varepsilon}\left(\frac{\partial\varepsilon}{\partial\theta} + \mathrm{N}(\theta)\right) = \frac{\partial J}{\partial\varepsilon}\frac{\partial\varepsilon}{\partial\theta} + \mathrm{N}(\theta)\frac{\partial J}{\partial\varepsilon}$  (4-32)

where $J$ is the cost function. For the ITL algorithm, the cost function is

$J = I_{ED}(f_e) = \int_{-\infty}^{\infty}\left(f_e(\varepsilon) - \delta(\varepsilon)\right)^2 d\varepsilon$  (4-33)

Therefore

$\frac{\partial J}{\partial\varepsilon} = 2\left(f_e(\varepsilon) - \delta(\varepsilon)\right)$  (4-34)

Here we write the ITL algorithm as an Ito integral,

$\theta_t = \theta_a + \int_a^t m(\theta_s,s)\,ds + \int_a^t \sigma(\theta_s,s)\,dW_s$  (4-35)

where

$m(\theta_t,t) = -\mu(t)\,\frac{\partial I_{ED}(f_e)}{\partial\theta}, \qquad \sigma(\theta_t,t) = \mu(t)\,\frac{\partial J}{\partial\varepsilon}$

With a derivation similar to that of Equation (3-52) for the LMS-SAS algorithm, we obtain the transition probability of the ITL algorithm escaping from a local minimum, for the scalar case, as

$p(\theta,t\,|\,\theta^*,t_0) = G\!\left(\theta-\theta^*,\ \left(\frac{\partial J}{\partial\varepsilon}\right)^{\!2}\!(\theta^*)\int_{t_0}^{t}\mu(s)\,ds\right)$


If the input signal is set to have a Gaussian distribution, $N(\mu_x,\sigma_x^2)$, then the desired signal will also be Gaussian, $N(\mu_d,\sigma_d^2)$. The output signal of the adaptive filter will be Gaussian as well, $N(\mu_y,\sigma_y^2)$. Here we want to calculate the analytical expression of the Euclidean distance in the simulation example of the system identification framework for the unknown system

$H(z) = \frac{b_1 + b_2 z^{-1}}{1 - a_1 z^{-1} - a_2 z^{-2}}$  (4-40)

identified by the reduced-order adaptive filter $\hat{H}_a(z) = b/(1 - a z^{-1})$.

Here the desired output signal is realized as

$d(i) = b_1 x(i) + b_2 x(i-1) + a_1 d(i-1) + a_2 d(i-2)$  (4-41)

Then

$\mu_d = \frac{b_1 + b_2}{1 - a_1 - a_2}\,\mu_x$  (4-42)

Taking the variance on both sides of Equation (4-41), we obtain

$R_d(0) = (b_1^2 + b_2^2 + 2b_1 b_2 a_1)\,R_x(0) + (a_1^2 + a_2^2)\,R_d(0) + 2a_1 a_2\,R_d(1)$  (4-43)

where $R_d(t)$ and $R_x(t)$ are the covariance functions of the desired output signal and the input signal, respectively. Right-shifting Equation (4-41) by one unit, we obtain

$d(i+1) - a_1 d(i) = a_2 d(i-1) + b_1 x(i+1) + b_2 x(i)$  (4-44)


Taking the covariance of Equations (4-41) and (4-44), we obtain

$R_d(1) - a_1 R_d(0) = (b_1 b_2 + b_1 b_2 a_2)\,R_x(0) + a_1 a_2\,R_d(0) + a_2^2\,R_d(1)$  (4-45)

From Equations (4-43) and (4-45), we can obtain

$R_d(0) = \frac{(b_1^2 + b_2^2 + 2b_1 b_2 a_1)(1 - a_2) + 2b_1 b_2 a_1 a_2}{(1 - a_1^2 - a_2^2)(1 - a_2) - 2a_1^2 a_2}\,R_x(0)$  (4-46)

Similarly, we can calculate $(\mu_y,\sigma_y^2)$ for the output signal of the adaptive filter, $y(i) = b\,x(i) + a\,y(i-1)$, as

$\mu_y = \frac{b}{1 - a}\,\mu_x$  (4-48)

Taking the variance of the above equation, we obtain

$R_y(0) = b^2 R_x(0) + a^2 R_y(0)$  (4-49)

so that

$R_y(0) = \frac{b^2}{1 - a^2}\,R_x(0)$  (4-50)

We can also calculate the covariance between the desired output signal and the output signal of the adaptive filter as follows. Taking the covariance of Equation (4-41) and $y(i) = b\,x(i) + a\,y(i-1)$, we obtain

$R_{dy}(0) = (b_1 b + b_2 a b)\,R_x(0) + a_1 a\,R_{dy}(0) + a_2 a\,R_{dy}(1)$  (4-51)

Taking the covariance of Equation (4-41) and $y(i-1) = b\,x(i-1) + a\,y(i-2)$, we obtain

$R_{dy}(1) = (b_2 b + a_1 b_1 b)\,R_x(0) + a_1 a\,R_{dy}(1) + a_2 a\,R_{dy}(0)$  (4-52)

From Equations (4-51) and (4-52), we obtain

$R_{dy}(0) = \frac{(b_1 + b_2 a)(1 - a_1 a) + a_2 a (b_2 + b_1 a_1)}{(1 - a_1 a)^2 - (a_2 a)^2}\,b\,R_x(0)$  (4-53)


Finally, we can obtain

$\mu_e = \mu_d - \mu_y$  (4-54)

$\sigma_e^2 = R_d(0) + R_y(0) - 2R_{dy}(0) + \sigma^2$  (4-55)

where $\sigma_e^2$ increases by $\sigma^2$, corresponding to the Gaussian kernel function of the Parzen window estimator. The Euclidean distance is calculated as

$I_{ED}(f_e) = \frac{1}{2\sqrt{\pi\sigma_e^2}} - \frac{2}{\sqrt{2\pi\sigma_e^2}}\exp\!\left(-\frac{\mu_e^2}{2\sigma_e^2}\right) + c$  (4-56)

Figure 4-2 shows the contours of the analytical expression for the ITL criterion (for comparison, Figure 4-3 shows the contours of the analytical expression for the entropy criterion $\int_{-\infty}^{\infty} f^2(\varepsilon)\,d\varepsilon$). The convergence characteristics of the adaptation process for the filter coefficients towards the global optimum are shown in Figure 4-1. In the beginning of the adaptation process, the estimated error variance $\sigma_e^2$ is large because of the significantly large value of the kernel size $\sigma^2$ in the Gaussian kernel function of the Parzen window estimator. Therefore, the first term on the right-hand side of Equation (4-56) is considerably smaller than the second term and can be neglected in the beginning stage of the adaptation process. We observe that the second term concentrates more tightly around $\mu_e = \mu_d - \mu_y = 0$ as $\sigma_e^2$, i.e., $\sigma^2$, increases. The straight line in Figure 4-1B is the line $\mu_e = \mu_d - \mu_y = 0$. It is clear from Figure 4-1 that the weight track of the ITL algorithm converges towards the line $\mu_e = 0$, as predicted by the theoretical analysis given above. When the size $\sigma^2$ of the Gaussian kernel function slowly decreases during adaptation, the ITL cost function gradually converges back to the original one, which might exhibit local minima.

4.7 Simulation Results
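The closed-form surface assembled from Equations (4-42)-(4-56) can be evaluated directly; the sketch below (ours, following the reconstruction above; mu_x and Rx0 parameterize the input statistics) is convenient for reproducing contour plots such as Figures 4-2 and 4-4:

    import numpy as np

    def ied_surface(a, b, a1, a2, b1, b2, sigma2, mu_x=1.0, Rx0=1.0):
        """Analytical Euclidean distance for the ARMA(2,2) plant and b/(1 - a z^-1)."""
        mu_d = (b1 + b2) * mu_x / (1.0 - a1 - a2)                     # Eq. (4-42)
        num = (b1**2 + b2**2 + 2*b1*b2*a1) * (1 - a2) + 2*b1*b2*a1*a2
        den = (1 - a1**2 - a2**2) * (1 - a2) - 2 * a1**2 * a2
        Rd0 = num / den * Rx0                                         # Eq. (4-46)
        mu_y = b * mu_x / (1.0 - a)
        Ry0 = b**2 * Rx0 / (1.0 - a**2)                               # Eq. (4-50)
        Rdy0 = (((b1 + b2*a) * (1 - a1*a) + a2*a * (b2 + b1*a1))
                / ((1 - a1*a)**2 - (a2*a)**2)) * b * Rx0              # Eq. (4-53)
        mu_e = mu_d - mu_y                                            # Eq. (4-54)
        var_e = Rd0 + Ry0 - 2*Rdy0 + sigma2                           # Eq. (4-55)
        return (1.0 / (2.0 * np.sqrt(np.pi * var_e))
                - 2.0 / np.sqrt(2.0 * np.pi * var_e)
                * np.exp(-mu_e**2 / (2.0 * var_e)))                   # Eq. (4-56)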


Figure 4-1: Convergence characteristics of the weights for Example I by ITL. A) Weights; B) Contour of the weights.

Figure 4-2: Euclidean distance of Example I with A) $\sigma^2 = 0$; B) $\sigma^2 = 1$; C) $\sigma^2 = 2$; D) $\sigma^2 = 3$.


Figure 4-3: Entropy $\int_{-\infty}^{\infty} f^2(\varepsilon)\,d\varepsilon$ of Example I with A) $\sigma^2 = 1$; B) $\sigma^2 = 2$; C) $\sigma^2 = 3$; D) $\sigma^2 = 4$; E) $\sigma^2 = 5$; F) $\sigma^2 = 6$; G) $\sigma^2 = 7$; H) $\sigma^2 = 8$; I) $\sigma^2 = 9$.


by the following reduced-order adaptive IIR filter

$\hat{H}_a(z) = \frac{b}{1 - a z^{-1}}$

The main goal is to determine the values of the coefficients $\{a, b\}$ such that the Euclidean distance criterion is minimized. If we assume that the error pdf $f_e$ is Gaussian,

$f_e(e;w) = \frac{1}{\sqrt{2\pi\sigma_e^2}}\exp\!\left(-\frac{(e-\mu_e)^2}{2\sigma_e^2}\right)$  (4-60)

then we can derive the estimated Euclidean distance as

$\hat{I}_{ED}(f_e) = \frac{1}{2\sqrt{\pi\sigma_e^2}} - \frac{2}{\sqrt{2\pi\sigma_e^2}}\exp\!\left(-\frac{\mu_e^2}{2\sigma_e^2}\right)$  (4-61)

Thus we plot, experimentally, the contours of the Euclidean distance criterion performance surface for different $\sigma$ for Examples I and II in Figures 4-2 and 4-4, respectively. They show that the local minima of the Euclidean distance criterion performance surface disappear with a large kernel size. Thus, by carefully controlling the kernel size, the algorithm can converge to the global minimum.

The input signal is random Gaussian noise with zero mean and unit variance. There exist several minima of the Euclidean distance criterion performance surface with a small kernel size in both examples. However, there exists a sole global minimum of the Euclidean distance criterion surface with a sufficiently large kernel size. In this simulation, the kernel size is chosen to be sufficiently large in the starting stage and then slowly decreased to a predetermined small value, which is the trade-off between low bias and low variance. In this way, the algorithm can converge to the global minimum. The step size for the algorithm is a constant value of 0.002. The simulation results are based on 100 Monte Carlo runs with randomly chosen initial weights at each run. The simulation results show that the algorithm converges to the global minimum 100% of the time for both examples. The convergence


Figure 4-4: Euclidean distance of Example II with A) $\sigma^2 = 0$; B) $\sigma^2 = 1$; C) $\sigma^2 = 2$; D) $\sigma^2 = 3$.


Figure 4-5: Convergence characteristics of the weights for Example II by ITL. A) Weights; B) Contour of the weights.

characteristics of the adaptation process with the weights approaching the global minimum are shown in Figures 4-1 and 4-5, respectively, where the initial weights are chosen at a point near the local minimum.

4.8 Comparison of NLMS and ITL Algorithms

Here we use the same system identification scheme, i.e., we identify the unknown system of

Example I: $H_I(z) = \frac{0.05 - 0.4 z^{-1}}{1 - 1.1314 z^{-1} + 0.25 z^{-2}}$

by a reduced-order adaptive filter of the form

$\hat{H}(z) = \frac{b}{1 - a z^{-1}}$


Table 4-1: Number of hits (global/local) for Examples I, II, and III.

    Method     Example I   Example II   Example III
    LMS          36/64       20/80        92/8
    LMS-SAS      96/4         1/99       100/0
    NLMS        100/0        89/11        90/10
    ITL         100/0       100/0        100/0

The main goal is to determine the values of the coefficients $\{a, b\}$ of the above equation such that the MSE is minimized (global minimum). The input signal is chosen to be random Gaussian noise with zero mean and unit variance. The step size of the LMS-SAS and NLMS algorithms is chosen to be a linearly decreasing function, $\mu(n) = 0.1(1 - 5\times 10^{-5}n)$, and a constant step size $\mu = 0.001$ is used for the LMS and ITL algorithms. The kernel size is chosen to be a linearly decreasing function, $\sigma^2 = 0.3(1 - 5\times 10^{-5}n) + 0.5$, for the ITL algorithm.

Table 4-1 shows the comparison of the numbers of global and local minimum hits for the algorithms. The results are given by 100 Monte Carlo simulations with random initial conditions of $\theta$ at each run. It is clear from Table 4-1 that the ITL algorithm is more successful in obtaining the global minimum than the other algorithms.

In order to understand the behavior of the ITL solution, we investigate the $L_p$ norms of the impulse response error vectors between the optimal solutions obtained by the MSE and the ITL criteria. Assuming the infinite impulse response of the unknown system, given by $h_i$, $i = 0, \ldots, \infty$, and the infinite impulse response of the trained adaptive filter, given by $h_{ai}$, $i = 0, \ldots, \infty$, can both be truncated at $M$, yet preserve most of the power contained within, we consider the following impulse response error norm criterion:

Impulse Response Criterion: $L_p = \sqrt[p]{\sum_{i=0}^{M}|h_i - h_{ai}|^p}$

Table 4-2 shows the impulse response $L_p$ error norms for the adaptive IIR filters trained with the MSE and ITL criteria. We see from these results that the ITL criterion is more of a minimax-type algorithm, as it provides a smaller $L_\infty$ norm for the impulse response error compared to MSE, which yields an $L_2$ norm error minimization.
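A direct Python sketch of this criterion (ours; passing p = np.inf yields the minimax $L_\infty$ norm discussed here):

    import numpy as np

    def impulse_response_lp(h, ha, p):
        """L_p error norm between two truncated impulse responses."""
        d = np.abs(np.asarray(h) - np.asarray(ha))
        return d.max() if np.isinf(p) else (d**p).sum() ** (1.0 / p)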


Table 4-2: Impulse response $L_p$ error norms ($p$ = 1, 2, 3, 4, 5, 10, 100, 1000, $\infty$) for the adaptive IIR filters trained with the MSE and ITL criteria.

If the MSE solution is desired, either the NLMS is chosen or, if a more robust search is desired, the ITL can be used. However, after ITL converges, the LMS algorithm should be used, starting from the ITL solution, to seek the global optimum of the MSE. As demonstrated, the ITL and MSE global minima are close to each other.

4.9 Conclusion

The solution of the ITL is different from the MSE optimization; however, their minima are in the same region of weight space. Therefore, for a more robust global search, we recommend using ITL and, when it converges, switching to the MSE cost using as initial conditions the weight values found with ITL.


In order to demonstrate the effectiveness of the proposed global optimizations, they are applied to two practical examples: system identification with the Kautz filter and nonlinear equalization.

5.1 System Identification with Kautz Filter

The stochastic gradient updates for the Kautz filter weights $\theta_k$ and the pole $\beta = \alpha + j\Omega$ are

$\Delta\theta_k = -\mu\frac{\partial E}{\partial\theta_k} = \mu\,e(n)\,\varphi_k(n)$  (5-1)

$\Delta\alpha = -\mu\frac{\partial E}{\partial\alpha} = \mu\,e(n)\sum_{k=0}^{d}\theta_k\frac{\partial\varphi_k(n)}{\partial\alpha}$  (5-2)

$\Delta\Omega = -\mu\frac{\partial E}{\partial\Omega} = \mu\,e(n)\sum_{k=0}^{d}\theta_k\frac{\partial\varphi_k(n)}{\partial\Omega}$  (5-3)

where $\mu$ is a step size. The gradient recursions $\partial\varphi_k/\partial\alpha$ and $\partial\varphi_k/\partial\Omega$ follow by differentiating the tap recursions of Equations (2-33)-(2-36); those for $\varphi_0$, Equations (5-4)-(5-6), are analogous to Equation (5-7) below.


$\frac{\partial\varphi_1(n)}{\partial\alpha} = \frac{1}{\sqrt{2}}\left(-(1-\alpha)\frac{\sqrt{1-(\alpha^2+\Omega^2)}}{\sqrt{(1-\alpha)^2+\Omega^2}} - \alpha\frac{\sqrt{(1-\alpha)^2+\Omega^2}}{\sqrt{1-(\alpha^2+\Omega^2)}}\right)\left(u(n-1)+u(n)\right) + 2\alpha\frac{\partial\varphi_1(n-1)}{\partial\alpha} - (\alpha^2+\Omega^2)\frac{\partial\varphi_1(n-2)}{\partial\alpha} + 2\varphi_1(n-1) - 2\alpha\varphi_1(n-2)$  (5-7)

and

$\frac{\partial\varphi_k(n)}{\partial\alpha} = 2\alpha\frac{\partial\varphi_k(n-1)}{\partial\alpha} - (\alpha^2+\Omega^2)\frac{\partial\varphi_k(n-2)}{\partial\alpha} + (\alpha^2+\Omega^2)\frac{\partial\varphi_{k-2}(n)}{\partial\alpha} - 2\alpha\frac{\partial\varphi_{k-2}(n-1)}{\partial\alpha} + \frac{\partial\varphi_{k-2}(n-2)}{\partial\alpha} + 2\varphi_k(n-1) - 2\alpha\varphi_k(n-2) + 2\alpha\varphi_{k-2}(n) - 2\varphi_{k-2}(n-1)$  (5-8)

$\frac{\partial\varphi_k(n)}{\partial\Omega} = 2\alpha\frac{\partial\varphi_k(n-1)}{\partial\Omega} - (\alpha^2+\Omega^2)\frac{\partial\varphi_k(n-2)}{\partial\Omega} + (\alpha^2+\Omega^2)\frac{\partial\varphi_{k-2}(n)}{\partial\Omega} - 2\alpha\frac{\partial\varphi_{k-2}(n-1)}{\partial\Omega} + \frac{\partial\varphi_{k-2}(n-2)}{\partial\Omega} - 2\Omega\varphi_k(n-2) + 2\Omega\varphi_{k-2}(n)$  (5-9)

Here

$\nabla y(n) = \left[\varphi^T(n),\ \sum_{k=0}^{d}\theta_k\frac{\partial\varphi_k(n)}{\partial\alpha} + j\sum_{k=0}^{d}\theta_k\frac{\partial\varphi_k(n)}{\partial\Omega}\right]$  (5-10)

Hence, the NLMS algorithm becomes

$\Delta\theta_k = \frac{\mu}{\|\nabla y(n)\|^2}\,e(n)\,\varphi_k(n)$  (5-11)

$\Delta\alpha = \frac{\mu}{\|\nabla y(n)\|^2}\,e(n)\sum_{k=0}^{d}\theta_k\frac{\partial\varphi_k(n)}{\partial\alpha}$  (5-12)

$\Delta\Omega = \frac{\mu}{\|\nabla y(n)\|^2}\,e(n)\sum_{k=0}^{d}\theta_k\frac{\partial\varphi_k(n)}{\partial\Omega}$  (5-13)

Here, consider the system identification example by Silva [88], which uses the reference transfer function

$H(z) = \frac{0.0890 - 0.2199 z^{-1} + 0.2866 z^{-2} - 0.2199 z^{-3} + 0.0890 z^{-4}}{1 - 2.6918 z^{-1} + 3.5992 z^{-2} - 2.4466 z^{-3} + 0.8288 z^{-4}}$  (5-14)

The input signal is a colored noise generated by passing a white noise, with mean 0 and variance 1, through a first-order filter with a decay factor of 0.8.


Table 5-1: System identification with the Kautz filter model

    Method     Global minimum   Local minimum
    ITL             100               0
    NLMS             99               1
    LMS-SAS          58              42
    LMS              48              52

We consider the normalized mean-square error criterion (NMSE)

$\mathrm{NMSE} = 10\log_{10}\frac{\sum_{n=1}^{N}\left(y(n) - \hat{y}(n;\theta)\right)^2}{\sum_{n=1}^{N} y(n)^2}$  (5-15)

where $\hat{y}$ is the estimated output of the Kautz filter. The global optimum for the objective function is at $\beta = 0.6212 + j0.5790$, which has a normalized criterion 12.5 dB below that of the FIR filter ($\beta = 0$). This agrees with the result by Silva [88].

The step size is chosen to be a linearly decreasing function, $\mu(n) = 0.4(1 - 5\times 10^{-5}n)$, for both the LMS-SAS and NLMS algorithms, and constant at 0.002 for both the ITL and LMS algorithms. The kernel size for the ITL algorithm is chosen to be a linearly decreasing function of the iterations, $\sigma^2 = 3(1 - 2.5\times 10^{-5}n) + 0.5$. Table 5-1 shows the comparison of the numbers of global and local minimum hits by the ITL, NLMS, LMS-SAS, and LMS algorithms. The results are given by 100 Monte Carlo simulations with random initial conditions of $\theta$ and $\beta$ at each run. It is clear from Table 5-1 that the ITL algorithm is more successful in obtaining the global minimum compared with the other algorithms. Single characteristic weight tracks representative of each algorithm (LMS, LMS-SAS, NLMS, and ITL) are shown in Figures 5-1, 5-2, 5-3, and 5-4, respectively. Figure 5-5 depicts the closeness between the impulse response of the unknown system and the impulse response of the optimized Kautz filter determined with the MSE and ITL criteria.

In order to better understand the meaning of the ITL solution, we investigate the $L_p$ norms of the impulse response error vectors between the optimal solutions obtained by the MSE and ITL criteria. Assuming the infinite impulse response of the unknown system, given by $h_i$, $i = 0, \ldots, \infty$, and the infinite impulse response of the trained adaptive filter, given by $h_{ai}$, $i = 0, \ldots, \infty$, can both be truncated at $M$, yet
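The NMSE of Equation (5-15) is a one-liner (a sketch of ours):

    import numpy as np

    def nmse_db(y, y_hat):
        """Normalized mean-square error, Eq. (5-15), in dB."""
        y, y_hat = np.asarray(y), np.asarray(y_hat)
        return 10.0 * np.log10(np.sum((y - y_hat)**2) / np.sum(y**2))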


Figure 5-1: Convergence characteristics of the weights for the Kautz filter by the LMS algorithm. A) Weights $\theta$; B) Weight $(\alpha + j\Omega)$.

Figure 5-2: Convergence characteristics of the weights for the Kautz filter by the LMS-SAS algorithm. A) Weights $\theta$; B) Weight $(\alpha + j\Omega)$.

Figure 5-3: Convergence characteristics of the weights for the Kautz filter by the NLMS algorithm. A) Weights $\theta$; B) Weight $(\alpha + j\Omega)$.


Figure 5-4: Convergence characteristics of the weights for the Kautz filter by the ITL algorithm. A) Weights $\theta$; B) Weight $(\alpha + j\Omega)$.

Figure 5-5: Impulse response.


Table 5-2: Impulse response $L_p$ error norms ($p$ = 1, 2, 3, 4, 10, 100, 1000, $\infty$) for the Kautz filters trained with the MSE and ITL criteria.

Figure 5-6: Channel equalization system.

preserve most of the power contained within, we consider the following impulse response error norm criterion:

Impulse Response Criterion: $L_p = \sqrt[p]{\sum_{i=0}^{M}|h_i - h_{ai}|^p}$  (5-16)

Table 5-2 shows the impulse response $L_p$ error norms for the Kautz filters trained with the MSE and ITL criteria after successful convergence. We see from these results that the ITL criterion is more of a minimax-type algorithm, as it provides a smaller $L_\infty$ norm for the impulse response error compared to MSE, which yields an $L_2$ norm error minimization.

5.2 Nonlinear Equalization


described as

$x_i = \sum_{k=0}^{n_c} h_k\,s_{i-k} + e_i$  (5-17)

where the transmitted symbol sequence $s_i$ is an equiprobable binary sequence $\{\pm 1\}$, $h_i$ are the channel coefficients, and $e_i$ is Gaussian noise with zero mean and variance $\sigma_n^2$. The equalizer estimates the value of a transmitted symbol as

$\hat{s}_{i-d} = \mathrm{sgn}(y_i) = \mathrm{sgn}(w^T x_i)$  (5-18)

where $y_i = w^T x_i$ is the output of the equalizer, $w = [w_0, \ldots, w_{m-1}]^T$ contains the equalizer coefficients, and $x_i = [x_i, \ldots, x_{i-m+1}]^T$ is the vector of observations.

The output of the equalizer using a multilayer perceptron (MLP) with one hidden layer of $n$ neurons is given by

$y_i = w_2^T\tanh(W_1 x_i + b_1) + b_2$  (5-19)

where $W_1$ is an $n \times m$ matrix connecting the input layer with the hidden layer, $b_1$ is an $n \times 1$ vector of biases for the hidden neurons, $w_2$ is an $n \times 1$ vector of weights connecting the hidden layer to the output neuron, and $b_2$ is a bias for the output neuron.

Consider the example by Santamaria et al. [89], where the nonlinear channel is composed of a linear channel followed by a memoryless nonlinearity. The linear channel considered is $H(z) = 0.3482 + 0.8704 z^{-1} + 0.3482 z^{-2}$, and the static nonlinear function is $z = x + 0.2 x^2 - 0.1 x^3$, where $x$ is the linear channel output. The nonlinear equalizer is an MLP with 7 neurons in the input layer and 3 neurons in the hidden layer [MLP(7,3,1)], and the equalization delay is $d = 4$. A short window of $N_w = 5$ error samples is used to minimize the error criterion.

The gradient $\frac{\partial J}{\partial\theta} = \frac{\partial J}{\partial\varepsilon}\frac{\partial\varepsilon}{\partial\theta}$ is used for the backpropagation algorithm of the nonlinear equalizer training, where the term $\partial\varepsilon/\partial\theta$ is determined by the topology and the term $\partial J/\partial\varepsilon$ is determined by the error signal. Therefore, the proposed global optimization techniques can be used in this nonlinear equalization; they are referred to as the stochastic gradient (SG), stochastic gradient with SAS (SG-SAS), normalized stochastic gradient (NSG), and ITL algorithms, respectively.
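The following sketch (ours, not from the original text; the weight initialization and the window construction are only illustrative) implements the nonlinear channel of this example and the MLP output of Equation (5-19):

    import numpy as np

    rng = np.random.default_rng(3)

    def nonlinear_channel(s, noise_std):
        """Linear H(z) = 0.3482 + 0.8704 z^-1 + 0.3482 z^-2, then the cubic nonlinearity."""
        x = np.convolve(s, [0.3482, 0.8704, 0.3482], mode="full")[:len(s)]
        return x + 0.2 * x**2 - 0.1 * x**3 + noise_std * rng.standard_normal(len(s))

    def mlp_output(x_win, W1, b1, w2, b2):
        """Eq. (5-19): y = w2^T tanh(W1 x + b1) + b2 for one m-sample window."""
        return float(w2 @ np.tanh(W1 @ x_win + b1) + b2)

    m, n_hidden = 7, 3                          # MLP(7,3,1) as in the text
    W1 = 0.1 * rng.standard_normal((n_hidden, m))
    b1 = np.zeros(n_hidden)
    w2 = 0.1 * rng.standard_normal(n_hidden)
    b2 = 0.0
    s = rng.choice([-1.0, 1.0], size=200)       # equiprobable +/-1 symbols
    x = nonlinear_channel(s, noise_std=0.1)
    y0 = mlp_output(x[:m][::-1], W1, b1, w2, b2)  # decision: s_hat = sign(y)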


Figure 5-7: Convergence characteristics of the adaptive algorithms for a nonlinear equalizer.

The step size is chosen to be a constant of 0.2 for the SG, SG-SAS, and ITL algorithms, and a linearly decreasing function $\mu(n) = 0.2(1 - n/n_{max})$ for the NSG algorithm, where $n_{max}$ is the maximum number of iterations. A linearly decreasing function $\sigma^2 = 3(1 - n/n_{max}) + 0.1$ is chosen for the kernel size of the ITL algorithm.

Figure 5-7 depicts the convergence of the MSE evaluated over the sliding window for the algorithms, and we conclude that the ITL algorithm provides the fastest convergence. Figure 5-8 depicts the performance comparison of the SG, SG-SAS, NSG, and ITL algorithms for the nonlinear equalizer over the final solutions of 100 Monte Carlo runs. This figure shows that both the NSG and ITL algorithms succeed in obtaining the global minimum. Figure 5-9 shows the average bit error rate (BER) curves. The BER was evaluated by counting errors versus several signal-to-noise ratios (SNR) after transmitting symbols. This figure shows that all algorithms provide the same result for the adequate solutions; however, the NSG algorithm provides the best results for the worst solutions.


Figure 5-8: Performance comparison of the global optimizations for the nonlinear equalizer.

5.3 Conclusion

The performance of the ITL algorithm was compared with the more traditional LMS variants, which were shown in the previous chapter to exhibit improved probability of avoiding local minima. Nevertheless, none of them was as successful as ITL in achieving the global solution. An interesting observation was that the ITL criterion yields a smaller $L_\infty$ error norm between the impulse responses of the adaptive and the reference filters, whereas MSE minimizes the $L_2$ error norm.


Figure 5-9: Average BER for a nonlinear equalizer. A) Over the whole 100 Monte Carlo runs; B) over the 10 best solutions of MSE; C) over the 10 median solutions of MSE; D) over the 10 worst solutions of MSE.


The proposed global optimization algorithms have also been successfully applied to another practical example, nonlinear equalization. The simulation results show that the ITL algorithm achieves better performance than the others.


6.1 Conclusions

Srinivasan et al. have used a stochastic approximation of the convolution smoothing technique in order to obtain a global optimization algorithm for adaptive IIR filtering. They showed that smoothing can be achieved by the addition of a variable perturbing noise source to the LMS algorithm. We have modified this perturbing noise by multiplying it with its cost function. The modified algorithm, which is referred to as the LMS-SAS algorithm, results in better global optimization performance than the original algorithm.

From the diffusion equation, we have derived the transition probability of the LMS-SAS algorithm, for the single-parameter case, escaping from a local minimum. Since the global minimum is always smaller than the other local minima, the transition probability of the algorithm escaping from a local minimum is always larger than that of escaping from the global minimum. Thus, the algorithm will stay most of its time near the global minimum and eventually converge to the global minimum.

Since we use the instantaneous (stochastic) gradient instead of the expected value of the gradient, error in estimating the gradient naturally occurs. This gradient estimation error can be used to act as the perturbing noise. We have shown that the behavior of the NLMS algorithm with decreasing step size is similar to that of the LMS-SAS algorithm from a global optimization perspective.

The global optimization performance of the LMS-SAS and NLMS algorithms depends entirely on the shape of the cost function surface. The sharper the local minima, the less likely


the NLMS algorithm is to escape from the steady-state point. On the other hand, the broader the valley around a steady-state point, the more difficult it is for the algorithm to escape from that valley.

We have investigated another cost function, based on entropy, to find the global optimum of IIR filters. Based on a previous conjecture that annealing the kernel size in the nonparametric estimator of Renyi's entropy achieves global optimization, we have designed the proposed information-theoretic learning algorithm, which is shown to converge to the global minimum of the performance surface for various adaptive filter topologies. The proposed algorithm successfully adapted the filter poles, avoiding local minima 100% of the time and without causing instability. This behavior has been found in many examples.

The performance of this ITL algorithm was compared with the more traditional LMS variants, which are known to exhibit improved probability of avoiding local minima. Nevertheless, none of them was as successful as ITL in achieving the global solution. An interesting observation was that the ITL criterion yields a smaller $L_\infty$ error norm between the impulse responses of the adaptive and the reference IIR filters, whereas MSE tries to minimize the $L_2$ error norm. If the designer requires a minimum $L_2$ error norm between the impulse responses, it is possible to use ITL adaptation to converge to the vicinity of this solution and then switch to NLMS to achieve $L_2$ error norm minimization.

One of the major drawbacks in adaptive IIR filtering is the stability issue. We use Kautz filters because their stability is easily guaranteed if the poles of the Kautz filters are located within the unit circle. In this dissertation, we proposed the combination of Kautz filters and an alternative information-theoretic adaptation criterion based on Renyi's quadratic entropy. Kautz filters have been used in the past for system identification [90] of ARMA models, but the poles have been kept fixed during adaptation. The proposed ITL criterion and kernel annealing approach allowed stable adaptation of the poles to their globally optimal values.


6.2 Future Research

In this dissertation, we have analyzed the weak global optimal convergence of algorithms with the MSE criterion by looking at the transition function of the process, assuming that the weight $\theta$ is a scalar. More work is needed on the transition function of the process in the general case, where $\theta$ is a vector, in order to complete the analysis of the weak global optimal convergence of algorithms with the MSE criterion.

We have observed that the ITL criterion yields a smaller $L_\infty$ error norm between the impulse responses of the adaptive and the reference IIR filters, whereas MSE tries to minimize the $L_2$ error norm. This "minimax" property of the proposed ITL criterion deserves further research.

Another observation is that linear scheduling of the kernel size helps achieve global minima. In annealing-based global optimization algorithms, scheduling of the parameters to be annealed is a major issue. In stochastic annealing, it is known that exponential annealing (at a sufficiently slow rate) guarantees global convergence. In IIR filter adaptation using ITL, we used linear annealing of the kernel size, and in all examples successful global optimization results were obtained. More work is required in the ITL algorithm to appropriately select the smallest kernel size, which was here set with rule-of-thumb properties [91].

The ITL adaptation used a batch approach, but we believe that the online versions discussed by Erdogmus et al. [92] could also display the same global optimization properties. The online versions of ITL adaptation need further study.

In addition, a general analytical proof that explains the 100% global optimization capability of the proposed algorithm is necessary in order to complete the theoretical work. This, however, stands as a challenging future research project.


REFERENCES

[1] B. Widrow and S. D. Stearns, Adaptive Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, 1985.
[2] S. S. Haykin, Adaptive Filter Theory, Prentice-Hall, Englewood Cliffs, NJ, 1986.
[3] M. A. Styblinski and T. S. Tang, "Experiments in nonconvex optimization: Stochastic approximation with function smoothing and simulated annealing," Neural Networks, vol. 3, pp. 467-483, 1990.
[4] J. C. Principe and D. Erdogmus, "From adaptive linear to information filtering," in Proceedings of Symposium 2000 on Adaptive Systems for Signal Processing, Communications, and Control, Lake Louise, Alberta, Canada, Oct. 2000, pp. 99-104.
[5] D. Erdogmus, K. Hild, and J. C. Principe, "Blind source separation using Renyi's mutual information," IEEE Signal Processing Letters, vol. 8, no. 6, pp. 174-176, June 2001.
[6] D. Erdogmus and J. C. Principe, "Generalized information potential criterion for adaptive system training," IEEE Transactions on Neural Networks, (to appear) September 2002.
[7] K. J. Astrom and P. Eykhoff, "System identification - A survey," Automatica, vol. AC-27, no. 4, pp. 123-162, Aug. 1971.
[8] B. Friedlander, "System identification techniques for adaptive signal processing," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-30, no. 2, pp. 240-246, Apr. 1982.
[9] L. Ljung, System Identification: Theory for the User, Prentice-Hall, Englewood Cliffs, NJ, 1987.
[10] T. Soderstrom, L. Ljung, and I. Gustavsson, "A theoretical analysis of recursive identification methods," Automatica, vol. 14, no. 3, pp. 193-197, May 1978.
[11] C. R. Johnson, "Adaptive IIR filtering: Current results and open issues," IEEE Transactions on Information Theory, vol. IT-30, no. 2, pp. 237-250, Mar. 1984.
[12] S. S. Shynk, "Adaptive IIR filtering," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 6, no. 2, pp. 4-21, Apr. 1989.
[13] S. Gee and M. Rupp, "A comparison of adaptive IIR echo canceller hybrids," in Proceedings International Conference on Acoustics, Speech, and Signal Processing, 1991.


[14] S. L. Netto, P. S. Diniz, and P. Agathoklis, "Adaptive IIR filter algorithms for system identification: A general framework," IEEE Transactions on Education, vol. 38, pp. 54-66, Feb. 1995.
[15] P. A. Regalia, Adaptive IIR Filtering in Signal Processing and Control, Marcel Dekker, New York, NY, 1995.
[16] M. Dentimo, J. M. McCool, and B. Widrow, "Adaptive filtering in the frequency domain," Proceedings of the IEEE, vol. 66, no. 12, pp. 1658-1659, Dec. 1978.
[17] E. R. Ferrara, "Fast implementation of LMS adaptive filters," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-28, no. 4, pp. 474-475, Aug. 1980.
[18] T. K. Woo, "HRLS: A more efficient RLS algorithm for adaptive FIR," IEEE Communications Letters, vol. 5, no. 3, pp. 81-84, March 2001.
[19] D. F. Marshall, W. K. Jenkins, and J. J. Murphy, "The use of orthogonal transforms for improving performance of adaptive filters," IEEE Transactions on Circuits and Systems, vol. 36, no. 4, pp. 474-484, Apr. 1989.
[20] S. S. Narayan and A. M. Peterson, "Frequency domain least-mean-square algorithm," Proceedings of the IEEE, vol. 69, no. 1, pp. 124-126, Jan. 1981.
[21] S. A. White, "An adaptive recursive digital filter," in Proceedings 9th Asilomar Conference on Circuits, Systems and Computers, pp. 21-25, 1975.
[22] R. A. David, "An adaptive recursive digital filter," in Proceedings 15th Asilomar Conference on Circuits, Systems and Computers, pp. 175-179, 1981.
[23] B. D. Rao, "Adaptive IIR filtering using cascade structures," in Proceedings 27th Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 185-188, 1993.
[24] J. K. Juan, J. G. Harris, and J. C. Principe, "Locally recurrent network with multiple time-scales," IEEE Proceedings on Neural Networks for Signal Processing, vol. VII, pp. 645-653, 1997.
[25] P. A. Regalia, "Stable and efficient lattice algorithms for adaptive IIR filtering," IEEE Transactions on Signal Processing, vol. 40, no. 2, pp. 375-388, Feb. 1992.
[26] R. L. Valcarce and F. P. Gonzalez, "Adaptive lattice filtering revisited: Convergence issues and new algorithms with improved stability properties," IEEE Transactions on Signal Processing, vol. 49, no. 4, pp. 811-821, April 2001.
[27] J. J. Shynk, "Adaptive IIR filtering using parallel-form realization," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 4, pp. 519-533, Apr. 1989.
[28] J. E. Cousseau and P. S. R. Diniz, "Alternative parallel realization for adaptive IIR filters," in Proceedings International Symposium on Circuits and Systems, pp. 1927-1930, 1990.

PAGE 92

[29] J. J. Shynk and R. P. Gooch, "Frequency domain adaptive pole-zero filters," Proceedings IEEE, vol. 73, no. 10, pp. 1526-1528, Oct. 1985.

[30] B. E. Usevitch and W. K. Jenkins, "A cascade implementation of a new IIR adaptive digital filter with global convergence and improved convergence rates," in Proceedings International Symposium on Circuits and Systems, pp. 2140-2143, 1989.

[31] D. G. Luenberger, Introduction to Linear and Nonlinear Programming, Addison-Wesley, Reading, MA, 1973.

[32] J. Lin and R. Unbehauen, "Bias-remedy least mean square equation error algorithm for IIR parameters recursive estimation," IEEE Transactions on Signal Processing, vol. 40, pp. 62-69, Jan. 1992.

[33] H. Fan and W. K. Jenkins, "A new adaptive IIR filter," IEEE Transactions on Circuits and Systems, vol. CAS-33, no. 10, pp. 939-947, Oct. 1986.

[34] H. Fan and M. Doroslovacki, "On global convergence of Steiglitz-McBride adaptive algorithm," IEEE Transactions on Circuits and Systems, vol. 40, no. 2, pp. 73-87, Feb. 1993.

[35] K. Steiglitz and L. E. McBride, "A technique for the identification of linear systems," IEEE Transactions on Automatic Control, vol. AC-10, pp. 461-464, 1965.

[36] S. L. Netto and P. Agathoklis, "A new composite adaptive IIR algorithm," in Proceedings 28th Asilomar Conference on Signals, Systems and Computers, vol. 2, pp. 1506-1510, 1994.

[37] J. E. Cousseau, L. Salama, L. Donale, and S. L. Netto, "Orthonormal adaptive IIR filter with polyphase realization," in Proceedings of ICECS'99 Electronics, Circuits and Systems, vol. 2, pp. 835-838, 1999.

[38] M. Radenkovic and T. Bose, "Global stability of adaptive IIR filters based on the output error method," in Proceedings of ICECS'99 Electronics, Circuits and Systems, vol. 1, pp. 663-667, 1999.

[39] P. L. Hsu, T. Y. Tsai, and F. C. Lee, "Applications of a variable step size algorithm to QCEE adaptive IIR filters," IEEE Transactions on Signal Processing, vol. 46, no. 6, pp. 1685-1688, Jun. 1999.

[40] W. J. Song and H. C. Shin, "Bias-free adaptive IIR filtering," in Proceedings IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 1, pp. 109-112, 2000.

[41] K. C. Ho and Y. T. Chan, "Bias removal in equation-error adaptive IIR filters," IEEE Transactions on Signal Processing, vol. 43, pp. 51-62, Jan. 1995.

[42] M. C. Hall and P. M. Hughes, "The master-slave IIR filter adaptation algorithm," in Proceedings IEEE International Symposium on Circuits and Systems, vol. 3, pp. 2145-2148, 1988.
[43] J. R. Treichler, C. R. Johnson, and M. G. Larimore, Theory and Design of Adaptive Filters, Wiley, New York, 1987.

[44] I. O. Bohachevsky, M. E. Johnson, and M. L. Stein, "Generalized simulated annealing for function optimization," Technometrics, vol. 28, pp. 209-217, Aug. 1986.

[45] S. C. Ng, S. H. Leung, C. Y. Chung, A. Luk, and W. H. Lau, "The genetic search approach - A new learning algorithm for adaptive IIR filtering," IEEE Signal Processing Magazine, pp. 39-46, Nov. 1996.

[46] J. A. Nelder and R. Mead, "A simplex method for function minimization," Computer Journal, vol. 7, pp. 308-313, 1965.

[47] P. P. Khargonekar and A. Yoon, "Random search based optimization algorithm in control analysis and design," in Proceedings of the American Control Conference, Jun. 1999, pp. 383-387.

[48] Q. Duan, S. Sorooshian, and V. Gupta, "Shuffled complex evolution algorithm," Water Resources Research, vol. 28, pp. 1015-1031, 1992.

[49] Z. B. Tang, "Adaptive partitioned random search to global optimization," IEEE Transactions on Automatic Control, vol. 39, pp. 2235-2244, Nov. 1994.

[50] K. H. Yim, J. B. Kim, T. P. Lee, and D. S. Ahn, "Genetic adaptive IIR filtering algorithm for active noise control," in IEEE International Fuzzy Systems Conference Proceedings, Aug. 1999, pp. III-1723-1728.

[51] B. W. Wah and T. Wang, "Constrained simulated annealing with applications in nonlinear continuous constrained global optimization," in Proceedings 11th IEEE International Conference on Tools with Artificial Intelligence, Nov. 1999, pp. 381-388.

[52] J. L. Maryak and D. C. Chin, "A conjecture on global optimization using gradient-free stochastic approximation," in Proceedings of the 1998 IEEE ISIC/CIRA/ISAS Joint Conference, Sep. 1998, pp. 441-445.

[53] N. K. Treadgold and T. D. Gedeon, "Simulated annealing and weight decay in adaptive learning: The SARPROP algorithm," IEEE Transactions on Neural Networks, vol. 9, pp. 662-668, July 1998.

[54] G. H. Staus, L. T. Biegler, and B. E. Ydstie, "Global optimization for identification," in Proceedings of the 36th Conference on Decision and Control, Dec. 1997, pp. 3010-3015.

[55] T. Fujita, T. Watanabe, K. Yasuda, and R. Yokoyama, "Global optimization method using intermittency chaos," in Proceedings of the 36th Conference on Decision and Control, Dec. 1997, pp. 1508-1509.

[56] W. Edmonson, J. Principe, K. Srinivasan, and C. Wang, "A global least mean square algorithm for adaptive IIR filtering," IEEE Transactions on Circuits and Systems, vol. 45, pp. 379-383, Mar. 1998.
[57] J. M. Thomas, J. P. Reilly, and Q. Wu, "Real time analog global optimization with constraints: Application to the direction of arrival estimation problem," IEEE Transactions on Circuits and Systems, vol. 42, pp. 233-243, Mar. 1995.

[58] A. Renyi, "Some fundamental questions of information theory - Selected Papers of Alfred Renyi," Akademia Kiado, Budapest, vol. 2, pp. 565-580, 1976.

[59] A. Renyi, A Diary on Information Theory, Wiley, New York, 1987.

[60] C. F. Cowan and P. M. Grant, Adaptive Filters, Prentice-Hall, 1985.

[61] B. Widrow, J. M. McCool, M. G. Larimore, and C. R. Johnson, "Stationary and nonstationary learning characteristics of the LMS adaptive filter," Proceedings IEEE, vol. 64, pp. 1151-1162, Aug. 1976.

[62] J. M. Mendel, Lessons in Digital Estimation Theory, Prentice-Hall, Englewood Cliffs, NJ, 1987.

[63] E. I. Jury, Theory and Applications of the Z-Transform Method, Wiley, New York, 1964.

[64] T. C. Hsia, "A simplified adaptive recursive filter design," Proceedings IEEE, vol. 69, no. 9, pp. 1153-1155, Sept. 1981.

[65] G. C. Goodwin and K. S. Sin, Adaptive Filtering, Prediction and Control, Prentice-Hall, Englewood Cliffs, NJ, 1984.

[66] T. Soderstrom, "On the uniqueness of maximum likelihood identification," Automatica, vol. 14, no. 3, pp. 231-244, Mar. 1975.

[67] M. Nayeri, H. Fan, and W. K. Jenkins, "Some characteristics of error surfaces for insufficient order adaptive IIR filters," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, no. 7, pp. 1222-1227, July 1990.

[68] T. Soderstrom and P. Stoica, "Some properties of the output error method," Automatica, vol. 18, pp. 1692-1716, Dec. 1982.

[69] M. Nayeri, "Uniqueness of MSOE estimates in IIR adaptive filtering; a search for necessary conditions," in International Conference on Acoustics, Speech, and Signal Processing, 1989, pp. 1047-1050.

[70] S. D. Stearns, "Error surfaces of recursive adaptive filters," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-29, no. 4, pp. 763-766, June 1981.

[71] H. Fan and M. Nayeri, "On the error surface of sufficient order adaptive IIR filters: Proofs and counterexamples to a unimodality conjecture," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, pp. 1436-1442, Sep. 1989.

[72] R. Roberts and C. Mullis, Digital Signal Processing, Addison-Wesley, 1987.
[73] W. H. Kautz, "Transient synthesis in the time domain," IRE Transactions on Circuit Theory, vol. 1, pp. 22-39, Sept. 1954.

[74] P. W. Broome, "Discrete orthonormal sequences," Journal of the Association for Computing Machinery, vol. 12, no. 2, pp. 151-168, Dec. 1965.

[75] G. A. Williamson and S. Zimmermann, "Globally convergent adaptive IIR filters based on fixed pole locations," IEEE Transactions on Signal Processing, vol. 44, pp. 1418-1427, Jun. 1996.

[76] P. M. Pardalos and R. Horst, Introduction to Global Optimization, Kluwer, Norwood, MA, 1989.

[77] H. Robbins and S. Monro, "A stochastic approximation method," Annals of Mathematical Statistics, vol. 22, pp. 400-407, 1951.

[78] E. Wong and B. Hajek, Stochastic Processes in Engineering Systems, Springer, 1985.

[79] A. N. Kolmogorov, "Uber die analytischen Methoden in der Wahrscheinlichkeitsrechnung," Mathematische Annalen, vol. 104, pp. 415-458, 1931.

[80] S. Haykin, Introduction to Adaptive Filters, Macmillan, New York, 1984.

[81] C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, vol. 27, pp. 379-423, 623-653, 1948.

[82] E. Parzen, "On the estimation of a probability density function and the mode," Annals of Mathematical Statistics, vol. 33, pp. 1065-1076, 1962.

[83] T. Cover and J. Thomas, Elements of Information Theory, Wiley, 1991.

[84] R. V. Hartley, "Transmission of information," Bell System Technical Journal, vol. 7, 1928.

[85] G. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, 1989.

[86] S. Kullback, Information Theory and Statistics, Dover Publications, New York, 1968.

[87] C. Wang and J. C. Principe, "Training neural networks with additive noise in the desired signal," IEEE Transactions on Neural Networks, vol. 10, no. 6, pp. 1511-1517, Nov. 1999.

[88] T. O. Silva, "Optimality conditions for truncated Kautz networks with two periodically repeating complex conjugate poles," IEEE Transactions on Automatic Control, vol. 40, pp. 342-346, Feb. 1995.

[89] I. Santamaria, D. Erdogmus, and J. C. Principe, "Entropy minimization for supervised digital communication channel equalization," IEEE Transactions on Signal Processing, vol. 50, no. 5, pp. 1184-1192, May 2002.
[90] B. Wahlberg, "System identification using Kautz models," IEEE Transactions on Automatic Control, vol. 39, no. 6, pp. 1276-1282, Jun. 1994.

[91] D. Erdogmus and J. C. Principe, "An error-entropy minimization algorithm for supervised training of nonlinear adaptive systems," IEEE Transactions on Signal Processing, vol. 50, no. 7, pp. 1780-1786, July 2002.

[92] D. Erdogmus and J. C. Principe, "An on-line adaptation algorithm for adaptive system training with minimum error entropy: Stochastic information gradient," in International Conference on ICA and Signal Separation, San Diego, CA, Dec. 2001, pp. 7-12.












GLOBAL OPTIMIZATION ALGORITHMS FOR ADAPTIVE INFINITE IMPULSE
RESPONSE FILTERS

















By

CHING-AN LAI


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA


2002














ACKNOWLEDGMENTS

First and foremost, I wish to acknowledge my advisor, Dr. José C. Principe, for

providing excellent guidance throughout the development of this dissertation. I also

wish to thank Deniz Erdogmus for the invaluable discussion on information theory.

I also wish to thank members of my committee, Dr. Haniph A. Latchman,

Dr. John M. M. Anderson, Dr. Yuguang Fang, and Dr. Murali Rao for their insightful

comments on this dissertation. I would also like to thank my former advisor

Dr. William W. Edmonson for his kind support of my study.














TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

ABSTRACT

CHAPTER

1 INTRODUCTION
1.1 Motivation
1.2 Literature Survey
1.2.1 Adaptive Filtering
1.2.2 Optimization Method
1.2.3 Proposed Optimization Method
1.3 Outline

2 ADAPTIVE IIR FILTERING
2.1 Introduction
2.2 System Identification with the Adaptive IIR Filter
2.3 System Identification with Kautz Filter

3 STOCHASTIC APPROXIMATION WITH CONVOLUTION SMOOTHING
3.1 Introduction
3.2 Convolution Function Smoothing
3.3 Derivation of the Gradient Estimate
3.4 LMS-SAS Algorithm
3.5 Analysis of Weak Convergence to the Global Optimum for LMS-SAS
3.6 Normalized LMS Algorithm
3.7 Relationship between LMS-SAS and NLMS Algorithms
3.8 Simulation Results
3.9 Comparison of LMS-SAS and NLMS Algorithm
3.10 Conclusion

4 INFORMATION THEORETIC LEARNING
4.1 Introduction
4.2 Entropy and Mutual Information
4.3 Adaptive IIR Filter with Euclidean Distance Criterion
4.4 Parzen Window Estimator and Convolution Smoothing Function
4.4.1 Similarity
4.4.2 Difference
4.5 Analysis of Weak Convergence to the Global Optimum for ITL
4.6 Contour of Euclidean Distance Criterion
4.7 Simulation Results
4.8 Comparison of NLMS and ITL Algorithms
4.9 Conclusion

5 RESULTS
5.1 System Identification with Kautz Filter
5.2 Nonlinear Equalization
5.3 Conclusion

6 CONCLUSION AND FUTURE RESEARCH
6.1 Conclusion
6.2 Future Research

REFERENCES

BIOGRAPHICAL SKETCH














LIST OF TABLES

3-1 NLMS algorithm
3-2 System identification of reduced order model
3-3 Example I for system identification
3-4 Example II for system identification
3-5 Example III for system identification
4-1 System identification of adaptive IIR filter by NLMS and ITL algorithm
4-2 Lp for both MSE and ITL criterion
5-1 System identification of Kautz filter model
5-2 Lp for both MSE and ITL criteria in the Kautz example



LIST OF FIGURES

2-1 Adaptive filter model
2-2 Block diagram of the system identification configuration
2-3 Kautz filter model
3-1 Smoothed function using Gaussian pdf
3-2 Step size μ(n) for SAS algorithm
3-3 Global convergence of θ in the GLMS algorithm
3-4 Global convergence of θ in the LMS-SAS algorithm
3-5 Global convergence of θ in the NLMS algorithm
3-6 Local convergence of θ in the LMS algorithm
3-7 Local convergence of θ in the GLMS algorithm
3-8 Local convergence of θ in the LMS-SAS algorithm
3-9 Contour of MSE
3-10 Weight (top) and ‖∇θ y(n)‖ (bottom)
4-1 Convergence characteristics of weight for Example I by ITL
4-2 Euclidean distance of Example I
4-3 Entropy ∫ f²(ε) dε of Example I
4-4 Euclidean distance of Example II
4-5 Convergence characteristics of weight for Example II by ITL
5-1 Convergence characteristics of weight for Kautz filter by LMS algorithm
5-2 Convergence characteristics of weight for Kautz filter by LMS-SAS algorithm
5-3 Convergence characteristics of weight for Kautz filter by NLMS algorithm
5-4 Convergence characteristics of weight for Kautz filter by ITL algorithm
5-5 Impulse response
5-6 Channel equalization system
5-7 Convergence characteristics of adaptive algorithms for a nonlinear equalizer
5-8 Performance comparison of global optimizations for nonlinear equalizer
5-9 Average BER for a nonlinear equalizer














Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

GLOBAL OPTIMIZATION ALGORITHMS FOR ADAPTIVE INFINITE IMPULSE
RESPONSE FILTERS

By

Ching-An Lai

December 2002

Chair: José C. Principe
Major Department: Electrical and Computer Engineering

The major goal of this dissertation is to develop global optimization algorithms for

adaptive IIR filters. Since the performance surface of adaptive IIR filters is nonconvex

with respect to the filter coefficients, conventional gradient-based algorithms can easily

be trapped at an unacceptable local optimum. We need to exploit global optimization

methods in adaptive IIR filtering and overcome the problem of converging to the local

minima, preserving stability throughout adaptation.

One approach for adaptive IIR filtering uses stochastic approximation with

convolution smoothing (SAS). We modify the perturbing noise by multiplying it with

its cost function. The modified algorithm results in better performance when compared

to the original algorithm. We also analyze the global optimization behavior of the

proposed algorithm by analyzing the transition probability density of escaping from a

local minimum.

A gradient estimation error can be used to act as the perturbing noise, provided

it is properly normalized. Consequently, another approach for global IIR filter

optimization is the normalized LMS (NLMS) algorithm. The behavior of the NLMS

algorithm with decreasing step size is similar to that of the LMS-SAS algorithm from a

global optimization perspective.







Another novel approach for global optimization arises from using an entropy

criterion for the training of adaptive systems. Our approach uses Renyi's entropy

associated with the Parzen window estimator to estimate the pdf directly from a set of

samples. The kernel size of the Parzen window estimator is an important parameter in

the global optimization procedure. We propose to start the training with a large kernel

size, and then slowly decrease this parameter to a predetermined suitable value. We

show that the finite sample size in the estimation works as an additive uncorrelated

white noise source that allows the training algorithm to converge to the global minimum

of the cost function.

One issue in the identification of the autoregressive moving average (ARMA)

system is that filter structures are used to avoid instabilities during training. Here we

use the class of orthogonal filters called the Kautz filters for ARMA modeling. The

proposed global optimization algorithms have been applied to system identification

together with Kautz filters and nonlinear equalization to show the global optimum

search capability.














CHAPTER 1
INTRODUCTION

1.1 Motivation

The objective of this dissertation is to develop global optimization algorithms for

adaptive infinite impulse response (IIR) filtering by using the stochastic approximation

with convolution smoothing function (SAS) and information theoretic learning (ITL).

This work is particularly motivated by the following facts.
* Adaptive filtering has wide application in the digital signal processing, communication,
and control fields. A finite impulse response (FIR) filter [1, 2] is a simple structure
for adaptive filtering and has been extensively developed. Recently researchers have
attempted to use IIR structures because they perform better than FIR structures
with the same number of coefficients. However, some major drawbacks inherent
to adaptive IIR structures are slow convergence, possible convergence to a bias or
unacceptable suboptimal solutions, and the need for stability monitoring.

* Stochastic approximation methods [3] have the property of converging to the global
optimum with a probability of one, as the number of iterations tends to infinity.
These methods are based on a random perturbation to find the absolute optimum
of the cost function. In particular, the method of stochastic approximation with
convolution smoothing has been successful in several applications. It has been
empirically proven to be efficient in converging to the global optimum in terms of
computation and accuracy. The convolution smoothing function can "smooth out"
a nonconvex objective function by convolving it with a suitable probability density
function (pdf). In the beginning of adaptation, the variance of the pdf is set to a
sufficiently large value, such that the convolution smoothing function can "smooth out"
the nonconvex objective function into a convex function. Then the variance is slowly
reduced to zero, whereby the smoothed objective function returns to the original
objective function, as the algorithm converges to the global optimum. Such variance
is determined by a cooling schedule parameter. This cooling schedule is a critical
factor in global optimization, because it affects the performance of the global search
capability.

* Convolution smoothing has been used exclusively with the mean square error (MSE)
criterion. MSE has been used extensively in the theory of adaptive systems because
of its analytical simplicity and the common assumption of Gaussian distributed
error. However, recently more sophisticated applications (such as independent
component analysis and blind source separation) require a criterion that considers
higher-order statistics for the training of adaptive systems. The Computational
NeuroEngineering Laboratory studied the entropy cost function [4]. Shannon first introduced







entropy of a given probability distribution function, which provides a measure of the
average information in that distribution. By using the Parzen window estimator,
we can estimate the pdf directly from a set of samples. It is quite straightforward
to apply the entropy criterion to the system identification framework [5]. As shown
in this thesis, the kernel size of the Parzen window estimator becomes an important
parameter in the global optimization procedure. Deniz et al. [6] conjectured that
for a sufficiently large kernel size, the local minima of the error entropy criterion
can be eliminated. It was suggested that starting with a large kernel size, and then
slowly decreasing this parameter to a predetermined suitable value, the training
algorithm can converge to the global minimum of the cost function. The error entropy
criterion considered by Deniz et al. [6], however, does not consider the mean of the
error signal, since entropy is invariant to translation. Here we modify the criterion
and study the reason why annealing the kernel size produces global optimization
algorithms.

1.2 Literature Survey

We surveyed the literature in the areas of adaptive filtering, optimization method,

and mathematics used in the analysis of the algorithm.

1.2.1 Adaptive Filtering

Numerous algorithms of adaptive filtering are proposed in the literature [7, 8],

especially for system identification [9, 10]. Some valuable general papers on the topic

of adaptive filtering are presented by Johnson [11], Shynk [12], Gee et al. [13] and

Netto [14]. Johnson's paper focused on the common theoretical basis between adaptive

filtering and system identification. Shynk's paper dealt with various algorithms of

adaptive IIR filtering in terms of their error formulations and realizations. Netto's paper presented

the characteristics of the most commonly used algorithms for adaptive IIR filtering in a

simple and unified framework. Recently a full book was published on IIR filters [15].

The major goal of an adaptive filtering algorithm is to adjust the adaptive filter

coefficients in order to minimize a given performance criterion. Literature about

adaptive filtering can be classified into three categories: adaptive filter structures,

adaptive algorithms, and applications.
* Adaptive filter structure. The choice of the adaptive filter structure affects the
computational complexity and the convergence speed. Basically, there are two kinds of
adaptive filter structures.
-Adaptive FIR filter structure. The most commonly used adaptive FIR filter
structure is the transversal filter which implements an all-zero filter with a canonic
direct form (without any feedback). For this adaptive FIR filter structure, the







output is a linear combination of the adaptive filter coefficients. The performance
surface of the objective cost function is quadratic [1] which yields a single optimal
point. Alternative adaptive FIR filter structures [16] improve performance in terms
of computational complexity [17, 18] and convergence speed [19, 20].

-Adaptive IIR filter structure. White [21] first presented an implementation
of an adaptive IIR filter structure. Later, many articles were published in this
area. For simple implementation and easy analysis, most adaptive IIR filter
structures use the canonic direct form realization. Some other realizations are
also presented to overcome some drawbacks of canonic direct form realization, like
slow convergence rate and the need for stable monitoring [22]. Commonly used
realizations are cascade [23, 24], lattice [25, 26], and parallel [27, 28] realizations.
Other realizations have also been presented recently by Shynk et al. [29] and Jenkin
et al. [30].

* Algorithm. An algorithm is a procedure used to adjust adaptive filter coefficients
in order to minimize the cost function. The algorithm determines several important
features of the whole adaptive procedure, such as computational complexity,
convergence to suboptimal solutions, biased solutions, objective cost function
and error signal. Early local adaptive filter algorithms were the Newton method,
quasi-Newton method, and gradient method. Newton's method seeks the minimum
of a second-order approximation of the objective cost function. Quasi-Newton is a
simple version of the Newton method using a recursively calculated estimate of the
inverse of a second-order matrix. The gradient method searches the minimum of
the objective cost function by tracking the opposite direction of the gradient vector
of the objective function [31]. It is well known that the step size controls stability,
convergence speed, and misadjustment [1]. For FIR adaptive filtering, local methods
were sufficient since the optimization was linear in the weights. However in IIR
adaptive filtering this is no longer the case. The most commonly known approaches
for adaptive IIR filtering are equation error algorithm [32], output error algorithm
[12, 11], and composite algorithms [33, 34] such as the Steiglitz-McBride algorithm
[35].
-The main characteristics of the equation error algorithm are unimodality of the
Mean-Square-Error (MSE) performance surface because of the linear relationship
of the signal and the adaptive filter coefficients, good convergence, and guaranteed
stability. However, it comes along with a biased solution in the presence of noise.

-The main characteristics of the output-error algorithm are the possible existence of
the multiple local minima, which affect the convergence speed, an unbiased global
optimal solution even in the presence of noise, and the requirement of stability
checking during the adaptive processing.

-The composite error algorithm attempts to combine the good individual
characteristics of both output error algorithm and equation error algorithm
[36]. Consequently, many papers were written to overcome the problem mentioned
above.







-Cousseau et al. [37] proposed an orthogonal filter to overcome the instability
problem of adaptive IIR filters, while Radenkovic et al. [38] used an output error
method to avoid it.

-The quadratic constraint equation error method [39] was proposed to remove the
biased solutions for the equation-error adaptive IIR filters [40, 41]. New composite
adaptive IIR algorithms are presented in literature [42, 36].

* Application. Adaptive filtering has been successful in many applications, such
as echo cancellation, noise cancellation, signal detection, system identification,
channel equalization, and control. Some useful information about adaptive filtering
application appears in the literature [1, 2, 43].

In this dissertation, we focus on adaptive IIR filter algorithms for system

identification.

1.2.2 Optimization Method

There are two adaptation methodologies for IIR filters: gradient descent and global

optimization. The most commonly used method is the gradient descent method, such

as least mean square (LMS) [1]. These methods are well established for the adaptation

of FIR filters and have the advantage of being less computationally expensive. The

problem with gradient descent methods is that they might converge to any local

minima. The local minima normally imply poor performance. This problem can be

overcome through global optimization methods. Such global optimization algorithms

include simulated annealing (SA) [44], genetic algorithm [45], random method [46], and

stochastic approximation [3]. However, global optimization methods have the problem of

computational complexity, especially for high order adaptive filters.

Several recent researchers have modified global optimization algorithms to improve

their performance. Khargonekar [47] used an adaptive random search algorithm for

the global optimization of control systems. This type of global optimization algorithm

propagates a collection or a simplex of points but uses more geometrically intuitive

heuristics. The most commonly used direct search method for optimization is the

Nelder-Mead algorithm [46]. Despite the popularity of the Nelder-Mead algorithm, it

does not provide any guarantee of convergence or performance. Recent studies relied

on numerical results to determine the effectiveness of the algorithm. Duan proposed






the shuffled complex evolution algorithm [48], which uses several Nelder-Mead simplex

algorithms running in parallel (that also share information with each other). Tang [49]

proposed a random search that partitions the search region of the objective function into

a certain number of subregions. Tang [49] showed that the adaptive partitioned random

search in general can provide a better-than-average solution within a modest number of

function evaluations.

Yim [50] used a genetic algorithm in his adaptive IIR filtering algorithm for active

noise control. He showed that genetic algorithms overcome the problem of converging to

the local minimum for gradient descent algorithms. Wah [51] improved constrained

simulated annealing, a discrete global minimization algorithm with asymptotic

convergence to discrete constrained global minima with a probability of one. The

algorithm is based on the necessary and sufficient conditions for discrete constrained

local minima in the theory of discrete Lagrange multipliers. He extended this algorithm

to solve nonlinear continuous constrained optimization problems. Maryak [52] injected

extra noise terms into the recursive algorithm, which may allow the algorithm to

escape the local optimum points, and ensure global convergence. The amplitude of the

injected noise is decreased over time (a process called annealing), so that the algorithm

can finally converge to the global optimum point. He argues that, in some cases, the

naturally occurring error in the gradient approximation effectively introduced injected

noise that promotes convergence of the algorithm to the global optimum. Treadgold [53]

combined gradient descent and the global optimization technique of simulated annealing

(SA). This combination escapes local minima and can improve training time. Staus [54]

used spatial branch and bound methodology to solve the global optimization problem.

The spatial branch and bound technique is not practical for identification. Advances

in convex algorithm design using interior point methods, exploitation of structure, and

faster computing speeds have altered this picture. Large problems, including interesting

classes of identification problems can be solved efficiently. Fujita [55] proposed a method

(taking advantage of chaotic behavior of the nonlinear dissipation system) that has

inertia and nonlinear damping terms. The time history of the system, whose energy







function corresponds to the objective function of the unconstrained optimization

problem, converges to the global minima of the energy function of the system by means of

appropriate control of parameters dominating occurrence of chaos. However none of

these global optimization techniques can rival gradient descent in terms of efficiency

in number of computations. Therefore, in this thesis we revisit the problem of stochastic

gradient descent for IIR filtering.

1.2.3 Proposed Optimization Method

The proposed global optimization methods in this dissertation are based on

stochastic approximation methods on the MSE cost function and in information

theoretic learning. The stochastic approximation represents a simple approach to

minimizing a nonconvex function, which is based on a randomly distributed process

in evaluating the search space [56]. In particular, two methods were investigated. The

first method [57] is implemented by adding random perturbations to the estimate of the

system's dynamic equation. Variance of the random fluctuation must decay according

to a specific annealing schedule, which can ensure convergence to a global optimum.

The goal of the early large perturbations is to allow the system to quickly escape

from the local minima. The second method is based on stochastic approximation with

convolution smoothing [56]. The objective of convolution smoothing is to smooth out

the nonconvex objective function by convoluting it with a noise probability density

function (pdf). Also in this method, the variance of the pdf must decay according to

a cooling schedule. The amount of smoothing is proportional to the variance of the

noise pdf. The idea of this method is to create a sufficient amount of smoothing in the

beginning of the optimization process so that the outcome is a convex performance

surface. When the variance of the noise pdf is gradually reduced to zero, the

performance surface gradually converges to the original nonconvex form. Both of

these methods use the MSE cost function.

We also propose annealing the kernel size in entropy optimization. Entropy

can be estimated directly from data using the Parzen estimation if Renyi's entropy

definitions are used [58, 59]. It is possible to also derive a gradient-based algorithm to







search the minimum of this new cost function. Recently, Erdogmus [4, 5] used ITL in

adaptive signal processing. We developed a global optimization algorithm for entropy

minimization by annealing kernel size (similar to the stochastic approximation with

convolution smoothing method for MSE criterion). We showed that this is equivalent

to adding an additive noise source to the theoretical cost function. However the two

methods differ since the kernel function smooths the entropy cost function.
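
To make this concrete, Renyi's quadratic entropy of the error can be estimated in closed form from pairwise sample differences using a Gaussian Parzen window. The following Python sketch is a generic ITL-style estimator written only for illustration; the function name, the Gaussian kernel choice, and the annealing values are assumptions of this example, not code from this work:

    import numpy as np

    def renyi_quadratic_entropy(errors, sigma):
        # H2 = -log V, where the information potential V is the Parzen estimate
        # (1/N^2) * sum_ij G(e_i - e_j) with a Gaussian kernel of size sigma*sqrt(2).
        e = np.asarray(errors, dtype=float)
        diff = e[:, None] - e[None, :]                  # all pairwise differences
        v = np.exp(-diff**2 / (4 * sigma**2)).sum()
        v /= len(e)**2 * np.sqrt(4 * np.pi * sigma**2)  # normalize the kernel sum
        return -np.log(v)

    # Annealing the kernel size: start large (smooth cost), then shrink it.
    errors = np.random.default_rng(0).standard_normal(200)
    for sigma in (2.0, 1.0, 0.5, 0.1):
        print(sigma, renyi_quadratic_entropy(errors, sigma))

A large sigma smooths the estimated cost surface, which is exactly the property exploited by the annealing procedure described above.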

1.3 Outline

In Chapter 2, the basic idea of an adaptive filter and adaptive algorithm is

reviewed. In particular, we review the LMS algorithm for adaptive IIR filtering, which

is the basic form of our proposed algorithms. Since we focus on global optimization

algorithms for adaptive IIR filtering, some important properties on global optimization

for system identification are reviewed. The system identification framework with Kautz

filters is also presented.

In Chapter 3, we introduce the stochastic approximation with convolution

smoothing (SAS) technique and apply it to adaptive IIR filtering. Similar to the

GLMS algorithm by Srinivasan [56], we derive the LMS-SAS algorithm. The global

optimization behavior of the LMS-SAS algorithm has been analyzed by evaluating the

transition probability density of escaping out from a steady state point for the scalar

case. Because of the noisy gradient estimate, the behavior of the NLMS algorithm with

decreasing step size is shown to be similar to that of the LMS-SAS algorithm from a

global optimization perspective. The global search capability of LMS-SAS and NLMS

algorithms are then compared.

In Chapter 4, the entropy criterion is proposed as an alternative to MSE for

adaptive IIR filtering. The definition of entropy (mutual information) is first reviewed.

By using the Parzen window estimator for the error pdf, the steepest descent algorithm

(ITL algorithm) with the entropy criterion for the system identification framework of

adaptive filtering is derived. The weak global optimal convergence of ITL algorithm in

simulation examples is given. Finally, we compare the performance of the ITL algorithm

with that of LMS-SAS and NLMS algorithms in terms of global optimization capability.






In Chapter 5, the associated LMS, LMS-SAS, NLMS, and ITL algorithms for

the Kautz filter are first derived. Similarly, we compare the global optimization

performance of proposed global optimization algorithms for the Kautz filters. Finally,

the associated algorithms are applied to nonlinear equalization. In Chapter 6, we

conclude the dissertation and outline future work.













CHAPTER 2
ADAPTIVE IIR FILTERING

2.1 Introduction

Figure 2-1 shows the basic block diagram of an adaptive filter. At each iteration,

a sampled input signal x(n) is passed through an adaptive filter to generate the output

signal y(n). This output signal is compared to a desired signal d(n) to generate the

error signal ε(n). Finally, an adaptive algorithm uses this error signal to adjust the

adaptive filter coefficients in order to minimize a given objective function. The most

widely used filter is the finite impulse response (FIR) filter structure.

In recent years, active research has attempted to extend the FIR filter into the

more general infinite impulse response configuration that offers potential performance

improvements and less computational cost than equivalent FIR filters [60]. However,

some practical problems still exist in the use of adaptive IIR filters. As the error

surface of IIR filters is usually multimodal with respect to the filter coefficients, learning

algorithms for IIR filters can easily be trapped at local minima and be unable to

converge to the global optimum [1]. One of the common learning algorithms for

adaptive filtering is the gradient-based algorithm, for instance the least-mean-square

algorithm (LMS) [61]. The algorithm aims to find the minimum point of the error


Figure 2-1: Adaptive filter model.







surface by moving in the direction of the negative gradient. Like most of the steepest

descent algorithms, it may lead the filter to a local minimum when the error surface

is multimodal. In addition, the convergence behavior of the LMS algorithm depends

heavily on the choices of step size and the initial values of filter coefficients.

Learning algorithms such as maximum likelihood [62], LMS [1], least-square [2],

and recursive-least-square [2] are well established for the adaptation of FIR filters.

In particular, the gradient-descent algorithms (such as LMS) are very suitable for

adaptive FIR filtering, if the error surface is unimodal and quadratic. Generally, LMS

is the best choice for many applications of adaptive signal processing [1], because of its

simplicity, its ease of computation, and the fact that it does not require off-line gradient

estimations of data. It is also possible to extend the LMS algorithm to adaptive IIR

filters; however, it may face the local minimum problem when the error surface is

multimodal. The LMS algorithm adapts the weight (filter coefficients) vector along the

negative gradient of the mean-square-error performance surface until the minimum of

the MSE is reached. In the following, we will present the formulation of the IIR-LMS

algorithm. The IIR filter kernel in direct form is constructed as
y(n) = Σ_{i=0}^{L} a_i x(n−i) + Σ_{j=1}^{M} b_j y(n−j)    (2-1)

Let the weight vector θ and the input vector X(n) be defined as

θ = [a_0, …, a_L, b_1, …, b_M]ᵀ    (2-2)

X(n) = [x(n), …, x(n−L), y(n−1), …, y(n−M)]ᵀ    (2-3)

and d(n) is the desired output. The output is

y(n) = θᵀ(n) X(n)    (2-4)

We can write the error ε as

ε(n) = d(n) − y(n) = d(n) − θᵀ(n) X(n)    (2-5)







So the gradient is

∇θ ε²(n) = 2ε(n) [∂ε(n)/∂a_0, …, ∂ε(n)/∂a_L, ∂ε(n)/∂b_1, …, ∂ε(n)/∂b_M]ᵀ    (2-6)
         = −2ε(n) [∂y(n)/∂a_0, …, ∂y(n)/∂a_L, ∂y(n)/∂b_1, …, ∂y(n)/∂b_M]ᵀ    (2-7)

Let us define

∇θ y(n) = [∂y(n)/∂a_0, …, ∂y(n)/∂a_L, ∂y(n)/∂b_1, …, ∂y(n)/∂b_M]ᵀ    (2-8)

From Equation (2-1), we obtain

∇θ y(n) = [x(n), …, x(n−L), y(n−1), …, y(n−M)]ᵀ + Σ_{j=1}^{M} b_j ∇θ y(n−j)    (2-10)
        = X(n) + Σ_{j=1}^{M} b_j ∇θ y(n−j)    (2-11)

where the gradient estimate is given by

∇θ ε²(n) = −2ε(n) ∇θ y(n)    (2-12)

Based on the gradient descent algorithm, the coefficient update is

θ(n+1) = θ(n) − μ ∇θ ε²(n)    (2-13)

Therefore, in IIR-LMS, the coefficient update becomes

θ(n+1) = θ(n) + 2μ [d(n) − y(n)] ∇θ y(n)    (2-14)

where 2μ is a constant step size.

For each value of n, Equation (2-4) produces the filter output, and Equations (2-10) and (2-14) are then used to compute the next set of coefficients θ(n+1). Regarding the computational complexity, the IIR-LMS algorithm as described in Equations (2-4) through (2-14) requires approximately (L + M)(L + 2) calculations for each iteration, while the FIR-LMS requires only 2N calculations for each iteration (with filter length = N).

























Figure 2-2: Block diagram of the system identification configuration.


Being one of the gradient-descent algorithms, the LMS algorithm may lead the

filter to a local minimum when the error surface is multimodal, and the performance of the

LMS algorithm will depend heavily on the initial choices of step size and weight vector.
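
To make the update concrete, the sketch below implements Equations (2-1), (2-10), and (2-14) for the direct-form IIR filter. It is a minimal illustration under our own naming and buffering choices, not code from the dissertation:

    import numpy as np

    def iir_lms(x, d, L, M, mu):
        # Direct-form IIR-LMS: output Eq. (2-4), gradient recursion Eq. (2-10),
        # coefficient update Eq. (2-14).
        a = np.zeros(L + 1)                      # feedforward coefficients a_0..a_L
        b = np.zeros(M)                          # feedback coefficients b_1..b_M
        y = np.zeros(len(x))
        grad_hist = np.zeros((M, L + 1 + M))     # past gradients for Eq. (2-10)
        for n in range(len(x)):
            x_taps = np.array([x[n - i] if n >= i else 0.0 for i in range(L + 1)])
            y_taps = np.array([y[n - j] if n >= j else 0.0 for j in range(1, M + 1)])
            phi = np.concatenate([x_taps, y_taps])         # X(n), Eq. (2-3)
            y[n] = np.concatenate([a, b]) @ phi            # Eq. (2-4)
            grad = phi + b @ grad_hist                     # Eq. (2-10)
            e = d[n] - y[n]
            a += 2 * mu * e * grad[:L + 1]                 # Eq. (2-14)
            b += 2 * mu * e * grad[L + 1:]
            if M > 0:
                grad_hist = np.vstack([grad, grad_hist[:-1]])  # shift gradient memory
        return a, b, y

Note that the cost per step grows with both L and M, consistent with the (L + M)(L + 2) count quoted above.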

Stability check. Jury's stability test [63] was used in this thesis. This stability

test ensures that all roots lie inside the unit circle. Since the test does not reveal which

poles are unstable, the polynomial must be factored to obtain this information. If the

polynomial order is larger than 2 (M > 2), the test becomes computationally expensive.

If the factorization were done, any unstable set of weights could easily be projected back into the unit

circle. The difficulty of the stability check is thus polynomial factorization.
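
In a quick experiment, the same yes/no decision, and the projection of unstable weights back inside the unit circle, can be obtained by factoring the denominator numerically instead of running Jury's table. The helpers below are hypothetical utilities written for illustration, not the procedure used in the dissertation:

    import numpy as np

    def is_stable(b_coeffs):
        # Poles of 1/(1 - sum_j b_j z^-j): roots of z^M - b_1 z^(M-1) - ... - b_M.
        poly = np.concatenate([[1.0], -np.asarray(b_coeffs, dtype=float)])
        return bool(np.all(np.abs(np.roots(poly)) < 1.0))

    def project_stable(b_coeffs, radius=0.99):
        # Shrink any pole with magnitude >= 1 back to the given radius.
        poly = np.concatenate([[1.0], -np.asarray(b_coeffs, dtype=float)])
        roots = np.roots(poly)
        roots = np.where(np.abs(roots) >= 1.0, radius * roots / np.abs(roots), roots)
        return -np.poly(roots)[1:].real          # recover b_1..b_M from the new poly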

To simplify the stability check, one may use the cascade of first- or second-order

sections instead of the canonical direct form. In particular, the stability of the Kautz

filter, a structure of cascades of second-order sections with complex poles, is easily

checked.

2.2 System Identification with the Adaptive IIR Filter

In the system identification configuration, the adaptive algorithm adapts the

coefficients of the filter such that the adaptive filter matches the unknown system

as closely as possible. Figure 2-2 is a general block diagram of the adaptive system







identification configuration, where the unknown system is described as

y(n) = [B(z⁻¹)/A(z⁻¹)] x(n) + v(n)    (2-15)

where A(z⁻¹) = 1 − Σ_{i=1}^{n_a} a_i z⁻ⁱ and B(z⁻¹) = Σ_{j=0}^{n_b} b_j z⁻ʲ are polynomials, and x(n) and v(n) are the input signal and the perturbation noise, respectively. The adaptive filter is described as

ŷ(n) = [B̂(z⁻¹)/Â(z⁻¹)] x(n)    (2-16)

where Â(z⁻¹) = 1 − Σ_{i=1}^{n̂_a} â_i z⁻ⁱ and B̂(z⁻¹) = Σ_{j=0}^{n̂_b} b̂_j z⁻ʲ. The issues in system identification with adaptive filters are usually divided into the following:
* Adaptive filter order:
- insufficient order: n* < 0;
- strictly sufficient order: n* = 0;
- more than sufficient order: n* > 0;
where n* = min[(n̂_a − n_a), (n̂_b − n_b)]. In many cases, the last two cases are grouped
in one class, called sufficient order, where n* ≥ 0.

* Identification type
-without additional noise;

-with additional noise correlated with the input signal;

-with additional noise uncorrelated with the input signal;
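
As a concrete instance of Equations (2-15) and (2-16), the sketch below synthesizes data from a small unknown system and checks the output error of a strictly sufficient order model; the specific coefficients and noise level are invented for this example:

    import numpy as np
    from scipy.signal import lfilter

    rng = np.random.default_rng(1)
    x = rng.standard_normal(5000)                 # white input signal
    v = 0.01 * rng.standard_normal(5000)          # uncorrelated perturbation noise

    # Unknown system y = [B/A] x + v with A = 1 - 1.2 z^-1 + 0.6 z^-2, B = 1 + 0.5 z^-1.
    y = lfilter([1.0, 0.5], [1.0, -1.2, 0.6], x) + v

    def model_output(x, a_hat, b_hat):
        # Adaptive model output y_hat = [B_hat / A_hat] x, Eq. (2-16).
        return lfilter(b_hat, np.concatenate([[1.0], -np.asarray(a_hat)]), x)

    e = y - model_output(x, [1.2, -0.6], [1.0, 0.5])  # strictly sufficient order, n* = 0
    print(np.mean(e**2))                              # close to the noise floor 1e-4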

The basic objective function of the adaptive filter is to adapt the coefficients of the

adaptive filter such that it describes the unknown system in an equivalent form. The

equivalence is usually determined by an objective function W(n) of the input, available

unknown system output, and the adaptive filter output signals. The objective function

W(n) must satisfy the following properties in order to fit the consistent definition:
* Nonnegativity: W(n) ≥ 0.

* Optimality: W(n) = 0 at the optimal solution.

There are many ways to describe an objective function that satisfies the optimality and

nonnegativity properties. The following forms of the objective function are the most

commonly used in deriving the adaptive algorithm:







* Mean square error (MSE): W[ε(n)] = E[ε²(n)].
* Least squares (LS): W[ε(n)] = (1/(n+1)) Σ_{i=0}^{n} ε²(n−i).
* Instantaneous square error (ISV): W[ε(n)] = ε²(n).

In a strict sense, MSE is a theoretical value that is not easily estimated. In practice,
it can be approximated by the other two objective functions. In general, ISV is easily
implemented, but it is heavily affected by perturbation noise. Later we present the
entropy of the error as another objective function, but first we must discuss MSE.
The adaptive algorithm attempts to minimize the mean square value of the output
error signal, where the output error is given by the difference between the unknown
system and the adaptive filter output signals. That is,

ε(n) = [B(z⁻¹)/A(z⁻¹) − B̂(z⁻¹)/Â(z⁻¹)] x(n) + v(n)    (2-17)

The gradient of the objective function estimate with respect to the adaptive filter
coefficients is given as

∇[ε²(n)] = 2ε(n) ∇[ε(n)] = −2ε(n) ∇[ŷ(n)]    (2-18)

with

∇θ[ŷ(n)] = φ(n) + Σ_{k=1}^{n̂_a} â_k ∇θ[ŷ(n−k)]    (2-19)

where θ is the adaptive filter coefficient vector.
This equation requires a relatively large memory allocation to store data. In
practice, a small step approximation that considers the adaptive filter coefficients
slowly varying can overcome this problem [64]. Therefore, by using the small step
approximation, the adaptive algorithm is described as

θ(n+1) = θ(n) + μ ε(n) φ(n)    (2-20)

where φ(n) = [ŷ(n−i), x(n−j)]ᵀ for i = 1, …, n̂_a, j = 0, …, n̂_b, and μ is a small step
size that satisfies the following property. The adaptive algorithm is characterized by the
following properties:







Property 1 [65] The Euclidean square-norm of the error parameter vector, defined by ‖θ̃(n)‖ = ‖θ(n) − θ₀‖, is convergent if μ satisfies

0 < μ < 2/(φᵀ(n)φ(n))    (2-21)

Property 2 [31, 66, 67] The stationary points of the MSE performance surface are given
by

E{ [ (Â(z⁻¹,n)B(z⁻¹) − A(z⁻¹)B̂(z⁻¹,n)) / (A(z⁻¹)Â(z⁻¹,n)) ] x(n) · [ B̂(z⁻¹,n)/Â²(z⁻¹,n) ] x(n−i) } = 0    (2-22)

E{ [ (Â(z⁻¹,n)B(z⁻¹) − A(z⁻¹)B̂(z⁻¹,n)) / (A(z⁻¹)Â(z⁻¹,n)) ] x(n) · [ 1/Â(z⁻¹,n) ] x(n−j) } = 0    (2-23)

In practice, only the stable stationary points, so-called equilibria, are of interest, and
usually these points are classified as

Degenerated points: The degenerated points are the equilibrium points where

B̂(z⁻¹, n) = 0 : n̂_b < n̂_a
B̂(z⁻¹, n) = L(z⁻¹) Â(z⁻¹, n) : n̂_b ≥ n̂_a    (2-24)

where L(z⁻¹) = Σ_{k=0}^{n̂_b − n̂_a} l_k z⁻ᵏ.
Nondegenerated points: All the equilibria that are not degenerated points.
The equilibrium points that influence the form of the error performance surface
have the following property.
Property 3 [12] If n* ≥ 0, all global minima of the MSE performance surface are given
by

Â*(z⁻¹) = A(z⁻¹) C(z⁻¹)
B̂*(z⁻¹) = B(z⁻¹) C(z⁻¹)    (2-25)

where C(z⁻¹) = Σ_{k=0}^{n*} c_k z⁻ᵏ. This means that all global minimum solutions include
the polynomials describing the unknown system plus a common factor C(z⁻¹) present in the
numerator and denominator, respectively, of the adaptive filter.







Property 4 [68] If n* ≥ 0, all equilibrium points that satisfy the strictly positive
realness condition

Re[ A(z⁻¹)/Â(z⁻¹) ] > 0,  |z| = 1    (2-26)

are global minima.

Property 5 [68] Let the input signal x(n) be given by x(n) = [F(z⁻¹)/G(z⁻¹)] w(n), where
F(z⁻¹) = Σ_{k=0}^{n_f} f_k z⁻ᵏ and G(z⁻¹) = 1 − Σ_{k=1}^{n_g} g_k z⁻ᵏ are coprime polynomials and w(n)
is a white noise. Then if

n* ≥ n_f
n̂_b − n_a + 1 ≥ n_g    (2-27)

all equilibrium points are global minima.

This property is actually the most commonly used result for the unimodality of the MSE

performance surface in cases of identification with sufficient order models. It has two

important consequences:

If n̂_a = n_a = 1 and n̂_b ≥ n_b ≥ 1, then there is only one equilibrium point, which is

the global minimum.

If x(n) is white noise (n_f = n_g = 0), and the orders of the adaptive filter are

strictly sufficient (n̂_a = n_a and n̂_b = n_b, and n̂_b − n_a + 1 ≥ 0), then there is only

one equilibrium point, which is the global minimum.

Nayeri [69] further investigated this property, and he obtained a less restrictive

sufficient condition to guarantee unimodality of the adaptive algorithm when the input

signal is a white noise and the order of the adaptive filter exactly matches the unknown

system. The result is given as

Property 6 [69] If x(n) is a white noise sequence (n_f = n_g = 0) and the orders of the

adaptive filter are strictly sufficient (n̂_a = n_a and n̂_b = n_b, and n̂_b − n_a + 2 ≥ 0), then

there is only one equilibrium point, which is the global minimum.

There is another important property which is







Property 7 [67] All degenerated equilibrium points are saddle points, and their existence
implies multimodality (existence of stable local minima) of the performance surface if
either n̂_a > n_b = 0 or n̂_a = 1.

This property is also valid for the insufficient order cases.

In 1981, Stearns [70] conjectured that if n* > 0 and the input signal x(n) is white

noise, then the performance surface defined by MSE objective function is unimodal.

This conjecture remained valid until Fan offered numerical counterexamples for it in 1989

[71].

The most important characteristic of IIR adaptation is the possible existence

of multiple local minima, which can affect the overall convergence. Moreover, the global

minimum solution is unbiased even in the presence of zero-mean perturbation noise in the

unknown system output signal. Another important characteristic of IIR adaptation

is the requirement for stability checking during the adaptive process. This stability

checking requirement can be simplified by choosing an appropriate adaptive filter

realization.

2.3 System Identification with Kautz Filter

One of the major drawbacks in adaptive IIR filtering is the stability issue. Since

the filter parameters are changing during adaptation, a practical approach is to use

cascades of first and second order ARMA sections, where stability can still be checked

simply and locally. A principled way to achieve the expansion of general ARMA

systems is through orthogonal filter structures [72]. Here we use Kautz filters, because

they are very versatile (cascades of second order sections with complex poles but

still with a reasonable number of parameters). The Kautz filter, which can be traced

back to the original work of Kautz [73], is based on the discrete time Kautz basis

functions. The Kautz filter is a generalized feedforward filter which produces an output

y(n) = φ(n, ζ)ᵀθ, where θ is the set of weights and the entries of φ(n, ζ) are the outputs of
first order IIR filters with a complex pole at ζ [74]. Stability of the Kautz filter is easily
guaranteed if the pole is located within the unit circle (that is, |ζ| < 1). Although the







adaptation is linear in the weights θ, it is nonlinear in the poles, yielding a nonconvex optimization
problem with local minima.
The continuous time Kautz basis functions are the Laplace transforms of continuous
time orthonormal exponential functions, which can be traced back to the original works
of Kautz [73]. The discrete time Kautz basis functions are the Z-transforms of discrete
time orthonormal exponential functions [74]. The discrete time Kautz basis functions
are described as

Φ_{2k}(z, ζ_k) = q_{0,k} (z⁻¹ − 1)/((1 − ζ_k z⁻¹)(1 − ζ_k* z⁻¹)) · Π_{l=0}^{k−1} ((z⁻¹ − ζ_l)(z⁻¹ − ζ_l*))/((1 − ζ_l z⁻¹)(1 − ζ_l* z⁻¹))    (2-28)

Φ_{2k+1}(z, ζ_k) = q_{1,k} (z⁻¹ + 1)/((1 − ζ_k z⁻¹)(1 − ζ_k* z⁻¹)) · Π_{l=0}^{k−1} ((z⁻¹ − ζ_l)(z⁻¹ − ζ_l*))/((1 − ζ_l z⁻¹)(1 − ζ_l* z⁻¹))    (2-29)

where ζ_k = α_k + jβ_k, (ζ_k, ζ_k*) are the kth pair of complex conjugate poles, |ζ_k| < 1 for stability, and q_{0,k}, q_{1,k} are the normalization gains given below. The orthonormality of the discrete time Kautz basis functions is represented as

(1/2πj) ∮ Φ_p(z, ζ_p) Φ_q(1/z, ζ_q) dz/z = δ_{p,q}    (2-30)

where the integration contour is the unit circle and the integrand is analytic in the exterior of the circle.

All pairs of complex conjugate poles can be integrated in real second order sections
to reduce the degrees of freedom. The resulting basis functions can be described as
discrete-time 2-pole Kautz basis functions. The discrete-time Kautz basis functions can
be simplified as in Figure 2-3, where

y(n) = φ(n)ᵀ θ    (2-31)

φ(n) = [φ_0(n), …, φ_{2K+1}(n)]ᵀ    (2-32)

K_{2k}(z, ζ) = K_{2k−2}(z, ζ) A(z, ζ)    (2-33)

K_{2k+1}(z, ζ) = K_{2k−1}(z, ζ) A(z, ζ)    (2-34)

Figure 2-3: Kautz filter model.

A(z, ζ) = ((z⁻¹ − ζ)(z⁻¹ − ζ*)) / ((1 − ζz⁻¹)(1 − ζ*z⁻¹))    (2-35)

K_0(z, ζ) = q_0 (z⁻¹ − 1) / ((1 − ζz⁻¹)(1 − ζ*z⁻¹))    (2-36)

K_1(z, ζ) = q_1 (z⁻¹ + 1) / ((1 − ζz⁻¹)(1 − ζ*z⁻¹))    (2-37)

q_0 = √((1 − ζζ*)(1 + ζζ* + 2α)) / √2    (2-38)

q_1 = √((1 − ζζ*)(1 + ζζ* − 2α)) / √2    (2-39)

Here ζ is a complex conjugate pole (that is, ζ = α + jβ).
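As an illustration of the recursion in Equations (2-33)-(2-35), the sketch below generates the Kautz section outputs φ_k(n) for a single repeated complex pole by filtering the input through K_0 and K_1 and then through the shared all-pass section. The use of scipy.signal.lfilter, and the gain formulas of Equations (2-38)-(2-39) as reconstructed above, are assumptions of this example:

    import numpy as np
    from scipy.signal import lfilter

    def kautz_outputs(x, zeta, num_pairs):
        # Outputs of 2*num_pairs Kautz sections sharing one complex pole zeta.
        den = np.real(np.poly([zeta, np.conj(zeta)]))   # (1 - zeta z^-1)(1 - zeta* z^-1)
        rho2, s = abs(zeta) ** 2, 2 * zeta.real
        q0 = np.sqrt((1 - rho2) * (1 + rho2 + s) / 2)   # gain of K_0, Eq. (2-38)
        q1 = np.sqrt((1 - rho2) * (1 + rho2 - s) / 2)   # gain of K_1, Eq. (2-39)
        ap_num = den[::-1]                              # all-pass numerator, Eq. (2-35)
        u0 = lfilter(q0 * np.array([-1.0, 1.0]), den, x)   # K_0: q0 (z^-1 - 1)/D(z)
        u1 = lfilter(q1 * np.array([1.0, 1.0]), den, x)    # K_1: q1 (z^-1 + 1)/D(z)
        outs = []
        for _ in range(num_pairs):
            outs += [u0, u1]
            u0 = lfilter(ap_num, den, u0)               # Eq. (2-33)
            u1 = lfilter(ap_num, den, u1)               # Eq. (2-34)
        return np.stack(outs)                           # rows of phi(n); y = theta @ phi

The filter output is then the linear combination y(n) = θᵀφ(n) of Equation (2-31), so only the pole ζ enters the optimization nonlinearly.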







CHAPTER 3
STOCHASTIC APPROXIMATION WITH CONVOLUTION SMOOTHING

3.1 Introduction

Adaptive filtering has become a major research area in digital signal processing,

communication and control, with many applications, such as adaptive noise cancellation,

echo cancellation, and adaptive equalization and system identification [1, 2]. For

simplicity, finite impulse response (FIR) structures are used for adaptive filtering and

have many mature practical implementations. However, infinite impulse response

structures can reduce computational complexity and increase accuracy. Unfortunately,

IIR filtering has some drawbacks, such as slow convergence, possible convergence to a

bias or unacceptable suboptimal solutions, and the need for stability monitoring. The

i1 i i"r issue is that the objective function of the IIR filtering with respect to the filter

coefficients is usually multimodal. The traditional gradient search method may converge

to a local minimum depending on its initial conditions. The other unresolved problems

of adaptive IIR filtering are discussed by Johnson [11] and Regalia [15].

Several methods have been proposed for the global optimization of the adaptive

IIR filtering [75, 45, 76]. Srinivasan et al. [56] used stochastic approximation with

convolution smoothing (SAS) in the global optimization algorithm [3, 76, 77] for

adaptive IIR filtering. They showed that the smoothing behavior can be achieved by

appending a variable perturbing noise source to the error signal. Here, we modify this

perturbing noise by multiplying it with its cost function. The modified algorithm,

which is referred to as the LMS-SAS algorithm in this dissertation, results in better

performance in global optimization than the original algorithm by Srinivasan et al.

We have also analyzed the global optimization algorithm behavior by looking at their

transition probability density of escaping out from a steady state point.







Since we use the instantaneous (stochastic) gradient instead of the expected

value of the gradient, error in estimating the gradient naturally occurs. This gradient

estimation error, when properly normalized, can be used to act as the perturbing noise.

Consequently, another approach in global IIR filter optimization is the normalized LMS

(NLMS) algorithm. The behavior of the NLMS algorithm with decreasing step size is

similar to that of the LMS-SAS algorithm from a global optimization perspective.
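
The normalization referred to here scales the stochastic-gradient step by the squared norm of the gradient estimate. A generic one-step sketch of such an update follows; the exact NLMS variant used in this thesis is derived later in the chapter, so the function below is only an illustrative assumption:

    import numpy as np

    def nlms_step(theta, grad_y, err, mu, eps=1e-8):
        # Effective step size mu/(eps + ||grad||^2): large when the gradient
        # estimate is small, small when it is large.
        return theta + mu * err * grad_y / (eps + grad_y @ grad_y)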

3.2 Convolution Function Smoothing

According to Styblinski [3], a multi-optimal function f(θ) ∈ R¹, θ ∈ Rⁿ, can be
represented as a superposition of a convex function (i.e., having just one minimum)
and other multi-optimal functions that add some "noise" to the convex function. The
objective of convolution smoothing can be viewed as "filtering out" the noise and
performing minimization on the "smoothed" convex function (or on a family of these
functions), in order to reach the global optimum. Since the optimum of the smoothed
convex function does not, in general, coincide with the global function minimum, a
sequence of optimization steps is required, with the amount of smoothing eventually
reduced to zero in the neighborhood of the global optimum. The smoothing process
is performed by averaging f(θ) over some region of the parameter space Rⁿ using the
proper weighting (or smoothing) function h(η) defined below. Formally, let us introduce
a vector of random perturbations η ∈ Rⁿ, and add η to θ, thus creating the convolution
function

f̂(θ, β) = ∫_{Rⁿ} h(η, β) f(θ − η) dη = ∫_{Rⁿ} h(θ − η, β) f(η) dη    (3-1)

Hence,

f̂(θ, β) = E_η[f(θ − η)]    (3-2)

where f̂(θ, β) is the smoothed approximation to the original multi-optimal function
f(θ), and the kernel function h(η, β) is the pdf used to sample η. Note that f̂(θ, β) can
be regarded as an averaged version of f(θ) weighted by h(η, β).

The parameter β controls the dispersion of h, i.e., the degree of f(θ) smoothing
(e.g., β can control the standard deviation of the components of η). E_η[f(θ − η)] is the expectation
with respect to the random variable η. Therefore, an unbiased estimator of f̂(θ, β) is the
average

f̂(θ, β) ≈ (1/N) Σ_{i=1}^{N} f(θ − βη_i)    (3-3)

where η_i is sampled with the pdf h(η, β).

The kernel function h(η, β) should have the following properties:
* h(η, β) is piecewise differentiable with respect to η.
* lim_{β→0} h(η, β) = δ(η) (Dirac's delta functional).
* h(η, β) is a pdf.

Under these conditions, lim_{β→0} f̂(θ, β) = ∫_{Rⁿ} δ(η) f(θ − η) dη = f(θ − 0) = f(θ).

Numerous pdf's satisfy the above conditions, e.g., the Gaussian, uniform, or Cauchy

pdf's. Let us consider the function f(x) = x⁴ − 16x² + 5x, which is continuous

and differentiable, and has two separated minima. Figure 3-1 shows the smoothed

function, which is the convolution between f(x) and a Gaussian pdf.
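To make the smoothing concrete, the following minimal Python sketch approximates the smoothed functional of Equations (3-2)-(3-3) for this test function by a Monte Carlo average over Gaussian perturbations; the grid, sample count, and β values are illustrative choices, not values from the text.

```python
# Convolution smoothing of f(x) = x^4 - 16x^2 + 5x (a sketch of Eq. (3-3)):
# the smoothed value at each x is the average of f over Gaussian perturbations
# eta ~ N(0, beta^2), so larger beta flattens the local structure.
import numpy as np

def f(x):
    return x**4 - 16.0 * x**2 + 5.0 * x

def smoothed_f(x, beta, n_samples=20000, seed=0):
    eta = np.random.default_rng(seed).normal(0.0, beta, size=n_samples)
    return np.array([np.mean(f(xi - eta)) for xi in np.atleast_1d(x)])

x = np.linspace(-5.0, 5.0, 201)
for beta in (0.0, 1.0, 2.0, 3.0):
    fx = f(x) if beta == 0.0 else smoothed_f(x, beta)
    print(f"beta = {beta}: minimum located near x = {x[np.argmin(fx)]:+.2f}")
```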

Observations. Smoothing is able to eliminate the local minima of \hat{f}(\theta, \beta) if β is

sufficiently large. When β → 0, then \hat{f}(\theta, \beta) → f(θ): this should actually happen at the

end of optimization to provide convergence to the true function minimum. Our objective

now is to solve the optimization problem of minimizing the smoothed functional \hat{f}(\theta, \beta)

as β → 0. In general, the modified optimization can be viewed as min_{θ∈Rⁿ} \hat{f}(\theta, \beta) as β → 0.
Similarity with simulated annealing algorithms. Development of the simulated

annealing method was motivated by the behavior of mechanical systems with a very

large number of degrees of freedom. According to the general principles of physics,

any such system will, given the necessary freedom, tend toward the state of minimum

energy. Therefore, a mathematical model of the behavior of such a system will contain

a method for minimizing a certain function, namely the total energy of the system.

Simulated annealing is a convenient way to find the global minimum of a function that

has many minima. The method is a biased random walk that samples the objective

function in the space of the independent variables. It is executed in the following

manner. Starting at a randomly chosen initial point, the corresponding value of the

objective function is calculated.






























Figure 3-1: Smoothed function f(x, β) using a Gaussian pdf.







Next, a random point is chosen on the surface of a unit n-dimensional sphere

around the current point, and the corresponding new value of the objective

function is calculated. If the step is beneficial, i.e., the new value of the objective

function is smaller than the previous one, the new point is unconditionally accepted. If

the step is detrimental in terms of the cost, i.e., the new value of the objective function

is larger than the previous one, the new point is accepted according to a temperature-

associated function. This temperature-associated function has the following property:

the lower the temperature, the smaller the probability of transition to a higher energy

state. Therefore, the simulated annealing method is often viewed in terms of the energy

of a "particle" at any given temperature. Lowering the temperature also reduces the

particle energy. Let us consider the analogous interpretation for convolution function

smoothing. Perturbing θ can be viewed as adding some noise energy to the particle. The

larger the β, the larger the energy is. Thus, reducing β in convolution function smoothing

is similar to lowering the temperature in the simulated annealing algorithm.

3.3 Derivation of the Gradient Estimate

When the SAS technique is applied to the IIR-LMS algorithm, we require a

gradient operation on the functional \hat{f}(\theta, \beta) (that is, \nabla_\theta \hat{f}(\theta, \beta)). Under the assumption

that the gradient of the functional \hat{f}(\theta, \beta) is known, the unbiased single-sided gradient

estimate of the smoothed functional \hat{f}(\theta, \beta) can be represented as

\nabla_\theta \hat{f}(\theta, \beta) \approx \frac{1}{N} \sum_{i=1}^{N} \nabla_\theta f(\theta - \beta \eta_i)    (3-4)

where the expected value is substituted by the empirical average. Likewise, the unbiased

double-sided gradient estimate of the smoothed functional \hat{f}(\theta, \beta) can be represented as

\nabla_\theta \hat{f}(\theta, \beta) \approx \frac{1}{2N} \sum_{i=1}^{N} [\nabla_\theta f(\theta + \beta \eta_i) + \nabla_\theta f(\theta - \beta \eta_i)]    (3-5)

In order to implement either Equation (3-4) or (3-5), we would need to evaluate the

gradient at many points in the neighborhood of the operating point θ, yielding

effectively an off-line iterative global optimization algorithm. We will combine the







concept of the SAS gradient estimate with the LMS optimization procedure to develop

an on-line iterative global optimization algorithm.

The key to implementing a practical algorithm for adaptive IIR filters is to develop

an on-line gradient estimate \nabla_\theta \epsilon(\theta), where ε(θ) is the error between the desired signal

and the output of the adaptive IIR filter. Here we use the SAS-derived single-sided

gradient estimate together with the LMS algorithm, where the gradient estimate is

\nabla_\theta \hat{\epsilon}(\theta, \beta) \approx \frac{1}{N} \sum_{i=1}^{N} \nabla_\theta \epsilon(\theta - \beta \eta_i)    (3-6)
A major characteristic of the LMS algorithm is its simplicity. We hold to this attribute

by setting N = 1 in Equation (3-6) and substituting the neighborhood averaging by the

sequential presentation of data, as done in the LMS algorithm. Hence, we obtain the

one-sample gradient estimate as

\nabla_\theta \hat{\epsilon}(\theta, \beta) \approx \nabla_\theta \epsilon(\theta - \beta \eta)    (3-7)

This equation is iterated for each input sample. Theoretically, Equation (3-7) shows

that the on-line version of the SAS is given by the gradient value at a randomly

selected neighborhood of the present operating point. The variance of the neighborhood

is controlled by β, which decreases along with the adaptation procedure. Implementing

Equation (3-7) requires two filters: one for computing the input-output relationship

and the other for computing the gradient estimate at the perturbed point (θ − βη).

For large-order systems, this requirement is impractical. We investigate the following

simplification, which involves the representation of the gradient estimate at (θ − βη) as a

Taylor series around the operating point. That is,

\nabla_\theta \epsilon(\theta - \beta\eta) = \epsilon'(\theta) - \beta\eta\, \epsilon''(\theta) + \frac{(\beta\eta)^2}{2!}\, \epsilon'''(\theta) - \cdots    (3-8)

Under this equation, we can use the same filter to compute both the input-output

relationship and the gradient estimate. As a first approximation, we only keep the

first two terms and assume a diagonal Hessian. This results in the following gradient

estimate

\nabla_\theta \epsilon(\theta - \beta\eta) \approx \epsilon'(\theta) - \beta\eta    (3-9)

This extreme approximation assumes that the second derivative of the gradient vector

is independent of θ, so that its variance is constant throughout the adaptation process.

The second term βη on the right hand side of the above equation can be interpreted

as a perturbing noise, which is the important term for avoiding convergence to a local

minimum.

Recall that the GLMS algorithm is

\theta(n+1) = \theta(n) - \mu(n)\,\epsilon(n)\,\nabla_\theta \epsilon(n, \theta) - \beta(n)\,\eta    (3-10)

where the appended perturbation noise source is β(n)η.

3.4 LMS-SAS Algorithm

Srinivasan used Equation (3-9) to estimate the gradient in the Global LMS (GLMS)

algorithm of Equation (3-10) [56]. Similar to the GLMS algorithm, we now derive the

novel LMS-SAS algorithm. Adaptive IIR filtering based on the gradient search

essentially minimizes the mean-square difference between a desired sequence d(n)

and the output of the adaptive filter y(n). The development of the GLMS and LMS-SAS

algorithms involves evaluating the MSE objective function. The MSE objective function

can be described as

\xi(\theta) = \frac{1}{2} E\{\epsilon^2(n)\} = \frac{1}{2} E\{[d(n) - y(n)]^2\}    (3-11)

where E is the statistical expectation. The output signal of the adaptive IIR filter,

represented as a direct-form realization of a linear system, is

y(n) = a_0 x(n) + \cdots + a_{N-1} x(n - N + 1) + b_1 y(n-1) + \cdots + b_{M-1} y(n - M + 1)    (3-12)

which can be rewritten as

y(n) = \theta^T(n)\, \phi(n)    (3-13)

where θ(n) is the parameter vector and φ(n) is the input vector:

\theta(n) = [a_0(n), \ldots, a_{N-1}(n), b_1(n), \ldots, b_{M-1}(n)]^T    (3-14)

\phi(n) = [x(n), \ldots, x(n - N + 1), y(n-1), \ldots, y(n - M + 1)]^T    (3-15)

The MSE objective function is

\xi(n, \theta) = \frac{1}{2} E\{[d(n) - \theta^T(n)\phi(n)]^2\}    (3-16)

Now we use the instantaneous value in place of the expectation, E\{\epsilon^2(n)\} \approx \epsilon^2(n), such that

\hat{\xi}(n, \theta) = \frac{1}{2} \epsilon^2(n, \theta) = \frac{1}{2} [d(n) - \theta^T(n)\phi(n)]^2    (3-17)

Considering the LMS algorithm, we must estimate the gradient vector with respect to

the parameters θ:

\nabla_\theta \hat{\xi}(n, \theta) = \frac{1}{2} \nabla_\theta \epsilon^2(n, \theta) = \epsilon(n, \theta)\, \nabla_\theta \epsilon(n, \theta) = -\epsilon(n, \theta)\, \nabla_\theta y(n)    (3-18)

The partial derivative term \partial y(n, \theta) / \partial a_i is evaluated as

\frac{\partial y(n, \theta)}{\partial a_i} = \sum_{k=1}^{M-1} b_k \frac{\partial y(n-k, \theta)}{\partial a_i} + x(n - i)    (3-19)

Similarly, the partial derivative term \partial y(n, \theta) / \partial b_j is evaluated as

\frac{\partial y(n, \theta)}{\partial b_j} = \sum_{k=1}^{M-1} b_k \frac{\partial y(n-k, \theta)}{\partial b_j} + y(n - j)    (3-20)
From Equation (3-9), we obtain

\nabla_\theta \epsilon(n, \theta - \beta\eta) \approx \nabla_\theta \epsilon(n, \theta) - \beta\eta    (3-21)

Using the above equation, we obtain the adaptive steepest-descent algorithm as

\theta(n+1) = \theta(n) - \mu(n)\,\epsilon(n)\,\nabla_\theta \epsilon(n, \theta - \beta\eta)    (3-22)

= \theta(n) - \mu(n)\,\epsilon(n)\,\nabla_\theta \epsilon(n, \theta) + \mu(n)\,\epsilon(n)\,\beta(n)\eta    (3-23)







where the third term μ(n)ε(n)β(n)η on the right hand side is the appended perturbation

noise source, η represents a single additive random source, μ(n) is the step size, which

decreases over the iterations, and ε(n) is the error between the desired output signal and

the output signal of the adaptive IIR filter.

The difference between LMS-SAS and GLMS resides in the form of the appended

perturbation noise source, where we have modified the appended noise source by

multiplying it with the error. This modification brings the error into the noise term,

which is in principle a better approximation to the Taylor series expansion in Equation

(3-8) than Equation (3-9). We can therefore foresee better results.
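As an illustration, the following minimal Python sketch implements the LMS-SAS update of Equation (3-23) for the reduced-order identification problem used later in Section 3.8; the unknown system is that of Equation (3-82), while the step-size and β schedules, the initialization ranges, and the pole clipping are illustrative assumptions.

```python
# LMS-SAS sketch for H_a(z) = b / (1 - a z^-1) identifying the Example I system.
import numpy as np

rng = np.random.default_rng(1)
n_iter = 20000
x = rng.normal(size=n_iter)                       # zero-mean, unit-variance excitation

d = np.zeros(n_iter)                              # unknown-system output, Eq. (3-82)
for n in range(2, n_iter):
    d[n] = 0.05 * x[n] - 0.4 * x[n - 1] + 1.1314 * d[n - 1] - 0.25 * d[n - 2]

a, b = rng.uniform(-0.9, 0.9), rng.uniform(-0.5, 0.5)
y_prev, dy_da, dy_db = 0.0, 0.0, 0.0
for n in range(1, n_iter):
    y = b * x[n] + a * y_prev
    dy_da = y_prev + a * dy_da                    # recursion of Equation (3-19)
    dy_db = x[n] + a * dy_db                      # recursion of Equation (3-20)
    err = d[n] - y
    mu = 0.1 * (1.0 - n / n_iter)                 # decreasing step size mu(n)
    beta = 0.1 * (1.0 - n / n_iter)               # annealed perturbation scale beta(n)
    eta = rng.normal()                            # single additive random source
    # Equation (3-23): gradient step plus error-scaled perturbation mu*eps*beta*eta
    a += mu * err * dy_da + mu * err * beta * eta
    b += mu * err * dy_db + mu * err * beta * eta
    a = float(np.clip(a, -0.99, 0.99))            # keep the pole inside the unit circle
    y_prev = y
print(f"LMS-SAS estimate: pole a = {a:+.3f}, zero b = {b:+.3f}")
```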

3.5 Analysis of Weak Convergence to the Global Optimum for LMS-SAS

In this section, we obtain the transition probability of escaping out of a local

minimum by solving a pair of partial differential equations, which are called the

Fokker-Planck (diffusion) equations. We follow the lines of Wong [78]. Here we

can write the LMS-SAS algorithm as Ito's integral:

\theta_t = \theta_a + \int_a^t m(\theta_s, s)\, ds + \int_a^t \sigma(\theta_s, s)\, dW_s    (3-24)

where

m(\theta_t, t) = -\mu(t)\,\epsilon(\theta_t, t)\,\nabla_\theta \epsilon(\theta_t, t), \qquad \sigma^2(\theta_t, t) = \mu(t)\,\epsilon^2(\theta_t, t)    (3-25)

Let {θ_t, a ≤ t ≤ b} be a Markov process, and denote

P(\theta, t \mid \theta_0, t_0) = p(\theta_t \le \theta \mid \theta_{t_0} = \theta_0)    (3-26)

We call P(θ, t | θ₀, t₀) the transition function of the process.

We first discuss the simple case of a scalar θ and then the more involved case of a

vector θ.

θ is a scalar.

If there is a function p(θ, t | θ₀, t₀) such that

P(\theta, t \mid \theta_0, t_0) = \int_{-\infty}^{\theta} p(x, t \mid \theta_0, t_0)\, dx    (3-27)






then we call p(θ, t | θ₀, t₀) the transition density function. Since {θ_t, a ≤ t ≤ b} is a

Markov process, P(θ, t | θ₀, t₀) satisfies the Chapman-Kolmogorov equation

P(\theta, t \mid \theta_0, t_0) = \int_{-\infty}^{\infty} P(\theta, t \mid z, s)\, dP(z, s \mid \theta_0, t_0)    (3-28)

We now assume the crucial condition on {θ_t, a ≤ t ≤ b} which makes the derivation of

the diffusion equation possible. Define, for a positive ε,

M_k(\theta, t; \varepsilon, \Delta) = \int_{|y-\theta|<\varepsilon} (y - \theta)^k\, dP(y, t+\Delta \mid \theta, t), \qquad k = 0, 1, 2    (3-29)

\bar{M}_3(\theta, t; \varepsilon, \Delta) = \int_{|y-\theta|<\varepsilon} |y - \theta|^3\, dP(y, t+\Delta \mid \theta, t)    (3-30)

We assume that the Markov process {θ_t, a ≤ t ≤ b} satisfies the following conditions:

\frac{1}{\Delta}[1 - M_0(\theta, t; \varepsilon, \Delta)] \xrightarrow{\Delta \downarrow 0} 0    (3-31)

\frac{1}{\Delta} M_1(\theta, t; \varepsilon, \Delta) \xrightarrow{\Delta \downarrow 0} m(\theta, t)    (3-32)

\frac{1}{\Delta} M_2(\theta, t; \varepsilon, \Delta) \xrightarrow{\Delta \downarrow 0} \sigma^2(\theta, t)    (3-33)

\frac{1}{\Delta} \bar{M}_3(\theta, t; \varepsilon, \Delta) \xrightarrow{\Delta \downarrow 0} 0    (3-34)

It is clear that if (1/Δ)[1 − M₀(θ, t; ε, Δ)] → 0, then by dominated convergence,

\frac{1}{\Delta} P(|\theta_{t+\Delta} - \theta_t| \ge \varepsilon) = \frac{1}{\Delta} \int_{-\infty}^{\infty} [1 - M_0(\theta, t; \varepsilon, \Delta)]\, dP(\theta, t) \to 0    (3-35)

In addition, suppose that the transition function P(θ, t | θ₀, t₀) satisfies the following

condition:

Assumption. For each (θ, t), P(θ, t | θ₀, t₀) is once differentiable in t₀ and

three-times differentiable in θ₀, and the derivatives are continuous and bounded at

(θ₀, t₀).

Kolmogorov [79] has derived the Fokker-Planck equation

\frac{\partial}{\partial t} p(\theta, t \mid \theta_0, t_0) = \frac{1}{2} \frac{\partial^2}{\partial \theta^2} [\sigma^2(\theta, t)\, p(\theta, t \mid \theta_0, t_0)] - \frac{\partial}{\partial \theta} [m(\theta, t)\, p(\theta, t \mid \theta_0, t_0)], \qquad b \ge t > t_0 \ge a    (3-36)







The initial condition to be imposed is

\int_{-\infty}^{\infty} f(\theta)\, p(\theta, t \mid \theta_0, t_0)\, d\theta \xrightarrow{t \downarrow t_0} f(\theta_0) \quad \text{for every continuous } f    (3-37)

that is, p(θ, t | θ₀, t₀) → δ(θ − θ₀). Substituting Equation (3-25) into the Fokker-Planck

equation, we get

\frac{\partial}{\partial t} p(\theta, t) = \frac{1}{2} \frac{\partial^2}{\partial \theta^2} [\mu(t)\, \epsilon^2(\theta)\, p(\theta, t)] + \frac{\partial}{\partial \theta} [\mu(t)\, V(\theta)\, p(\theta, t)]    (3-38)

where V(θ) = ε(θ) ∇_θ ε(θ). If p(θ, t) is a product p(θ, t) = g(t) W(θ) ψ(θ), reflecting the

independence among the quantities, then we have

W(\theta)\psi(\theta) \frac{dg(t)}{dt} = g(t)\mu(t) \left\{ \frac{1}{2} \frac{d}{d\theta} \left[ \frac{d}{d\theta} \big( \epsilon^2(\theta) W(\theta) \psi(\theta) \big) \right] + \frac{d}{d\theta} [V(\theta) W(\theta) \psi(\theta)] \right\}    (3-39)

Let W(θ) be any positive solution of the equation

\frac{1}{2} \frac{d}{d\theta} [\epsilon^2(\theta) W(\theta)] = -V(\theta) W(\theta)    (3-40)

Then

W(\theta)\psi(\theta) \frac{dg(t)}{dt} = g(t)\mu(t)\, \frac{1}{2} \frac{d}{d\theta} \left[ \epsilon^2(\theta) W(\theta) \frac{d\psi(\theta)}{d\theta} \right]    (3-41)

Therefore

\frac{1}{g(t)\mu(t)} \frac{dg(t)}{dt} = \frac{1}{W(\theta)\psi(\theta)}\, \frac{1}{2} \frac{d}{d\theta} \left[ \epsilon^2(\theta) W(\theta) \frac{d\psi(\theta)}{d\theta} \right]    (3-42)

The two sides, being functions of different variables, must be constant in order for the

equality to hold. Set this constant as −λ; then

\frac{1}{g(t)\mu(t)} \frac{dg(t)}{dt} = -\lambda    (3-43)

g(t) = e^{-\lambda \int_{t_0}^{t} \mu(s)\, ds}    (3-44)

p(\theta, t) = e^{-\lambda \int_{t_0}^{t} \mu(s)\, ds}\, W(\theta)\, \psi_\lambda(\theta)    (3-45)

where ψ_λ(θ) satisfies the Sturm-Liouville equation

\frac{1}{2} \frac{d}{d\theta} \left[ \epsilon^2(\theta) W(\theta) \frac{d\psi_\lambda(\theta)}{d\theta} \right] + \lambda W(\theta) \psi_\lambda(\theta) = 0    (3-46)







Under rather general conditions, it can be shown that every solution p(θ, t) can be
represented as a linear combination of such products. Since p(θ, t | θ₀, t₀) is a function of
t, t₀, θ, θ₀, it must have the form

p(\theta, t \mid \theta_0, t_0) = W(\theta) \int e^{-\lambda \int_{t_0}^{t} \mu(s)\, ds}\, \psi_\lambda(\theta)\, \bar{\psi}_\lambda(\theta_0)\, d\lambda    (3-47)

where \bar{\psi}_\lambda(\theta_0) is the complex conjugate of ψ_λ(θ₀). Here we want to know the transition
probability of the process escaping from the steady-state solution θ*, at which V(θ*) = 0.
From Equation (3-40), we obtain

\epsilon^2(\theta^*)\, W(\theta^*) = c    (3-48)

where c is a constant. The Sturm-Liouville equation becomes

\frac{\epsilon^2(\theta^*)}{2} \frac{d^2 \psi_\lambda(\theta)}{d\theta^2} + \lambda\, \psi_\lambda(\theta) = 0    (3-49)

Let λ = ε²(θ*)v²/2; then ψ_λ(θ) = e^{jv\theta} are the bounded solutions, and we know that

e^{-\lambda \int_{t_0}^{t} \mu(s)\, ds} = e^{-\frac{1}{2} v^2 \epsilon^2(\theta^*)\, T}, \qquad T = \int_{t_0}^{t} \mu(s)\, ds    (3-50)

By the inversion formula of the Fourier integral, we obtain

\frac{1}{\sqrt{2\pi T}\, \epsilon(\theta^*)}\, e^{-\frac{\theta^2}{2 \epsilon^2(\theta^*) T}} = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-\frac{1}{2} v^2 \epsilon^2(\theta^*) T}\, e^{jv\theta}\, dv    (3-51)

From Equation (3-47), we get the transition probability of the process escaping out of
the valley as

p(\theta, t \mid \theta^*, t_0) = \frac{1}{\sqrt{2\pi\, \epsilon^2(\theta^*) \int_{t_0}^{t} \mu(s)\, ds}} \exp\!\left( -\frac{(\theta - \theta^*)^2}{2\, \epsilon^2(\theta^*) \int_{t_0}^{t} \mu(s)\, ds} \right) = G\!\left( \theta - \theta^*,\; \epsilon^2(\theta^*) \int_{t_0}^{t} \mu(s)\, ds \right)    (3-52)

where G(θ, σ²) is a Gaussian function with zero mean and variance σ².
Summary. Equation (3-52) is the final transition probability of the process
escaping from the steady state θ*. The conditional density p(θ, t | θ*, t₀) is determined by
θ − θ*, μ(n), and ε(θ*). Because we use a monotonically decreasing μ(n), the algorithm
will decrease the probability of the process jumping out of the valley over the iterations.

From Equation (3-52), the transition probability of the process escaping from a local
minimum θ*_l is larger than that from the global minimum θ*_g, because
|ε(θ*_g)| < |ε(θ*_l)|. Thus, the algorithm will stay most of its time near the global valley
and will eventually converge to the global minimum. Equation (3-52) also shows that
the larger θ − θ* is, i.e., the larger the valley around the steady-state point θ*, the
less probable it is for the process to escape from this steady-state point θ*.
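A small numerical illustration of this summary is sketched below; the error magnitudes |ε(θ*)| assumed for the local and global minima are hypothetical, and the point is only that the escape variance of Equation (3-52), and hence the probability of a fixed-size jump, is larger at the local minimum.

```python
# Numerical illustration of Equation (3-52): escape variance and the
# probability of jumping more than one valley-width (|theta - theta*| > 1).
from math import erf, sqrt
import numpy as np

n_max = 20000
mu = 0.1 * (1.0 - np.arange(n_max) / n_max)   # decreasing step size mu(n)
T = mu.sum()                                  # approximates int_{t0}^{t} mu(s) ds

for name, eps in (("local", 0.5), ("global", 0.1)):   # assumed |eps(theta*)| values
    var = eps**2 * T                          # variance of the Gaussian in (3-52)
    p_escape = 1.0 - erf(1.0 / sqrt(2.0 * var))
    print(f"{name} minimum: variance = {var:.1f}, escape prob. = {p_escape:.3f}")
```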
θ is a vector.

Returning to the original case, in which θ is a vector, we must solve the following
Fokker-Planck equation:

\frac{\partial}{\partial t} p(\theta, t) = \frac{1}{2} \nabla_\theta^2 [\mu(t)\, \epsilon^2(\theta)\, p(\theta, t)] + \nabla_\theta \cdot [\mu(t)\, V(\theta)\, p(\theta, t)]    (3-53)

Similarly, we want to know the transition probability of escaping from the steady-state

solution θ*, at which V(θ*) = 0. Equation (3-53) then becomes

\frac{\partial}{\partial t} p(\theta, t) = \frac{\mu(t)\, \epsilon^2(\theta^*)}{2} \nabla_\theta^2\, p(\theta, t)    (3-54)

Imposing the strict constraint that p(θ, t) is a product

p(\theta, t) = g(t)\,\psi(\theta) = g(t)\, \psi_1(\theta_1)\, \psi_2(\theta_2) \cdots \psi_{N+M-1}(\theta_{N+M-1})    (3-55)

then we have

\frac{1}{g(t)\mu(t)} \frac{dg(t)}{dt} = \frac{\epsilon^2(\theta^*)}{2} \frac{\nabla_\theta^2 \psi(\theta)}{\psi(\theta)}    (3-56)

The two sides, being functions of different variables, must be constant. Set this constant
as −λ; then

\frac{1}{g(t)\mu(t)} \frac{dg(t)}{dt} = -\lambda    (3-57)

\frac{\epsilon^2(\theta^*)}{2} \frac{\nabla_\theta^2 \psi(\theta)}{\psi(\theta)} = -\lambda    (3-58)







Similarly, Equation (3-58) can be written componentwise as

\frac{\epsilon^2(\theta^*)}{2} \frac{\psi_i''(\theta_i)}{\psi_i(\theta_i)} = -\lambda_i, \qquad i = 1, \ldots, N+M-1    (3-59)

where \sum_{i=1}^{N+M-1} \lambda_i = \lambda.

Let λᵢ = ε²(θ*)vᵢ²/2; then ψᵢ(θᵢ) = e^{jv_i \theta_i} for i = 1, …, N+M−1 are the bounded

solutions. From Equation (3-47), we get the transition probabilities of the process

escaping out of the valley as

p(\theta, t \mid \theta^*, t_0) = \prod_{i=1}^{N+M-1} \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-\lambda_i \int_{t_0}^{t} \mu(s)\, ds}\, e^{jv_i(\theta_i - \theta_i^*)}\, dv_i = \prod_{i=1}^{N+M-1} G\!\left( \theta_i - \theta_i^*,\; \epsilon^2(\theta^*) \int_{t_0}^{t} \mu(s)\, ds \right)    (3-60)


Under the constraint of factorization of p(θ, t), the same arguments as in the scalar case

hold for the vector case. However, the ψᵢ(θ) for i = 1, 2, …, N+M−1 are not,

in general, independent of each other, so p(θ, t) must also include correlated terms

besides the independent product term. Therefore the actual transition probability

p(θ, t | θ*, t₀) is larger than Equation (3-60). In the more realistic case of dependence,

the Fokker-Planck equation becomes very complicated; thus it is not easy to find the

transition function from a steady-state point.

3.6 Normalized LMS Algorithm

Because in practice we use the instantaneous gradient instead of the theoretical

gradient, an estimation error naturally occurs. The gradient error can be used to act as

the appended perturbing noise. After reviewing the normalized LMS algorithm [2], we

show that the global optimization behavior of the NLMS algorithm is similar to that of

the LMS-SAS algorithm because of the noisy gradient estimate. As a result, the NLMS

algorithm can also be used for global optimization.







Consider the problem of minimizing the squared Euclidean norm of

\delta\theta(n+1) = \theta(n+1) - \theta(n)    (3-61)

subject to the constraint

\theta^T(n+1)\, \nabla_\theta y(n) = d(n)    (3-62)

To solve this constrained optimization problem, we use the method of Lagrange

multipliers. The squared norm of δθ(n+1) is

\|\delta\theta(n+1)\|^2 = \delta\theta^T(n+1)\, \delta\theta(n+1) = [\theta(n+1) - \theta(n)]^T [\theta(n+1) - \theta(n)] = \sum_{k=0}^{N} |\theta_k(n+1) - \theta_k(n)|^2    (3-63)

The constraint of Equation (3-62) can be represented as

\sum_{k=0}^{N} \theta_k(n+1)\, \nabla_\theta y_k(n) = d(n)    (3-64)

The cost function J(n) for the optimization problem is formulated by combining

Equations (3-63) and (3-64) as

J(n) = \sum_{k=0}^{N} |\theta_k(n+1) - \theta_k(n)|^2 + \lambda \left[ d(n) - \sum_{k=0}^{N} \theta_k(n+1)\, \nabla_\theta y_k(n) \right]    (3-65)

where λ is a Lagrange multiplier. After we differentiate the cost function J(n) with

respect to the parameters and set the results to zero, we obtain

2[\theta_k(n+1) - \theta_k(n)] = \lambda\, \nabla_\theta y_k(n), \qquad k = 0, 1, \ldots, N    (3-66)

By multiplying both sides of the above equation by ∇_θ y_k(n) and summing over

k = 0 to N, we obtain

\lambda = \frac{2}{\sum_{k=0}^{N} [\nabla_\theta y_k(n)]^2} \left[ \sum_{k=0}^{N} \theta_k(n+1)\, \nabla_\theta y_k(n) - \sum_{k=0}^{N} \theta_k(n)\, \nabla_\theta y_k(n) \right] = \frac{2}{\|\nabla_\theta y(n)\|^2} [\theta^T(n+1)\, \nabla_\theta y(n) - \theta^T(n)\, \nabla_\theta y(n)]    (3-67)







Table 3-1: NLMS algorithm

y(n) = \sum_{i=0}^{N-1} a_i x(n-i) + \sum_{j=1}^{M-1} b_j y(n-j)
\theta(n) = [a_0(n), \ldots, a_{N-1}(n), b_1(n), \ldots, b_{M-1}(n)]^T
\phi(n) = [x(n), \ldots, x(n-N+1), y(n-1), \ldots, y(n-M+1)]^T
y(n) = \theta^T(n)\phi(n)
\epsilon(n) = d(n) - y(n)
\nabla_\theta y(n) = \phi(n) + \sum_{j=1}^{M-1} b_j \nabla_\theta y(n-j)
\theta(n+1) = \theta(n) + \frac{\mu(n)}{\|\nabla_\theta y(n)\|^2}\, \epsilon(n)\, \nabla_\theta y(n)

Substituting the constraint of Equation (3-62) back into Equation (3-67), we obtain

\lambda = \frac{2}{\|\nabla_\theta y(n)\|^2} [d(n) - \theta^T(n)\, \nabla_\theta y(n)]    (3-68)

Define the error ε(n) = d(n) − θᵀ(n)∇_θ y(n). We further simplify λ as

\lambda = \frac{2\, \epsilon(n)}{\|\nabla_\theta y(n)\|^2}    (3-69)

By substituting the above equation into Equation (3-66), we obtain

\delta\theta_k(n+1) = \frac{1}{\|\nabla_\theta y(n)\|^2}\, \nabla_\theta y_k(n)\, \epsilon(n), \qquad k = 0, 1, \ldots, N    (3-70)

For adaptive IIR filtering, the above equation can be formulated as

\delta\theta(n+1) = \frac{\mu}{\|\nabla_\theta y(n)\|^2}\, \nabla_\theta y(n)\, \epsilon(n)    (3-71)

or, equivalently, we may write

\theta(n+1) = \theta(n) + \frac{\mu}{\|\nabla_\theta y(n)\|^2}\, \nabla_\theta y(n)\, \epsilon(n)    (3-72)

This is the so-called NLMS algorithm, summarized in Table 3-1, where the initial
conditions are randomly chosen.
Computational complexity. The computational complexity of the NLMS
algorithm is (N + M)(M + 3). Compared to the computational complexity of the
original LMS algorithm, which is (M + N)(N + 2), the NLMS algorithm is almost as
simple; it only requires a little extra computation.
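The recursions of Table 3-1 translate directly into code. The following minimal Python sketch applies the NLMS update to the reduced-order filter H_a(z) = b/(1 − az⁻¹); the unknown system, the step-size schedule, and the small regularizing constant in the denominator are illustrative assumptions.

```python
# NLMS sketch for H_a(z) = b / (1 - a z^-1), following the recursions of Table 3-1.
import numpy as np

rng = np.random.default_rng(2)
n_iter = 20000
x = rng.normal(size=n_iter)

d = np.zeros(n_iter)                              # unknown system of Equation (3-82)
for n in range(2, n_iter):
    d[n] = 0.05 * x[n] - 0.4 * x[n - 1] + 1.1314 * d[n - 1] - 0.25 * d[n - 2]

a, b = rng.uniform(-0.9, 0.9), rng.uniform(-0.5, 0.5)
y_prev = 0.0
grad = np.zeros(2)                                # recursive gradient [dy/da, dy/db]
for n in range(1, n_iter):
    y = b * x[n] + a * y_prev
    grad = np.array([y_prev, x[n]]) + a * grad    # grad y(n) = phi(n) + a grad y(n-1)
    err = d[n] - y
    mu = 0.1 * (1.0 - n / n_iter)                 # decreasing step size mu(n)
    norm2 = grad @ grad + 1e-12                   # guards the division near start-up
    # theta(n+1) = theta(n) + mu(n)/||grad y(n)||^2 * eps(n) * grad y(n)
    a += mu * err * grad[0] / norm2
    b += mu * err * grad[1] / norm2
    a = float(np.clip(a, -0.99, 0.99))            # keep the pole inside the unit circle
    y_prev = y
print(f"NLMS estimate: pole a = {a:+.3f}, zero b = {b:+.3f}")
```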







3.7 Relationship between LMS-SAS and NLMS Algorithms
In this section, we show that the behavior of the NLMS algorithm is similar to that
of the LMS-SAS algorithm from a global optimization perspective. Here we follow the
lines of Widrow et al. [1] and assume that the algorithm will converge to the vicinity of
a steady-state point.

From Equation (3-18), we know that the estimated gradient vector is

\hat{\nabla}\xi(\theta(n)) = -\epsilon(n)\, \nabla_\theta y(n)    (3-73)

Define N(n) as the vector of gradient estimation noise in the nth iteration and

∇ξ(θ(n)) as the true gradient vector. Thus

\hat{\nabla}\xi(\theta(n)) = \nabla\xi(\theta(n)) + N(n), \qquad N(n) = \hat{\nabla}\xi(\theta(n)) - \nabla\xi(\theta(n))    (3-74)

If we assume that the NLMS algorithm has converged to the vicinity of a local
steady-state point θ*, then ∇ξ(θ(n)) will be close to zero. Therefore the gradient
estimation noise will be

N(n) = \hat{\nabla}\xi(\theta(n)) = -\epsilon(n)\, \nabla_\theta y(n)    (3-75)

The covariance of the noise is given by

\mathrm{cov}[N(n)] = E[N(n)\, N^T(n)] = E[\epsilon^2(n)\, \nabla_\theta y(n)\, \nabla_\theta y^T(n)]    (3-76)

We assume that ε²(n) is approximately uncorrelated with ∇_θ y(n) (the same assumption
as [1]); thus, near the local minimum,

\mathrm{cov}[N(n)] = E[\epsilon^2(n)]\, E[\nabla_\theta y(n)\, \nabla_\theta y^T(n)]    (3-77)

We rewrite the NLMS algorithm as

\theta(n+1) = \theta(n) - \frac{\mu(n)}{\|\nabla_\theta y(n)\|^2}\, \hat{\nabla}\xi(\theta(n))    (3-78)







Substituting Equation (3-74) into the above equation, we obtain

\theta(n+1) = \theta(n) - \frac{\mu(n)}{\|\nabla_\theta y(n)\|^2} \big( \nabla\xi(\theta(n)) + N(n) \big)    (3-79)

= \theta(n) - \frac{\mu(n)}{\|\nabla_\theta y(n)\|^2}\, \nabla\xi(\theta(n)) - \frac{\mu(n)}{\|\nabla_\theta y(n)\|^2}\, N(n)    (3-80)

where the last term is the appended perturbing noise. Its covariance, from Equation

(3-77), is

\mathrm{cov}\!\left[ \frac{N(n)}{\|\nabla_\theta y(n)\|} \right] = \frac{\mathrm{cov}[N(n)]}{\|\nabla_\theta y(n)\|^2} = \frac{E[\epsilon^2(n)]\, E[\nabla_\theta y(n)\, \nabla_\theta y^T(n)]}{\|\nabla_\theta y(n)\|^2} = E[\epsilon^2(n)]\, A    (3-81)

where A is a unit-norm matrix. Thus the NLMS algorithm near any local or global
minimum has the variance of the perturbing random noise determined solely by μ(n)

and ε(n). This behavior is very different from the conventional LMS algorithm with

monotonically decreasing step size, where the perturbation noise is determined by μ(n), ε(n),
and ∇_θ y(n). Therefore, in the LMS algorithm the variance near the steady-state point
is small because ∇_θ y(n) ≈ 0. Hence the LMS algorithm has a small probability of
escaping out of any local minima because of the small variance of the noisy gradient.

On the other hand, notice that the variance of the perturbing random noise
in the LMS-SAS algorithm is μ(n)ε(n)βη, which is also independent of the gradient
and controlled by both μ(n) and ε(n). Therefore, we can anticipate that the global

optimization behavior of the NLMS algorithm near local minima is similar to that of the

LMS-SAS algorithm. Far away from local minima, the behaviors of LMS-SAS and NLMS
are expected to be rather different from each other.
3.8 Simulation Results

In this section, we compare the performances of the LMS, LMS-SAS, and NLMS
algorithms in terms of their capability to seek the global optimum of IIR filters in

a system identification framework. According to the properties of adaptive algorithms
discussed in Chapter 2, we set up a system identification example whose MSE

criterion performance surface has one local and one global minimum. In this example, we










Table 3-2: System identification of reduced order model

                                              Number of hits
Method                                        Global minimum       Local minimum
                                              {0.906, -0.311}      {-0.519, 0.114}
LMS with μ = 0.001                            40                   60
LMS with μ₂(n)                                10                   90
GLMS with β(n) and μ = 0.001                  60                   40
LMS-SAS with μ₂(n) for n_max = 20000          93                   7
LMS-SAS with μ₂(n) for n_max = 40000          100                  0
NLMS with μ₁(n)                               100                  0
NLMS with μ₂(n)                               99                   1
NLMS with μ₃(n)                               98                   2

will identify the following unknown system

H(z) = \frac{0.05 - 0.4 z^{-1}}{1 - 1.1314 z^{-1} + 0.25 z^{-2}}    (3-82)

by a reduced-order IIR adaptive filter of the form

H_a(z) = \frac{b}{1 - a z^{-1}}    (3-83)


The main goal is to determine the values of the coefficients {a, b} of the above equation

such that the MSE is minimized to the global minimum. The excitation signal is

chosen to be random Gaussian noise with zero mean and unit variance. There exist

two minima of the MSE criterion performance surface, with the local minimum at

{a, b} = {−0.519, 0.114} and the global minimum at {a, b} = {0.906, −0.311}. Here we
use three types of annealing schedule for the step size (see Figure 3-2, which shows that

one is linear, one is sub-linear, and the other one is supra-linear):

\mu_1(n) = 0.1 \cos(n\pi / 2 n_{max})

\mu_2(n) = 0.1 - 0.1\, n / n_{max}, \qquad n \le n_{max} = 20000    (3-84)

\mu_3(n) = 2\mu_2(n) - \mu_1(n)

The cooling schedule parameter for the GLMS algorithm is a decreasing function

β(n) = 100/n.
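For reference, the three schedules of Equation (3-84) and the GLMS cooling schedule can be generated as follows; this is a direct transcription, assuming n_max = 20000 as in the text.

```python
# Annealing schedules of Equation (3-84) and the GLMS cooling schedule.
import numpy as np

n_max = 20000
n = np.arange(n_max)
mu1 = 0.1 * np.cos(n * np.pi / (2 * n_max))   # decays more slowly than linear
mu2 = 0.1 - 0.1 * n / n_max                   # linear decay
mu3 = 2 * mu2 - mu1                           # decays faster than linear
beta = 100.0 / np.maximum(n, 1)               # GLMS cooling schedule beta(n)
print(mu1[n_max // 2], mu2[n_max // 2], mu3[n_max // 2])   # 0.0707, 0.05, 0.0293
```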

Table 3-2 shows the comparison of the number of global and local minimum hits by

the various algorithms. The results are given by 100 Monte Carlo simulations with random




















Figure 3-2: Step size μ(n) for the SAS algorithm.


initial conditions of θ at each run. The convergence characteristics of θ toward the

global minimum for the GLMS, LMS-SAS, and NLMS algorithms are shown in Figures

3-3, 3-4, and 3-5, respectively. The adaptation processes with θ approaching the

local minimum for the LMS, GLMS, and LMS-SAS algorithms are also depicted in

Figures 3-6, 3-7, and 3-8, respectively, where θ is initialized to a point near the local

minimum. Based on the simulation results, we can summarize the performance as follows:

* Figure 3-6 and rows 1 and 2 in Table 3-2 show that the LMS algorithm is likely to
converge to the local minimum.

* Figures 3-3 and 3-7 and row 3 in Table 3-2 show that the GLMS algorithm might jump to
the global minimum valley and converge to the global minimum, but it can also jump
back to the local minimum valley and then converge to the local minimum. Srinivasan
[56] claims that the GLMS algorithm could converge to the global minimum w.p.1
by carefully choosing the cooling schedule β(n). The cooling schedule is a crucial
parameter, but it is difficult to determine it such that global optimization is
guaranteed.

* Figures 3-4 and 3-8 and rows 4 and 5 in Table 3-2 show that the LMS-SAS algorithm is likely
to converge to the global minimum with a proper step size. Even though the LMS-SAS





















0 0.5 1
# of iteration


1.5 2
x 104


-0.5 0 0.5
pole


3-3: Global convergence of 0 in the GLMS algorithm. A) Weight 0; B) Contour


2


0
-1
-2
-3
-4
-c


0 0.5 1
# of iteration


1.5 2
x 104


Figure 3-4: Global convergence of 0 in the
Contour of 0.


-0.5

-1
-0.5 0
pole


LMS-SAS algorithm. A) Weight 0; B)


algorithm stays most of its time near the global minimum, it still has probability of
converging to the local minimum.

SFigure 3-5 and row 6, 7, 8 in Table 3-2 show that the NLMS algorithm with proper
step size, similarly to the LMS-SAS algorithm, could converge to the global minimum.
Figure 3-4 and 3-5 also show that the NLMS algorithm r i,,-; much longer time in the
global minimum valley than the other algorithms. These figures also show that the
step size of the NLMS algorithm doesn't p1 iv as crucial a role as the cooling schedule
of the GLMS algorithm.

3.9 Comparison of LMS-SAS and NLMS Algorithm

Recall that the LMS-SAS algorithm is described as


0(n + 1) e0(n) p(in)E(n)VOe(n, 0) P(n)E(n) (3


Figure
of 0.


i r,-


(3-85)










Figure 3-5: Global convergence of θ in the NLMS algorithm. A) Weight θ; B) Contour of θ.

Figure 3-6: Local convergence of θ in the LMS algorithm. A) Weight θ; B) Contour of θ.

Figure 3-7: Local convergence of θ in the GLMS algorithm. A) Weight θ; B) Contour of θ.








Figure 3-8: Local convergence of θ in the LMS-SAS algorithm. A) Weight θ; B) Contour of θ.

On the other hand, the NLMS algorithm is

\theta(n+1) = \theta(n) + \frac{\mu(n)}{\|\nabla_\theta y(n)\|^2}\, \nabla_\theta y(n)\, \epsilon(n)    (3-86)

The LMS-SAS algorithm adds a perturbing noise to avoid converging to the local

minima, while the NLMS algorithm uses the inherent gradient estimation noise to

avoid converging to the local minima. Two different types of step size, μ(n) and

μ(n)/‖∇_θ y(n)‖², are used by LMS-SAS and NLMS, respectively. Therefore, to fairly

compare the performance of both algorithms in terms of global optimization, we

set up the three following experiments.

Here we use the same system identification scheme, i.e., we identify three unknown

systems

\text{Example I:} \quad H_I(z) = \frac{0.05 - 0.4 z^{-1}}{1 - 1.1314 z^{-1} + 0.25 z^{-2}}    (3-87)

\text{Example II:} \quad H_{II}(z) = \frac{0.2 - 0.4 z^{-1}}{1 - 1.1314 z^{-1} + 0.25 z^{-2}}    (3-88)

\text{Example III:} \quad H_{III}(z) = \frac{0.3 - 0.4 z^{-1}}{1 - 1.1314 z^{-1} + 0.25 z^{-2}}    (3-89)

by a reduced-order adaptive filter of the form

H_a(z) = \frac{b}{1 - a z^{-1}}    (3-90)

















Figure 3-9: Contour of MSE. A) Example I; B) Example II; C) Example III.

Figure 3-10: Weight (top) and ‖∇_θ y(n)‖ (bottom) in A) Example I; B) Example II; C) Example III.


The main goal is to determine the values of the coefficients {a, b} of the above equation

such that the MSE is minimized (to the global minimum). The excitation input is chosen

to be random Gaussian noise with zero mean and unit variance. Figure 3-9 depicts

the contours of the MSE criterion performance surface in Examples I, II, and III. Here,

the step size for the NLMS algorithm is chosen to be a linearly decreasing function

μ_NLMS(n) = 0.1(1 − 2.5 × 10⁻⁵ n). Step sizes for the LMS-SAS algorithm are a family of

linearly decreasing functions

\mu_{LMS\text{-}SAS}(n) = k(1 - 2.5 \times 10^{-5} n)

k = [0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4, 0.5]    (3-91)

where we vary the step size k but preserve the same annealing rate.











Table 3-3: Example I for system identification

                                              Number of hits
Method                                        Global minimum       Local minimum
                                              {0.906, -0.311}      {-0.519, 0.114}
LMS with μ_NLMS(n)                            485                  515
NLMS with μ_NLMS(n)                           996                  4
LMS-SAS with μ_LMS-SAS(n) and k = 0.01        494                  506
LMS-SAS with μ_LMS-SAS(n) and k = 0.02        875                  125
LMS-SAS with μ_LMS-SAS(n) and k = 0.03        952                  48
LMS-SAS with μ_LMS-SAS(n) and k = 0.04        947                  53
LMS-SAS with μ_LMS-SAS(n) and k = 0.05        931                  69
LMS-SAS with μ_LMS-SAS(n) and k = 0.06        976                  24
LMS-SAS with μ_LMS-SAS(n) and k = 0.07        976                  24
LMS-SAS with μ_LMS-SAS(n) and k = 0.08        974                  26
LMS-SAS with μ_LMS-SAS(n) and k = 0.09        974                  26
LMS-SAS with μ_LMS-SAS(n) and k = 0.1         960                  40
LMS-SAS with μ_LMS-SAS(n) and k = 0.2         840                  160
LMS-SAS with μ_LMS-SAS(n) and k = 0.3         835                  165
LMS-SAS with μ_LMS-SAS(n) and k = 0.4         780                  220
LMS-SAS with μ_LMS-SAS(n) and k = 0.5         765                  235


Tables 3-3, 3-4, and 3-5 show the simulation results of global and local minimum hits by

the LMS, LMS-SAS, and NLMS algorithms. The value of ‖∇_θ y(n)‖ is depicted in Figure

3-10. The larger ‖∇_θ y(n)‖ is, the smaller the increments used by the algorithm, i.e., the

lower the probability of the algorithm escaping from the steady-state point. In the cases of

Examples I and II, the global minimum valley has a sharper slope than the local valley.

Therefore, Tables 3-2 and 3-4 show that the NLMS algorithm has a higher probability of

obtaining the global minimum than the other algorithms in the cases of Examples I and

II. In Example III, the local minimum valley has a sharper slope than the global valley.

Therefore, Table 3-5 shows that the NLMS algorithm has a lower probability of obtaining

the global minimum than the other algorithms in the Example III case.

3.10 Conclusion

Several methods have been proposed for the global optimization of adaptive IIR

filtering. We modify the perturbing noise in GLMS algorithm by multiplying it with its

cost function. The modified algorithm, which is referred to as the LMS-SAS algorithm


Table 3-4: Example II for system identification

                                              Number of hits
Method                                        Global minimum       Local minimum
LMS with constant μ                           20                   80
NLMS with μ_NLMS(n)                           89                   11
LMS-SAS with μ_LMS-SAS(n) and k = 0.01        20                   80
LMS-SAS with μ_LMS-SAS(n) and k = 0.02        9                    91
LMS-SAS with μ_LMS-SAS(n) and k = 0.04        2                    98
LMS-SAS with μ_LMS-SAS(n) and k = 0.06        1                    99
LMS-SAS with μ_LMS-SAS(n) and k = 0.08        1                    99
LMS-SAS with μ_LMS-SAS(n) and k = 0.09        1                    99
LMS-SAS with μ_LMS-SAS(n) and k = 0.1         1                    99
LMS-SAS with μ_LMS-SAS(n) and k = 0.2         2                    98
LMS-SAS with μ_LMS-SAS(n) and k = 0.3         1                    99
LMS-SAS with μ_LMS-SAS(n) and k = 0.4         1                    99
LMS-SAS with μ_LMS-SAS(n) and k = 0.5         2                    98

Table 3-5: Example III for system identification

                                              Number of hits
Method                                        Global minimum       Local minimum
LMS with constant μ                           92                   8
NLMS with μ_NLMS(n)                           90                   10
LMS-SAS with μ_LMS-SAS(n) and k = 0.01        100                  0
LMS-SAS with μ_LMS-SAS(n) and k = 0.02        100                  0
LMS-SAS with μ_LMS-SAS(n) and k = 0.04        100                  0
LMS-SAS with μ_LMS-SAS(n) and k = 0.06        100                  0
LMS-SAS with μ_LMS-SAS(n) and k = 0.08        100                  0
LMS-SAS with μ_LMS-SAS(n) and k = 0.09        100                  0
LMS-SAS with μ_LMS-SAS(n) and k = 0.1         100                  0
LMS-SAS with μ_LMS-SAS(n) and k = 0.2         100                  0
LMS-SAS with μ_LMS-SAS(n) and k = 0.3         100                  0
LMS-SAS with μ_LMS-SAS(n) and k = 0.4         100                  0
LMS-SAS with μ_LMS-SAS(n) and k = 0.5         100                  0







in this dissertation, results in better performance in global optimization than the

original algorithm.

From the diffusion equation, we have derived the transition probability of the

LMS-SAS algorithm escaping from a steady-state point. Since the error at the global minimum

is always smaller than at a local minimum, the transition probability of the algorithm

escaping from a local minimum is always larger than that from the global

minimum. Hence, the algorithm will stay most of the time near the global minimum

and eventually converge to the global minimum.

Since we use the instantaneous (stochastic) gradient instead of the expected value

of the gradient, an estimation error naturally occurs. This gradient estimation error,

when properly normalized, can be used to act as the perturbing noise. We have shown

that the behavior of the NLMS algorithm with decreasing step size near a minimum is

similar to that of the LMS-SAS algorithm from a global optimization perspective.

The global optimization performance of the LMS-SAS and NLMS algorithms depends

entirely on the shape of the cost function. The sharper the local minimum, the less likely

the NLMS algorithm is to escape from this steady-state point. On the other hand,

the broader the valley around a local minimum, the more difficult it is for the algorithm

to escape from this valley.














CHAPTER 4
INFORMATION THEORETIC LEARNING

4.1 Introduction

The mean square error criterion has been extensively used in the field of adaptive

systems [80]. That is because of its analytical simplicity and the assumption

of a Gaussian distribution for the error. Since the Gaussian distribution is totally

characterized by its first and second order statistics, the MSE criterion can extract

all information from a set of data. However, the assumption of a Gaussian distribution

is not always true. Therefore, a criterion which considers higher-order statistics is

necessary for the training of adaptive systems. Shannon [81] first introduced the entropy

of a given probability distribution function, which provides a measure of the average

information in the distribution. By using the Parzen window estimator [82], we can

estimate the pdf directly from a set of data. It is quite straightforward to apply the

entropy criterion to the system identification framework [6, 5]. The pdf of the error

signal between the desired signal and the output signal of the adaptive filter must be as

close as possible to a delta distribution, δ(·). Hence, the supervised training problem

becomes an entropy minimization problem, as suggested by Erdogmus et al. [6].

The kernel size of the Parzen window estimator is an important parameter in the

global optimization procedure. It was conjectured by Erdogmus et al. [6] that for a

sufficiently large kernel size, the local minima of the error entropy criterion can be

eliminated. It was suggested that, by starting with a large kernel size and then slowly

decreasing this parameter to a predetermined suitable value, the training algorithm

can converge to the global minimum of the cost function. The error entropy criterion

considered by Erdogmus et al. [6], however, does not consider the mean of the error

signal, since entropy is invariant to translation. In this dissertation, we propose a

modification to the error entropy criterion, in order to take this point into account.







The proposed criterion with annealing of the kernel size is then shown to exhibit the

conjectured global optimization behavior in the training of IIR filters.

4.2 Entropy and Mutual Information

Shannon [81] defined the entropy of a probability distribution P = {p₁, p₂, …, p_N}

as

H_S(P) = \sum_{k=1}^{N} p_k \log\frac{1}{p_k}, \qquad \sum_{k=1}^{N} p_k = 1, \quad p_k \ge 0    (4-1)

which measures the average amount of information contained in a random variable X

with probabilities p_k = P(x = x_k), k = 1, 2, …, N, at the values x₁, x₂, …, x_N.

A message contains no information if it is completely known. The more information

it contains, the less predictable it is. Information theory has broad application in the

field of communication systems [83]. But entropy can be defined in a more general

form. According to Renyi [58], the mean of the real numbers x₁, x₂, …, x_N with positive

weights p₁, p₂, …, p_N has the form

\bar{x} = \varphi^{-1}\!\left( \sum_{k=1}^{N} p_k\, \varphi(x_k) \right)    (4-2)

where φ(x) is a Kolmogorov-Nagumo function, which is an arbitrary continuous and

strictly monotonic function.

An entropy measure H generally obeys the formula

H = \varphi^{-1}\!\left( \sum_{k=1}^{N} p_k\, \varphi(I(p_k)) \right)    (4-3)

where I(p_k) = −log(p_k) is Hartley's information measure [84].

In order to satisfy the additivity condition, φ(·) can be either φ(x) = x or

φ(x) = 2^{(1−α)x}. When φ(x) = x, the entropy measure becomes Shannon's entropy.

When φ(x) = 2^{(1−α)x}, the entropy measure becomes Renyi's entropy of order α,

denoted as

H_{R\alpha} = \frac{1}{1-\alpha} \log \sum_{k=1}^{N} p_k^{\alpha}, \qquad \alpha > 0 \text{ and } \alpha \ne 1    (4-4)







The well known relationship between Shannon's and Renyi's entropy is

H_{R\alpha} \ge H_S \ge H_{R\beta}, \qquad 1 > \alpha > 0 \text{ and } \beta > 1    (4-5)

\lim_{\alpha \to 1} H_{R\alpha} = H_S    (4-6)

In order to further relate Renyi's and Shannon's entropy, the distance of P =

(p₁, p₂, …, p_N) to the origin 0 = (0, 0, …, 0) is defined as

V_\alpha = \sum_{k=1}^{N} p_k^{\alpha}    (4-7)

where V_α is called the α-norm of the probability distribution [85].

Renyi's entropy in terms of V_α is

H_{R\alpha} = \frac{1}{1-\alpha} \log(V_\alpha)    (4-8)

Renyi's entropy of order α thus corresponds to a different α-norm. Shannon's entropy can be

viewed as the limiting case α → 1 of the probability distribution norm. Renyi's entropy

is essentially a monotonic function of the distance of the probability to the origin.

H_{R2} = -\log \sum_{k=1}^{N} p_k^2 is called the quadratic entropy, because of the quadratic form in

the probability.

We can further extend the entropy definition to a continuous random variable Y

with pdf f_Y(y) as [58]:

H_{R\alpha} = \frac{1}{1-\alpha} \log\!\left( \int f_Y^{\alpha}(z)\, dz \right)    (4-9)

H_{R2} = -\log\!\left( \int f_Y^{2}(z)\, dz \right)    (4-10)

It is important to mention that Renyi's quadratic entropy involves the use of the square

of the pdf.

Because Shannon's entropy is defined as a weighted sum of the logarithm of

the pdf, it is difficult to directly use this information theoretic criterion. Since we

cannot directly use the pdf (unless its form is known a priori), we use nonparametric

estimators. Hence, the Parzen window method [82] is used in this dissertation. The

Parzen window estimator is a kernel-based estimator,

\hat{f}_Y(z) = \frac{1}{N} \sum_{i=1}^{N} \kappa(z - y_i)    (4-11)

where y_i ∈ R^M are the observed samples and κ(·) is a kernel function. The Parzen window

estimator can be viewed as a convolution of the kernel function with the observed samples.

The kernel function in this dissertation is chosen as the Gaussian function

\kappa(z) = G(z, \sigma^2) = \frac{1}{(2\pi\sigma^2)^{M/2}} \exp\!\left( -\frac{z^T z}{2\sigma^2} \right)    (4-12)
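The following minimal Python sketch, with an illustrative sample set and kernel size, shows the Parzen estimate of Equation (4-11) and Renyi's quadratic entropy (4-10) computed from it; for Gaussian kernels, the integral of the squared estimate reduces to the double sum (1/N²) Σᵢⱼ G(yᵢ − yⱼ, 2σ²), so no numerical integration is needed.

```python
# Parzen window estimate and Renyi's quadratic entropy (scalar case, M = 1).
import numpy as np

def gaussian(z, var):
    return np.exp(-z**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def parzen_pdf(z, samples, sigma2):
    # f_hat(z) = (1/N) sum_i kappa(z - y_i), Equation (4-11)
    return gaussian(z - samples[:, None], sigma2).mean(axis=0)

def renyi_quadratic_entropy(samples, sigma2):
    # H_R2 = -log integral f_hat^2 = -log[(1/N^2) sum_ij G(y_i - y_j, 2 sigma^2)]
    diff = samples[:, None] - samples[None, :]
    return -np.log(gaussian(diff, 2.0 * sigma2).mean())

y = np.random.default_rng(3).normal(size=500)     # samples from N(0, 1)
print("f_hat(0) =", parzen_pdf(np.array([0.0]), y, 0.1))
print("H_R2 estimate:", renyi_quadratic_entropy(y, 0.1))
print("H_R2 of N(0,1):", 0.5 * np.log(4.0 * np.pi))   # -log(1 / (2 sqrt(pi)))
```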

Here, we will further develop an ITL criterion to estimate the mutual information

among random variables. Mutual information is able to quantify the dependence

between pairs of random variables; hence mutual information is also very important for

engineering problems.

Mutual information is defined in Shannon's entropy terms as I(x, y) = H(y) −

H(y | x), which is not easily estimated from samples. An alternative measure of the

divergence between two probability density functions (pdfs) f(x) and g(x) is the

Kullback-Leibler (KL) divergence [86], which is defined as

K(f, g) = \int f(x) \log \frac{f(x)}{g(x)}\, dx    (4-13)

Similarly, Renyi's divergence measure of order α for two pdfs f(x) and g(x) is

H_{R\alpha}(f, g) = \frac{1}{\alpha - 1} \log \int f(x) \left( \frac{f(x)}{g(x)} \right)^{\alpha-1} dx    (4-14)

The relation between the KL divergence and Renyi's divergence measure is

\lim_{\alpha \to 1} H_{R\alpha}(f, g) = K(f, g)    (4-15)

The KL divergence measure between two random variables Y₁ and Y₂ essentially

estimates the divergence between the joint pdf and the marginal pdfs. That is,

I_S(Y_1, Y_2) = KL\big(f_{Y_1 Y_2}(z_1, z_2)\, \|\, f_{Y_1}(z_1) f_{Y_2}(z_2)\big) = \int\!\!\int f_{Y_1 Y_2}(z_1, z_2) \log \frac{f_{Y_1 Y_2}(z_1, z_2)}{f_{Y_1}(z_1) f_{Y_2}(z_2)}\, dz_1\, dz_2    (4-16)







where f_{Y₁Y₂}(z₁, z₂) is the joint pdf, and f_{Y₁}(z₁) and f_{Y₂}(z₂) are the marginal pdfs. Because the

divergence measures mentioned above are non-quadratic in the pdf terms, they cannot

easily be estimated with the information potential. The following distance measures

between two pdfs, which contain only quadratic terms of the pdf, are more practical.

* The distance measure based on the Euclidean difference of vectors inequality is

\|x\|^2 + \|y\|^2 - 2 x^T y \ge 0    (4-17)

* The distance measure based on the Cauchy-Schwartz inequality is

\log \frac{\|x\|^2 \|y\|^2}{(x^T y)^2} \ge 0    (4-18)

Using the Cauchy-Schwartz inequality, the distance measure between two pdfs f(x) and

g(x) is

I_{CS}(f, g) = \log \frac{\int f^2(x)\, dx \int g^2(x)\, dx}{\left( \int f(x) g(x)\, dx \right)^2}    (4-19)

It is obvious that I_{CS}(f, g) ≥ 0, and the equality holds true if and only if f(x) = g(x).

Similarly, using the Euclidean distance, the distance measure between two pdfs f(x)

and g(x) is

I_{ED}(f, g) = \int (f(x) - g(x))^2\, dx = \int f^2(x)\, dx + \int g^2(x)\, dx - 2 \int f(x) g(x)\, dx    (4-20)

It is also obvious that I_{ED}(f, g) ≥ 0, and the equality holds true if and only if

f(x) = g(x).
4.3 Adaptive IIR Filter with Euclidean Distance Criterion

The system identification scheme of the adaptive IIR filter is shown in Figure 2-1. The

output signal of the adaptive IIR filter in canonic direct form realization is

y(n) = \sum_{i=1}^{N-1} a_i(n)\, y(n-i) + \sum_{j=0}^{M-1} b_j(n)\, x(n-j)    (4-21)

The error signal e(n) is the difference between the desired signal d(n) and the output

signal y(n) of the adaptive IIR filter:

e(n) = d(n) - y(n)    (4-22)







It is obvious that the goal of the algorithm is to adjust the weights such that the error

pdf f_e is as close as possible to the delta distribution δ(·). Hence, the Euclidean distance

criterion for the adaptive IIR filters is defined as

I_{ED}(f_e) = \int_{-\infty}^{\infty} (f_e(\varepsilon) - \delta(\varepsilon))^2\, d\varepsilon = \int_{-\infty}^{\infty} f_e^2(\varepsilon)\, d\varepsilon - 2 f_e(0) + c    (4-23)

where c stands for the portion of this Euclidean distance measure that does not depend

on the weights of the adaptive system. Notice that the integral of the square of the

error pdf appears exactly as in the definition of Renyi's quadratic entropy. Therefore,

it can be estimated directly from its N samples by a Parzen window estimator with a

Gaussian kernel of variance σ², exactly as described in [6, 5]:

\hat{f}_e(\varepsilon) = \frac{1}{N} \sum_{i=1}^{N} \kappa(\varepsilon - e_i, \sigma^2)    (4-24)

If N → ∞, then \hat{f}_e(\varepsilon) = f_e(\varepsilon) * \kappa(\varepsilon, \sigma^2), where * denotes the convolution operator.

Thus, using a Parzen window estimator for the error pdf is equivalent to adding an

independent random noise with the pdf κ(ε, σ²) to the error. The error, with the

additive noise, becomes d − y + n = (d + n) − y. This is similar to injecting a random

noise into the desired signal, as suggested by Wang et al. in [87]. The advantage of our

approach is that we do not explicitly generate noise samples. We simply take advantage

of the estimation noise produced by the Parzen estimator, which, as demonstrated above,

works as an additive, independent noise source. The kernel size, which controls the

variance of the hypothetical noise term, should be annealed during the adaptation,

just like the variance of the noise injected by Wang et al. [87]. From the injected noise

point of view, the algorithm behaves similarly to the well-known stochastic annealing

algorithm; the noise which is added to the desired signal backpropagates through the

error gradient, resulting in perturbations in the weight updates proportional to the

weight sensitivity. However, since our algorithm does not explicitly use a noise signal, its

operation is more similar to convolutional smoothing. For a sufficiently large kernel size,

the local minima of the ITL criterion are eliminated by the smoothing of the performance

surface. Thus, by starting with a large kernel size, the algorithm can approach the

global minimum, avoiding any local minima that would have existed if the kernel size

were small. Since the global minimum of the error entropy criterion with a large

kernel size does not, in general, coincide with the true global minimum, annealing the

kernel size is required. This is equivalent to gradually reducing the amount of the noise

injected into the desired signal to a small suitable value. At the end, the algorithm with

the small kernel size can converge to the true global minimum.

By substituting the Parzen window estimator for the error pdf in the integral of

Equation (4-23), and recognizing that the convolution of two Gaussian functions is

also a Gaussian, we obtain the ITL criterion as (after dropping all the terms that are

independent of the weights):

I_{ED}(\hat{f}_e) = \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \kappa(e_i - e_j, 2\sigma^2) - \frac{2}{N} \sum_{i=1}^{N} \kappa(e_i, \sigma^2)    (4-25)

The gradient vector \partial I_{ED}(\hat{f}_e) / \partial\theta to be used in the steepest descent algorithm is

obtained as

\frac{\partial I_{ED}(\hat{f}_e)}{\partial \theta} = \frac{1}{2\sigma^2 N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} (e_i - e_j)\, \kappa(e_i - e_j, 2\sigma^2) \left( \frac{\partial y(n-i)}{\partial \theta} - \frac{\partial y(n-j)}{\partial \theta} \right) - \frac{2}{\sigma^2 N} \sum_{i=1}^{N} e_i\, \kappa(e_i, \sigma^2)\, \frac{\partial y(n-i)}{\partial \theta}    (4-26)

where the gradient \partial y / \partial\theta is given by

\frac{\partial y(n)}{\partial \theta} = \phi(n) + \sum_{i=1}^{N-1} a_i(n)\, \frac{\partial y(n-i)}{\partial \theta}    (4-27)

and \phi(n) = [y(n-1), y(n-2), \ldots, y(n-N+1), x(n), x(n-1), \ldots, x(n-M+1)]^T.
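As a sanity check on Equations (4-25) and (4-26), the sketch below evaluates the sample-based criterion and its derivative with respect to the errors eᵢ, and verifies one component against a finite difference; the error vector is a placeholder, and the chain rule through ∂y/∂θ of Equation (4-27) would complete the weight gradient.

```python
# Sample-based Euclidean distance criterion of Eq. (4-25) and its error gradient.
import numpy as np

def gaussian(z, var):
    return np.exp(-z**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def ied_cost(e, sigma2):
    # (1/N^2) sum_ij kappa(e_i - e_j, 2 sigma^2) - (2/N) sum_i kappa(e_i, sigma^2)
    diff = e[:, None] - e[None, :]
    return gaussian(diff, 2.0 * sigma2).mean() - 2.0 * gaussian(e, sigma2).mean()

def ied_grad_wrt_errors(e, sigma2):
    # d(cost)/d e_k; the Gaussian kernel's derivative is -(z/var) * G(z, var)
    n = len(e)
    diff = e[:, None] - e[None, :]
    pair = -(diff / (2.0 * sigma2)) * gaussian(diff, 2.0 * sigma2)
    return 2.0 * pair.sum(axis=1) / n**2 + (2.0 / n) * (e / sigma2) * gaussian(e, sigma2)

e = np.random.default_rng(4).normal(0.3, 0.5, size=50)   # placeholder errors
g = ied_grad_wrt_errors(e, 0.5)
h, k = 1e-6, 7                                           # finite-difference check
ep = e.copy(); ep[k] += h
print(g[k], (ied_cost(ep, 0.5) - ied_cost(e, 0.5)) / h)  # should agree closely
```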

4.4 Parzen Window Estimator and Convolution Smoothing Function

4.4.1 Similarity

In the ITL algorithm, the Parzen window estimator estimates the error pdf as a

function of the weights from a set of samples. As the number of samples tends to

infinity, the estimated pdf is equivalent to the actual pdf convolved with the kernel

function κ_σ(x) used by the Parzen window estimator. The behavior of the ITL algorithm

is similar to that of the SAS technique, in which the smoothed cost function is obtained

by convolving the cost function with a smoothing function h_β(x). Recall that the

smoothing function h_β(x) should have the following properties:

* h_β(x) is piecewise differentiable with respect to x.

* lim_{β→0} h_β(x) = δ(x) (Dirac's delta functional).

* h_β(x) is a pdf.

The kernel function in this dissertation is chosen as the Gaussian function

\kappa_\sigma(x) = G(x, \sigma^2) = \frac{1}{(2\pi\sigma^2)^{n/2}} \exp\!\left( -\frac{x^T x}{2\sigma^2} \right)    (4-28)

It is obvious that κ_σ(x) is piecewise differentiable, lim_{σ→0} κ_σ(x) = δ(x), and κ_σ(x) is a

Gaussian pdf. Hence κ_σ(x) satisfies the properties of a smoothing function.

The objective of the convolution smoothing function is to smooth the nonconvex

cost function. The parameter β controls the dispersion of h_β(x), which controls the

degree of cost function smoothing. In the beginning stage of the optimization, β

is set large so that h_β(x) can smooth out all the local minima of the cost

function. Since the global minimum of the smoothed cost function does not coincide

with the global minimum of the actual original cost function, β is slowly decreased

to zero. As a result, the smoothed cost function gradually returns to the original

cost function, and the algorithm can converge to the global minimum of the actual cost

function.

Therefore κ_σ(x) has the same role as h_β(x) in smoothing the nonconvex cost

function. The parameter σ controls the dispersion of κ_σ(x), which can control the

degree of cost function smoothing. Similarly, the parameter σ is set large and

then slowly decreased to zero. Therefore the ITL algorithm with a proper parameter σ

can converge to the global minimum.







4.4.2 Difference

For the SAS algorithm, the smoothed cost function can be expressed as

\tilde{V}_\beta(\theta) = E_\eta\!\left[ \int f(e; \theta - \beta\eta)\, de \right]    (4-29)

which is the expectation with respect to the random variable η, whose standard deviation

is controlled by β. Hence, the smoothed cost function can be regarded as an

averaged version of the actual cost function.

For the ITL algorithm, we change the shape of the pdf via the Parzen window

estimator at each particular point of θ. Thus we change the cost function at each point

of θ. The estimated cost function is

\tilde{V}_\sigma(\theta) = \int f(e + \varepsilon; \theta)\, de    (4-30)

where ε is a Gaussian noise with zero mean and variance σ². We conclude that the

SAS method adds an additional noise to the weights in order to force the algorithm to

converge to the global minimum, while the ITL algorithm adds an additive noise to the

error in order to force the algorithm to converge to the global minimum. The additive

noise added to the error affects the variance of the weight updates proportionally to the

sensitivity of each weight, ∂e/∂θᵢ. This means that a single noise source is translated

internally into different noise strengths for each weight.

4.5 Analysis of Weak Convergence to the Global Optimum for ITL

Similar to the analysis of weak convergence to the global optimum for LMS-SAS,

we obtain the transition probability of the ITL algorithm escaping out of a local minimum

by solving a pair of Fokker-Planck equations. The pdf \hat{f}_e is estimated from Equation (4-24).

If N → ∞, then \hat{f}_e(\varepsilon) = f_e(\varepsilon) * \kappa(\varepsilon, \sigma^2), where * denotes the convolution operator.

Thus, using a Parzen window estimator for the error pdf is equivalent to adding an

independent random noise with the pdf κ(ε, σ²) to the error. Thus we define the

perturbed error as

\tilde{e}(n) = e(n) + N    (4-31)






where N is the additive noise. Here the gradient of the cost function used in the

steepest descent algorithm is

\frac{\partial J(\tilde{e})}{\partial \theta} = \frac{\partial J}{\partial \tilde{e}}\, \frac{\partial (e + N(\theta))}{\partial \theta} = \frac{\partial J}{\partial \tilde{e}}\, \frac{\partial e}{\partial \theta} + \frac{\partial J}{\partial \tilde{e}}\, \frac{\partial N(\theta)}{\partial \theta}    (4-32)

where J is the cost function. For the ITL algorithm, the cost function is

J = I_{ED}(f_e) = \int_{-\infty}^{\infty} (f_e(\varepsilon) - \delta(\varepsilon))^2\, d\varepsilon    (4-33)

Therefore

\frac{\partial J}{\partial \tilde{e}} = (f_e(\varepsilon) - \delta(\varepsilon))^2    (4-34)

Here we write the ITL algorithm as Ito's integral:

\theta_t = \theta_a + \int_a^t m(\theta_s, s)\, ds + \int_a^t \sigma(\theta_s, s)\, dW_s    (4-35)

where

m(\theta_t, t) = -\mu(t)\, \frac{\partial J(\theta_t)}{\partial \theta}, \qquad \sigma^2(\theta_t, t) = \mu(t)\, N(\theta)\, (f_e(\varepsilon) - \delta(\varepsilon))^2    (4-36)
Following a derivation similar to that of Equation (3-52) for the LMS-SAS algorithm, we

obtain the transition probability of the ITL algorithm escaping out of a local minimum

for the scalar θ case as

p(\theta, t \mid \theta^*, t_0) = \frac{1}{\sqrt{2\pi N(\theta) (\hat{f}_e(\varepsilon) - \delta(\varepsilon))^2 \int_{t_0}^{t} \mu(s)\, ds}} \exp\!\left( -\frac{(\theta - \theta^*)^2}{2 N(\theta) (\hat{f}_e(\varepsilon) - \delta(\varepsilon))^2 \int_{t_0}^{t} \mu(s)\, ds} \right) = G\!\left( \theta - \theta^*,\; N(\theta) (\hat{f}_e(\varepsilon) - \delta(\varepsilon))^2 \int_{t_0}^{t} \mu(s)\, ds \right)    (4-37)

Remark. Equation (4-37) is the transition probability of the ITL algorithm

escaping from the steady state θ*. The density p(θ, t | θ*, t₀) is determined by (θ − θ*), μ(n),

N(θ), and (\hat{f}_e(\varepsilon) - \delta(\varepsilon))^2. Because we use an annealed kernel size, i.e., an annealed

variance of N(θ), the ITL algorithm will decrease the probability of jumping out of the

valley over the iterations. The transition probability of the process escaping from the

global minimum is smaller than that from a local minimum because of the smallest

value of (\hat{f}_e(\varepsilon) - \delta(\varepsilon))^2 at the global minimum. Thus, the algorithm will stay most of

its time near the global valley and will eventually converge to the global minimum.

4.6 Contour of Euclidean Distance Criterion

The Euclidean distance criterion for the adaptive IIR filters is defined as

I_{ED}(f_e) = \int_{-\infty}^{\infty} (f_e(\varepsilon) - \delta(\varepsilon))^2\, d\varepsilon = \int_{-\infty}^{\infty} f_e^2(\varepsilon)\, d\varepsilon - 2 f_e(0) + c    (4-38)

If the input signal is set to have a Gaussian distribution, N(μ_x, σ_x²), then the desired

signal will also be Gaussian, N(μ_d, σ_d²). The output signal of the adaptive filter will be

Gaussian as well, N(μ_y, σ_y²). Here we want to calculate the analytical expression of the

Euclidean distance in the simulation example of the system identification framework for

the unknown system

H(z) = \frac{b_1 + b_2 z^{-1}}{1 - a_1 z^{-1} - a_2 z^{-2}}    (4-39)

identified by the reduced-order adaptive filter

H_a(z) = \frac{b}{1 - a z^{-1}}    (4-40)
Here the desired output signal is realized as

d(i) = b_1 x(i) + b_2 x(i-1) + a_1 d(i-1) + a_2 d(i-2)    (4-41)

Then

\mu_d = \frac{b_1 + b_2}{1 - a_1 - a_2}\, \mu_x    (4-42)

Taking the variance of both sides of Equation (4-41), we obtain

R_d(0) = (b_1^2 + b_2^2 + 2 b_1 b_2 a_1)\, R_x(0) + (a_1^2 + a_2^2)\, R_d(0) + 2 a_1 a_2 R_d(1)    (4-43)

where R_d(t) and R_x(t) are the covariance functions of the desired output signal and the

input signal, respectively. Right-shifting Equation (4-41) by one unit, we obtain

d(i+1) = a_1 d(i) + a_2 d(i-1) + b_1 x(i+1) + b_2 x(i)    (4-44)







Taking the covariance of Equations (4-41) and (4-44), we obtain

R_d(1) - a_1 R_d(0) = (b_1 b_2 + b_1 b_2 a_2)\, R_x(0) + a_1 a_2 R_d(0) + a_2 R_d(1)    (4-45)

From Equations (4-43) and (4-45), we can obtain

R_d(0) = \frac{(b_1^2 + b_2^2 + 2 b_1 b_2 a_1)(1 - a_2) + 2 b_1 b_2 a_1 a_2}{(1 + a_2)(1 - a_1 - a_2)(1 + a_1 - a_2)}\, R_x(0)    (4-46)

Similarly, we can calculate (μ_y, σ_y²) of the output signal of the adaptive filter as

\mu_y = \frac{b}{1 - a}\, \mu_x    (4-47)

y(i) = b\, x(i) + a\, y(i-1)    (4-48)

Taking the variance of the above equation, we obtain

R_y(0) = b^2 R_x(0) + a^2 R_y(0)    (4-49)

so that

R_y(0) = \frac{b^2}{1 - a^2}\, R_x(0)    (4-50)

We can also calculate the covariance of the desired output signal and the output signal of

the adaptive filter as follows. Taking the covariance of Equations (4-41) and (4-48), we

obtain

R_{dy}(0) = (b_1 b + b_2 a b)\, R_x(0) + a_1 a\, R_{dy}(0) + a_2 a\, R_{dy}(1)    (4-51)

Taking the covariance of Equation (4-41) and y(i-1) = b\, x(i-1) + a\, y(i-2), we obtain

R_{dy}(1) = (b_2 b + a_1 b_1 b)\, R_x(0) + a_1 a\, R_{dy}(1) + a_2 a\, R_{dy}(0)    (4-52)

From Equations (4-51) and (4-52), we obtain

R_{dy}(0) = \frac{(b_1 + b_2 a)(1 - a_1 a) + a_2 a (b_2 + b_1 a_1)}{(1 - a_1 a)^2 - (a_2 a)^2}\, b\, R_x(0)    (4-53)







Finally, we obtain

\mu_e = \mu_d - \mu_y    (4-54)

\sigma_e^2 = R_d(0) + R_y(0) - 2 R_{dy}(0) + \sigma^2    (4-55)

where σ_e² is increased by σ², corresponding to the Gaussian kernel function of the

Parzen window estimator. The Euclidean distance is calculated as

I_{ED}(f_e) = \frac{1}{\sqrt{4\pi}\, \sigma_e} - \frac{2}{\sqrt{2\pi}\, \sigma_e}\, e^{-\frac{\mu_e^2}{2\sigma_e^2}}    (4-56)
Figure 4-2 shows the contours of the analytical expression for the ITL criterion (for

comparison, Figure 4-3 shows the contours of the analytical expression for the entropy

criterion \int f_e^2(\varepsilon)\, d\varepsilon). The convergence characteristics of the adaptation process

for the filter coefficients toward the global optimum are shown in Figure 4-1. In the

beginning of the adaptation process, the estimated error variance σ_e² is large because

of the significantly large value of the kernel size σ² in the Gaussian kernel function

of the Parzen window estimator. Therefore, the first term on the right hand side of

Equation (4-56) is considerably smaller than the second term and thus can be neglected

in the beginning stage of the adaptation process. We observe that the second term

concentrates more tightly around μ_e = μ_d − μ_y = 0 with increasing σ_e²,

i.e., increasing σ². The straight line in Figure 4-1 B) is the line μ_e = μ_d − μ_y = 0.

It is clear from Figure 4-1 that the weight track of the ITL algorithm

converges towards the line μ_e = μ_d − μ_y = 0, as predicted in the theoretical analysis

given above. When the size σ² of the Gaussian kernel function slowly decreases during

adaptation, the ITL cost function gradually converges back to the original one, which

might exhibit local minima.

4.7 Simulation Results

We present simulation results using a system identification formulation of

adaptive IIR filtering. We identify the following unknown systems.

Example I:

H(z) = \frac{0.05 - 0.4 z^{-1}}{1 - 1.1314 z^{-1} + 0.25 z^{-2}}    (4-57)






























Figure 4-1: Convergence characteristics of weight for Example I by ITL. A) Weight; B) Contour of weight.

Figure 4-2: Euclidean distance of Example I with A) σ² = 0; B) σ² = 1; C) σ² = 2; D) σ² = 3.





































Figure 4-3: Entropy \int f_e^2(\varepsilon)\, d\varepsilon of Example I with A) σ² = 1; B) σ² = 2; C) σ² = 3; D) σ² = 4; E) σ² = 5; F) σ² = 6; G) σ² = 7; H) σ² = 8; I) σ² = 9.







Example II:

H(z) = \frac{0.05 + 0.4 z^{-1}}{1 - 1.1314 z^{-1} + 0.25 z^{-2}}    (4-58)

These are identified by the following reduced-order adaptive IIR filter:

H_a(z) = \frac{b}{1 - a z^{-1}}    (4-59)

The main goal is to determine the values of the coefficients {a, b} such that the

Euclidean distance criterion is minimized. If we assume the error pdf f_e is Gaussian,

f_e(e) = \frac{1}{\sqrt{2\pi}\, \tilde{\sigma}} \exp\!\left( -\frac{(e - \mu_e)^2}{2\tilde{\sigma}^2} \right)    (4-60)

then we can derive the estimated Euclidean distance as

\hat{I}_{ED}(f_e) = \frac{1}{\sqrt{4\pi(\tilde{\sigma}^2 + \sigma^2)}} - \frac{2}{\sqrt{2\pi(\tilde{\sigma}^2 + \sigma^2)}}\, e^{-\frac{\mu_e^2}{2(\tilde{\sigma}^2 + \sigma^2)}}    (4-61)

Thus we plot the contours of the Euclidean distance criterion performance

surfaces for different σ for Examples I and II in Figures 4-2 and 4-4, respectively. They

show that the local minima of the Euclidean distance criterion performance surface

disappear with a large kernel size. Thus, by carefully controlling the kernel size, the

algorithm can converge to the global minimum.

The input signal is random Gaussian noise with zero mean and unit variance.

There exist several minima on the Euclidean distance criterion performance surface

with a small kernel size in both examples. However, there exists a sole global minimum

of the Euclidean distance criterion surface with a sufficiently large kernel size. In this

simulation, the kernel size is chosen to be sufficiently large in the starting stage, and then

slowly decreased to a predetermined small value, which is the trade-off between low bias

and low variance. In this way, the algorithm can converge to the global minimum. The

step size for the algorithm is a constant value of 0.002. The simulation results are

based on 100 Monte Carlo runs with random initial conditions of the weights at each

Monte Carlo run. The simulation results show that the algorithm converges

to the global minimum 100% of the time for both examples. The convergence

































Figure 4-4: Euclidean distance of Example II with A) σ² = 0; B) σ² = 1; C) σ² = 2; D) σ² = 3.

Figure 4-5: Convergence characteristics of weight for Example II by ITL. A) Weight; B) Contour of weight.


The convergence characteristics of the adaptation process, with the weights approaching the global minimum, are shown in Figures 4-1 and 4-5, respectively, where the initial weights are chosen near a local minimum.
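As a concrete illustration of this adaptation, the following Python sketch trains the reduced-order filter of (4-59) on Example I using a windowed information-potential (Renyi quadratic entropy) gradient with a linearly annealed kernel. The initial weights, window length, and annealing schedule are illustrative choices of ours rather than the exact experimental settings, and the plant coefficients follow the reconstructed (4-57):

```python
import numpy as np

rng = np.random.default_rng(0)

def plant(u):
    # Example I (4-57), assuming the reconstructed coefficients:
    # H_s(z) = (0.05 - 0.4 z^-1) / (1 - 1.1314 z^-1 + 0.25 z^-2)
    d = np.zeros(len(u))
    for n in range(len(u)):
        d[n] = (0.05*u[n] - 0.4*(u[n-1] if n > 0 else 0.0)
                + 1.1314*(d[n-1] if n > 0 else 0.0)
                - 0.25*(d[n-2] if n > 1 else 0.0))
    return d

N, L, mu = 5000, 20, 0.002          # iterations, error window, step size
a, b = 0.5, -0.1                    # hypothetical initial weights near a local minimum
u = rng.standard_normal(N)          # zero-mean, unit-variance Gaussian input
d = plant(u)
y = np.zeros(N); ga = np.zeros(N); gb = np.zeros(N)

for n in range(1, N):
    sigma2 = 3.0*(1.0 - n/N) + 0.5            # linearly annealed kernel size
    y[n] = b*u[n] - a*y[n-1]                  # H_a(z) = b / (1 + a z^-1), eq. (4-59)
    gb[n] = u[n] - a*gb[n-1]                  # recursive gradient dy(n)/db
    ga[n] = -y[n-1] - a*ga[n-1]               # recursive gradient dy(n)/da
    lo = max(n - L + 1, 0)
    e = d[lo:n+1] - y[lo:n+1]
    de_db, de_da = -gb[lo:n+1], -ga[lo:n+1]   # de/dw = -dy/dw
    diff = e[:, None] - e[None, :]
    gp = -diff/(2*sigma2) * np.exp(-diff**2/(4*sigma2))  # Gaussian kernel derivative
    # Ascend the information potential V = mean_{i,j} G(e_i - e_j),
    # which minimizes Renyi's quadratic entropy of the error.
    b += mu * np.mean(gp * (de_db[:, None] - de_db[None, :]))
    a += mu * np.mean(gp * (de_da[:, None] - de_da[None, :]))

print("final weights a, b:", a, b)
```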

4.8 Comparison of NLMS and ITL Algorithms

The NLMS algorithm uses the MSE criterion, while the ITL algorithm uses the Euclidean distance (entropy) criterion. Both algorithms achieve global optimization, although the two optima differ in weight space, as will be explained later. Here, we want to compare the performance of these two algorithms in terms of their global search capability.

Here we use the same system identification scheme, i.e., we identify the unknown

system of

Example I: H_s(z) = (0.05 - 0.4z^{-1}) / (1 - 1.1314z^{-1} + 0.25z^{-2})    (4-62)

Example II: H_s(z) = (0.2 - 0.4z^{-1}) / (1 - 1.1314z^{-1} + 0.25z^{-2})    (4-63)

Example III: H_s(z) = (0.3 - 0.4z^{-1}) / (1 - 1.1314z^{-1} + 0.25z^{-2})    (4-64)

by the reduced-order adaptive filter

H_a(z) = b / (1 + az^{-1})    (4-65)







Table 4-1: System identification of adaptive IIR filter by NLMS and ITL algorithm

Number of hits (global/local)
Method Example I Example II Example III
LMS 36/64 20/80 92/8
LMS-SAS 96/4 1/99 100/0
NLMS 100/0 89/11 90/10
ITL 100/0 100/0 100/0


The main goal is to determine the values of the coefficients {a, b} of the above equation such that the MSE is minimized (global minimum). The input signal is chosen to be white Gaussian noise with zero mean and unit variance. The step size of the LMS-SAS and NLMS algorithms is chosen to be the linearly decreasing function μ(n) = 0.1(1 - 5 × 10⁻⁵ n), and a constant step size μ = 0.001 is used for the LMS and ITL algorithms. The kernel size is chosen to be the linearly decreasing function σ²(n) = 0.3(1 - 5 × 10⁻⁵ n) + 0.5 for the ITL algorithm.
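Written as code, these schedules are simply (a minimal sketch; the function names are ours):

```python
def mu_lms_sas_nlms(n):
    """Linearly decreasing step size for the LMS-SAS and NLMS algorithms."""
    return 0.1 * (1.0 - 5e-5 * n)

MU_LMS_ITL = 0.001  # constant step size for the LMS and ITL algorithms

def kernel_size_itl(n):
    """Linearly decreasing kernel size for the ITL algorithm."""
    return 0.3 * (1.0 - 5e-5 * n) + 0.5
```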

Table 4-1 shows the comparison of the number of global and local minimum hits by the LMS, LMS-SAS, NLMS, and ITL algorithms. The results are given by 100 Monte Carlo simulations with random initial conditions of θ at each run. It is clear from Table 4-1 that the ITL algorithm is more successful in obtaining the global minimum than the other algorithms.

In order to understand the behavior of the ITL solution, we investigate the L_p norms of the impulse response error vectors between the optimal solutions obtained by the MSE and the ITL criteria. Assuming the infinite impulse response of the unknown system, given by h_i, i = 0, ..., ∞, and the infinite impulse response of the trained adaptive filter, given by h_{a,i}, i = 0, ..., ∞, can both be truncated at M, yet preserve most of the power contained within, we consider the following impulse response error norm criterion:

Impulse Response Criterion: L_p = ( Σ_{i=0}^{M} |h_i - h_{a,i}|^p )^{1/p}    (4-66)
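Equation (4-66) translates directly into code (a sketch; the function name is ours):

```python
import numpy as np

def impulse_response_lp(h, h_a, p):
    """L_p error norm of (4-66) between the truncated impulse
    responses h and h_a; p = np.inf gives the minimax (L_inf) norm."""
    err = np.abs(np.asarray(h, float) - np.asarray(h_a, float))
    if np.isinf(p):
        return float(err.max())
    return float((err**p).sum() ** (1.0 / p))
```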

Table 4-2 shows the impulse response L_p error norms for the adaptive IIR filters trained with the MSE and ITL criteria. We see from these results that the ITL criterion is more of a minimax-type algorithm, as it provides a smaller L_∞ norm for the impulse response error compared to MSE, which yields an L_2 norm error minimization.







Table 4-2: L_p for both MSE and ITL criteria

p      1     2     3     4     5     10    100   1000  ∞
MSE    0.94  0.29  0.24  0.22  0.22  0.22  0.22  0.22  0.22
ITL    1.59  0.37  0.26  0.22  0.21  0.18  0.17  0.17  0.17


If the MSE solution is desired, either NLMS can be chosen directly, or, if a more robust search is desired, ITL can be used. In the latter case, after ITL has converged, the LMS algorithm should be started from the ITL solution to seek the global optimum of the MSE. As demonstrated, the ITL and MSE global minima are close to each other.

4.9 Conclusion

We have proposed an adaptive IIR filter training algorithm, referred to as the

ITL algorithm, which is based on minimizing Renyi's quadratic entropy by using a

non-parametric pdf estimator, Parzen windowing. By exploiting the kernel size used

in the Parzen window estimator, we force the proposed algorithm to converge to the

global minimum of the performance surface. We compare the performance of the ITL algorithm with that of the LMS-SAS and NLMS algorithms with decreasing step size, which are capable of finding the global optimum, and conclude from simulations that the ITL algorithm is superior.

The solution of ITL is different from that of the MSE optimization; however, their minima are in the same region of weight space. Therefore, for a more robust global search, we recommend using ITL and, when it converges, switching to the MSE cost using as initial conditions the weight values found with ITL.













CHAPTER 5
RESULTS

In order to demonstrate the effectiveness of the proposed global optimization algorithms, they are applied to two practical examples: system identification with the Kautz filter and nonlinear equalization.

5.1 System Identification with Kautz Filter

It is known that the LMS algorithm updates the filter weights along the direction of the negative gradient of the objective function. Hence, the LMS algorithm for the Kautz filter becomes


AOk -


9E
Aa O
Oa

OE
ai = -'g


where p is a step

,-, (n)
Oaa




'-'(n)
go3



a91()
9a


OE
pe= 8(n) ck(n)

( )
S0(II
Oa
k-0


k-0
^(")E^-^


(5-1)

(5-2)


(5-3)


size. The gradient vector 98pk/9a and Opk/9O3 are given by

∂φ_0(n)/∂α = (1/√2)[(1+α)√((1-(α²+β²))/((1+α)²+β²)) - α√(((1+α)²+β²)/(1-(α²+β²)))](u(n-1) - u(n))
             + 2α ∂φ_0(n-1)/∂α - (α²+β²) ∂φ_0(n-2)/∂α + 2φ_0(n-1) - 2αφ_0(n-2)    (5-4)

∂φ_0(n)/∂β = (β/√2)[√((1-(α²+β²))/((1+α)²+β²)) - √(((1+α)²+β²)/(1-(α²+β²)))](u(n-1) - u(n))
             + 2α ∂φ_0(n-1)/∂β - (α²+β²) ∂φ_0(n-2)/∂β - 2βφ_0(n-2)    (5-5)

∂φ_1(n)/∂α = -(1/√2)[(1-α)√((1-(α²+β²))/((1-α)²+β²)) + α√(((1-α)²+β²)/(1-(α²+β²)))](u(n-1) + u(n))
             + 2α ∂φ_1(n-1)/∂α - (α²+β²) ∂φ_1(n-2)/∂α + 2φ_1(n-1) - 2αφ_1(n-2)    (5-6)

∂φ_1(n)/∂β = (β/√2)[√((1-(α²+β²))/((1-α)²+β²)) - √(((1-α)²+β²)/(1-(α²+β²)))](u(n-1) + u(n))
             + 2α ∂φ_1(n-1)/∂β - (α²+β²) ∂φ_1(n-2)/∂β - 2βφ_1(n-2)    (5-7)

Here, for k ≥ 2, each Kautz function propagates through the all-pass section, φ_k(n) = 2αφ_k(n-1) - (α²+β²)φ_k(n-2) + (α²+β²)φ_{k-2}(n) - 2αφ_{k-2}(n-1) + φ_{k-2}(n-2), so that

∂φ_k(n)/∂α = 2α ∂φ_k(n-1)/∂α - (α²+β²) ∂φ_k(n-2)/∂α + (α²+β²) ∂φ_{k-2}(n)/∂α - 2α ∂φ_{k-2}(n-1)/∂α + ∂φ_{k-2}(n-2)/∂α + 2φ_k(n-1) - 2αφ_k(n-2) + 2αφ_{k-2}(n) - 2φ_{k-2}(n-1)    (5-8)

∂φ_k(n)/∂β = 2α ∂φ_k(n-1)/∂β - (α²+β²) ∂φ_k(n-2)/∂β + (α²+β²) ∂φ_{k-2}(n)/∂β - 2α ∂φ_{k-2}(n-1)/∂β + ∂φ_{k-2}(n-2)/∂β - 2βφ_k(n-2) + 2βφ_{k-2}(n)    (5-9)

With y(n) = Σ_k Θ_k φ_k(n) and ∇y(n) denoting the gradient of the filter output with respect to the adapted parameters,

∇y(n) = [φ_0(n), ..., φ_K(n), Σ_k Θ_k ∂φ_k(n)/∂α, Σ_k Θ_k ∂φ_k(n)/∂β]^T    (5-10)

Hence, the NLMS algorithm becomes

ΔΘ_k(n) = (μ e(n)/‖∇y(n)‖²) φ_k(n)    (5-11)

Δα(n) = (μ e(n)/‖∇y(n)‖²) Σ_k Θ_k ∂φ_k(n)/∂α    (5-12)

Δβ(n) = (μ e(n)/‖∇y(n)‖²) Σ_k Θ_k ∂φ_k(n)/∂β    (5-13)
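To ground these recursions, the following Python sketch generates the Kautz basis outputs under the second-order all-pass propagation assumed in the reconstruction above; the function name and the exact normalization constants are our reading of (5-4)-(5-7), so treat it as illustrative rather than definitive:

```python
import numpy as np

def kautz_basis(u, alpha, beta, K):
    """Generate K Kautz basis outputs phi_k(n) (K >= 2) for the complex
    pole pair alpha +/- j*beta, assuming the propagation used in
    (5-4)-(5-10): two normalized second-order input sections followed by
    repeated all-pass sections with denominator
    1 - 2*alpha*z^-1 + (alpha^2 + beta^2)*z^-2."""
    r2 = alpha**2 + beta**2
    A = 1.0 - r2                      # 1 - (alpha^2 + beta^2)
    B = (1.0 + alpha)**2 + beta**2
    C = (1.0 - alpha)**2 + beta**2
    q0, q1 = np.sqrt(A*B/2.0), np.sqrt(A*C/2.0)
    N = len(u)
    phi = np.zeros((K, N))
    for n in range(2, N):
        phi[0, n] = 2*alpha*phi[0, n-1] - r2*phi[0, n-2] + q0*(u[n-1] - u[n])
        phi[1, n] = 2*alpha*phi[1, n-1] - r2*phi[1, n-2] + q1*(u[n-1] + u[n])
        for k in range(2, K):
            phi[k, n] = (2*alpha*phi[k, n-1] - r2*phi[k, n-2]
                         + r2*phi[k-2, n] - 2*alpha*phi[k-2, n-1]
                         + phi[k-2, n-2])
    return phi

# The filter output is y(n) = sum_k Theta_k * phi_k(n); the gradients of
# phi with respect to alpha and beta follow by differentiating these
# recursions, which is what (5-4)-(5-10) express.
```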

Here, consider the system identification example by Silva [88], which uses the

reference transfer function described as

0.0890 0.2199z-1 + 0.2-CC ,-2 0.2199z-3 + 0.0890z-4
S 1 2.6918z-1 + 3.5992z-2 2.4466z-3 + 0 -4 (5-14)

The input signal is colored noise generated by passing white noise, with mean 0 and variance 1, through a first-order filter with a decay factor of 0.8. We








Table 5-1: System identification of Kautz filter model

Number of hits
Method Global minimum Local minimum
ITL 100 0
NLMS 99 1
LMS-SAS 58 42
LMS 48 52


consider the normalized mean square error criterion (NMSE)

NMSE = 10 log₁₀ [ Σ_n (y(n) - ŷ(n))² / Σ_n y(n)² ]    (5-15)

where ŷ is the estimated output of the Kautz filter. The global optimum of the objective function is at ζ ≈ 0.6212 + j0.5790, which has a normalized criterion about 12.5 dB less than that of the FIR filter (ζ = 0). This agrees with the result by Silva [88].
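Equation (5-15) translates directly into code (a sketch; the function name is ours):

```python
import numpy as np

def nmse_db(y, y_hat):
    """Normalized MSE of (5-15) in dB: lower (more negative) is better."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(10.0 * np.log10(np.sum((y - y_hat)**2) / np.sum(y**2)))
```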

The step size is chosen to be the linearly decreasing function μ(n) = 0.4(1 - 5 × 10⁻⁵ n) for both the LMS-SAS and NLMS algorithms, and constant at 0.002 for both the ITL and LMS algorithms. The kernel size for the ITL algorithm is chosen to be a linearly decreasing function of the iterations, σ²(n) = 3(1 - 2.5 × 10⁻⁵ n) + 0.5. Table 5-1 shows the comparison

of the number of global and local minimum hits by the ITL, NLMS, LMS-SAS, and LMS algorithms. The results are given by 100 Monte Carlo simulations with random initial conditions of Θ and ζ at each run. It is clear from Table 5-1 that the ITL algorithm is more successful in obtaining the global minimum compared with the other algorithms.

Single characteristic weight tracks representative of each algorithm (LMS, LMS-SAS, NLMS, and ITL) are shown in Figures 5-1, 5-2, 5-3, and 5-4, respectively. Figure 5-5 depicts the closeness between the impulse response of the unknown system and the impulse response of the optimized Kautz filter determined with the MSE and ITL criteria.

In order to better understand the meaning of the ITL solution, we investigate the L_p norms of the impulse response error vectors between the optimal solutions obtained by the MSE and the ITL criteria. Assuming the infinite impulse response of the unknown system, given by h_i, i = 0, ..., ∞, and the infinite impulse response of the trained adaptive filter, given by h_{a,i}, i = 0, ..., ∞, can both be truncated at M, yet











Figure 5-1: Convergence characteristics of weight for Kautz filter by LMS algorithm. A) Weight Θ; B) Weight ζ(α + jβ).





Figure 5-2: Convergence characteristics of weight for Kautz filter by LMS-SAS algorithm. A) Weight Θ; B) Weight ζ(α + jβ).





Figure 5-3: Convergence characteristics of weight for Kautz filter by NLMS algorithm. A) Weight Θ; B) Weight ζ(α + jβ).


Figure 5-4: Convergence characteristics of weight for Kautz filter by ITL algorithm. A) Weight Θ; B) Weight ζ(α + jβ).













Figure 5-5: Impulse response (System, MSE, ITL).







Table 5-2: L_p for both MSE and ITL criteria in the Kautz example

p      1      2      3      4      10     100    1000   ∞
MSE    0.530  0.080  0.052  0.046  0.042  0.042  0.042  0.042
ITL    0.573  0.086  0.054  0.045  0.039  0.039  0.039  0.039


Figure 5-6: Channel equalization system.


preserve most of the power contained within, we consider the following impulse response

error norm criterion:

Impulse Response Criterion: L_p = ( Σ_{i=0}^{M} |h_i - h_{a,i}|^p )^{1/p}    (5-16)

Table 5-2 shows the impulse response L_p error norms for the Kautz filters trained with the MSE and ITL criteria after successful convergence. We see from these results that the ITL criterion is more of a minimax-type algorithm, as it provides a smaller L_∞ norm for the impulse response error compared to MSE, which yields an L_2 norm error minimization.

5.2 Nonlinear Equalization

In band-limited data communication systems, each transmitted symbol is degraded by the intersymbol interference (ISI) effect. Adaptive equalizers placed in the receiver are used to cope with the ISI effect. Figure 5-6 describes the channel equalization system. When an equalizer is used in a data communication system, a sequence of i.i.d. digital symbols {s_k ∈ C} is sent by the transmitter through the channel exhibiting nonlinear distortion, thus generating the output sequence {x_k}. The objective of the equalizer is to recover, by inversion, the original sequence from the received sequence {x_k}. In this example, the received signal at the input of the equalizer is







described as

x_i = Σ_{k=0}^{n_c} h_k s_{i-k} + e_i    (5-17)

where the transmitted symbol sequence s_i is an equiprobable binary sequence {±1}, h_i are the channel coefficients, and e_i is Gaussian noise with zero mean and variance σ_e².

The equalizer estimates the value of a transmitted symbol as

ŝ_{i-d} = sgn(y_i) = sgn(w^T x_i)    (5-18)

where y_i = w^T x_i is the output of the equalizer, w = [w_0, ..., w_{m-1}]^T is the vector of equalizer coefficients, and x_i = [x_i, ..., x_{i-m+1}]^T is the vector of observations.

The output of the equalizer using a multilayer perceptron (MLP) with one hidden layer of n neurons is given by

y_i = w_2^T tanh(W_1 x_i + b_1) + b_2    (5-19)

where W_1 is the n × m matrix connecting the input layer with the hidden layer, b_1 is the n × 1 vector of biases for the hidden neurons, w_2 is the n × 1 vector of weights connecting the hidden layer to the output neuron, and b_2 is the bias for the output neuron.
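A direct transcription of (5-19) as code (a minimal sketch; the function name is ours):

```python
import numpy as np

def mlp_equalizer_output(x, W1, b1, w2, b2):
    """Forward pass of (5-19): y = w2^T tanh(W1 x + b1) + b2,
    where x is the m-vector of received samples and W1 is n x m."""
    return float(w2 @ np.tanh(W1 @ x + b1) + b2)
```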

Consider the example by Santamaria et al. [89], where the nonlinear channel is composed of a linear channel followed by a memoryless nonlinearity. The linear channel considered is H(z) = 0.3482 + 0.8704z^{-1} + 0.3482z^{-2}, and the static nonlinear function is z = x + 0.2x² - 0.1x³, where x is the linear channel output. The nonlinear equalizer is an MLP with 7 neurons in the input layer and 3 neurons in the hidden layer [MLP(7,3,1)], and the equalization delay is d = 4. A short window of 5 error samples is used to minimize the error criterion.
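For reference, the channel just described can be simulated as follows (a minimal sketch; the function name, seed, and noise level are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def nonlinear_channel(s, noise_var=0.0):
    """Santamaria et al. example: linear channel
    H(z) = 0.3482 + 0.8704 z^-1 + 0.3482 z^-2 followed by the memoryless
    nonlinearity z = x + 0.2 x^2 - 0.1 x^3, plus additive Gaussian noise."""
    h = np.array([0.3482, 0.8704, 0.3482])
    x = np.convolve(s, h)[:len(s)]            # linear channel output
    z = x + 0.2*x**2 - 0.1*x**3               # static nonlinearity
    return z + np.sqrt(noise_var)*rng.standard_normal(len(z))

s = rng.choice([-1.0, 1.0], size=1000)        # equiprobable binary symbols
x = nonlinear_channel(s, noise_var=0.01)      # hypothetical noise level
```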

The gradient ∂J/∂w = (∂J/∂e)(∂e/∂w) is used for the backpropagation algorithm of the nonlinear equalizer training, where the term ∂e/∂w is determined by the topology and the term ∂J/∂e is determined by the error signal. Therefore, the proposed global optimization techniques can be used in this nonlinear equalization; they are referred to as the stochastic gradient (SG), stochastic gradient with SAS (SG-SAS), normalized stochastic gradient (NSG), and ITL algorithms, respectively. The step size is chosen to be a constant








of 0.2 for the SG, SG-SAS, and ITL algorithms, and a linearly decreasing function of μ(n) = 0.2(1 - n/n_max) for the NSG algorithm, where n_max is the maximum number of iterations. A linearly decreasing function, σ²(n) = 3(1 - n/n_max) + 0.1, is chosen for the kernel size of the ITL algorithm.

Figure 5-7: Convergence characteristics of adaptive algorithms for a nonlinear equalizer.

Figure 5-7 depicts the convergence of the MSE evaluated over the sliding window

for the algorithms, and we conclude that the ITL algorithm provides the fastest

convergence. Figure 5-8 depicts the performance comparison of SG, SG-SAS, NSG,

and ITL algorithms for the nonlinear equalizer in 100 Monte Carlo runs for the final

solutions. This figure shows that both NSG and ITL algorithms have succeeded in

obtaining the global minimum. Figure 5-9 shows the average bit error rate (BER)

curves. The BER was evaluated by counting errors versus several signal-to-noise ratios (SNR) after transmitting symbols. This figure shows that all algorithms provide the same result for the adequate solutions; however, the NSG algorithm provides the best results for the worst solutions.








Figure 5-8: Performance comparison of global optimizations for nonlinear equalizer.


5.3 Conclusion

We have proposed the combination of Kautz filters and an alternative information theoretic adaptation criterion based on Renyi's quadratic entropy. The proposed ITL criterion and kernel annealing approach allowed stable adaptation of the poles to their global optimal values. We have also investigated the performance of the proposed criterion and the associated steepest descent algorithm in IIR filter adaptation. The proposed information theoretic learning algorithm is shown to converge to the global minimum of the performance surface; it successfully adapted the filter poles, avoiding local minima 100% of the time and without causing instability.

The performance of this ITL algorithm was compared with the more traditional LMS variants, which were shown in the previous chapter to exhibit an improved probability of avoiding local minima. Nevertheless, none of them were as successful as ITL in achieving the global solution. An interesting observation was that the ITL criterion yields a




































Figure 5-9: Average BER for a nonlinear equalizer. A) over the whole 100 Monte Carlo runs; B) over the 10 best solutions of MSE; C) over the 10 median solutions of MSE; D) over the 10 worst solutions of MSE.






smaller L_∞ error norm between the impulse responses of the adaptive and the reference IIR filters, whereas MSE tries to minimize the L_2 error norm. If the designer requires a minimum L_2 error norm between the impulse responses, it is possible to use ITL adaptation to converge to the vicinity of this solution and then switch to NLMS to achieve L_2 error norm minimization.

The proposed global optimization algorithms have also been successfully applied to another practical example, nonlinear equalization. The simulation results show that the ITL algorithm achieves better performance than the others.














CHAPTER 6
CONCLUSION AND FUTURE RESEARCH

6.1 Conclusion

In this study, we focus on the development of global optimization algorithms for adaptive IIR filtering. Both the MSE and the error-entropy criterion have been used as the cost function for adaptive IIR filter training.

Srinivasan et al. have used a stochastic approximation for the convolution

smoothing technique in order to obtain a global optimization algorithm for adaptive

IIR filtering. They showed that smoothing can be achieved by the addition of a variable

perturbing noise source to the LMS algorithm. We have modified this perturbing noise

by multiplying it with its cost function. The modified algorithm, which is referred to as

the LMS-SAS algorithm, results in better performance in global optimization than the

original algorithm.

From the diffusion equation, we have derived the transition probability with which the LMS-SAS algorithm, in the single-parameter case, escapes a local minimum. Since the global minimum is always smaller than the other local minima, the transition probability of the algorithm escaping from a local minimum is always larger than that of escaping from the global minimum. Thus, the algorithm will spend most of its time near the global minimum and eventually converge to it.

Since we use the instantaneous (stochastic) gradient instead of the expected

value of the gradient, error in estimating the gradient naturally occurs. This gradient

estimation error can be used to act as the perturbing noise. We have shown that the

behavior of the NLMS algorithm with decreasing step size is similar to that of the LMS-SAS algorithm from a global optimization perspective.

The global optimization performance of the LMS-SAS and NLMS algorithms depends entirely on the shape of the cost function surface. The sharper a local minimum, the less likely the NLMS algorithm is to escape from that steady-state point. On the other hand, the larger the covering range of a steady-state point's valley, the more difficult it is for the algorithm to escape from that valley.

We have investigated another cost function, based on entropy, to find the global optimum of IIR filters. Based on a previous conjecture that annealing the kernel size in the non-parametric estimator of Renyi's entropy achieves global optimization, we have designed the proposed information theoretic learning algorithm, which is shown to converge to the global minimum of the performance surface for various adaptive filter topologies. The proposed algorithm successfully adapted the filter poles, avoiding local minima 100% of the time and without causing instability. This behavior has been found in many examples.

The performance of this ITL algorithm was compared with the more traditional LMS variants, which are known to exhibit an improved probability of avoiding local minima. Nevertheless, none of them were as successful as ITL in achieving the global solution. An interesting observation was that the ITL criterion yields a smaller L_∞ error norm between the impulse responses of the adaptive and the reference IIR filters, whereas MSE tries to minimize the L_2 error norm. If the designer requires a minimum L_2 error norm between the impulse responses, it is possible to use ITL adaptation to converge to the vicinity of this solution and then switch to NLMS to achieve L_2 error norm minimization.

One of the major drawbacks in adaptive IIR filtering is the stability issue. We use Kautz filters because their stability is easily guaranteed if the poles of the Kautz filters are located within the unit circle. In this dissertation, we proposed the combination of Kautz filters and an alternative information theoretic adaptation criterion based on Renyi's quadratic entropy. Kautz filters have been used in the past for system identification [90] of ARMA models, but the poles have been kept fixed during adaptation. The proposed ITL criterion and kernel annealing approach allowed stable adaptation of the poles to their global optimal values.







6.2 Future Research

In this dissertation, we have analyzed the weak global optimal convergence of algorithms with the MSE criterion by looking at the transition function of the process, assuming that the weight, θ, is a scalar. More work is needed on the transition function of the process in the general case, in which θ is a vector, in order to complete the analysis of the weak global optimal convergence of algorithms with the MSE criterion.

We have observed that the ITL criterion yields a smaller L_∞ error norm between the impulse responses of the adaptive and the reference IIR filters, whereas MSE tries to minimize the L_2 error norm. This "minimax" property of the proposed ITL criterion deserves further research.

Another observation is that linear scheduling of the kernel size helps achieve global minima. In annealing-based global optimization algorithms, scheduling of the parameters to be annealed is a major issue. In stochastic annealing, it is known that exponential annealing (at a sufficiently slow rate) guarantees global convergence. In IIR filter adaptation using ITL, we used linear annealing of the kernel size, and in all examples successful global optimization results were obtained. More work is required in the ITL algorithm to select appropriately the smallest kernel size, which was set here with the rule-of-thumb properties [91].

The ITL adaptation used a batch approach, but we believe that the on-line versions discussed by Erdogmus et al. [92] could also display the same global optimization properties. The on-line versions of ITL adaptation need further study.

In addition, a general analytical proof that explains the 100% global optimization capability of the proposed algorithm is necessary in order to complete the theoretical work. This, however, stands as a challenging future research project.














REFERENCES


[1] B. Widrow and S. D. Stearns, Adaptive Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, 1985.

[2] S. S. Haykin, Adaptive Filter Theory, Prentice-Hall, Englewood Cliffs, NJ, 1986.

[3] M. A. Styblinski and T. S. Tang, "Experiments in nonconvex optimization: Stochastic approximation with function smoothing and simulated annealing," Neural Networks, vol. 3, pp. 467-483, 1990.

[4] J. C. Principe and D. Erdogmus, "From adaptive linear to information filtering," in Proceedings of Symposium 2000 on Adaptive Systems for Signal Processing, Communications, and Control, Lake Louise, Alberta, Canada, Oct. 2000, pp. 99-104.

[5] D. Erdogmus, K. Hild, and J. C. Principe, "Blind source separation using Renyi's mutual information," IEEE Signal Processing Letters, vol. 8, no. 6, pp. 174-176, June 2001.

[6] D. Erdogmus and J. C. Principe, "Generalized information potential criterion for adaptive system training," IEEE Transactions on Neural Networks, (to appear) September 2002.

[7] K. J. Åström and P. Eykhoff, "System identification: A survey," Automatica, vol. AC-27, no. 4, pp. 123-162, Aug. 1971.

[8] B. Friedlander, "System identification techniques for adaptive signal processing," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-30, no. 2, pp. 240-246, Apr. 1982.

[9] L. Ljung, System Identification: Theory for the User, Prentice-Hall, Englewood Cliffs, NJ, 1987.

[10] T. Söderström, L. Ljung, and I. Gustavsson, "A theoretical analysis of recursive identification methods," Automatica, vol. 14, no. 3, pp. 193-197, May 1978.

[11] C. R. Johnson, "Adaptive IIR filtering: Current results and open issues," IEEE Transactions on Information Theory, vol. IT-30, no. 2, pp. 237-250, Mar. 1984.

[12] J. J. Shynk, "Adaptive IIR filtering," IEEE ASSP Magazine, vol. 6, no. 2, pp. 4-21, Apr. 1989.

[13] S. Gee and M. Rupp, "A comparison of adaptive IIR echo canceller hybrids," in Proceedings International Conference on Acoustics, Speech, and Signal Processing, 1991.







[14] S. L. Netto, P. S. Diniz, and P. Agathoklis, "Adaptive IIR filter algorithms for system identification: A general framework," IEEE Transactions on Education, vol. 38, pp. 54-66, Feb. 1995.

[15] P. A. Regalia, Adaptive IIR Filtering in Signal Processing and Control, Marcel Dekker, New York, NY, 1995.

[16] M. Dentino, J. M. McCool, and B. Widrow, "Adaptive filtering in the frequency domain," Proceedings IEEE, vol. 66, no. 12, pp. 1658-1659, Dec. 1978.

[17] E. R. Ferrara, "Fast implementation of LMS adaptive filters," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-28, no. 4, pp. 474-475, Aug. 1980.

[18] T. K. Woo, "HRLS: A more efficient RLS algorithm for adaptive FIR," IEEE Communication Letters, vol. 5, no. 3, pp. 81-84, March 2001.

[19] D. F. Marshall, W. K. Jenkins, and J. J. Murphy, "The use of orthogonal transforms for improving performance of adaptive filters," IEEE Transactions on Circuits and Systems, vol. 36, no. 4, pp. 474-484, Apr. 1989.

[20] S. S. Narayan and A. M. Peterson, "Frequency domain least-mean-square algorithm," Proceedings IEEE, vol. 69, no. 1, pp. 124-126, Jan. 1981.

[21] S. A. White, "An adaptive recursive digital filter," in Proceedings 9th Asilomar Conference on Circuits, Systems and Computers, pp. 21-25, 1975.

[22] R. A. David, "An adaptive recursive digital filter," in Proceedings 15th Asilomar Conference on Circuits, Systems and Computers, pp. 175-179, 1981.

[23] B. D. Rao, "Adaptive IIR filtering using cascade structures," in Proceedings 27th Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 185-188, 1993.

[24] J. K. Juan, J. G. Harris, and J. C. Principe, "Locally recurrent network with multiple time-scales," in IEEE Proceedings on Neural Networks for Signal Processing, vol. VII, pp. 645-653, 1997.

[25] P. A. Regalia, "Stable and efficient lattice algorithms for adaptive IIR filtering," IEEE Transactions on Signal Processing, vol. 40, no. 2, pp. 375-388, Feb. 1992.

[26] R. L. Valcarce and F. P. Gonales, "Adaptive lattice filtering revisited: convergence issues and new algorithms with improved stability properties," IEEE Transactions on Signal Processing, vol. 49, no. 4, pp. 811-821, April 2001.

[27] J. J. Shynk, "Adaptive IIR filtering using parallel-form realization," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 4, pp. 519-533, Apr. 1989.

[28] J. E. Cousseau and P. S. R. Diniz, "Alternative parallel realization for adaptive IIR filters," in Proceedings International Symposium on Circuits and Systems, pp. 1927-1930, 1990.

[29] J. J. Shynk and R. P. Gouch, "Frequency domain adaptive pole-zero filters," Proceedings IEEE, vol. 73, no. 10, pp. 1526-1528, Oct. 1985.

[30] B. E. Usevitch and W. K. Jenkins, "A cascade implementation of a new IIR adaptive digital filter with global convergence and improved convergence rates," in Proceedings International Symposium on Circuits and Systems, pp. 2140-2143, 1989.

[31] D. G. Luenberger, Introduction to Linear and Nonlinear Programming, Addison-Wesley, Reading, MA, 1973.

[32] J. Lin and R. Unbehauen, "Bias-remedy least mean square equation error algorithm for IIR parameters recursive estimation," IEEE Transactions on Signal Processing, vol. 40, pp. 62-69, Jan. 1992.

[33] H. Fan and W. K. Jenkins, "A new adaptive IIR filter," IEEE Transactions on Circuits and Systems, vol. CAS-33, no. 10, pp. 939-947, Oct. 1986.

[34] H. Fan and M. Doroslovacki, "On global convergence of Steiglitz-McBride adaptive algorithm," IEEE Transactions on Circuits and Systems, vol. 40, no. 2, pp. 73-87, Feb. 1993.

[35] K. Steiglitz and L. E. McBride, "A technique for the identification of linear systems," IEEE Transactions on Automatic Control, vol. AC-10, pp. 461-464, 1965.

[36] S. L. Netto and P. Agathoklis, "A new composite adaptive IIR algorithm," in Proceedings 28th Asilomar Conference on Signals, Systems and Computers, vol. 2, pp. 1506-1510, 1994.

[37] J. E. Cousseau, L. Salama, L. Donale, and S. L. Netto, "Orthonormal adaptive IIR filter with polyphase realization," in Proceedings of ICECS'99 Electronics, Circuits and Systems, vol. 2, pp. 835-838, 1999.

[38] M. Radenkovic and T. Bose, "Global stability of adaptive IIR filters based on the output error method," in Proceedings of ICECS'99 Electronics, Circuits and Systems, vol. 1, pp. 663-667, 1999.

[39] P. L. Hsu, T. Y. Tsai, and F. C. Lee, "Applications of a variable step size algorithm to QCEE adaptive IIR filters," IEEE Transactions on Signal Processing, vol. 46, no. 6, pp. 1685-1688, Jun. 1999.

[40] W. J. Song and H. C. Shin, "Bias-free adaptive IIR filtering," in Proceedings IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 1, pp. 109-112, 2000.

[41] K. C. Ho and Y. T. Chan, "Bias removal in equation-error adaptive IIR filters," IEEE Transactions on Signal Processing, vol. 43, pp. 51-62, Jan. 1995.

[42] M. C. Hall and P. M. Hughes, "The master-slave IIR filter adaptation algorithm," in Proceedings IEEE International Symposium on Circuits and Systems, vol. 3, pp. 2145-2148, 1988.







[43] J. R. Treichler, C. R. Johnson, and M. G. Larimore, Theory and Design of Adaptive Filters, Wiley, New York, 1987.

[44] I. O. Bohachevsky, M. E. Johnson, and M. L. Stein, "Generalized simulated annealing for function optimization," Technometrics, vol. 28, pp. 209-217, Aug. 1986.

[45] S. C. Ng, S. H. Leung, C. Y. Chung, A. Luk, and W. H. Lau, "The genetic search approach: A new learning algorithm for adaptive IIR filtering," IEEE Signal Processing Magazine, pp. 39-46, Nov. 1996.

[46] J. A. Nelder and R. Mead, "A simplex method for function minimization," Computer Journal, vol. 7, pp. 308-313, 1965.

[47] P. P. Khargonekar and A. Yoon, "Random search based optimization algorithm in control analysis and design," in Proceedings of the American Control Conference, Jun. 1999, pp. 383-387.

[48] Q. Duan, S. Sorooshian, and V. Gupta, "Shuffled complex evolution algorithm," Water Resources Research, vol. 28, pp. 1015-1031, 1992.

[49] Z. B. Tang, "Adaptive partitioned random search to global optimization," IEEE Transactions on Automatic Control, vol. 39, pp. 2235-2244, Nov. 1994.

[50] K. H. Yim, J. B. Kim, T. P. Lee, and D. S. Ahn, "Genetic adaptive IIR filtering algorithm for active noise control," in IEEE International Fuzzy Systems Conference Proceedings, Aug. 1999, pp. III-1723-1728.

[51] B. W. Wah and T. Wang, "Constrained simulated annealing with applications in nonlinear continuous constrained global optimization," in Proceedings 11th IEEE International Conference on Tools with Artificial Intelligence, Nov. 1999, pp. 381-388.

[52] J. L. Maryak and D. C. Chin, "A conjecture on global optimization using gradient-free stochastic approximation," in Proceedings of the 1998 IEEE ISIC/CIRA/ISAS Joint Conference, Sep. 1998, pp. 441-445.

[53] N. K. Treadgold and T. D. Gedeon, "Simulated annealing and weight decay in adaptive learning: The SARPROP algorithm," IEEE Transactions on Neural Networks, vol. 9, pp. 662-668, July 1998.

[54] G. H. Staus, L. T. Biegler, and B. E. Ydstie, "Global optimization for identification," in Proceedings of the 36th Conference on Decision and Control, Dec. 1997, pp. 3010-3015.

[55] T. Fujita, T. Watanabe, K. Yasuda, and R. Yokoyama, "Global optimization method using intermittency chaos," in Proceedings of the 36th Conference on Decision and Control, Dec. 1997, pp. 1508-1509.

[56] W. Edmonson, J. Principe, K. Srinivasan, and C. Wang, "A global least square algorithm for adaptive IIR filtering," IEEE Transactions on Circuits and Systems, vol. 45, pp. 379-383, Mar. 1998.







[57] J. M. Thomas, J. P. Reilly, and Q. Wu, "Real time analog global optimization with constraints: Application to the direction of arrival estimation problem," IEEE Transactions on Circuits and Systems, vol. 42, pp. 233-243, Mar. 1995.

[58] A. Renyi, "Some fundamental questions of information theory: Selected papers of Alfred Renyi," Akademia Kiado, Budapest, vol. 2, pp. 565-580, 1976.

[59] A. Renyi, A Diary on Information Theory, Wiley, New York, 1987.

[60] C. F. Cowan and P. M. Grant, Adaptive Filters, Prentice-Hall, 1985.

[61] B. Widrow, J. M. McCool, M. G. Larimore, and C. R. Johnson, "Stationary and nonstationary learning characteristics of the LMS adaptive filter," Proceedings IEEE, vol. 64, pp. 1151-1162, Aug. 1976.

[62] J. M. Mendel, Lessons in Digital Estimation Theory, Prentice-Hall, Englewood Cliffs, NJ, 1987.

[63] E. I. Jury, Theory and Applications of the Z-Transform Method, Wiley, New York, 1964.

[64] T. C. Hsia, "A simplified adaptive recursive filter design," Proceedings IEEE, vol. 69, no. 9, pp. 1153-1155, Sept. 1981.

[65] G. C. Goodwin and K. S. Sin, Adaptive Filtering Prediction and Control, Prentice-Hall, Englewood Cliffs, NJ, 1984.

[66] T. Söderström, "On the uniqueness of maximum likelihood identification," Automatica, vol. 14, no. 3, pp. 231-244, Mar. 1975.

[67] M. Nayeri, H. Fan, and W. K. Jenkins, "Some characteristics of error surfaces for insufficient order adaptive IIR filters," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, no. 7, pp. 1222-1227, July 1990.

[68] T. Söderström and P. Stoica, "Some properties of the output error method," Automatica, vol. 18, pp. 1692-1716, Dec. 1982.

[69] M. Nayeri, "Uniqueness of MSOE estimates in IIR adaptive filtering; a search for necessary conditions," in International Conference on Acoustics, Speech, and Signal Processing, 1989, pp. 1047-1050.

[70] S. D. Stearns, "Error surfaces of recursive adaptive filters," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-29, no. 4, pp. 763-766, June 1981.

[71] H. Fan and M. Nayeri, "On the error surface of sufficient order adaptive IIR filters: Proofs and counterexamples to a unimodality conjecture," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, pp. 1436-1442, Sep. 1989.

[72] R. Roberts and C. Mullis, Digital Signal Processing, Addison-Wesley, 1987.







[73] W. H. Kautz, "Transient synthesis in the time domain," IRE Transactions on Circuit Theory, vol. 1, pp. 22-39, Sept. 1954.

[74] P. W. Broome, "Discrete orthonormal sequences," Journal of the Association for Computing Machinery, vol. 12, no. 2, pp. 151-168, Dec. 1965.

[75] G. A. Williamson and S. Zimmermann, "Globally convergent adaptive IIR filters based on fixed pole locations," IEEE Transactions on Signal Processing, vol. 44, pp. 1418-1427, Jun. 1996.

[76] P. M. Pardalos and R. Horst, Introduction to Global Optimization, Kluwer, Norwood, MA, 1989.

[77] H. Robbins and S. Monro, "A stochastic approximation method," Annals of Mathematical Statistics, vol. 22, pp. 400-407, 1951.

[78] E. Wong and B. Hajek, Stochastic Processes in Engineering Systems, Springer, 1985.

[79] A. N. Kolmogorov, "Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung," Mathematische Annalen, vol. 104, pp. 415-458, 1931.

[80] S. Haykin, Introduction to Adaptive Filters, MacMillan, New York, 1984.

[81] C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, vol. 27, pp. 379-423, 623-656, 1948.

[82] E. Parzen, "On the estimation of a probability density function and the mode," Annals of Mathematical Statistics, vol. 33, pp. 1065-1076, 1962.

[83] T. Cover and J. Thomas, Elements of Information Theory, Wiley, 1991.

[84] R. V. L. Hartley, "Transmission of information," Bell System Technical Journal, vol. 7, 1928.

[85] G. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins Press, 1989.

[86] S. Kullback, Information Theory and Statistics, Dover Publications Inc., New York, 1968.

[87] C. Wang and J. C. Principe, "Training neural networks with additive noise in the desired signal," IEEE Transactions on Neural Networks, vol. 10, no. 6, pp. 1511-1517, Nov. 1999.

[88] T. O. Silva, "Optimality conditions for truncated Kautz networks with two periodically repeating complex conjugate poles," IEEE Transactions on Automatic Control, vol. 40, pp. 342-346, Feb. 1995.

[89] I. Santamaría, D. Erdogmus, and J. C. Principe, "Entropy minimization for supervised digital communication channel equalization," IEEE Transactions on Signal Processing, vol. 50, no. 5, pp. 1184-1192, May 2002.





[90] B. Wahlberg, "System identification using Kautz models," IEEE Transactions on Automatic Control, vol. 39, no. 6, pp. 1276-1282, Jun. 1994.

[91] D. Erdogmus and J. C. Principe, "An error-entropy minimization algorithm for supervised training of nonlinear adaptive systems," IEEE Transactions on Signal Processing, vol. 50, no. 7, pp. 1780-1786, July 2002.

[92] D. Erdogmus and J. C. Principe, "An on-line adaptation algorithm for adaptive system training with minimum error entropy: Stochastic information gradient," in International Conference on ICA and Signal Separation, San Diego, CA, Dec. 2001, pp. 7-12.














BIOGRAPHICAL SKETCH

Ching-An Lai was born in Chia-I, Taiwan, on August 2, 1963. He earned his bachelor's degree in physics from the Chinese Military Academy, Taiwan, in 1985 and his master's degree in electrical engineering from the Chung-Cheng Institute of Technology, Taiwan, in 1992. He began his Ph.D. program in the Electrical and Computer Engineering Department of the University of Florida in 1995. He pursued his Ph.D. degree in the field of adaptive filters. Currently, he is an instructor at the Chinese Military Academy.